Sikk, Kaarel
Doctoral thesis (2023)

Despite a research history spanning more than a century, settlement patterns still hold the promise of contributing to theories of large-scale processes in human history. They have mostly been presented as passive imprints of past human activities, and the spatial interactions they shape have not been studied as a driving force of historical processes. While archaeological knowledge has been used to construct geographical theories of settlement evolution, gaps remain in this knowledge, and no theoretical framework has yet been adopted to explore settlement patterns as spatial systems emerging from the micro-choices of small population units. The goal of this thesis is to propose a conceptual model of adaptive settlement systems based on the complex adaptive systems framework. The model frames settlement system formation as an adaptive system comprising spatial features, information flows, and decision-making population units (agents), with cross-scale feedback loops forming between the location choices of individuals and the space modified by their aggregated choices. The model aims to provide new ways of interpreting archaeological locational data as well as a closer theoretical integration of micro-level choices and meso-level settlement structures. The thesis is divided into five chapters. The first chapter is dedicated to conceptualising the general model based on existing literature and shows that settlement systems are inherently complex adaptive systems and therefore require the tools of complexity science for causal explanations.
The following chapters explore both empirical and simulated settlement patterns, each dedicated to studying selected information flows and feedbacks in the context of the whole system. The second and third chapters explore a case study of Stone Age settlement in Estonia, comparing residential location choice principles across periods. In Chapter 2 the relation between environmental conditions and residential choice is explored statistically. The results confirm that the relation is significant but varies between different archaeological phenomena. In the third chapter, hunter-fisher-gatherer and early agrarian Corded Ware settlement systems are compared spatially using inductive models. The results indicate a large difference in their perception of the landscape's suitability for habitation, leading to the conclusion that early agrarian land use significantly extended land use potential and provided a competitive spatial benefit. In addition to spatial differences, model performance was compared and the difference discussed in the context of the proposed adaptive settlement system model. The last two chapters present theoretical agent-based simulation experiments intended to study the effects discussed in relation to environmental model performance and environmental determinism in general. In the fourth chapter the central place foraging model is embedded in the proposed model, and resource depletion is explored as an environmental modification mechanism. The study excluded the possibility that mobility itself would lead to the modelling effects discussed in the previous chapter. The purpose of the last chapter is to disentangle the complex relations between social and human-environment interactions. The study exposed the non-linear spatial effects that expected population density can have on the system, and the general robustness of environmental inductive models in archaeology to randomness and social effects.
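The cross-scale feedback between individual location choices and aggregated occupancy described above can be illustrated with a minimal sketch. The weights, the additive scoring rule, and the sequential settling loop are hypothetical simplifications, not the thesis's actual model:

```python
import random

def choose_location(env_suitability, occupancy, w_env=0.7, w_social=0.3, rng=None):
    """Pick a cell with probability proportional to a weighted sum of
    environmental suitability and current occupancy (social attraction).
    A tiny constant keeps empty, low-suitability cells reachable."""
    rng = rng or random
    scores = [w_env * e + w_social * o + 1e-6
              for e, o in zip(env_suitability, occupancy)]
    return rng.choices(range(len(scores)), weights=scores, k=1)[0]

def settle(env_suitability, n_agents=100, rng=None):
    """Settle agents one by one: each choice updates occupancy, so later
    agents respond to space already modified by earlier aggregated choices."""
    rng = rng or random
    occupancy = [0] * len(env_suitability)
    for _ in range(n_agents):
        occupancy[choose_location(env_suitability, occupancy, rng=rng)] += 1
    return occupancy
```

With a positive social weight, early random choices are amplified into clusters even on a uniform environment, which is the kind of micro-to-meso feedback the conceptual model is meant to capture.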
The model indicates that social interactions between individuals lead to the formation of a group agency which is determined by the environment, even if individual cognitions consider the environment insignificant. It also indicates that the spatial configuration of the environment has a certain influence on population clustering, thereby providing a potential pathway to population aggregation. These empirical and theoretical results showed the new insights provided by the complex adaptive systems framework. Some of the results, including the explanation of the empirical findings, required the conceptual model to provide a framework of interpretation.

van Zweel, Karl Nicolaus
Doctoral thesis (2023)

The general scope of this PhD research project falls within the framework of developing integrated catchment hydro-biogeochemical theories in the context of the Critical Zone (CZ). Significant advances in the understanding of water transit time theory, subsurface structure controls, and the quantification of catchment-scale weathering rates have resulted in the convergence of classical biogeochemical and hydrological theories. This will potentially pave the way for a more mechanistic understanding of the CZ, though many challenges still exist. Perhaps the most difficult of all is a unifying hydro-biogeochemical theory that can compare catchments across gradients of climate, geology, and vegetation. Understanding the processes driving the evolution of chemical tracers as they move through space and time is of cardinal importance for validating mixing hypotheses and helping to determine the residence time of water in the CZ.
The specific aim of the study is to investigate which physical and biogeochemical processes drive variations in observable endmembers in stream discharge as a function of hydrological state at the headwater catchment scale. This requires looking beyond what can be observed in the stream, to what this thesis calls "unseen flowlines". The Weierbach Experimental Catchment (WEC) in Luxembourg provides a unique opportunity to study these processes, with an extensive biweekly groundwater chemistry dataset spanning over ten years. Additionally, the WEC has been the subject of numerous published works in the domain of CZ science, adding to an already detailed hydrological and geochemical understanding of the system. Multivariate analysis techniques were used to identify the unseen flowlines in the catchment. Together with the existing hydrological perceptual model and a geochemical modelling approach, these flowlines were rigorously investigated to understand what processes drive their respective manifestations in the system. The existing perceptual model for the WEC was updated with the new findings and tested on 27 flood events to assess whether it could adequately explain the concentration-discharge (c-Q) behaviour observed during these periods. The novelty of the study lies in using both data-driven modelling approaches and geochemical process-based modelling to look beyond what can be observed in the near-stream environment of headwaters.

Setti Junior, Paulo de Tarso
Doctoral thesis (2023)

Understanding, quantifying and monitoring soil moisture is important for many applications, e.g., agriculture, weather forecasting, the occurrence of heatwaves, droughts and floods, and human health.
At large scales, satellite microwave remote sensing has been used to retrieve soil moisture information. Surface water has also been detected and monitored through remote sensing platforms equipped with passive microwave, radar, and optical sensors. The use of reflected L-band Global Navigation Satellite System (GNSS) signals represents an emerging remote sensing concept for retrieving geophysical parameters. In GNSS Reflectometry (GNSS-R), these signals are repurposed to infer properties of the surface from which they reflect, as they are sensitive to variations in biogeophysical parameters. NASA's Cyclone GNSS (CYGNSS) is the first mission fully dedicated to spaceborne GNSS-R. The eight-satellite constellation measures Global Positioning System (GPS) reflected L1 (1575.42 MHz) signals. Spire Global, Inc. has also started developing its GNSS-R mission, with four satellites currently in orbit. In this thesis we propose and validate a method to retrieve large-scale near-surface soil moisture and a method to map and monitor inundations using spaceborne GNSS-R. Our soil moisture model is based on the assumption that variations in surface reflectivity are linearly related to variations in soil moisture, and it uses a new method to normalize the observations with respect to the angle of incidence. The normalization method accounts for the spatially varying effects of coherent and incoherent scattering. We found a median unbiased root-mean-square error (ubRMSE) of 0.042 cm³ cm⁻³ when comparing our method to two years of Soil Moisture Active Passive (SMAP) data, and a median ubRMSE of 0.059 cm³ cm⁻³ compared with the observations of 207 in-situ stations. Our results also showed an improved temporal resolution compared to sensors traditionally used for this purpose. Assessing Spire and CYGNSS data over a region in southeast Australia, we observed similar behavior in terms of surface reflectivity and sensitivity to soil moisture.
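The ubRMSE metric quoted above removes the mean bias between retrieval and reference before computing the RMSE. A minimal sketch (the function name and array inputs are illustrative):

```python
import numpy as np

def ubrmse(retrieved, reference):
    """Unbiased RMSE: subtract the mean bias between the two series,
    then compute the RMSE of the remaining (random) error component."""
    err = np.asarray(retrieved, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((err - err.mean()) ** 2)))
```

A retrieval offset from the reference by a constant thus has a ubRMSE of zero, which is why the metric is preferred when two soil moisture products differ in absolute calibration.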
As Spire satellites collect data from multiple GNSS constellations, we found that it is important to differentiate the observations by constellation when calibrating a soil moisture model. The inundation mapping method that we propose is based on a track-wise approach: when reflections are classified track by track, the influence of the angle of incidence and of the GNSS transmitted power is minimized or eliminated. With CYGNSS data we produced more than four years of monthly surface water maps over the Amazon River basin and the Pantanal wetland complex at a spatial resolution of 4.5 km. With GNSS-R we could overcome some of the limitations of optical and microwave remote sensing methods for inundation mapping. We used a set of metrics commonly applied to evaluate classification performance to assess our product, and discussed the differences and similarities with other products.

Deregnoncourt, Marine
Doctoral thesis (2023)

Entitled "The Figures of Intimacy and Extimacy: a Reflection on Marina Hands and Éric Ruf's Acting in Jean Racine's Phèdre and Paul Claudel's Partage de midi", this PhD dissertation addresses the concepts of "intimacy" and "extimacy" as witnessed through Marina Hands and Éric Ruf's vocal and scenic acting in Patrice Chéreau's and Yves Beaunesne's respective productions of Racine's and Claudel's works. In these two productions both actors manage to suggest "extimacy" with their body language, and to render "intimacy" through their "singing" diction of Racine's alexandrine and Claudel's free verse. Throughout our development, we will display the perpetual links between Racine and Claudel suggested by the singular acting of these two performers.
The central question of this dissertation is thus the following: how does the combination of the "extimacy" of Marina Hands and Éric Ruf's body language and the "intimacy" of their "singing" diction reveal the musicality of Racine's and Claudel's languages? We shall see that "intimacy" turns out to be "extimacy" on stage, and that the two concepts are nothing but two sides of the same coin. "Intimacy" is the constant object of both Patrice Chéreau's and Yves Beaunesne's research, to the extent that it constitutes the essence of their artistic creations. In order to become "extimacy", "intimacy" has to be mediated by the actors' bodies if it is to serve the text actually heard on stage. The union between body and text is therefore a central issue.

Hubai, Andrii
Doctoral thesis (2023)

Chakrapani, Neera
Doctoral thesis (2023)

Red meat allergy, also known as α-Gal allergy, is a delayed allergic response occurring upon consumption of mammalian meat and by-products. Patients report eating meat without any problems for several years before developing the allergy. Although children can develop red meat allergy, it is more prevalent in adults. In addition to the delayed onset of reactions, immediate hypersensitivity is reported on contact with the allergen via the intravenous route. Galactose-α-1,3-galactose (α-Gal) is the first highly allergenic carbohydrate identified to cause allergy across the world. In general, carbohydrates exhibit low immunogenicity and are not capable of inducing a strong immune response on their own.
Although the α-Gal epitope is present in conjugation with both proteins and lipids, owing to the generally accepted role of proteins in allergy, glycoproteins from mammalian food sources were characterized first. However, a unique feature of α-Gal allergy is the delayed occurrence of allergic symptoms upon ingestion of mammalian meat, and an allergenic role of glycolipids has been proposed to explain these delayed responses. A second important feature of the disease is that the development of specific IgE to α-Gal has been associated with bites from ticks of various species, depending on the geographical region. An intriguing factor in this tick-mediated allergy is the absence in ticks of an α-1,3-GalT gene, which codes for an enzyme capable of α-Gal synthesis; this raises questions about the source and identity of the sensitizing molecule within ticks, immune responses to tick bites, and the effect of increased exposure. In this study, we sought to elucidate the origin of sensitization to α-Gal by investigating a cohort of individuals exposed to recurrent tick bites and by exploring the proteome of ticks in a longitudinal study. Furthermore, we analysed the allergenicity of glycoproteins and glycolipids in order to determine the food components responsible for the delayed onset of symptoms. The aim of Chapter I was to determine IgG profiles and the prevalence of sensitization to α-Gal in a high-risk cohort of forestry employees from Luxembourg. The aim of Chapter II was to analyse the presence of host blood in Ixodes ricinus after moulting and upon prolonged starvation, in order to support or reject the host-blood transmission hypothesis. The aim of Chapter III was to investigate and compare the allergenicity of glycolipids and glycoproteins to understand their role in the allergic response. Moreover, we analysed the stability of glycoproteins and compared extracts from different food sources. This chapter is in the form of a published article.
In Chapter IV, I attempt to create mutant models with specified α-Gal glycosylation in order to study the role of the spatial distribution of α-Gal in IgE cross-linking and effector cell activation.

Hemedan, Ahmed
Doctoral thesis (2023)

Interpretation of omics data is needed to form meaningful hypotheses about disease mechanisms. Pathway databases give an overview of disease-related processes, while mathematical models give qualitative and quantitative insights into their complexity. As with pathway databases, mathematical models are stored and shared on dedicated platforms. Moreover, community-driven initiatives such as disease maps encode disease-specific mechanisms in both computable and diagrammatic form, using dedicated tools for diagram biocuration and visualisation. To investigate the dynamic properties of complex disease mechanisms, computationally readable content can be used as a scaffold for building dynamic models in an automated fashion. The dynamic properties of a disease are extremely complex. Therefore, more research is required to better understand the complexity of molecular mechanisms, which may advance personalized medicine in the future. In this study, Parkinson's disease (PD) is analyzed as an example of a complex disorder. PD is associated with complex genetic and environmental causes and with comorbidities that need to be analysed systematically to better understand the progression of different disease subtypes. Studying PD as a multifactorial disease requires deconvoluting the multiple and overlapping changes to identify the driving neurodegenerative mechanisms.
Integrated systems analysis and modelling can enable us to study different aspects of a disease, such as progression, diagnosis, and response to therapeutics. Modelling such complex processes depends on the scope and may vary with the nature of the process (e.g. signalling vs metabolic). Experimental design and the resulting data also influence model structure and analysis. Boolean modelling is proposed here to analyse the complexity of PD mechanisms. Boolean models (BMs) are qualitative rather than quantitative and, unlike Petri nets or ordinary differential equation (ODE) models, do not require detailed kinetic information. Boolean modelling is a logical formalism in which variables take binary values of one (ON) or zero (OFF), making it a plausible approach in cases where quantitative details and kinetic parameters are not available. Boolean modelling is well validated in clinical and translational medicine research. In this project, the PD map was translated into BMs in an automated fashion using different methods, so that the complexity of disease pathways can be analysed by simulating the effect of genomic burden on omics data. To ensure that the BMs accurately represent the biological system, validation was performed by simulating models at different scales of complexity, and the behaviour of the models was compared with the behaviour expected from validated biological knowledge. The TCA cycle was used as an example of a well-studied simple network, and complex signalling networks at different scales were used, including the Wnt-PI3K/AKT pathway and T-cell differentiation models. As a result, matched and mismatched behaviours were identified, allowing the models to be modified to better represent disease mechanisms.
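The ON/OFF update scheme described above can be sketched with a toy synchronous Boolean network. The three nodes and their update rules below are purely illustrative, not taken from the PD map:

```python
# Toy synchronous Boolean network (hypothetical nodes A, B, C with
# illustrative update rules; not the PD map's actual logic).
RULES = {
    "A": lambda s: s["B"] and not s["C"],
    "B": lambda s: s["A"] or s["C"],
    "C": lambda s: not s["A"],
}

def step(state):
    """Synchronous update: every node reads the previous state."""
    return {node: bool(rule(state)) for node, rule in RULES.items()}

def simulate(state, n_steps=10):
    """Return the trajectory of states, starting from the initial one."""
    trajectory = [dict(state)]
    for _ in range(n_steps):
        state = step(state)
        trajectory.append(dict(state))
    return trajectory
```

Attractors (fixed points or cycles) are found by iterating until a state repeats; in BM analysis these attractors are typically interpreted as stable phenotypes of the modelled system.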
The BMs were stratified by integrating omics data from multiple disease cohorts. miRNA datasets from the Parkinson's Progression Markers Initiative (PPMI) study were analysed; PPMI provides an important resource for the investigation of potential biomarkers and therapeutic targets for PD. Such stratification allowed us to study disease heterogeneity and specific responses to molecular perturbations. The results can support research hypotheses, help diagnose a condition, and maximize the benefit of a treatment. Furthermore, the challenges and limitations of Boolean modelling in general were discussed, as well as those specific to the current study. Based on the results, there are several ways to improve Boolean modelling applications. Modellers can perform exploratory investigations, gathering the associated information about the model from the literature and data resources. Missing details can be inferred by integrating omics data, which identifies missing components and optimises model accuracy. Accurate and computable models improve the efficiency of simulations and of the resulting analysis of their controllability. In parallel, the maintenance of model repositories and the sharing of models in easily interoperable formats are also important.

Ansarinia, Morteza
Doctoral thesis (2023)

Cognitive control is essential to human cognitive functioning, as it allows us to adapt and respond to a wide range of situations and environments. The possibility of enhancing cognitive control in a way that transfers to real-life situations could greatly benefit individuals and society.
However, the lack of a formal, quantitative definition of cognitive control has limited progress in developing effective cognitive control training programs. To address this issue, the first part of the thesis focuses on gaining clarity on what cognitive control is and how to measure it. This is accomplished through a large-scale text analysis that integrates cognitive control tasks and related constructs into a cohesive knowledge graph. This knowledge graph provides a more quantitative definition of cognitive control grounded in previous research, which can be used to guide future work. The second part of the thesis aims to further a computational understanding of cognitive control, in particular to study which features of the task (i.e., the environment) and which features of the cognitive system (i.e., the agent) determine cognitive control, its functioning, and its generalization. The thesis first presents CogEnv, a virtual cognitive assessment environment in which artificial agents (e.g., reinforcement learning agents) can be directly compared to humans on a variety of cognitive tests. It then presents CogPonder, a novel computational method for general cognitive control that is relevant to research on both humans and artificial agents. The proposed framework is a flexible, differentiable, end-to-end deep learning model that separates the act of control from the controlled act, and can be trained to perform the same cognitive tests used in cognitive psychology to assess humans. Together, the proposed cognitive environment and agent architecture offer unique new opportunities to enable and accelerate the study of human and artificial agents in an interoperable framework. Research on training cognition with complex tasks, such as video games, may benefit from and contribute to this broad view of cognitive control.
The final part of the thesis presents a profile of cognitive control and its generalization based on cognitive training studies, in particular how it may be improved by action video game training. More specifically, we contrasted the brain connectivity profiles of people who are habitual action video game players with those of people who do not play video games at all, focusing on brain networks that have been associated with cognitive control. Our results show that cognitive control emerges from a distributed set of brain networks rather than from individual specialized networks, supporting the view that action video gaming may have a broad, general impact on cognitive control. These results also have practical value for cognitive scientists studying cognitive control, as they imply that action video game training may offer new ways to test cognitive control theories in a causal way. Taken together, the current work explores a variety of approaches from within the cognitive sciences to contribute in novel ways to the fascinating and long tradition of research on cognitive control. In the age of ubiquitous computing and large datasets, bridging the gap between behavior, brain, and computation has the potential to fundamentally transform our understanding of the human mind and inspire the development of intelligent artificial agents.

Fouillet, Thibault
Doctoral thesis (2023)

The capacity of small powers to think strategically remains a limited field of interest in historical thinking and international relations.
Thus, beyond the debate over whether small states can be full-fledged actors in the international system, there appears to be a denial of small powers' capacity for conceptualization and doctrinal innovation, despite the historical recurrence of victories of the weak over the strong. Yet small powers are by nature more sensitive to threats because of their limited response capabilities, and are therefore more inclined to rationalize their action over the long term in order to develop national (military, economic, diplomatic) and international (alliances, international organizations) mechanisms for containing these threats. This thesis therefore examines how small powers construct strategic thinking in the face of perceived threats, and the means they use to try to contain them. The aim is to study the mechanisms by which small powers establish a grand strategy (transcribed in the form of doctrines) to deal with the security dilemmas they face. To this end, three case studies were analyzed (Luxembourg, Singapore, Lithuania), chosen for the diversity of their strategic and historical contexts, which offer a variety of security dilemmas. Grand strategy being in essence a conceptual construction with a prospective and applied aim, both a theoretical and a practical methodology (drawing on immediate history and wargaming) was implemented. Two sets of lessons can be drawn from this thesis. The first is methodological, confirming the value of doctrinal studies as a field of strategic reflection and establishing wargaming as a prospective tool suited to fundamental research. The second is conceptual, allowing a better understanding of the capacity of small powers to create great and efficient strategies, which must be taken into account within strategic genealogy because of their conceptual dynamism, from which lessons can be drawn even for great powers.
Daoudi, Nadia
Doctoral thesis (2023)

Android offers plenty of services to mobile users and has gained significant popularity worldwide. The success of Android has attracted not only more mobile users but also malware authors. Indeed, attackers target Android markets to spread their malicious apps and infect users' devices; the consequences range from displaying annoying ads to extracting financial benefit from users. To counter the threat posed by Android malware, machine learning has been leveraged as a promising technique to detect malware automatically. The literature on Android malware detection abounds with ML-based approaches designed to discriminate malware from legitimate samples. These techniques generally rely on manually engineered features extracted from the apps' artefacts. Reported to be highly effective, Android malware detection approaches may seem the magical solution to stop the proliferation of malware. Unfortunately, the gap between the promised and the actual detection performance is far from negligible: despite the rosy detection performance painted in the literature, detection reports show that Android malware is still spreading and infecting mobile users. In this thesis, we investigate the reasons that impede state-of-the-art Android malware detection approaches from containing the spread of Android malware, and we propose solutions and directions to boost their detection performance. In the first part of this thesis, we revisit the state of the art in Android malware detection. Specifically, we conduct a comprehensive study to assess the reproducibility of state-of-the-art Android malware detectors.
We consider research papers published at 16 major venues over a period of ten years and report our reproduction outcomes. We also discuss the different obstacles to reproducibility and how they can be overcome. Then, we perform an exploratory analysis of a state-of-the-art malware detector, DREBIN, to gain an in-depth understanding of its inner workings. Our study provides insights into the quality of DREBIN's features and their effectiveness in discriminating Android malware. In the second part of this thesis, we investigate novel features for Android malware detection that do not involve manual engineering. Specifically, we propose an Android malware detection approach, DexRay, that relies on features extracted automatically from the apps. We convert the raw bytecode of the app's DEX files into an image and train a 1-dimensional convolutional neural network to learn the relevant features automatically. Our approach stands out for the simplicity of its design choices and its high detection performance, which make it a foundational framework for further development in this domain. In the third part, we attempt to push the frontier of Android malware detection by enhancing the detection performance of the state of the art. We show, through a large-scale evaluation of four state-of-the-art malware detectors, that their detection performance is highly dependent on the experimental dataset. To address this issue, we investigate the added value of combining their features and predictions using 22 combination methods. While the combination does not improve the detection performance reported by the individual approaches, it maintains the highest detection performance independently of the dataset. We further propose a novel technique, Guided Retraining, that boosts the detection performance of state-of-the-art Android malware detectors.
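The DEX-to-image step can be illustrated with a minimal sketch; the fixed vector length and the interpolation-based resizing below are illustrative assumptions, not DexRay's published parameters:

```python
import numpy as np

def bytes_to_image(dex_bytes, size=16384):
    """Map a raw byte sequence to a fixed-length grayscale vector so that
    every app yields an input of identical shape for a 1-D CNN."""
    # Interpret each byte as a pixel intensity in [0, 1].
    arr = np.frombuffer(dex_bytes, dtype=np.uint8).astype(np.float32) / 255.0
    # Resample to a fixed length by linear interpolation.
    idx = np.linspace(0, len(arr) - 1, size)
    return np.interp(idx, np.arange(len(arr)), arr)
```

The appeal of this representation is that no manual feature engineering is involved: the convolutional layers learn directly from the byte stream which local patterns discriminate malware from goodware.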
Guided Retraining uses contrastive learning to learn a better representation of the difficult samples and thereby improve their prediction.

Florent, Perrine Julie
Doctoral thesis (2023)

Despite recent technological developments (e.g. field-deployable instruments operating at high temporal frequencies), experimental hydrology remains a measurement-limited discipline. From this perspective, trans-disciplinary approaches may create valuable opportunities to enlarge the number of tools available for investigating hydrological processes. Tracing experiments are usually performed to investigate water flow pathways and water sources in underground areas. Since the 19th century, researchers have used hydrological tracers for this purpose. Among them, fluorescent dyes and isotopes are the most commonly used to follow water flow, while others, such as salts or bacteriophages, are employed as tracers complementary to those mentioned above. Bacteriophages are the least known of all, but they have been studied as hydrological tracers since the 1960s, especially in karstic environments. The purpose of this work is to evaluate the potential of bacteriophages naturally occurring in soils to serve as a new environmental tracer of hydrological processes. We hypothesize that such viral particles can be a promising tool in water tracing experiments, since they are safe for ecosystems. In both hydrology and virology, knowledge regarding the fate of bacteriophages within the pedosphere is still limited.
Their study would not only allow us to propose potential new candidates to enlarge the set of available hydrological tracers, but also to improve current knowledge about bacteriophage communities in soil and their interactions with certain environmental factors. For this purpose, we aim at describing the bacteriophage communities occurring in the soil through shotgun metagenomics analysis. Those viruses are widely spread in the pedosphere, and we assume that they have specific signatures according to the type of soil. Then, bacteriophage populations will be investigated in the soil water to analyse the dis/similarities between the two communities as well as their dynamics as a function of precipitation events. This way, based on a relatively high abundance in soil and soil water and a capacity for being mobilised, good bacteriophage candidates could be selected as hydrological tracers.

Chen, Juntong
Doctoral thesis (2023)

Gubenko, Alla
Doctoral thesis (2023)

Arguably, embodiment is the most neglected aspect of cognitive psychology and creativity research. Whereas most existing theoretical frameworks are inspired by or implicitly imply a “cognition as a computer” metaphor, depicting creative thought as disembodied idea generation and the processing of amodal symbols, this thesis proposes that “cognition as a robot” may be a better metaphor for understanding how creative cognition operates. In this thesis, I compare and investigate human creative cognition in relation to embodied artificial agents that have to learn to navigate and act in complex and changing material and social environments from a set of multimodal streams (e.g., vision, haptics).
Instead of relying on divergent thinking or associative accounts of creativity, I attempt to elaborate an embodied and action-oriented vision of creativity grounded in the 4E cognition paradigm. Situated at the intersection of the psychology of creativity, technology, and embodied cognitive science, the thesis attempts to synthesize disparate lines of work and look at the complex problem of human creativity through interdisciplinary lenses. In this perspective, the study of creativity is no longer a prerogative of social scientists but a collective and synergistic endeavor of psychologists, engineers, designers, and computer scientists.

Bellomo, Nicolas
Doctoral thesis (2023)

Climate change due to the increase in GHG emissions, and an energy crisis due to the scarcity of fossil fuels, are ever-growing issues for the planet and its countries. The decarbonization and sustainability of the energy sector are among the top priorities for achieving a resilient system. Hydrogen has been considered for decades as an alternative to fossil fuels, and now is the time to develop a hydrogen-based economy. Fuel cells are devices that convert the chemical energy of hydrogen into electrical energy, and they are one of the main components of the envisioned hydrogen economy. However, much is yet to be achieved to make their manufacturing as cheap and as efficient as possible. Chemical vapour deposition (CVD) is a technique used to synthesize solid materials from gaseous precursors; compared with wet chemistry, it has the advantages of reducing production waste, being cheap, yielding pure solid materials, and being easily scalable.
In this thesis we investigated the possibility of using CVD to produce two major components of fuel cells, namely the gas diffusion layer and the proton exchange membrane. The results were highly promising regarding the elaboration of gas diffusion layers, and a CVD prototype was assembled to make the highly complex copolymerization of the proton exchange membrane a reality, with promising initial results.

Govzmann, Alisa
Doctoral thesis (2023)

Samhi, Jordan
Doctoral thesis (2023)

In general, software is unreliable. Its behavior can deviate from users' expectations because of bugs, vulnerabilities, or even malicious code. Manually vetting software is a challenging, tedious, and highly costly task that does not scale. To alleviate excessive costs and analysts' burdens, automated static analysis techniques have been proposed by both the research and practitioner communities, making static analysis a central topic in software engineering. In the meantime, mobile apps have considerably grown in importance. Today, most humans carry software in their pockets, with the Android operating system leading the market. Millions of apps have been proposed to the public so far, targeting a wide range of activities such as games, health, banking, GPS, etc. Hence, Android apps collect and manipulate a considerable amount of sensitive information, which puts users' security and privacy at risk. Consequently, it is paramount to ensure that apps distributed through public channels (e.g., Google Play) are free from malicious code.
Hence, the research and practitioner communities have put much effort into devising new automated techniques to vet Android apps against malicious activities over the last decade. Analyzing Android apps is, however, challenging. On the one hand, the Android framework offers constructs that can be used to evade dynamic analysis by triggering the malicious code only under certain circumstances, e.g., if the device is not an emulator and is currently connected to power. Hence, dynamic analyses can easily be fooled by malicious developers who make some code fragments difficult to reach. On the other hand, static analyses are challenged by Android-specific constructs that limit the coverage of off-the-shelf static analyzers. The research community has already addressed some of these constructs, including inter-component communication and lifecycle methods. However, other constructs, such as implicit calls (i.e., when the Android framework asynchronously triggers a method in the app code), make some app code fragments unreachable to static analyzers, even though these fragments are executed when the app is run. Altogether, many parts of apps' code are unanalyzable: they are either not reachable by dynamic analyses or not covered by static analyzers. In this manuscript, we describe our contributions to the research effort from two angles: ① statically detecting malicious code that is difficult for dynamic analyzers to reach because it is triggered only under specific circumstances; and ② statically analyzing code not accessible to existing static analyzers to improve the comprehensiveness of app analyses. More precisely, in Part I, we first present a replication study of a state-of-the-art static logic bomb detector to better expose its limitations. We then introduce a novel hybrid approach for detecting suspicious hidden sensitive operations towards triaging logic bombs. We finally detail the construction of a dataset of Android apps automatically infected with logic bombs.
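The logic-bomb pattern described above (a trigger condition guarding sensitive code) can be sketched with a toy heuristic. Real detectors, such as the one replicated in Part I, operate on bytecode with far more sophisticated analyses; this line-based scan and its keyword lists are illustrative assumptions only.

```python
# Toy heuristic for flagging logic-bomb-like patterns: a conditional that
# tests an environment property (emulator fingerprint, power state) and
# guards a call to a sensitive API. The hint lists are assumptions made
# for illustration, not the thesis's actual detection criteria.
TRIGGER_HINTS = ("isEmulator", "Build.FINGERPRINT", "BatteryManager", "ro.kernel.qemu")
SENSITIVE_HINTS = ("sendTextMessage", "getDeviceId", "exec(")

def flag_suspicious(lines: list[str]) -> list[int]:
    """Return indices of 'if' lines testing a trigger hint that are
    followed within three lines by a sensitive call."""
    hits = []
    for i, line in enumerate(lines):
        if "if" in line and any(t in line for t in TRIGGER_HINTS):
            window = "".join(lines[i + 1:i + 4])
            if any(s in window for s in SENSITIVE_HINTS):
                hits.append(i)
    return hits

snippet = [
    'if (!Build.FINGERPRINT.startsWith("generic")) {',
    '    smsManager.sendTextMessage(premiumNumber, null, body, null, null);',
    '}',
]
print(flag_suspicious(snippet))  # [0]
```

A purely syntactic scan like this produces many false positives, which is precisely why hybrid (static plus dynamic) triaging approaches are attractive.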
In Part II, we present our work to improve the comprehensiveness of the static analysis of Android apps. More specifically, we first show how we contributed to accounting for atypical inter-component communication in Android apps. Then, we present a novel approach to unify the bytecode and native code in Android apps, to account for the multi-language trend in app development. Finally, we present our work on resolving conditional implicit calls in Android apps to improve static and dynamic analyzers.

Clees, Elisabeth
Doctoral thesis (2023)

Life in residential child and youth welfare facilities is a great challenge for the children concerned. This paper explains factors that contribute to an increase or decrease in the subjectively perceived well-being of children and adolescents in residential institutions. Qualitative content analysis was chosen to analyze the data collected in Luxembourg. The study shows that children's well-being is particularly influenced by structural conditions and concepts, by fellow residents, and by (pedagogical) professionals. Another result points to the presence of different forms of violence and to the danger of (re-)traumatization of the children and young people within the institutions of inpatient child and youth welfare in Luxembourg.

Messias de Jesus Rufino Ribeiro, Mariana
Doctoral thesis (2023)

Forastiere, Danilo
Doctoral thesis (2022)

Delbrouck, Catherine Anne Lucie
Doctoral thesis (2022)

Metabolic rewiring is essential to enable cancer onset and progression.
One important metabolic pathway that is often hijacked by cancer cells is the one-carbon (1C) cycle, in which the third carbon of serine is oxidized to formate. It was previously shown that formate production in cancer cells often exceeds the anabolic demand, resulting in formate overflow. Furthermore, extracellular formate was described to promote the in vitro invasiveness of glioblastoma (GBM) cells. Nevertheless, the mechanism underlying formate-induced invasion remains elusive. In the present study, we aimed to characterize formate-induced invasion in greater detail. At first, we studied the generalizability of formate-induced invasion in different GBM models as well as in different breast cancer models, applying different in vitro assays, like the Boyden chamber assay, to probe the impact of formate on different cancer cell lines. Then, we studied the in vivo relevance and the pro-invasive properties of formate in physiological settings by using different ex vivo and in vivo models. Lastly, we investigated the mechanism underlying the formate-dependent pro-invasive phenotype, applying a variety of biochemical as well as cellular assays. We underline that formate specifically promotes invasion, and not migration, in different cancer types. Furthermore, we now demonstrate that inhibition of formate overflow results in decreased invasiveness of GBM cells ex vivo and in vivo. Using breast cancer models, we also obtain initial evidence that formate does not only promote local cancer cell invasion but also metastasis formation in vivo, suggesting that locally increased formate concentrations within the tumour microenvironment promote cancer cell motility and dissemination.
Mechanistically, we uncover a previously undescribed interplay in which formate acts as a trigger to alter fatty acid metabolism, which in turn affects cancer cell invasiveness and metastatic potential via matrix metalloproteinase (MMP) release. Gaining a better mechanistic understanding of formate overflow, and of how formate promotes invasion in cancer, may contribute to preventing cancer cell dissemination, one of the main reasons for cancer-related mortality.

Lai, Adelene
Doctoral thesis (2022)

In most societies, using chemical products has become a part of daily life. Worldwide, over 350,000 chemicals have been registered for use in, e.g., daily household consumption, industrial processes, agriculture, etc. However, despite the benefits chemicals may bring to society, their production, usage, and disposal, which lead to their eventual release into the environment, have multiple implications. Anthropogenic chemicals have been detected in myriad ecosystems all over the planet, as well as in the tissues of wildlife and humans. The potential consequences of such chemical pollution are not fully understood, but links to the onset of human disease and threats to biodiversity have been attributed to the presence of chemicals in our environment. Mitigating the potential negative effects of chemicals typically involves regulatory steps and multiple stakeholders. One key aspect thereof is environmental monitoring, which consists of environmental sampling, measurement, data analysis, and reporting.
In recent years, advancements in Liquid Chromatography-High Resolution Mass Spectrometry (LC-HRMS), open chemical databases, and software have enabled researchers to identify known (e.g., pesticides) as well as unknown environmental chemicals, commonly referred to as suspect or non-target compounds. However, identifying unknown chemicals, particularly non-targets, remains extremely challenging because of the lack of a priori knowledge of the analytes: all that is available are their mass spectrometry signals. In fact, the number of unknown features in a typical mass spectrum of an environmental sample is in the range of thousands to tens of thousands, which requires feature prioritisation before identification within a suitable workflow. In this dissertation work, collaborations with two regulatory authorities responsible for environmental monitoring sought to identify relevant unknown compounds in the environment, specifically by developing computational workflows for unknown identification in LC-HRMS data. The first collaboration culminated in Publication A, a joint project with the Zürcher Amt für Wasser, Energie und Luft. Environmental samples taken from wastewater treatment plant sites in Switzerland were retrospectively analysed using a pre-screening workflow that prioritised features suitable for non-target identification. For this purpose, a multi-step Quality Control algorithm that checks the quality of mass spectral data in terms of peak intensities, alignment, and signal-to-noise ratio was developed and used within pre-screening. This algorithm was incorporated into the R package Shinyscreen. Features that were prioritised by pre-screening then underwent identification using the in silico fragmentation tool MetFrag.
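A pre-screening step of this kind can be illustrated with a minimal sketch. The threshold values and the simple intensity/noise model below are assumptions for illustration, not Shinyscreen's actual QC criteria.

```python
# Minimal feature-prioritisation sketch: keep LC-HRMS features whose
# intensity and signal-to-noise ratio pass quality thresholds.
# Both thresholds are illustrative assumptions.
MIN_INTENSITY = 1e5
MIN_SNR = 3.0

def prioritise(features):
    """features: list of dicts with 'mz', 'intensity', and 'noise' keys.
    Returns the m/z values of features passing both checks."""
    keep = []
    for f in features:
        snr = f["intensity"] / f["noise"] if f["noise"] > 0 else float("inf")
        if f["intensity"] >= MIN_INTENSITY and snr >= MIN_SNR:
            keep.append(f["mz"])
    return keep

features = [
    {"mz": 239.0628, "intensity": 5e5, "noise": 2e4},  # SNR 25 -> keep
    {"mz": 301.1410, "intensity": 8e4, "noise": 1e3},  # too weak -> drop
    {"mz": 412.2001, "intensity": 2e5, "noise": 1e5},  # SNR 2 -> drop
]
print(prioritise(features))  # [239.0628]
```

Only the surviving features would then be passed to an in silico fragmenter such as MetFrag, which keeps the identification effort focused on signals of usable quality.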
To obtain these identifications, MetFrag was coupled to various open chemical information resources, such as spectral databases (MassBank Europe and MassBank of North America) as well as suspect lists from the NORMAN Suspect List Exchange and the CompTox Chemicals Dashboard database. One confirmed and twenty-one tentative compound identifications were achieved and reported according to an established confidence-level scheme. Comprehensive data interpretation and detailed communication of MetFrag's results were performed as a means of formulating evidence-based recommendations that may inform future environmental monitoring campaigns. Building on the pre-screening and identification workflow developed in Publication A, Publication B resulted from a collaboration with the Luxembourgish Administration de la gestion de l'eau that sought to identify, and where possible quantify, unknown chemicals in Luxembourgish surface waters. More specifically, surface water samples collected as part of a two-year national monitoring campaign were measured using LC-HRMS and screened for pharmaceutical parent compounds and their transformation products. Compared to pharmaceutical compound information, which is publicly available from local authorities (and was used in the suspect list), information on transformation products is relatively scarce. Therefore, new approaches were developed in this work to mine data from the PubChem database as well as from the literature in order to formulate a suspect list containing pharmaceutical transformation products in addition to their parent compounds. Overall, 94 pharmaceuticals and 14 transformation products were identified, of which 88 and 2, respectively, were confirmed identifications. The spatio-temporal occurrence and distribution of these compounds throughout the Luxembourgish environment were analysed using advanced data visualisations that highlighted patterns in certain regions and time periods of high incidence.
These findings may support future chemicals-management measures, particularly in environmental monitoring. Another challenging aspect of managing chemicals is that they mostly exist as complex mixtures, in the environment as well as in chemical products. Substances of Unknown or Variable composition, Complex reaction products or Biological materials (UVCBs) make up 20-40% of international chemical registries and include chlorinated paraffins, polymer mixtures, petroleum fractions, and essential oils. However, little is known about their chemical identities and/or compositions, which poses formidable obstacles to assessing their environmental fate and toxicity, let alone identifying them in the environment. Publication C addresses the challenges of UVCBs by taking an interdisciplinary approach in reviewing the literature, incorporating considerations of their chemical representations, toxicity, environmental fate, exposure, and regulatory approaches. Improved substance-registration requirements, grouping techniques to simplify assessment, and the use of Mixture InChI to represent UVCBs in a findable, accessible, interoperable, and reusable (FAIR) way in databases are amongst the key recommendations of this work. A specific type of UVCB, mixtures of homologous compounds, is commonly detected in environmental samples and includes many High Production Volume (HPV) compounds such as surfactants. Compounds forming homologous series are related by a common core fragment and a repeating chemical subunit, and can be represented using general formulae (e.g., CnF2n+1COOH) and/or Markush structures. However, a significant identification bottleneck is the inability to match their characteristic analytical signals in LC-HRMS data with chemicals in databases; while comb-like elution patterns and constant differences in mass-to-charge ratio indicate the presence of homologous series in samples, most chemical databases do not contain annotated homologous series.
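The constant mass-difference regularity just described can be sketched as follows. Note the caveat: OngLai (introduced below in the source text's Publication D) works on chemical structures via the RDKit, not on masses; this mass-difference toy only illustrates the underlying pattern, and the tolerance value is an assumption.

```python
# Toy detection of a homologous series from measured masses: members of a
# CH2-repeat series differ by ~14.01565 Da. The tolerance is an assumed
# illustrative value, and the example masses are hypothetical.
CH2 = 14.01565  # monoisotopic mass of a CH2 unit, Da

def find_ch2_series(masses, tol=0.002):
    """Group sorted masses into chains whose successive differences equal
    one CH2 unit (within tol). Returns chains with >= 3 members."""
    masses = sorted(masses)
    chains, chain = [], [masses[0]]
    for m in masses[1:]:
        if abs(m - chain[-1] - CH2) <= tol:
            chain.append(m)
        else:
            if len(chain) >= 3:
                chains.append(chain)
            chain = [m]
    if len(chain) >= 3:
        chains.append(chain)
    return chains

# Surfactant-like example: four masses spaced by one CH2 each, plus an outlier.
masses = [271.2637, 285.2794, 299.2950, 313.3107, 320.1]
print(find_ch2_series(masses))  # one chain of four members
```

Working on structures instead of masses, as OngLai does, avoids false matches from unrelated compounds that happen to differ by 14 Da.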
To address this gap, Publication D introduces a cheminformatics algorithm, OngLai, to detect homologous series within compound datasets. OngLai, openly implemented in Python using the RDKit, detects homologous series based on two inputs: a list of compounds and the chemical structure of a repeating unit. OngLai was applied to three open datasets from environmental chemistry, exposomics, and natural products, in which thousands of homologous series with a CH2 repeating unit were detected. Classification of homologous series in compound datasets is expected to advance their analytical detection in samples. Overall, the work in this dissertation contributed to the advancement of identifying and managing unknown chemicals in the environment using cheminformatics and computational approaches. All work conducted followed Open Science and FAIR data principles: all code, datasets, analyses, and results generated, including the final peer-reviewed publications, are openly available to the public. These efforts are intended to spur further developments in unknown chemical identification and management towards protecting the environment and human health.

Citeroni, Nicole
Doctoral thesis (2022)

Adeleye, Damilola
Doctoral thesis (2022)

Cu(In,Ga)S2 is a chalcopyrite material suitable as the higher-bandgap top cell in tandem applications in next-generation multijunction solar cells. This owes primarily to the tunability of its bandgap, from 1.5 eV in CuInS2 to 2.45 eV in CuGaS2, and its relative stability over time.
Currently, a major hindrance to the potential use of Cu(In,Ga)S2 in tandem capacity remains its deficient single-junction device performance, in the form of a low open-circuit voltage (VOC) and low efficiency. Aside from interfacial recombination, which leads to losses in the completed Cu(In,Ga)S2 solar cell, deficiencies stem from a low optoelectronic quality of the Cu(In,Ga)S2 absorber, quantified by the quasi-Fermi level splitting (QFLS), which serves as the upper limit of the VOC achievable by a solar cell device. In this thesis, the QFLS is compared with the theoretical VOC in the radiative limit (SQ-VOC), and the “SQ-VOC deficit” is defined as the difference between SQ-VOC and QFLS, a comparable measure of the optoelectronic deficiency in the absorber material. In contrast to the counterpart Cu(In,Ga)Se2 absorber, which has produced highly efficient solar cell devices, the Cu(In,Ga)S2 absorber still suffers from a high SQ-VOC deficit. However, the SQ-VOC deficit in Cu(In,Ga)S2 can be reduced by growing the absorbers under Cu-deficient conditions. For the effective use of Cu(In,Ga)S2 as the top cell in tandem with Si or Cu(In,Ga)Se2 as the bottom cell, an optimum bandgap of 1.6-1.7 eV is required, and this is realized in absorbers with Ga content up to a [Ga]/([Ga]+[In]) ratio of 0.30-0.35. However, the increase of Ga in Cu-poor Cu(In,Ga)S2 poses a challenge to the structural and optoelectronic quality of the absorber: the formation of segregated Ga phases with a steep Ga/bandgap gradient limits the quality of the Cu(In,Ga)S2 absorber layer, leading to a high SQ-VOC deficit, a low open-circuit voltage, and overall poor performance of the finalized solar cell. In this work, the phase segregation in Cu(In,Ga)S2 has been circumvented by employing higher substrate temperatures and adapting the Ga flux during the first stage of deposition when growing the Cu(In,Ga)S2 absorbers.
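The two headline quantities above lend themselves to a short worked sketch. The endpoint bandgaps (1.5 eV for CuInS2, 2.45 eV for CuGaS2) come from the text; everything else is an assumption. In particular, a purely linear interpolation would place 1.6-1.7 eV at a [Ga]/([Ga]+[In]) ratio well below the 0.30-0.35 the text reports, which implies a non-linear (bowing) term whose magnitude the text does not give, so the bowing parameter below is left as a free, illustrative input.

```python
# Illustrative arithmetic for the quantities defined above. Endpoint
# bandgaps are from the text; the bowing parameter b and the QFLS/VOC
# numbers are assumed illustrative values.
def bandgap(ggi: float, b: float = 0.0) -> float:
    """Bandgap (eV) of CuIn(1-x)Ga(x)S2 for x = [Ga]/([Ga]+[In]):
    linear interpolation minus an optional bowing term b*x*(1-x)."""
    return 1.5 * (1 - ggi) + 2.45 * ggi - b * ggi * (1 - ggi)

def sq_voc_deficit(sq_voc: float, qfls_eV: float) -> float:
    """SQ-VOC deficit (V): radiative-limit VOC minus QFLS (in eV, i.e.
    already divided by the elementary charge q)."""
    return sq_voc - qfls_eV

print(round(bandgap(0.32), 3))               # linear estimate, b = 0
print(round(sq_voc_deficit(1.30, 1.05), 2))  # hypothetical numbers
```

A smaller deficit means the absorber converts more of its radiative-limit potential into measurable quasi-Fermi level splitting, which is exactly the quantity the growth optimizations in this thesis target.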
A more homogeneous Cu(In,Ga)S2 phase and an improved Ga/bandgap gradient are achieved by optimizing the Ga flux at a higher substrate temperature, yielding a Cu(In,Ga)S2 absorber with high optoelectronic quality and a low SQ-VOC deficit. Additionally, the variation of the Cu-rich phase when growing the Cu(In,Ga)S2 absorber layers was found not only to alter the notch profile and bandgap minimum of the absorbers, but also to influence the optoelectronic quality of the absorber. A shorter Cu-rich phase in the absorbers led to a narrower notch profile and a higher bandgap. Ultimately, several steps in the three-stage deposition method used for processing the Cu(In,Ga)S2 absorbers were revised to enhance the overall quality of the absorbers. Consequently, the SQ-VOC deficit in high-bandgap Cu(In,Ga)S2 absorbers is significantly reduced, leading to excellent device performance. This thesis also examines the temperature- and composition-related optoelectronic improvement in pure Cu-rich CuInS2 absorbers without Ga, where the improvement in QFLS was initially linked to a reduction of nonradiative recombination channels with higher deposition temperatures and increased Cu content. Findings from photoluminescence decay measurements show that the origin of the improved QFLS in CuInS2 is instead linked to changes in doping levels with variations of deposition temperature and Cu content. Finally, in order to understand and gain insight into the influence of Ga in Cu(In,Ga)S2, the electronic structure of CuGaS2 absorbers was investigated as a function of excitation intensity and temperature by low-temperature photoluminescence measurements. A shallow donor level and three acceptor levels were detected. It was found that acceptor levels which are shallow in CuInSe2 and CuGaSe2 become deeper in CuGaS2.
These deep defects serve as nonradiative recombination channels, and their appearance in the Ga-containing compound is detrimental to the optoelectronic quality of Cu(In,Ga)S2 absorbers as the Ga content is increased, thereby limiting the optimum performance of Cu(In,Ga)S2 devices.

Kchouri, Bilal
Doctoral thesis (2022)

Grant, Erica Taylor
Doctoral thesis (2022)

A growing number of diseases have been linked to aberrations in the interaction between diet, gut microbiota, and host immune function. Understanding these complex dynamics will be critical for the development of personalized therapeutic regimens to improve health outcomes. In mice colonized with a defined, 14-member synthetic human microbial community, mucin-degrading bacteria proliferate and are suspected to contribute to thinning of the colonic mucus layer and enhanced pathogen susceptibility. This dissertation investigates three aspects of diet-microbiome-host interactions in healthy models. In the first chapter, we investigate this question in early life by assessing the impact of the maternal microbiota and fiber deprivation on immune development in pups. Next, we leverage an adult mouse model to ascertain the effects of specific fiber types on bacterial metabolic output and host immunity. Finally, we translate this work into humans by examining the effects of a high- and a low-fiber diet on host mucolytic bacteria populations and early inflammatory shifts in healthy adults. Interim analyses indicate that the mouse findings are highly translatable to humans, with similar changes in composition and enzymatic activities according to fiber intake.
By implementing a bench-to-bed-to-bench research approach, this work aims to expand the range of commensals that can be considered as potential biomarkers of early barrier disruption or targeted using customized diet-based approaches.

Usanova, Ksenia
Doctoral thesis (2022)

Recent research has determined that talent management is a highly context-sensitive phenomenon. Indeed, the way talent is defined and managed varies from one context to another. Although talent management has been studied for the last two decades, the majority of scientific works still focus on the context of large multinational corporations, with a prevalence of managerial views. Therefore, this thesis aims to contribute to the literature by challenging the dominant understandings of talent management through examining the phenomenon in contexts that are less explored. To that end, four empirical studies were conducted, which constitute this thesis. The first study explores how talent is defined and managed in the not-for-profit sector. Based on interviews with 34 leaders of 34 mission-driven organizations, it offers a unique definition of talent and an understanding of how TM is implemented in this sector. The second study analytically contextualizes talent management in micro-, small- and medium-sized enterprises. Based on 31 interviews with TM leaders of 27 aerospace companies, this research proposes three types of TM in this context, namely "strategic", "entrepreneurial" and "ad hoc". The third study, set in the high-technology industry, explores the understanding of talent management not only from the perspective of managers but also from that of talent.
It is based on discussions with 20 managers and 20 talents from the aerospace industry and identifies three views on TM: the talents' view, the managers' view, and a shared view. Finally, the fourth study explores gender differences in the quitting intentions of talent in the knowledge-based field. Drawing on survey responses from 119 talented individuals, it shows that gender moderates the relationships between talent's intention to quit and its main antecedents. This thesis provides an important theoretical contribution to the talent management literature and offers useful practical implications for organizational leaders, managers, talented individuals and policy-makers.

Ost, Alexander Dimitri
Doctoral thesis (2022)

The progressive trend to miniaturize samples presents a challenge to materials characterization techniques in terms of both lateral resolution and chemical sensitivity. The latest generation of focused ion beam (FIB) platforms has enabled advances in a variety of different fields, including nanotechnology, geology, soil, and life sciences. State-of-the-art ultra-high-resolution electron microscopy (EM) devices coupled with secondary ion mass spectrometry (SIMS) systems have made it possible to perform in-situ morphological and chemical imaging of micro- and even nano-sized objects, allowing materials to be better understood by studying their properties correlatively. However, SIMS images are prone to artefacts induced by the sample topography, as the sputtering yield changes with the incidence angle of the primary ion beam. Knowing the exact sample topography is therefore crucial to understanding SIMS images.
Moreover, using non-reactive primary ions (Ne+) produced in a gas field ion source (GFIS) allows SIMS imaging with an excellent lateral resolution of < 20 nm, but this comes with a lower ionization probability compared to reactive sources (e.g., Cs+), and due to the small probe sizes only a limited number of atoms are sputtered, resulting in low signal statistics. This thesis focused first on taking advantage of high-resolution in-situ EM-SIMS platforms for applications in specific research fields, and on going beyond traditional correlative 2D imaging workflows by developing adapted methodologies for 3D surface reconstruction correlated with SIMS (3D + 1). Applying this method to soil microaggregates and sediments allowed not only an enhanced visualization but also a deeper understanding of the materials' intrinsic transformation processes, in particular organic carbon sequestration in soil biogeochemistry. To gain knowledge of the influence of topography on surface sputtering, the change of the sputtering yield under light-ion bombardment (He+, Ne+) was studied experimentally on model samples for different ranges of incidence angles of the primary ion beam. This data was compared to Monte Carlo simulation results and fitted with existing sputtering model functions. We thus showed that these models, developed and studied for heavier ions (Ar+, Cs+), are also applicable to light ions (He+, Ne+). Additionally, an algorithm used to correct SIMS images with respect to topographical artefacts resulting from local changes of the sputtering yield was presented. Finally, the contribution of oxygen to positive secondary ion (SI) yields was studied for non-reactive primary ions (25 keV Ne+) under high primary ion current densities (up to 10^20 ions/(cm² · s)). It was shown that, in order to maximize and maintain a high ionization probability, oxygen needs to be provided continuously to the surface.
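The angular dependence of the sputtering yield mentioned above is commonly fitted with a Yamamura-type function. Whether this is the specific model function used in the thesis is not stated in the text, and the parameter values below are illustrative assumptions, not fitted values.

```python
import math

# Yamamura-type angular dependence of the sputtering yield:
#   Y(theta) / Y(0) = cos(theta)^(-f) * exp(-Sigma * (1/cos(theta) - 1))
# f and Sigma are material- and ion-dependent fit parameters; the defaults
# here are assumed illustrative values, not the thesis's fitted numbers.
def yield_ratio(theta_deg: float, f: float = 2.0, sigma: float = 1.5) -> float:
    c = math.cos(math.radians(theta_deg))
    return c ** (-f) * math.exp(-sigma * (1.0 / c - 1.0))

# The ratio rises above 1 at oblique incidence, peaks, then drops toward
# grazing angles - the local-angle variation that produces topography
# artefacts in SIMS images of non-flat samples.
for angle in (0, 30, 45, 60, 75):
    print(angle, round(yield_ratio(angle), 2))
```

A topography-correction algorithm of the kind described above essentially inverts such a curve: given the local surface angle, it rescales the measured signal by the predicted yield ratio.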
Secondary ion signal enhancements of up to three orders of magnitude were achieved for silicon, opening the door to SIMS imaging at both the highest spatial resolution and high sensitivity.

Mangers, Jeff. Doctoral thesis (2022). The concept of Circular Economy (CE) is gaining increasing attention as an indispensable renewal of the linear economy that does not neglect sustainable development goals. Closing resource loops and keeping resources in the system at the highest level of use for as long as possible are cited as the main goals of CE. However, the lack of consistency between the existing End-of-Life (EOL) infrastructure and the respective product designs, due to missing information exchange, hinders successful circularity of resources. This research provides a modular method to collect, process, and apply EOL process data, supplying the Beginning-of-Life (BOL) with important EOL knowledge through a CE-adapted product design assessment. EOL data are collected using a Circular Value Stream Mapping (CVSM), EOL information is processed using a digital state-flow representation, and EOL knowledge is applied through a graphical user interface for designers. The method is verified by a simulation model that serves as a decision-support tool for product designers in the context of a PET bottle case study in Luxembourg. The goal is to enable a circular flow of resources by reflectively aligning product design with the relevant EOL infrastructure. Within the linear economy, the focus has been on improving production processes while neglecting what happens to a product after its use.
The developed method makes it possible to consider, when designing products, not only the requirements of users but also those of the actual end users of a product: the EOL process chains.

Fabiani, Ginevra. Doctoral thesis (2022). The interaction between topography and climate has a crucial role in shaping forest composition and structure. Understanding how ecohydrological processes across the landscape affect tree performance becomes especially important with the expected reduction in water availability and increase in water demand, which could enhance the thermal and hydrologic gradients along the slope. Incorporating soil moisture variation and groundwater gradients across the landscape has been found to improve the capacity to predict forest vulnerability and water fluxes in complex terrains. However, most of the information that can be retrieved by remote sensing techniques cannot capture small-scale processes. Therefore, hillslope- to catchment-scale studies can shed light on ecosystem responses to spatially and temporally variable growing conditions. In the present work, I investigated how hillslope position affects trees' physiological response to environmental controls (i.e., soil moisture, vapor pressure deficit, groundwater proximity to the surface) and tree water use in two hillslope transects (Chapters 1 and 3). Sap velocity and isotopic measurements were applied along two hillslope transects characterized by contrasting slope angles, climates, and species compositions. We found that the different hydrological processes occurring at the two sites lead to contrasting physiological responses and water uptake strategies.
In the Weierbach catchment, the lack of shallow downslope water redistribution through interflow leads to no substantial differences in vadose-zone water supply between hillslope positions and, ultimately, no spatial differences in the trees' physiological response to environmental drivers. Furthermore, beech and oak trees displayed different stomatal control resulting from their water uptake strategies and physiology. In the Lecciona catchment, the greater soil moisture content at the footslope, promoted by the steep slope, led to more suitable growing conditions and a longer growing season in the piedmont zone. These results emphasize the strong interconnection between vegetation, climate, and hydrological processes in complex terrains, and the need to consider them as a whole to better understand future ecosystem responses to a changing climate. Additionally, the present work sheds new light on the complex interaction between sapwood and heartwood. In Chapter 2, I provide experimental evidence of water isotopic exchange between the two compartments in four tree species (Fagus sylvatica, Quercus petraea, Pseudotsuga menziesii, and Picea abies) characterized by different xylem anatomy and timing of physiological activity. While the two functional parts display a consistent difference in isotopic composition in conifers, they show more similar values in broadleaved species, suggesting a higher degree of water exchange. These results highlight the value of accounting for radial isotopic variation, which might otherwise lead to uncertainties concerning the origin of the extracted water in water uptake studies.

Nezhelskii, Maksim. Doctoral thesis (2022).
This thesis consists of three main chapters, which study different topics in financial economics. The first two chapters are applied theory studies of heterogeneous agents in continuous time, where the primary focus is the endogenised portfolio choice of risky assets by agents in a general equilibrium framework. While chapter 1 studies risky-asset allocation in general, trying to match inequality data, chapter 2 models housing choice and studies the effects of different shocks on the real estate market. These first two chapters are working papers written jointly with Christos Koulovatianos. Chapter 3 is an empirical paper on post-earnings announcement drift and how to better capture this anomaly using a larger set of publicly available information. This third working paper is written jointly with Anna Ignashkina. Chapter 1 is entitled “Income and wealth inequality in heterogeneous-agent models in continuous time.” In this chapter we analyse wealth inequality and how it is affected by heterogeneity in risk-taking patterns. Wealth inequality in the United States has reached unprecedented levels over the last thirty years, and the puzzle of the heavy tail of the wealth distribution remains unresolved. We build a heterogeneous-agent model in continuous time with endogenous portfolio choice to test whether the risk-taking of the wealthy can explain the thick upper tail of the wealth distribution in US data. We incorporate into our model the recent evidence of Guvenen et al. (2014) on the non-normality of the income process. We find that asset holdings play an important role in explaining increased inequality, especially when accompanied by a non-normal income process. In both general equilibrium and partial equilibrium settings we show that the non-normality of the income process contributes significantly to the formation of a convex risk-taking pattern against income.
We also find that the rise in the volatility of capital markets observed in the last 30 years can explain trends in inequality and interest rates. Chapter 2 is entitled “A Heterogeneous-Agent Model of Household Mortgages in Luxembourg: Responses to the Covid-19 Shock.” As is well known, the Covid-19 pandemic lockdowns did not affect every worker in the same way. Professions involving more social contact were more adversely affected by the lockdowns, experiencing severe income losses, while many services professions could continue working remotely as before, experiencing no income losses. In order to study the impact of these asymmetric idiosyncratic income shocks on household balance sheets in Luxembourg and on house prices, we calibrate a continuous-time heterogeneous-agent model of homeownership to pre- and post-Covid income data. We compute the transition dynamics of the net-worth distribution of households and study alternative scenarios of shocks to mortgage rates that may stem from overall credit market conditions and central-bank policies. Our general result is that the mortgage market in Luxembourg is resilient. Yet our model raises alerts for some vulnerable households and provides a tool for future policy evaluation. Chapter 3 is entitled “Information aggregation and post-earnings announcement drift.” In this chapter we propose a new measure of surprise information that aggregates the different signals arriving together with earnings reports, complementing the standard earnings-surprise measure in the analysis of post-earnings announcement drift (PEAD). We find that new factors, such as revenue surprises and aggregated non-financial information available in earnings reports, are important determinants of post-earnings returns. Surprisingly, these new factors amplify, rather than mitigate, the PEAD anomaly.
In dynamic portfolios, weekly returns to PEAD increase by 72 basis points when more financial metrics are taken into account, compared to the standard approach. Similarly, through analyses of textual metrics, we demonstrate that changes in the text are associated with a longer drift.

Mashhood, Muhammad. Doctoral thesis (2022). Additive Manufacturing (AM) is a scalable, flexible, and promising way of fabricating parts. It forms a product of the desired design by depositing material layer upon layer to print the object in 3D. It has a vast field of applications, from prototyping to the manufacturing of sophisticated parts for the space and aeronautical industries. It has even found its way into biological research and the development of implants and artificial organs. Depending on the form of the raw material and the mechanism of layer-by-layer printing, there are different AM techniques for metal part production. One of them is Selective Laser Melting (SLM). This process uses raw material in the form of metal powder. To manufacture the product, the powder is first melted by a moving laser; it then solidifies and joins the already solidified structure in the layer below. The laser traces the 2D cross-section of the design to be consolidated at the corresponding height. This process involves repetitive heating and cooling of the material, which causes sharp thermal gradients in the object. Because of such gradients, the material consistently undergoes thermal loading during manufacturing.
Such thermal loading induces residual stress and permanent distortion in the manufactured part. Residual stress and thermally induced distortions affect the quality of the part and cause a mismatch in dimensions between the final product and the required design. To reduce the waste of raw material and energy, it is therefore important to predict such problems beforehand. This research work presents a numerical simulation platform which simulates, in a virtual environment, the part-scale AM SLM manufacturing process and the subsequent cooling of the part. The objective of establishing this platform was to evaluate residual stress and thermal distortion. It included the modelling of thermal and structural analyses and their coupling into a multi-physics simulation tool. A transient thermal analysis with an elastoplastic, non-linear material model was implemented to capture the permanent deformation behaviour of the material under thermal loading. The modelling was done with open-source numerical analysis tools based on the Finite Element Method (FEM), to retain flexibility in the numerical modelling. The deposition of solidified material was modelled via an element activation technique. To synchronize the activation of elements in the multi-physics FEM solver with the laser movement, an interface with AM G-code-based machine data was implemented. With this modelling strategy, simulation experiments were conducted to analyse the evolution of thermal gradients, residual stress, and deformation during part manufacturing. The study also highlights the challenges and limitations of the applied element activation technique, and how the simulation predictions vary with different material deposition methods.
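The element activation idea referred to above can be illustrated with a toy sketch: elements are created "quiet" with negligible stiffness and switched to full stiffness layer by layer as deposition progresses. The data structures and the stiffness-scaling value below are hypothetical illustrations, not the thesis's solver code:

```python
# Toy illustration of quiet-element activation in a layer-by-layer build.
# Inactive elements keep a negligible stiffness scale so they contribute
# (almost) nothing to the assembled system; activation restores full
# material stiffness once deposition reaches their layer.

INACTIVE_SCALE = 1e-9  # near-zero stiffness multiplier for quiet elements

class Element:
    def __init__(self, layer):
        self.layer = layer
        self.stiffness_scale = INACTIVE_SCALE  # every element starts quiet

def activate_up_to(elements, current_layer):
    """Activate all elements whose layer has already been deposited."""
    for e in elements:
        if e.layer <= current_layer:
            e.stiffness_scale = 1.0

# Build a 3-layer toy mesh with 4 elements per layer.
mesh = [Element(layer) for layer in range(3) for _ in range(4)]
activate_up_to(mesh, 1)  # layers 0 and 1 deposited, layer 2 still powder
active = sum(e.stiffness_scale == 1.0 for e in mesh)
```

In a real solver the activation schedule would be driven by the G-code laser path rather than a simple layer counter, which is precisely the synchronization problem the thesis addresses.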
Moreover, the numerical results of the established simulation platform were compared with experimental data and validated simulation data to ensure reliability. In this comparative study, the current numerical strategy replicated the trends of stress and deformation observed in physical experiments and represented the expected material behaviour in the manufactured part. Additionally, the skills gained in results handling and validation were also applied in another field of numerical modelling: the numerical analysis of a blast furnace with the coupled Computational Fluid Dynamics - Discrete Element Method (CFD-DEM) multi-physics platform eXtended Discrete Element Method (XDEM). Through its AM G-code machine-data interface with the numerical solver, the simulation platform can help manufacturing engineers predict, ahead of production, the thermally induced residual stress and deformation in their AM SLM products. At the same time, given the identified challenges in the virtual depiction of material deposition, simulation developers can anticipate such limitations and make informed choices of material deposition technique in their AM SLM process modelling. Finally, with this simulation tool as a basic building block, it may also provide the opportunity to build multi-scale numerical techniques on top of it and to contribute to multidisciplinary research on Artificial Intelligence-based digital twins.

Ul Haq, Fitash. Doctoral thesis (2022).
With the recent advances of Deep Neural Networks (DNNs) in real-world applications, such as Automated Driving Systems (ADS) for self-driving cars, ensuring the reliability and safety of such DNN-Enabled Systems (DES) emerges as a fundamental topic in software testing. Automatically generating new and diverse test data that lead to safety violations of a DES presents the following challenges: (1) there can be many safety requirements to consider at the same time, (2) running a high-fidelity simulator is often very computationally intensive, (3) the space of all possible test data that may trigger safety violations is too large to be exhaustively explored, (4) depending upon the accuracy of the DES under test, it may be infeasible to find a scenario causing violations for some requirements, and (5) DNNs are often developed by a third party who does not provide access to the internals of the DNNs. In this dissertation, in collaboration with IEE sensing, we address the aforementioned challenges by providing scalable and practical automated solutions for testing Deep Learning (DL) models and systems. Specifically, we present the following in the dissertation. 1. We conduct an empirical study to compare offline testing and online testing in the context of Automated Driving Systems (ADS). We also investigate whether simulator-generated data can be used in lieu of real-world data, and whether offline testing results can help reduce the cost of online testing. 2. We propose an approach that uses many-objective search algorithms tailored for test suite generation to generate test data for DNNs with many outputs. We also demonstrate a way to learn conditions that cause the DNN to mispredict its outputs. 3.
In order to reduce the number of computationally expensive simulations, we propose an automated approach, SAMOTA, to generate data for DNN-enabled automated driving systems using many-objective search and surrogate-assisted optimisation. 4. The environmental conditions (e.g., weather, lighting) often stay the same during a simulation, which can limit the scope of testing. To address this limitation, we present an automated approach, MORLAT, to dynamically interact with the environment during simulation. MORLAT relies on reinforcement learning and many-objective optimisation. We evaluate our approaches using state-of-the-art deep neural networks and systems. The results show that our approaches perform statistically better than the alternatives.

Zhou, Yang. Doctoral thesis (2022). Neutrophils are important actors of the immune system, particularly through the release of cytokines in the inflammatory environment. This process must be highly orchestrated to avoid cell overactivation and unwanted tissue damage. However, this elegant regulation is still only partially understood and poorly characterized. Increasing evidence over the years shows that Ca2+ is actively involved in cytokine secretion, but the gaps in our knowledge of the relationship between these two phenomena need to be filled. To this end, in this study we investigated the Ca2+-dependent mechanisms underlying cytokine secretion in neutrophils. The differentiated myeloid cell line HL-60 (dHL-60) was used as a cell model, since primary neutrophils cannot be genetically modified. Our results showed that the mobilization of several cytokines, notably IL-8, was up-regulated in a time-dependent manner by a pro-inflammatory stimulus (fMLF).
Intracellular flow cytometry staining experiments provided evidence of the presence of preformed IL-8 as well as de novo synthesis of IL-8. Changes in intracellular Ca2+ levels, resulting from extracellular Ca2+ entry, were shown to be indispensable for efficient secretion of CCL2, CCL3, CCL4, and IL-8, even if an additional signal appears to be required for the release of IL-8. Ca2+-dependent cytokine secretion was associated with the store-operated Ca2+ entry (SOCE) mechanism and probably relies on the Ca2+ sensor STIM1. Since the Ca2+-binding proteins S100A8/A9 have previously been reported to be key actors in the regulation of neutrophil NADPH oxidase activation, we hypothesized that Ca2+ signals could be converted into cytokine secretion through intracellular S100A8/A9. Knockdown studies performed in mouse (Hoxb8 cells) and human (dHL-60) neutrophil models confirmed the involvement of S100A8/A9 in the regulation of cytokine secretion. Moreover, our data support the view that part of cytokine secretion occurs through the degranulation process. Finally, we investigated the post-transcriptional mechanism involved in the regulation of S100A8/A9 expression and thus in the control of cytokine secretion. Based on prediction network analysis, miR-132-5p was identified as a potential regulator of S100A8/A9. Stable overexpression of miR-132-5p in dHL-60 cells caused a strong inhibition of S100A8/A9 expression and IL-8 secretion, underlining the preponderant role of miR-132-5p-regulated S100A8/A9 expression in the pro-inflammatory response. To summarize, for the first time we show that Ca2+-dependent cytokine secretion is associated with SOCE and is regulated by intracellular S100A8/A9, which is negatively modulated by miR-132-5p to prevent excessive neutrophil activation and host damage.
Gomez Ramos, Borja. Doctoral thesis (2022). Midbrain dopaminergic neurons (mDANs) control voluntary movement, cognition, and reward behavior and are implicated in human diseases such as Parkinson's disease (PD). Many transcription factors (TFs) controlling human mDAN differentiation have been described, but much of the regulatory landscape remains undefined. The location and the low number of these cells in the brain have limited the application of epigenomic assays, as these usually require a high number of cells. Thanks to the emergence of induced pluripotent stem cell (iPSC) technology, differentiation protocols for the derivation of mDANs were developed, easing access to this neuronal subtype and facilitating its study. However, current protocols for the differentiation of human iPSCs towards mDANs produce a mixture of developmentally immature and incompletely specified cells together with more physiological cells. Differentiation protocols are based on developmental knowledge generated from animal studies, and the translation of this knowledge to humans appears not to be fully compatible. Therefore, a better understanding of human development is needed, encouraging the use of human-based models. A proper understanding of the epigenetic landscape of human mDAN differentiation will have direct implications for uncovering gene regulatory mechanisms, disease-associated variants (as most of them lie in non-coding regions of the genome), and cell identity. In this study, a human tyrosine hydroxylase (TH) iPSC reporter line was used to generate time-series transcriptomic and epigenomic profiles from differentiating mDANs.
TH is the rate-limiting enzyme for dopamine production and therefore a specific marker for mDANs. In the reporter line, mCherry was expressed under the control of the TH promoter, which allowed mDANs to be isolated from the cultures by FACS. Time-point-specific chromatin accessibility and associated TF binding motifs were integrated with paired transcriptome profiles across 50 days of differentiation using an adapted version of the EPIC-DREM pipeline. Time-point-specific gene regulatory interactions were obtained and served to identify putative key TFs controlling mDAN differentiation. Low-input ChIP-seq for histone H3 lysine 27 acetylation (H3K27ac) was performed to identify and prioritize key TFs controlled by super-enhancer regions. LBX1, NHLH1, and NR2F1/2 were found to be necessary for mDAN differentiation. Overexpression of either LBX1 or NHLH1 was also able to increase mDAN numbers. LBX1 was found to regulate cholesterol biosynthesis and translation, possibly via mTOR signaling. NHLH1 was found to be necessary for the induction of miR-124, a potent neurogenic microRNA. Interestingly, miR-124 and NHLH1 appear to be part of a positive feedback loop. Thus, the results from this study provide novel insights into the regulatory landscape of human mDAN differentiation. In addition, as the candidates identified by EPIC-DREM did not show selective expression in mDANs, the data produced were further explored for the identification of novel expression-selective TFs in these cells. ZFHX4 was selected as a relevant TF for mDANs that was also downregulated in PD patients. It presented high and specific expression during development and in adult mDANs from human brains. Depletion of ZFHX4 during differentiation affected mDAN neurogenesis. However, CRISPR-mediated overexpression of ZFHX4 during differentiation did not affect mDAN numbers. Transcriptomic analysis revealed a role of ZFHX4 in controlling the cell cycle and cell division in mDANs.
ZFHX4 appears to regulate the cell cycle through interaction with E2F TFs and the NuRD complex, as these proteins have also been associated with this function and appeared in the analysis performed. Overall, the present study provides a novel profile of mDANs during differentiation that can be used for many applications beyond the one presented here, such as the identification of disease-associated variants affecting these neurons. Incorporating epigenetic information into current transcriptomic knowledge deepened the understanding of this neuronal subtype and uncovered important pathways involved in the biology of these cells, most probably with implications for disease.

Spignoli, Lorenzo. Doctoral thesis (2022).

Manisekaran, Ahilan. Doctoral thesis (2022).

Kyriakis, Dimitrios. Doctoral thesis (2022). Using RNA sequencing, we can examine distinctions between different cell types and capture a moment in time of the dynamic activities taking place inside a cell. Researchers in fields like developmental biology have quickly embraced this technology as it has improved over the past few years, and many single-cell RNA sequencing datasets are now accessible. A surge in the development of computational analysis techniques has accompanied the invention of technologies for generating single-cell RNA sequencing data. In my thesis, I examine computational methods and tools for single-cell RNA sequencing data analysis in three distinct projects.
In the fetal brain project, I sought to decipher the complexity of the human brain and its development, and the link between early fetal brain development and neuropsychiatric diseases. I provide a unique resource of fetal brain development across a number of functionally distinct brain regions, in a region-specific manner, at single-nucleus resolution. In total, I retrieved 50,937 single nuclei from four individual time points (early: gestational weeks 18 and 19; late: gestational weeks 23 and 24) and four distinct brain regions (cortical plate, hippocampus, thalamus, and striatum). In my dissertation, I also investigated the underlying mechanisms of Parkinson's disease (PD), the second most prevalent neurodegenerative disorder, characterized by the loss of midbrain dopaminergic (mDA) neurons. I examined the disease process using single cells of mDA neurons derived from human induced pluripotent stem cells (hiPSCs) expressing the ILE368ASN mutation in the PINK1 gene, at four different maturation time points. Differential expression analysis yielded a potential core network of PD development, linking known genetic risk factors of PD to mitochondrial and ubiquitination processes. In the final part of my thesis, I analyse a dataset of brain biopsies from patients with intracerebral hemorrhage (ICH) stroke. In this project, I investigated the dynamic spectrum of polarization of immune cells towards pro- and anti-inflammatory states, and sought to identify markers that could potentially be used to predict the outcome of ICH patients. Overall, my thesis discusses a wide range of single-cell RNA sequencing tools and methods, as well as how to make sense of real datasets using already-developed tools. These discoveries may eventually lead to a more thorough understanding of Parkinson's disease and ICH stroke, but also of psychiatric diseases, and may facilitate the creation of novel treatments.
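As background to the analyses summarized above: single-cell RNA sequencing workflows typically begin by normalizing each cell's counts for library size and log-transforming them before clustering or differential expression. A minimal generic sketch of that first step, illustrative only and not the specific pipeline used in the thesis:

```python
import math

def normalize_log1p(counts, scale=10_000):
    """Library-size normalize each cell (one row of raw counts) to `scale`
    total counts, then apply log(1 + x) -- a common first step in
    single-cell RNA sequencing analysis."""
    out = []
    for cell in counts:
        total = sum(cell)
        out.append([math.log1p(c * scale / total) for c in cell])
    return out

# Two toy cells with different sequencing depths but identical composition:
# after normalization their profiles coincide.
cells = [[10, 90], [100, 900]]
norm = normalize_log1p(cells)
```

The point of the normalization is visible in the toy example: the two cells differ only in sequencing depth, so their normalized profiles become identical, which is what makes downstream comparisons between cells meaningful.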
Soriano Baguet, Leticia. Doctoral thesis (2022). Th17 cells are a subset of effector CD4+ T cells essential for protection against extracellular bacteria and fungi. At the same time, Th17 cells have been implicated in the progression of autoimmune diseases, including multiple sclerosis, rheumatoid arthritis, and psoriasis. Effector T cells require energy and building blocks for their proliferation and effector function. To that end, these cells switch from oxidative and mitochondrial metabolism to fast and short pathways such as glycolysis and glutaminolysis. Pyruvate dehydrogenase (PDH) is the central enzyme connecting cytoplasmic glycolysis to the mitochondrial tricarboxylic acid (TCA) cycle. The specific role of PDH in inflammatory Th17 cells is unknown. To unravel the role of this pivotal enzyme, a mutant mouse line in which T cells do not express the catalytic subunit of the PDH complex was generated using the Cre-Lox recombination system. In this study, PDH was shown to be essential for the generation of an exclusively glucose-derived citrate pool needed for the proliferation, survival, and effector functions of Th17 cells. In vivo, mice harboring a T cell-specific deletion of PDH were less susceptible to experimental autoimmune encephalomyelitis (EAE), a mouse model of multiple sclerosis, showing lower disease burden and increased survival. In vitro, the absence of PDH in Th17 cells increased glutamine and glucose uptake, as well as glycolysis and glutaminolysis. Similarly, lipid uptake was increased through CD36 in a glutamine-mTOR axis-dependent manner.
In contrast, the TCA cycle was impaired, interfering with oxidative phosphorylation (OXPHOS) and causing cellular citrate levels to remain critically low in mutant Th17 cells. Citrate is the substrate of ATP citrate synthase (ACLY), an enzyme responsible for the generation of acetyl-CoA, which is essential for lipid synthesis and for histone acetylation, crucial for transcription. In line with this, PDH-deficient Th17 cells showed reduced expression of Th17 signature genes. Notably, increasing cellular citrate by the addition of acetate to PDH-deficient Th17 cells restored their metabolism and function. PDH was thus identified as a pivotal enzyme for the maintenance of a metabolic feedback loop within central carbon metabolism that may be relevant for therapeutically targeting Th17 cell-driven autoimmunity.

Damodaran, Aditya Shyam Shankar. Doctoral thesis (2022). Privacy-preserving protocols typically involve the use of Zero Knowledge (ZK) proofs, which allow a prover to prove to a verifier that a certain statement holds true without revealing the witness (the secret information that allows one to verify whether said statement holds true). This mechanism allows users to participate in such protocols whilst preserving the privacy of sensitive personal information. In some protocols, the need arises to reuse the information (or witnesses) used in a proof: the witnesses used in a proof must be related to those used in previous proofs. We propose Stateful Zero Knowledge (SZK) data structures, primitives that allow a user to store state information related to witnesses used in proofs and then prove subsequent facts about this information.
Our primitives also decouple state information from the proofs themselves, allowing for modular protocol design. We provide formal definitions for these primitives using a composable security framework, and go on to describe constructions that securely realize these definitions. These primitives can be used as modular building blocks to augment the security guarantees of existing protocols in the literature, to construct privacy-preserving protocols that allow for the collection of statistics about secret information, and to build protocols for other schemes that may benefit from this technique, such as those involving access control and oblivious transfer. We describe several such protocols in this thesis. We also provide computational cost measurements for our primitives and protocols by way of implementations, in order to show that they are practical for large data structure sizes. Finally, we provide a notation and a compiler that takes as input a ZK proof represented in said notation and outputs a secure SZK protocol. This adds a layer of abstraction, so that practitioners may specify the security properties and data structures they wish to use and be presented with a ready-to-use implementation, without needing to deal with the theoretical aspects of these primitives, essentially bridging the gap between theoretical cryptographic constructions and their implementation. This thesis conveys the results of the FNR CORE Junior project "Stateful Zero Knowledge".

Peracchi, Silvia
Doctoral thesis (2022)

Bai, Peiru
Doctoral thesis (2022)
Driven by accelerating globalisation, transnational families move, settle and integrate into new linguistic and cultural environments outside their countries of origin. This thesis studies language policies within three transnational families of Chinese origin in the Grand Duchy of Luxembourg, where multilingualism constitutes both a challenge and an opportunity for their integration into Luxembourgish society. Between heritage and integration, intercultural confrontation does not take place without tension. The families must, on the one hand, maintain their language of origin (Chinese) and, on the other, learn the school languages (Luxembourgish, French, German) and English within the Luxembourgish education system. Our objective is to study the parents' linguistic and educational ideologies, the development of plurilingualism in the children, and the questions of identity that merit reflection in a multilingual and intercultural environment. How do the parents conceptualise the development of their children's plurilingualism in Luxembourg? What are the parents' considerations and aspirations concerning their children's education, in the broad sense, in a multilingual and intercultural context? How, and to what extent, are the construction and negotiation of the three families' members' language ideologies articulated across the individual, family and social levels? We opted for a qualitative, ethnolinguistic approach. Semi-structured interviews, participant observation and recordings of family interactions were used as the main tools for collecting data in the field.
Thematic analysis, case study and cross-case analysis constituted our analytical approach, with an emphasis on identifying and understanding linguistic and educational ideologies. The data analysis allows us to conclude that the interplay between linguistic heritage, conformity with school requirements and the valorisation of English as a lingua franca proves all the more complex as the Chinese families have to deal with more than one language in a multilingual environment such as Luxembourg's. First, the Chinese parents demonstrate a firm commitment to maintaining standard Chinese in their children, but with varied expectations as to the competences to be acquired. Second, faced with the multitude of languages at play, they show a clear ambivalence in their conception of the role and relative priority of the different languages, as well as between an aspiration to plurilingualism and a monolingual mindset. Third, their appreciation of the value of the languages and their construction of these languages' roles rest essentially on pragmatism and identity affirmation. The study also highlights that the parents' linguistic and educational ideologies at the individual level are articulated with the social environment. This thesis advances our knowledge of family language policies in transnational families, particularly those of non-Western origin living in a Western country marked by multilingualism.

Krieger, Bastian
Doctoral thesis (2022)
This dissertation investigates, in three essays, the innovation effects of three public policies of large economic relevance, high political priority and growing scientific coverage. The first essay examines the role of including environmental selection criteria in public procurement tenders in the introduction of more environmentally friendly products, services and processes. The second essay covers the implementation of competitive large-scale university funding programs and their heterogeneous effects on regional firms' innovativeness. The third essay analyzes the liberalization of trade in foreign knowledge services and its relevance to the innovation activities of domestic firms.

Shang, Lan
Doctoral thesis (2022)

Acharya, Kishor
Doctoral thesis (2022)
Atmospheric pressure plasma has been used to enhance and/or initiate chemical vapour deposition (AP-PECVD) to deposit thin films or functional coatings over large surface areas on a wide range of substrates. The ability to localise an AP-PECVD coating on an area of interest and to control the deposit's dimensions has revealed its potential as a viable technique for Additive Manufacturing (AM). AM is a bottom-up approach in which 2-D patterns or 3-D structures are built by layer-by-layer deposition. AM allows easy design optimization and quickly provides customized parts on demand, which has made it a very popular technique in mainstream manufacturing. As such, it has wide applications in the automotive, optics, electronics, aeronautics, medical and biotechnology fields.
However, existing AM printing techniques have limitations regarding high-resolution deposition on a wide variety of substrates, and are often restricted in the types of precursors that can be printed. By contrast, thanks to the highly energetic and reactive species in non-thermal plasmas, AP-PECVD deposition has been achieved with a wide range of precursors on versatile surfaces. There has therefore been growing interest in performing area-selective, localised AP-PECVD coating, mainly by adapting the design of the PECVD reactor. Hence, this thesis aims to design, optimize and study a one-step, mask-free AP-PECVD process that can locally deposit a material of interest with high precision to perform AM. The technical approach taken by the home-built prototype "plasma torch" is to decouple the plasma-generating annular tube from the central precursor-injection capillary. This approach allows the diameter of the deposited dot to be tuned by changing the dimensions of the precursor injector, as demonstrated by the deposition of micro-dots as small as 400 µm in diameter. Further, the flexibility to move the capillary tube without significant changes to the plasma torch's overall geometry has also allowed the precursor (methyl methacrylate, MMA) to be injected selectively into the spatial plasma post-discharge region. Thanks to this setting, the deposited dot shows high retention of the monomer's chemistry (functional groups) and unprecedented molecular weights (oligomeric chains of up to 18 MMA units). Hence, a novel area-selective AP-PECVD plasma torch design has been demonstrated, and its performance in obtaining micro-resolution coatings has been characterised. During the research work, gas flow rates were identified as a crucial parameter in obtaining the localised coating, and three kinetic regimes with different coating morphologies were discovered.
By performing a thorough computational fluid dynamics (CFD) simulation of the torch, it has been possible to establish a parallel between the fluid behaviour and the deposition size. The deposition was found to be confined in a zone created by the dynamic behaviour of the gas, i.e., re-circulating vortices between the torch and the substrate. The gas flow rate was subsequently used to tune the diameter of this confinement zone, which in turn changed the diameter of the deposited dot. The gas flow dynamics affect the interaction of the involved species, i.e., reactive plasma species, precursor molecules and the open air, and their distribution on the surface of the substrate. When organosilicon precursors with or without vinyl bonds and/or ethoxy groups are used, different deposition chemistries and deposition patterns result. The correlation between the deposition patterns and the mass-fraction distribution of the involved species was obtained thanks to CFD simulations performed in parallel. Further, likely deposition mechanisms are suggested and discussed: "vinyl group opening by free radicals" for vinyl-containing precursors, resulting in silicon oxycarbide-like (SiOxCyH) deposits, and a Reactive Oxygen Species (ROS)-induced "fragmentation and adsorption" mechanism for siloxane-containing precursors, resulting in silica-like (SiOx) deposits. The understanding gained from this systematic case study underlines the importance of reactive plasma species in the underlying deposition mechanisms; hence, it is suggested that tuning or tailoring their distribution can alter the chemical nature of the deposits and their patterns. Overall, this thesis provides insight into area-selective AP-PECVD coating (plasma printing) and demonstrates that plasma technology is a viable option for additive manufacturing.
The findings should be helpful both in designing AP-PECVD plasma torches and in selecting precursors for desired organic/inorganic deposits. Thanks to the insight gained during the thesis work, the home-designed prototype of the plasma torch has been upgraded for implementation in a commercial 3-D printer.

Ramirez Sanchez, Omar
Doctoral thesis (2022)
With a record power conversion efficiency of 23.35% and a low carbon footprint, Cu(In,Ga)Se2 remains one of the most suitable solar energy materials to assist in the mitigation of the climate crisis we are currently facing. The progress seen in the last decade of Cu(In,Ga)Se2 advancement has been made possible by the development of postdeposition treatments (PDTs) with heavy alkali metals. PDTs are known to affect both surface and bulk properties of the absorber, resulting in an improvement of the solar cell parameters: open-circuit voltage, short-circuit current density and fill factor. Even though the beneficial effects of PDTs are not questioned, the underlying mechanisms responsible for the improvement, mainly the one related to the open-circuit voltage, are still under discussion. Although this improvement has been suggested to arise from a suppression of bulk recombination, the complex interplay between alkali metals and grain boundaries has complicated efforts to discern what exactly in the bulk material profits most from the PDTs. In this regard, this thesis investigates the effects of PDTs on the bulk properties of Cu(In,Ga)Se2 single crystals, i.e., it studies the effects of alkali metals in the absence of grain boundaries.
Most of the presented analyses are based on photoluminescence, since this technique gives access to information relevant for solar cells, such as the quasi-Fermi level splitting and the density of tail states, directly from the absorber layer and without the need for complete devices. This work is a cumulative thesis of three scientific publications obtained from the results of the different studies carried out. Each publication aims at answering important questions related to the intrinsic properties of Cu(In,Ga)Se2 and the effects of PDTs. The first publication presents a thorough investigation of the effects of a single heavy-alkali-metal species on the optoelectronic properties of Cu(In,Ga)Se2. In polycrystalline absorbers, the effects of potassium PDTs in the absence of sodium had previously been attributed to the passivation of grain boundaries and donor-like defects. The obtained results, however, suggest that potassium incorporated from a PDT can act as a dopant in the absence of grain boundaries and yield an improvement in quasi-Fermi level splitting of up to 30 meV in Cu-poor CuInSe2, where a type inversion from N to P is triggered upon potassium incorporation. This observation led to the second paper, which takes a closer look at how the carrier concentration and electrical conductivity of alkali-free, Cu-poor CuInSe2 are affected by the incorporation of gallium in the solid solution Cu(In,Ga)Se2. The results suggest that the N-type character of CuInSe2 can persist until the gallium content reaches a critical concentration of 15-19%, where the N-to-P transition occurs. A model based on the trends in formation energies of donor- and acceptor-like defects is presented to explain the experimental results. The conclusions drawn in this paper shed light on why CuGaSe2 cannot be doped N-type like CuInSe2.
Since a decreased density of tail states resulting from reduced band bending at grain boundaries had previously been pointed out as the mechanism behind the improvement of the open-circuit voltage after postdeposition treatments, the third publication focuses on how compositional variations and alkali incorporation affect the density of tail states in Cu(In,Ga)Se2 single crystals. The results presented in this paper suggest that increasing the copper content and reducing the gallium content leads to a reduction of tail states. Furthermore, tail states in single crystals are affected by the addition of alkali metals in much the same way as in polycrystalline absorbers, which demonstrates that tail states arise from grain-interior properties and that the role of grain boundaries is not as relevant as previously thought. Finally, an analysis of the voltage losses in high-efficiency polycrystalline and single-crystalline solar cells suggested that the doping effect caused by the alkalis affects the density of tail states through the reduction of electrostatic potential fluctuations, which are diminished due to a decrease in the degree of compensation. By taking the effect of doping on tail states into account, the entirety of the VOC losses in Cu(In,Ga)Se2 is described. The findings presented in this thesis explain the link between tail states and open-circuit voltage losses and demonstrate that the effects of alkali metals in Cu(In,Ga)Se2 go beyond grain boundary passivation. They shed light on the understanding of tail states, VOC losses and the intrinsic properties of Cu(In,Ga)Se2, a fundamental step for this technology towards the development of more efficient devices.
Kamlovskaya, Ekaterina
Doctoral thesis (2022)
The genre of Australian Aboriginal autobiography is a literature of significant socio-political importance, with authors sharing a history different from the one previously asserted by the European settlers, which ignored or misrepresented Australia's First People. While there have been a number of studies looking at works belonging to this genre from various perspectives, Australian Indigenous life writing has never been approached from a digital humanities point of view, which, given the constant development of computer technologies and the growing availability of digital sources, offers humanities researchers many opportunities for exploring textual collections from various angles. With this research work I contribute to closing the above-mentioned research gap and discuss the results of an interdisciplinary research project within which I created a bibliography of published Australian Indigenous life writing works, designed and assembled a corpus, and created word embedding models of this corpus, which I then used to explore the discourses of identity, land, sport and foodways, as well as gender biases present in the texts, in the context of postcolonial literary studies and Australian history. Studying these discourses is crucial for gaining a better understanding of contemporary Australian society as well as the nation's history. Word embedding modelling has recently been used in digital humanities as an exploratory technique to complement and guide traditional close reading approaches, which is justified by its potential to identify word use patterns in a collection of texts.
In this dissertation, I provide a case study of how word embedding modelling can be used to investigate humanities research questions and reflect on the issues researchers may face while working with such models, approaching various aspects of the research project from the perspectives of digital source and tool criticism. I demonstrate how a word embedding model of the analysed corpus represents discourses through relationships between word vectors that reflect the historical, political and cultural environment of the authors and some unique experiences and perspectives related to their racial and gender identities. I show how the narrators reconstruct the analysed discourses to achieve the main goals of Australian Indigenous life writing as a genre: reclaiming identity and rewriting history.

Fixemer, Sonja
Doctoral thesis (2022)
Worldwide, more than 55 million people suffer from incurable age-related neurodegenerative diseases and associated dementia, including Alzheimer's Disease (AD) and Dementia with Lewy bodies (DLB). AD and DLB patients share memory impairment symptoms but present specific deterioration patterns of the hippocampus, a brain region essential for memory processes. Notably, the CA1 subregion is more vulnerable to atrophy in AD patients than in DLB patients. However, it remains unclear which factors contribute to this differential subregional vulnerability.
At the neuropathological level, both AD and DLB patients frequently present an overlap of misfolded protein pathologies, with AD-typical pathologies including extracellular amyloid-β (Aβ) plaques and neurofibrillary tangles (NFTs) of hyperphosphorylated tau protein (pTau), and DLB-typical pathological inclusions of phosphorylated α-synuclein (pSyn). Recent genome-wide association studies (GWAS) have revealed many genetic AD risk factors that are directly linked to microglia and suggest that these cells play an active role in pathology. However, how microglial alterations are linked to local pathological environments, and which role microglia subpopulations play in the specific vulnerability patterns of the hippocampus in AD and DLB, remain poorly understood. This PhD thesis addressed two main aspects of microglial alterations in the post-mortem hippocampus of AD and DLB patients. The first study provided a detailed 3D characterization of microglial alterations at the individual-cell level across the CA1, CA3 and DG/CA4 subfields, and of their local association with concomitant pTau, Aβ and pSyn loads in AD and DLB. We show that the co-occurrence of these three types of misfolded proteins is frequent and follows specific subregional patterns in both diseases, but is more severe in AD than in DLB cases. Our results suggest that high burdens of pTau and pSyn, associated with increased microglial alterations, could contribute to the CA1 vulnerability in AD. The second study provided a morphological and molecular characterization of a type of microglial accumulation referred to as coffin-like microglia (CoM), using high- and super-resolution microscopy as well as digital spatial profiling. We showed that CoM were enriched in the pyramidal layer of CA1/CA2 and were not linked to Aβ plaques, but occasionally engulfed or contained NFTs or intraneuronal granular pSyn inclusions.
Furthermore, CoM were not surrounded by hypertrophic reactive astrocytes like plaque-associated microglia (PAM), but rather by dystrophic astrocytic processes. We found that the proteomic and transcriptomic signatures of CoM point toward cellular senescence and immune cell infiltration, while PAM signatures indicate oxidoreductase activity and lipid degradation. Our studies provide new insights into the complex signatures of human microglia in the hippocampus in age-related neurodegenerative diseases.

El Orche, Fatima Ezzahra
Doctoral thesis (2022)

Proverbio, Daniele
Doctoral thesis (2022)
From population collapses to cell-fate decisions, critical phenomena are abundant in complex real-world systems. Among modelling theories to address them, the critical transitions framework has gained traction for its purpose of determining classes of critical mechanisms and identifying generic indicators to detect and signal them ("early warning signals"). This thesis contributes to this research field by elucidating its relevance within the systems biology landscape, by providing a systematic classification of leading mechanisms for critical transitions, and by assessing the theoretical and empirical performance of early warning signals. The thesis thus bridges general results concerning the critical transitions field, possibly applicable in multidisciplinary contexts, and specific applications in biology and epidemiology, towards the development of sound risk monitoring systems.

Kremer, Paul
Doctoral thesis (2022)
Improvised Explosive Devices (IEDs) are an ever-growing worldwide threat.
The disposal of IEDs is typically performed by experts of the police or the armed forces with the help of specialized ground Ordnance Disposal Robots (ODRs). Unlike aerial robots, these ODRs have poor mobility, and their deployment in complex environments can be challenging or even impossible. Endowed with manipulation capabilities, aerial robots can perform complex manipulation tasks akin to those of ground robots. This thesis leverages the manipulation skills and high mobility of aerial robots to perform aerial disposal of IEDs. Being, in essence, an aerial manipulation task, this work presents numerous contributions to the broader field of aerial manipulation. It presents the mechatronic concept of an aerial ODR and a high-level view of the fundamental building blocks developed throughout the thesis. Starting with the system dynamics, a new hybrid modeling approach for aerial manipulators (AMs) is proposed that provides the closed-form dynamics of any given open-chain AM. Next, a highly integrated, lightweight universal gripper (called TRIGGER), customized for aerial manipulation, is introduced to improve grasping performance in unstructured environments. The gripper, attached to a multicopter, is tested under laboratory conditions by performing a pick-and-release task. Finally, an autonomous grasping solution is presented alongside its control architecture featuring computer vision and trajectory optimization. To conclude, the grasping concept is validated in a simulated IED disposal scenario.

Liu, Bowen
Doctoral thesis (2022)
Advances in networking and hardware technology have made the design and deployment of the Internet of Things (IoT) and decentralised applications a trend.
For example, the fog computing concept and its associated edge computing technologies push computation to the edge, so that data aggregation can be avoided to some extent. This naturally brings benefits such as efficiency and privacy, but on the other hand it forces data analysis tasks to be carried out in a distributed manner. Hence, we focus on establishing a secure channel between an edge device and a server and on performing data analysis with privacy protection. In this thesis, we first studied the state-of-the-art Key Exchange (KE) and Authenticated Key Exchange (AKE) protocols in the literature, including their security properties, security models for various security properties, and existing KE and AKE schemes of the pre-quantum and post-quantum eras with varied authentication factors. As a result of this study, a novel IoT-oriented security model for AKE protocols is introduced. In addition to the satisfaction of general security properties, we define several detailed security games for the desired properties of perfect forward secrecy, key compromise impersonation resilience and server compromise impersonation resilience. Furthermore, the study of current multi-factor AKE protocols in the literature inspired the use of big data in the IoT setting for authentication and session key establishment purposes. With this in mind, we proposed a big-data-facilitated two-party AKE protocol for IoT systems that uses big data as one of the authentication factors. We also proposed a modular framework for constructing IoT-server AKE in the post-quantum setting; it is flexible in that it can integrate a public key encryption scheme and a KE component.
In addition, we note that as IoT devices generate and collect more and more data, the need to perform data analysis increases at the same time. In order to avoid the performance limitations of IoT devices, ease the burden on the server, and guarantee the quality of service of IoT applications, we presented a privacy-preserving decentralised Singular Value Decomposition (SVD) scheme for the fog architecture, which can be considered a multi-IoT, multi-server setting, and which provides protection for the big data set. Next, to further integrate the SVD results from different subsets using a federated learning mechanism, and with privacy protection as a fundamental requirement, we proposed a privacy-preserving federated SVD scheme with secure aggregation. The results from the different edge devices are securely aggregated at the server and returned to the individual devices for further applications.

Chitic, Ioana Raluca
Doctoral thesis (2022)

Michelsen, Andreas Nicolai Bock
Doctoral thesis (2022)
In the search for non-Abelian anyonic zero modes for inherently fault-tolerant quantum computing, the hybridized superconductor - quantum Hall edge system plays an important role. Inspired by recent experimental realizations of this system, we describe it through a microscopic theory based on a BCS superconductor with Rashba spin-orbit coupling and the Meissner effect at the surface, which is tunnel-coupled to a spin-polarized integer or fractional quantum Hall edge.
By integrating out the superconductor, we arrive at an effective theory of the proximitized edge state and establish a qualitative description of the induced superconductivity. We predict analytical relations between experimentally available parameters and the key parameters of the induced superconductivity, as well as the experimentally relevant transport signatures. Extending the model to the fractional quantum Hall case, we find that both the spin-orbit coupling and the Meissner effect play central roles. The former allows for transport across the interface, while the latter controls the topological phase transition of the induced p-wave pairing in the edge state, allows for particle-hole conversion in transport at weak induced pairing amplitudes, and determines when pairing dominates over fractionalization in the proximitized fractional quantum Hall edge. Further experimental indicators are predicted for the system of a superconductor coupled through a quantum point contact to an integer or fractional quantum Hall edge, with a Pauli blockade that is robust to interactions and fractionalization as a key indicator of induced superconductivity. With these predictions we establish a more solid qualitative understanding of this important system and advance the field towards the realization of anyonic zero modes.

Colling, Joanne
Doctoral thesis (2022)

Dalle, Marie-Alix
Doctoral thesis (2022)
Water- and power-related resources (energy sources and the required materials) are both critical and crucial resources that have become ever more strategic as a result of climate change and geopolitics.
By making the vast store of salty water available for use, desalination appears to be a viable solution to the water crisis already affecting 40% of the population today. However, because existing desalination procedures are power-intensive and rely on non-renewable energy resources, their power use at large scale is unsustainable. Alternative techniques exist that are promising in terms of environmental impact, but they are not yet competitive in terms of fresh water outflow and energy efficiency. The focus of this work is on one of these alternatives, Air-Gap Membrane Distillation (AGMD), chosen because it relies on low-grade heat that is easy to collect from solar radiation or from industrial waste heat. This technique mimics the water cycle thanks to the use of a membrane, which allows the hot and cold water streams to be brought closer together. As a result, the temperature difference that drives evaporation is strengthened and the process accelerated. However, the development of a boundary layer at the membrane interface reduces this temperature difference and thus decreases the overall performance of the process. This technique therefore still requires improvements, in terms of fresh water outflow per kWh and energy use, to become industrially attractive. The goal of this thesis work is to contribute to enhancing AGMD energy efficiency and output flow by leveraging both experimental and theoretical considerations. A test facility characterizing the boundary layer based on a Schlieren method, together with an adapted AGMD module, was designed and built. By interacting with the boundary layer, the laser allows the observation of the continuous temperature profile in the hot water channel of a flat-sheet AGMD module. The measurement can be performed in close proximity to the membrane and under a variety of operational conditions (inlet hot and cold temperatures, inlet velocities). In parallel, the fresh water outflow corresponding to these experimental conditions can be measured.
Moreover, the experimental layout opens the way for further observations of the AGMD process from a different angle - such as concentration profiles or experimentation in the air gap - with very few additions. The overall experimental set-up has been used to produce a first set of data over a temperature range of 60-75 °C, which is then interpreted by a custom algorithm deriving the temperature profiles and boundary layer thicknesses. A three-dimensional heat and mass transfer model for AGMD (3DH&MT), previously developed in the research team, has been used to numerically reproduce the experimental conditions and compare the results. The comparison showed promising results, as the temperature gradients at the membrane interface and the fresh water outflows present similar orders of magnitude and trends. The accuracy of the experiment can be further increased through several adaptations to the set-up. This 3DH&MT model could be used to simulate more complex AGMD module designs, such as spiral modules, in order to optimize the operating conditions and the overall shape of the AGMD module and so enhance its performance. Finally, with the aim of improving the energy efficiency and fresh water outflow of the AGMD process, spacers are usually added in the individual channels to boost mixing and thus reduce the boundary layer thickness, which improves the evaporation flux. Two novel spacer geometries, inspired by the current industrial state of the art in mixing and by nature, have been proposed and investigated, yielding interesting results for two distinct applications. One is particularly well suited to maximizing mixing regardless of the energy used, hence improving the energy efficiency of the process. The second is optimal for minimizing energy consumption while maintaining a decent mixing result, thus enhancing the fresh water outflow of the process. A couple of indicators have also been proposed to assess the mixing performance of more complex 3D geometries.
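The boundary-layer effect described above can be illustrated with a textbook film-model estimate of the membrane interface temperatures: the thinner the thermal boundary layers (the larger the film heat-transfer coefficients), the closer the temperature-polarization coefficient gets to one. This is a generic heat-transfer sketch, not the thesis's 3DH&MT model, and all coefficient values in the example are assumptions:

```python
def interface_temperature(t_bulk_hot, t_bulk_cold, h_hot, h_cold, h_membrane):
    # Film model: three thermal resistances in series between the bulk streams
    # (hot film, membrane + air gap, cold film), all carrying the same flux.
    r_total = 1.0 / h_hot + 1.0 / h_membrane + 1.0 / h_cold
    q = (t_bulk_hot - t_bulk_cold) / r_total  # heat flux, W/m^2
    t_m_hot = t_bulk_hot - q / h_hot          # hot-side interface temperature
    t_m_cold = t_bulk_cold + q / h_cold       # cold-side interface temperature
    # Temperature-polarization coefficient: fraction of the bulk temperature
    # difference that actually remains available at the membrane.
    tpc = (t_m_hot - t_m_cold) / (t_bulk_hot - t_bulk_cold)
    return t_m_hot, t_m_cold, tpc

# Illustrative values only (not measured): 70 °C hot inlet, 20 °C cold inlet.
t_h, t_c, tpc = interface_temperature(70.0, 20.0, 2000.0, 2000.0, 500.0)
```

Doubling the film coefficients in this sketch raises the polarization coefficient, which is the same qualitative effect the spacers aim for by thinning the boundary layer.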
Overall, this work broadens current AGMD research by providing an experimental test bench enabling continuous temperature profile measurement, and by validating a 3D heat and mass transfer model. Moreover, interesting tracks for improving the design of spacers are proposed in order to reduce boundary-layer resistance and thereby improve the AGMD process's energy efficiency. AGMD is an extremely promising water treatment technique since it is applicable to a broader range of waters than just seawater. The test equipment described in this work is sufficiently adaptable to investigate this potential as well as variants of AGMD processes that might boost its attractiveness. As it is based on readily available materials and technologies, it may be used anywhere, and its reliance on a naturally available energy flow (solar radiation) makes it attractive in isolated regions.

Schifano, Sonia
Doctoral thesis (2022)

Abdu, Tedros Salih
Doctoral thesis (2022)
The application of Satellite Communications (SatCom) has recently evolved from providing simple Direct-To-Home television (DTHTV) to enabling a range of broadband internet services. Typically, it offers services to the broadcast industry, the aircraft industry, the maritime sector, government agencies, and end-users. Furthermore, SatCom has a significant role in the era of 5G and beyond in terms of integrating satellite networks with terrestrial networks, offering backhaul services, and providing coverage for Internet of Things (IoT) applications. Moreover, thanks to the satellite's wide coverage area, it can provide services to remote areas where terrestrial networks are inaccessible or expensive to connect.
Due to the wide range of satellite applications outlined above, the demand for satellite service from user terminals is rapidly increasing. Conventionally, satellites use multi-beam technology with uniform resource allocation to provide service to users/beams. In this case, the satellite's resources, such as power and bandwidth, are evenly distributed among the beams. However, this resource allocation method is inefficient since it does not consider the heterogeneous demand of each beam, which may result in a low-demand beam receiving too many resources while a high-demand beam receives too few. Consequently, some beam demands may not be satisfied. Additionally, satellite resources are limited due to spectrum regulations and onboard battery constraints, and so require proper utilization. Therefore, the next generation of satellites must address these main challenges of conventional satellites. To this end, this thesis proposes novel advanced resource management techniques that manage satellite resources efficiently while accommodating heterogeneous beam demands. In this context, the second and third chapters of the thesis explore on-demand resource allocation methods without precoding. These methods aim to closely match the beam traffic demand using minimum transmit power and bandwidth while keeping inter-beam interference tolerable. However, an advanced interference mitigation technique is required in high-interference scenarios. Thus, in the fourth chapter of the thesis, we propose a combination of resource allocation and interference management strategies to mitigate interference and meet high-demand requirements with less power and bandwidth consumption.
In this context, the performance of the resource management method is investigated and compared for systems with full precoding (all beams are precoded), without precoding (no beam is precoded), and with partial precoding (some beams are precoded). Thanks to emerging technologies, the next generation of satellite communication systems will deploy onboard digital payloads on which advanced resource management techniques can be implemented. In this case, the digital payload can be configured to change the bandwidth, carrier frequency, and transmit power of the system in response to heterogeneous traffic demands. Typically, onboard digital payloads consist of payload processors, each operating with specific power and bandwidth to process each beam signal. There are, however, only a limited number of processors, which therefore require proper management. Furthermore, the processors consume considerable energy to process the signals, resulting in high power consumption. Therefore, payload management will be crucial for future satellite generations. In this context, the fifth chapter of the thesis proposes a demand-aware onboard payload processor management method, which switches on processors according to the beam demand: for low demand, fewer processors are in use, while more processors become necessary as demand increases. Demand-aware resource allocation techniques may require optimization over a large number of variables, which may increase the computational time complexity of the system. Thus, the sixth chapter of the thesis explores methods of combining demand-aware resource allocation and deep learning (DL) to reduce the computational complexity of the system. In this case, a demand-aware algorithm enables bandwidth and power allocation, while DL speeds up computation. Finally, the last chapter provides the main conclusions of the thesis as well as future research directions.
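As a toy illustration of the demand-aware idea, the sketch below allocates a fixed bandwidth budget across beams so that low-demand beams receive no more than they ask for and the freed-up share is redistributed to high-demand beams. This is a generic max-min-style allocator written for this summary, not the optimization formulations used in the thesis:

```python
def demand_aware_bandwidth(demands, total_bw):
    # Repeatedly offer each still-unsatisfied beam an equal share; beams whose
    # remaining demand fits inside the share are capped at their demand, and
    # the leftover budget is re-offered to the rest (max-min fairness).
    alloc = [0.0] * len(demands)
    remaining = set(range(len(demands)))
    bw_left = total_bw
    while remaining and bw_left > 1e-12:
        share = bw_left / len(remaining)
        satisfied = {i for i in remaining if demands[i] - alloc[i] <= share}
        if not satisfied:
            # No beam can be fully served: split the rest equally.
            for i in remaining:
                alloc[i] += share
            bw_left = 0.0
        else:
            for i in satisfied:
                bw_left -= demands[i] - alloc[i]
                alloc[i] = demands[i]
            remaining -= satisfied
    return alloc
```

Note that when total demand is below the budget, the loop stops once every beam is capped at its demand, mirroring the "use the minimum bandwidth that matches demand" goal rather than spending the whole budget.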
Robinet, François
Doctoral thesis (2022)
The research presented in this dissertation focuses on reducing the need for supervision in two tasks related to autonomous driving: end-to-end steering and free space segmentation. For end-to-end steering, we devise a new regularization technique which relies on pixel-relevance heatmaps to force the steering model to focus on lane markings. This improves performance across a variety of offline metrics. In relation to this work, we publicly release the RoboBus dataset, which consists of extensive driving data recorded using a commercial bus on a cross-border public transport route on the Luxembourgish-French border. We also tackle pseudo-supervised free space segmentation from three different angles: (1) we propose a Stochastic Co-Teaching training scheme that explicitly attempts to filter out the noise in pseudo-labels, (2) we study the impact of self-training and of different data augmentation techniques, (3) we devise a novel pseudo-label generation method based on road plane distance estimation from approximate depth maps. Finally, we investigate semi-supervised free space estimation and find that combining our techniques with a restricted subset of labeled samples results in substantial improvements in IoU, Precision and Recall.

Ferreira Silva, Marielle
Doctoral thesis (2022)
How we design, construct and live in our houses, as well as how we travel to work, can mitigate carbon dioxide (CO2) emissions and global climate change. Furthermore, the complex world we live in is undergoing an ongoing transformation process. The housing shortage is worsening as the world population grows and cities expand, and we must therefore consider all the other issues that accompany population growth, such as increased demand for built space, mobility, expansion of cities into green areas, use of resources, and material scarcity. Throughout history, various projects have explored alternatives to the problem of social housing, such as increasing density in cities through housing complexes, fast and low-cost construction with prefabricated methods and materials, and modularisation systems. However, current architecture is not designed to meet users' future needs or to reduce environmental impact. A proposal to change this situation would be to go back to the beginning of architecture's conception and to design it differently. In addition, there is nowadays an increasing focus on moving towards sustainable and circular living spaces based on shared, adaptable and modular built environments to improve residents' quality of life. For this reason, the main objective of this thesis is to study the potential of architecture that can be reconfigured spatially and temporally, and to produce alternative generic models that reuse and recycle architectural elements and spaces for functional flexibility through time. To approach the discussion, a documentary research methodology was applied to study modular, prefabricated and ecological architectural typologies addressing recyclability in buildings. An Atlas with case studies and architectural design strategies emerged from the analyses of projects from Durant to the 21st century.
Furthermore, this thesis is part of the research project Eco-Construction for Sustainable Development (ECON4SD), which is co-funded by the EU in partnership with the University of Luxembourg, and it presents three new generic building typologies, named according to their defining characteristics: Prototype 1 - Slab typology, a building designed as a concrete shelf structure in which timber housing units can be plugged in and out; Prototype 2 - Tower typology, a tower building with a flexible floor plan combining working and residential facilities with adjacent multi-purpose facilities; and Prototype 3 - Block typology, a structure characterised by complete disassembly. The three new typologies combine modularity, prefabrication, flexibility and disassembly strategies to address the increasing demand for multi-use, reusable and resource-efficient housing units. The prototypes continually adapt to the occupants' needs, as the infrastructure incorporates repetition, exposed structure, a central core, terraces, open floors, unfinished spaces, prefabrication and combined activities, and offers housing units of reduced and varied sizes whose parts can be disassembled. They also densify the regions in which they are implemented. Moreover, the new circular typologies can offer more generous public and shared space for the occupants within the same building size as an ordinary building. The alternative design allows the reconversion of existing buildings or the reconstruction of the same buildings in other places, reducing waste and increasing their useful lifespan. Once a building has been adapted and reused as much as possible and its life cycle comes to an end, it can be disassembled and the materials sorted into reusable or recyclable resources.
The results demonstrate that circular architecture is feasible and realistic, adapts through time, increases material reuse, avoids unnecessary demolition, reduces construction waste and CO2 emissions, and extends the useful life of buildings.

Perez Becker, Nicole
Doctoral thesis (2022)
As global population and income levels have increased, so has the waste generated as a byproduct of our production and consumption processes. Approximately two billion tons of municipal solid waste are generated globally every year - that is, more than half a kilogram per person each day. This waste, which is generated at various stages of the supply chain, has negative environmental effects and often represents an inefficient use or allocation of limited resources. With growing concern about waste, many governments are implementing regulations to reduce it. Waste is often a consequence of the inventory decisions of different players in a supply chain. As such, these regulations aim to reduce waste by influencing inventory decisions. However, determining the inventory decisions of players in a supply chain is not trivial. Modern supply chains often consist of numerous players, who may each differ in their objectives and in the factors they consider when making decisions such as how much product to buy and when. While each player makes unilateral inventory decisions, these decisions may also affect the decisions of other players. This complexity makes it difficult to predict how a policy will affect profit and waste outcomes for individual players and the supply chain as a whole. This dissertation studies the inventory decisions of players in a supply chain when faced with policy interventions to reduce waste.
In particular, the focus is on food supply chains, where food waste and packaging waste are the largest waste components. Chapter 2 studies a two-period inventory game between a seller (e.g., a wholesaler) and a buyer (e.g., a retailer) in a supply chain for a perishable food product with uncertain demand from a downstream market. The buyer can differ in whether he considers factors affecting future periods or the seller's supply availability in his purchase decisions in each period - that is, in his degree of strategic behavior. The focus is on understanding how the buyer's degree of strategic behavior affects inventory outcomes. Chapter 3 builds on this understanding by investigating waste outcomes and how policies that penalize waste affect individual and supply chain profits and waste. Chapter 4 studies the setting of a restaurant that uses reusable containers instead of single-use ones to serve its delivery and take-away orders. With policy-makers discouraging the use of single-use containers through surcharges or bans, reusable containers have emerged as an alternative. Managing inventories of reusable containers is challenging for a restaurant, as both demand and returns of containers are uncertain and the restaurant faces various customer types. This chapter investigates how the proportion of each customer type affects the restaurant's inventory decisions and costs.

Mazier, Arnaud
Doctoral thesis (2022)
Background: Breast-conserving surgery is the most acceptable option for breast cancer removal from an invasive and psychological point of view.
During the surgical procedure, image acquisition using Magnetic Resonance Imaging is performed in the prone configuration, while the surgery is performed in the supine position. The considerable movement of the breast between the two poses thus causes the tumor to move, complicating the surgeon's task. Therefore, to keep track of the lesion, the surgeon employs ultrasound imaging to mark the tumor with a metallic harpoon or radioactive tags. This procedure, besides being invasive, is an additional source of uncertainty. Consequently, developing a numerical method to predict tumor movement between the imaging and intra-operative configurations is of significant interest. Methods: In this work, a simulation pipeline allowing the prediction of patient-specific breast tumor movement was put forward, including personalized preoperative surgical drawings. Through image segmentation, a subject-specific finite element biomechanical model is obtained. By first computing an undeformed state of the breast (equivalent to nullified gravity), the estimated intra-operative configuration is then evaluated using our registration methods. Finally, the model is calibrated using a surface acquisition in the intra-operative stance to minimize the prediction error. Findings: The capability of our breast biomechanical model to reproduce real breast deformations was evaluated. To this end, the estimated geometry of the supine breast configuration was computed using a corotational elastic material model formulation. The subject-specific mechanical properties of the breast and skin were assessed to obtain the best estimates of the prone configuration. The final result is a Mean Absolute Error of 4.00 mm for the mechanical parameters E_breast = 0.32 kPa and E_skin = 22.72 kPa. The optimized mechanical parameters are congruent with the recent state of the art.
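A calibration of this kind amounts to searching over the two stiffness parameters for the pair that minimizes the Mean Absolute Error between simulated and observed surface points. The sketch below uses a plain grid search as a stand-in for a full optimizer, and `simulate` is a hypothetical placeholder for the biomechanical solver, not the thesis's pipeline:

```python
def calibrate(simulate, observed, e_breast_grid, e_skin_grid):
    # Exhaustively try every (E_breast, E_skin) pair on the grids and keep
    # the one whose simulated surface best matches the observed acquisition.
    def mae(predicted):
        # Mean Absolute Error between predicted and observed surface values.
        return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)

    return min(((eb, es) for eb in e_breast_grid for es in e_skin_grid),
               key=lambda pair: mae(simulate(*pair)))

# Toy stand-in solver for demonstration: the "surface" is just eb + es.
simulate = lambda eb, es: [eb + es]
best = calibrate(simulate, [5.0], [1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```

A derivative-free optimizer such as CMA-ES replaces the exhaustive grid with an adaptive sampling of the parameter space, which matters once each `simulate` call is an expensive finite element solve.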
The simulation (including finding the undeformed and prone configurations) takes less than 20 s. The Covariance Matrix Adaptation Evolution Strategy optimizer converges on average within 15 to 100 iterations, depending on the initial parameters, for a total time of between 5 and 30 min. To our knowledge, our model offers one of the best compromises between accuracy and speed. The model could be effortlessly enriched through our recent work to facilitate the use of complex material models by describing only the strain energy density function of the material. In a second study, we developed a second breast model aimed at mapping a generic model embedding breast-conserving surgical drawings onto any patient. We demonstrated the clinical applications of such a model in a real-case scenario, offering a relevant education tool for an inexperienced surgeon.

Smajic, Semra
Doctoral thesis (2022)
For a very long time, the main focus in Parkinson's disease (PD) research was the loss of neuromelanin-containing dopaminergic neurons from the substantia nigra (SN) of the midbrain - the key pathological feature of the disease. However, the association between neuronal vulnerability and neuromelanin presence has not been a common study subject. Recently, cells other than neurons have also gained attention as mediators of PD pathogenesis. There are indications that glial cells undergo disease-related changes; however, the exact mechanisms remain unknown. In this thesis, I aimed to explore the contribution of every cell type of the midbrain to PD using single-nuclei RNA sequencing. Additionally, the goal was to explore their association with PD risk gene variants.
As we identified microgliosis as a major mechanism in PD, we further extended our research to microglia and sought to investigate the relation between microglia and neuromelanin. Thus, by means of immunohistochemical staining, imaging and laser-capture-microdissection-based transcriptomics, we aimed to elucidate this association at the single-cell level. This work resulted in the first midbrain single-cell atlas from idiopathic PD subjects and age- and sex-matched controls. We revealed SN-specific microgliosis with GPNMB upregulation, which also seemed to be specific to the idiopathic form of the disease. We further observed an accumulation of (extraneuronal) neuromelanin particles in Parkinson's midbrain parenchyma, indicative of incomplete degradation. Moreover, we showed that GPNMB can be elevated in microglia in contact with neuromelanin. Taken together, we provide evidence of a GPNMB-related microglial state as a disease mechanism specific to idiopathic PD, and highlight neuromelanin as an important player in microglial disease pathology. Further investigations are needed to understand whether the modulation of neuromelanin levels could be relevant in the context of PD therapy.

Riom, Timothée
Doctoral thesis (2022)
Programming has become central to the development of human activities while not being immune to defects, or bugs. Developers have devised specific methods and test sequences to prevent these bugs from being deployed in releases. Nonetheless, not all cases can be thought through beforehand, and automation has limits the community attempts to overcome. As a consequence, not all bugs can be caught.
These defects cause particular concern when bugs can be exploited to breach a program's security policy. They are then called vulnerabilities and provide specific actors with undesired access to the resources a program manages. This damages trust in the program and in its developers, and may eventually impact the adoption of the program. Hence, giving specific attention to vulnerabilities is a natural outcome. In this regard, this PhD work targets the following three challenges: (1) The research community references these vulnerabilities, categorises them, and reports and ranks their impact. As a result, analysts can learn from past vulnerabilities in specific programs and devise new ideas to counter them. Nonetheless, the resulting quality of the lessons and the usefulness of ensuing solutions depend on the quality and consistency of the information provided in the reports. (2) New methods to detect vulnerabilities can emerge from the lessons this monitoring provides. With responsible reporting, these detection methods can harden the programs we rely on. Additionally, in a context of computer performance gains, machine learning algorithms are increasingly adopted, offering engaging promises. (3) Even if some of these promises can be fulfilled, not all are reachable today. Therefore a complementary strategy needs to be adopted while vulnerabilities evade detection up to public releases. Instead of preventing their introduction, programs can be hardened to scale down their exploitability. Increasing the complexity of exploitation or lowering the impact below specific thresholds makes the presence of vulnerabilities an affordable risk for the feature provided. The history of software development encompasses the experimentation with, and adoption of, so-called defence mechanisms.
Their goals and performance can be diverse, but their implementation in widely adopted programs and systems (such as the Android Open Source Project) acknowledges their pivotal position. To face these challenges, we provide the following contributions:
• We provide a manual categorisation of the vulnerabilities of the widely adopted Android Open Source Project (AOSP) up to June 2020. Clarifying the approach adopted for the vulnerability analysis provides consistency in the resulting data set, facilitates the explainability of the analyses, and prepares the resulting set of vulnerabilities for later updates. Based on this analysis, we study the evolution of AOSP's vulnerabilities, exploring their temporal evolution in terms of severity and vulnerability type, with a focus on memory-corruption-related vulnerabilities.
• We undertake the replication of a machine-learning-based detection algorithm that, despite being part of the state of the art and referenced by ensuing works, was not available. Named VCCFinder, this algorithm implements a Support-Vector Machine and bases its training on Vulnerability-Contributing Commits and related patches for C and C++ code. Unable to achieve performance analogous to the original article's, we explore parameters and algorithms, and attempt to overcome the challenge posed by the over-population of unlabeled entries in the data set. We provide the community with our code and results as a replicable baseline for further improvement.
• We eventually list the defence mechanisms that the Android Open Source Project has incrementally implemented, and discuss how the project sometimes answers comments the community addressed to its developers. We further verify the extent to which specific memory corruption defence mechanisms were implemented in the binaries of different versions of Android (from API level 10 to 28).
Finally, we confront the evolution of memory-corruption-related vulnerabilities with the implementation timeline of the related defence mechanisms.

Tawakuli, Amal
Doctoral thesis (2022)
Substantial volumes of data are generated at the edge as a result of an exponential increase in the number of Internet of Things (IoT) applications. IoT data are generated at edge components and, in most cases, transmitted to central or cloud infrastructures via the network. Distributing data preprocessing to the edge, closer to the data sources, would address issues in the data early in the pipeline. Such distribution prevents error propagation, removes redundancies, minimizes privacy leakage and optimally summarizes the information contained in the data prior to transmission. This, in turn, prevents wasting valuable yet limited resources at the edge, which would otherwise be used to transmit data that may contain anomalies and redundancies. New legal requirements such as the GDPR, as well as ethical responsibilities, render data preprocessing that addresses these emerging topics urgent, especially at the edge, before data leave the premises of their owners. This PhD dissertation is divided into two parts that focus on two main directions within data preprocessing. The first part focuses on structuring and normalizing the data preprocessing design phase for AI applications. This involved an extensive and comprehensive survey of data preprocessing techniques coupled with an empirical analysis. From the survey, we introduced a holistic and normalized definition and scope of data preprocessing.
We also identified the means of generalizing data preprocessing by abstracting preprocessing techniques into categories and sub-categories. Our survey and empirical analysis highlighted dependencies and relationships between the different categories and sub-categories, which determine the order of execution within preprocessing pipelines. The identified categories, sub-categories and their dependencies were assembled into a novel data preprocessing design tool: a template from which application- and dataset-specific preprocessing plans and pipelines are derived. The design tool is agnostic to datasets and applications and is a crucial step towards normalizing, regulating and structuring the design of data preprocessing pipelines. The tool helps practitioners and researchers apply a modern take on data preprocessing that enhances the reproducibility of preprocessed datasets and addresses a broader spectrum of issues in the data. The second part of the dissertation focuses on leveraging edge computing within an IoT context to distribute data preprocessing to the edge. We empirically evaluated the feasibility of distributing data preprocessing techniques from different categories and assessed the impact of the distribution, including on the consumption of resources such as time, storage, bandwidth and energy. To perform the distribution, we proposed a collaborative edge-cloud framework dedicated to data preprocessing, with two main mechanisms that achieve synchronization and coordination. The synchronization mechanism is an Over-The-Air (OTA) updating mechanism that remotely pushes updated preprocessing plans to the different edge components in response to changes in user requirements or the evolution of data characteristics. The coordination mechanism is a resilient and progressive execution mechanism that leverages a Directed Acyclic Graph (DAG) representation of the data preprocessing plans.
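Executing a DAG-encoded preprocessing plan reduces to running its steps in a topological order of their dependencies. A minimal sketch of that ordering (Kahn's algorithm, with hypothetical step names chosen for illustration) could look like:

```python
from collections import deque

def topological_order(plan):
    # plan maps each step to the list of steps it depends on.
    # Returns an execution order in which every step follows its prerequisites.
    indegree = {step: len(deps) for step, deps in plan.items()}
    dependents = {step: [] for step in plan}
    for step, deps in plan.items():
        for dep in deps:
            dependents[dep].append(step)
    ready = deque(step for step, n in indegree.items() if n == 0)
    order = []
    while ready:
        step = ready.popleft()
        order.append(step)  # a real executor would run the step here
        for nxt in dependents[step]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(plan):
        raise ValueError("cycle detected: plan is not a DAG")
    return order

# Hypothetical plan: cleaning feeds two independent steps, then a final one.
plan = {"clean": [], "normalize": ["clean"],
        "reduce": ["clean"], "transform": ["normalize", "reduce"]}
```

Because "normalize" and "reduce" only depend on "clean", a coordinator following this order can also run them in parallel on different components, which is what makes progressive, distributed execution possible.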
Distributed preprocessing plans are shared between the different cloud and edge components and are progressively executed while adhering to the topological order dictated by the DAG representation. To empirically test our proposed solutions, we developed DeltaWing, a prototype of our edge-cloud collaborative data preprocessing framework that consists of three stages: one central stage and two edge stages. A use case was also designed based on a dataset obtained from Honda Research Institute US. Using DeltaWing and the use case, we simulated an automotive IoT application to evaluate our proposed solutions. Our empirical results highlight the effectiveness and positive impact of our framework in reducing the consumption of valuable resources (e.g., ≈ 57% reduction in bandwidth usage) at the edge while retaining information (prediction accuracy) and maintaining operational integrity. The two parts of the dissertation are interconnected yet can exist independently. Combined, their contributions constitute a generic toolset for the optimization of the data preprocessing phase.

Baniasadi, Mehri. Doctoral thesis (2022). Deep brain stimulation (DBS) is a surgical therapy to alleviate symptoms of numerous movement and psychiatric disorders by electrical stimulation of specific neural tissues via implanted electrodes. Precise electrode implantation is important to target the right brain area. After the surgery, DBS parameters, including stimulation amplitude, frequency, pulse width, and the selection of the electrode's active contacts, are adjusted during programming sessions. Programming sessions are normally done by trial and error. Thus, they can be long and tiring.
The main goal of the thesis is to make the post-operative experience, particularly the programming session, easier and faster by using visual aids to create a virtual reconstruction of the patient's case. This enables in silico testing of different scenarios before applying them to the patient. A quick and easy-to-use deep-learning-based tool for deep brain structure segmentation, DBSegment, is developed with 89% ± 3 accuracy. It is much easier to implement than widely used registration-based methods, as it has fewer dependencies and requires no parameter tuning; it is therefore much more practical. Moreover, it segments 40 times faster than the registration-based method. This method is combined with an electrode localization method to reconstruct patients' cases. Additionally, we developed FastField, a tool that simulates DBS-induced electric field distributions in less than a second. This is 1000 times faster than standard methods based on finite elements, with nearly the same performance (92%). The speed of the electric field simulation is particularly important for DBS parameter initialization, which we perform by solving an optimization problem (OptimDBS). A grid search method confirms that our novel approach converges to the global minimum. Finally, all the developed methods are tested on clinical data to ensure their applicability. In conclusion, this thesis develops various novel user-friendly tools enabling efficient and accurate DBS reconstruction and parameter initialization. The methods are by far the quickest among open-source tools. They are easy to use and publicly available: FastField within the LeadDBS toolbox, and DBSegment as a Python pip package and a Docker image. We hope they can improve the DBS post-operative experience, maximize the therapy's efficacy, and advance DBS research.
Sauvage, Delphine. Doctoral thesis (2022).

Gentile, Niccolo'. Doctoral thesis (2022).

Nguyen, Trung. Doctoral thesis (2022).

Veizaga Campero, Alvaro Mario. Doctoral thesis (2022). Software requirements form an important part of the software development process. In many software projects conducted by companies in the financial sector, analysts specify software requirements using a combination of models and natural language (NL). Neither models nor NL requirements provide a complete picture of the information in the software system, and NL is highly prone to quality issues such as vagueness, ambiguity, and incompleteness. Poorly written requirements are difficult to communicate and reduce the opportunity to process requirements automatically, particularly to automate tedious and error-prone tasks such as deriving acceptance criteria (AC). AC are conditions that a system must meet to be consistent with its requirements and be accepted by its stakeholders. AC are derived by developers and testers from requirement models. To obtain precise AC, it is necessary to reconcile the information content of the NL requirements and the requirement models. In collaboration with an industrial partner from the financial domain, we first systematically developed and evaluated a controlled natural language (CNL) named Rimay to help analysts write functional requirements. We then proposed an approach that detects common syntactic and semantic errors in NL requirements. Our approach suggests Rimay patterns to fix errors and convert NL requirements into Rimay requirements.
Based on our results, we propose a semiautomated approach that reconciles the content of the NL requirements with that of the requirement models. Our approach helps modelers enrich their models with information extracted from NL requirements. Finally, an existing test-specification derivation technique was applied to the enriched model to generate AC. The first contribution of this dissertation is a qualitative methodology that can be used to systematically define a CNL for specifying functional requirements. This methodology was used to create Rimay, a CNL grammar for specifying functional requirements. This CNL was derived from an extensive qualitative analysis of a large number of industrial requirements, following a systematic process using lexical resources. An empirical evaluation of Rimay in a realistic setting, through an industrial case study, demonstrated that 88% of the requirements considered were successfully rephrased using Rimay. The second contribution of this dissertation is an automated approach that detects syntactic and semantic errors in unstructured NL requirements. We refer to these errors as smells. To this end, we first proposed a set of 10 common smells found in the NL requirements of financial applications. We then derived a set of 10 Rimay patterns as suggestions for fixing the smells. Finally, we developed an automatic approach that analyzes the syntax and semantics of NL requirements to detect any present smells and then suggests a Rimay pattern to fix each smell. We evaluated our approach using an industrial case study, obtaining promising results for detecting smells in NL requirements (precision 88%) and for suggesting Rimay patterns (precision 89%). The last contribution of this dissertation was prompted by the observation that a reconciliation of the information content in the NL requirements and the associated models is necessary to obtain precise AC.
To achieve this, we defined a set of 13 information extraction rules that automatically extract AC-related information from NL requirements written in Rimay. Next, we proposed a systematic method that generates recommendations for model enrichment based on the information extracted by the 13 extraction rules. Using a real case study from the financial domain, we evaluated the usefulness of the AC-related model enrichments recommended by our approach. The domain experts found that 89% of the recommended enrichments were relevant to AC but absent from the original model (precision of 89%).

Aruchamy, Naveen. Doctoral thesis (2022). Ferroelectric materials are ubiquitous in several applications and offer advantages for microelectromechanical systems (MEMS) in their thin film form. However, novel applications require ferroelectric films to be deposited on various substrates, which requires effective integration and know-how of the material response when selecting a substrate for film deposition. As substrate-induced stress can alter the ferroelectric properties of the films, knowledge of how stress changes the ferroelectric response under different actuation conditions is essential. Furthermore, the stress-dependent behavior raises the question of understanding the reliability and degradation mechanisms under cyclic electric loading. Therefore, the ferroelectric thin film's fatigue and breakdown characteristics become more relevant. Lead zirconate titanate (PZT) thin films are popular among ferroelectric materials; however, tremendous effort is being made to find a lead-free alternative to PZT.
Ferroelectric thin films can be deposited using different processing techniques. In this work, the chemical solution deposition route is adopted for depositing PZT thin films on transparent and non-transparent substrates. A correlation between the substrate-induced ferroelectric properties and the processing conditions with different electrode configurations is established. Finite element modeling is used to understand the influence of the design parameters of the co-planar interdigitated electrodes for fabricating fully transparent PZT stacks. Out-of-plane and in-plane ferroelectric properties of PZT thin films in metal-insulator-metal (MIM) and interdigitated electrode (IDE) geometries, respectively, on different substrates, are compared to establish the connection between the stress-induced effect and the actuation mode. It is shown that the out-of-plane polarization is high under in-plane compressive stress but reduced by nearly four times by in-plane tensile stress. In contrast, the in-plane polarization shows an unexpectedly weak stress dependence. The fatigue behavior of differently stressed PZT thin films with IDE structures is reported for the first time in this study. The results are compared to the fatigue behavior of the same films in MIM geometry. PZT films in MIM geometry, irrespective of the stress state, show a notable decrease in switchable polarization during fatigue cycling. In contrast, the films actuated with IDEs have much better fatigue resistance. The primary fatigue mechanism is identified as domain wall pinning by charged defects. The observed differences in fatigue behavior between MIM and IDE geometries are linked to the orientation of the electric field with respect to the columnar grain structure of the films. Hafnium oxide, an emerging and widely researched lead-free alternative to PZT for non-volatile ferroelectric memory applications, is also explored in this work.
The breakdown properties of chemical-solution-deposited ferroelectric hafnium oxide thin films are also studied. The structure-property relationship for stabilizing the ferroelectric phase in solution-deposited hafnium oxide thin films is established. Furthermore, the effect of processing conditions on the ferroelectric switching behavior and breakdown characteristics is demonstrated and correlated with the possible mechanisms.

Farina, Sofia. Doctoral thesis (2022). The human brain is the most structurally and biochemically complex organ, and its broad spectrum of diverse functions is accompanied by a high energy demand. In order to meet this high energy demand, brain cells of the central nervous system are organised in a complex and balanced ecosystem, and perturbation of brain energy metabolism is known to be associated with neurodegenerative diseases such as Alzheimer's disease (AD) and Parkinson's disease. Among the cells composing this ecosystem, astrocytes contribute metabolically by producing ATP, the primary energy substrate of life, and lactate, which can be exported to neurons to support their metabolism. Astrocytes have a star-shaped morphology, allowing them to connect on one side with blood vessels to take up glucose and on the other side with neurons to provide lactate. Astrocytes may also exhibit metabolic dysfunctions and modify their morphology in response to disease. A mechanistic understanding of the morphology-dysfunction relation is still elusive. This thesis developed and applied a mechanistic multiscale modelling approach to investigate astrocytic metabolism in physiological morphologies from healthy and diseased human subjects.
The complexity of cellular systems is a significant obstacle to investigating cellular behaviour. Systems biology tackles biological unknowns by combining computational and biological investigations. In order to address the elusive connection between metabolism and morphology in astrocytes, we developed a computational model of central energy metabolism in realistic morphologies. The underlying processes are described by a reaction-diffusion system, which represents cells more realistically, by considering their actual three-dimensional shape, than classical ordinary differential equation models in which cells are assumed to be spatially punctual, i.e. to have no spatial dimension. Thus, the computational model we developed integrates high-resolution microscopy images of astrocytes from human post-mortem brain samples and simulates glucose metabolism in different physiological astrocytic human morphologies associated with AD and healthy conditions. The first part of the thesis is dedicated to presenting a numerical approach that accommodates complex morphologies. We investigate the classical finite element method (FEM) and the cut finite element method (CutFEM) for simplified metabolic models in complex geometries. Establishing our image-driven numerical method leads to the second part of this thesis, where we investigate the crucial role played by the locations of reaction sites. We demonstrate that spatial organisation and chemical diffusivity play a pivotal role in the system output. Based on these new findings, we subsequently use microscopy images of healthy and Alzheimer's-diseased human astrocytes to build simulations and investigate cell metabolism. In the last part of the thesis, we consider another process critical for astrocytic functionality: calcium signalling. The energy produced by metabolism is also partially used for calcium exchange between cell compartments, and calcium in turn can drive mitochondrial activity, a main ATP-generating process.
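The modelling distinction drawn above, between spatially punctual ODE models and the spatially resolved reaction-diffusion description, can be written schematically; the symbols here are generic (c a metabolite concentration, D its diffusivity, R the reaction terms, Ω the reconstructed cell geometry), not necessarily the thesis's own notation:

```latex
% Classical well-mixed ODE model (no spatial dimension):
\frac{\mathrm{d}c}{\mathrm{d}t} = R(c)

% Reaction-diffusion model on the reconstructed 3D morphology \Omega:
\frac{\partial c}{\partial t}(x,t) = D\,\nabla^2 c(x,t) + R\bigl(c(x,t)\bigr),
\qquad x \in \Omega
```

The same reaction terms R appear in both; the diffusion term D∇²c is what lets the spatial organisation of reaction sites and the cell's shape influence the output.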
Thus, the active cross-talk between glucose metabolism and calcium signalling can significantly impact the metabolic functionality of cells and requires deeper investigation. For this purpose, we extend our established metabolic model with a calcium signalling module and investigate the coupled system in two-dimensional geometries. Overall, the investigations showed the importance of spatially organised metabolic modelling and paved the way for a new direction of image-driven, meshless modelling of metabolism. Moreover, we show that complex morphologies play a crucial role in metabolic robustness and how astrocytes' morphological changes under AD conditions lead to impaired energy metabolism.

Vetter, Florian. Doctoral thesis (2022). “Pecunia non olet”. Ironically, this Latin dictum strongly relates to the 20th and 21st centuries if one considers how banks constantly dematerialised money and changed the way society deals with deposits. By implementing quite radical changes to the concept of money, banks became an accelerating element for social and technological innovation. Our research project within the field of computerisation and digitalisation concentrates on banking activities and services from a European perspective. Banks' communication regarding credit cards and cashless payments is at the heart of this research. The study intertwines several case studies in selected European countries (i.e., Luxembourg, Germany, France). In particular, the study focuses on the following bank services: automated teller machines, bankcards (especially MasterCard and Eurocard) and home banking since the emergence of Minitel, Vidéotex, or Btx.
The comparative and diachronic perspective of this study, starting from the 1960s onwards, aims at shedding light on a history which has often only been seen from an insider's perspective. It should be noted that our focus is primarily the communication strategies of banks and their related advertisement campaigns for credit cards and cashless payments. This is achieved by focusing on the strategies of the banks and their economic, technical, digital, but also societal approaches. The research topic relates to contemporary history and the history of digitalisation and innovation. In this context, press, audio-visual materials, banking reports, advertising, oral history, as well as web archives serve as primary sources. Moreover, bank archives in Luxembourg, France and Germany are used to complete the study corpus. All in all, the research results help us to understand the highly complex world of banking services from an unusual research angle. The research topic thereby changes the current scientific standard of banking history by including the perspective of various actors of the European payment market as well as their perception of banking innovations over the years (1968 – 2015) and by analysing a European transnational corpus. Furthermore, by analysing the history of the Eurocard and its relation to MasterCard in a long-term perspective, we offer a novel approach. It helps to enrich the field of banking history, which is slowly changing and introducing different research angles thanks to pioneering research by Bernardo Bátiz-Lazo, Sabine Effosse, David Sparks Evans, Richard Schmalensee, Lana Schwartz, Sebastian Gießmann and others. In this respect, this PhD research aims to add a milestone to historical research on banking innovation and retail banking, which is still in its early stages but is moving fast, driven forward in particular by the pioneers mentioned above.
Balmas, Paolo. Doctoral thesis (2022). Despite the vast research on China's external economic expansion, little is known about the spatial organisation and operations of the Chinese commercial and development banks that enable such expansion. This thesis by publications sheds new light on the physical presence, organisation and agency of Chinese banks in Europe. It analyses the capability of Chinese banks to create new financial spaces. I start with the assumption that socioeconomic interactions, which I ascribe to the combinations of network-place and structure-agency, construct (financial) space. I identify Luxembourg as a key place in the spatial organisation of Chinese banks in Europe and detect Chinese banks as key players in organising the mechanisms that enable China's economic expansion into Europe. To understand the implications of Chinese banks' presence and operations in Europe, I address three intertwined overarching questions: What are Chinese banks doing in Europe? How are they spatially organised? Are they reshaping European financial spaces? To answer, I designed interdisciplinary qualitative research based on expert interviews and desk research. I selected three dimensions of Chinese financial activity in Europe: bank networks, currency and investments, which I analyse in four chapters/publications. The first two chapters analyse the geoeconomics of Chinese bank networks' expansion and the spatial organisation that enables mergers and acquisitions in Europe, respectively. Chapter 3 analyses how Chinese development banks make use of Luxembourg's investment fund industry to invest in (energy) infrastructures and private equity in Central and Eastern European countries.
Chapter 4 analyses the investment role of money as a neglected dimension for understanding renminbi internationalisation. This chapter highlights the roles of Luxembourg and Western banks as key for investments into China's domestic financial markets, and the role of the Chinese state in governing the inflow of such investments. Findings from the four chapters show how Chinese financial spaces in Europe are co-constituted by both Chinese and European actors. I find that Chinese banks have established a wide set of networks across Europe while their activity is still limited. This suggests that Chinese bank networks are still in an embryonic stage, although they are preparing to widen their activities in the (near) future. This strengthens Luxembourg's positionality as a key financial hub connecting China to Europe. Chinese banks' attractiveness as future gatekeepers to the Chinese domestic financial markets suggests that they will expand their activities in Europe despite current geopolitical frictions between China and the West. Beyond contributing to the growing literature on China in Europe, this thesis contributes to the advancement of the sub-disciplines of economic and financial geography by conceptualising banks as key agents of financial space creation and shapers of global financial networks.

Khanna, Nikhar. Doctoral thesis (2022). The thesis is focused on developing spectral selective coatings (SSC) composed of multilayer cermets and a periodic array of resonating omega structures, making them behave like metamaterials, while showing high thermal stability up to 1000 °C. The developed SSC is intended to be used for concentrated solar power (CSP) applications.
The aim is to achieve the highest possible absorbance in the visible region of the spectrum and the highest reflectance in the infrared region. The thesis presents the numerical design, synthesis and optical characterization of the SSC, which is approximately 500 nm thick. A bottom-up approach was adopted for the preparation of a stack with alternate layers, consisting of a distribution of titanium nitride (TiN) nanoparticles with a layer of aluminum nitride (AlN) on top. The TiN nanoparticles, laid on a silicon substrate by a wet chemical method, are coated with a conformal layer of AlN via plasma-enhanced atomic layer deposition (PE-ALD). Control of the morphology at the nanoscale is fundamental for tuning the optical behaviour of the material. For this reason, two composites were prepared: one starting from a TiN dispersion made with dry TiN powder and deionized water, and the other from a ready-made TiN dispersion. Nano-structured metamaterial-based absorbers have many benefits over conventional absorbers, such as miniaturisation, adaptability and frequency tuning. Dealing with the current challenges of producing a new metamaterial-based absorber with an optimal nanostructure design, and of synthesising it within current nano-technological limits, we were able to turn the cermets into a metamaterial. A periodic array of metallic omega structures was patterned on top of both composites I and II using e-beam lithography. Parameters such as the size of the TiN nanoparticles, the thickness of the AlN thin film and the dimensions of the omega structures were all determined by numerical simulations performed using the Wave Optics module in COMSOL Multiphysics. The work clearly compares the two kinds of composites using scanning electron microscopy, X-ray photoelectron spectroscopy (XPS) and electrical conductivity measurements.
The improvement in the optical performance of the SSC after the inclusion of metallic omega structures in the uppermost layer of the two composites has been thoroughly investigated with respect to boosting light absorption. In addition, the optical performance of the two prepared composites and of the metamaterial is used as a means of validating the computational model.

Barlier, Celine. Doctoral thesis (2022).

Nouchokgwe Kamgue, Youri Dilan. Doctoral thesis (2022). Caloric materials are suggested as energy-efficient refrigerants for future cooling devices. They could replace the greenhouse gases used for decades in our air conditioners, fridges, and heat pumps. Among the four types of caloric materials (electrocaloric, barocaloric, elastocaloric and magnetocaloric), electrocaloric materials are particularly promising, as applying large electric fields is much simpler and cheaper than applying the other fields. Research in recent years has focused on looking for electrocaloric materials with high thermal responses. However, the energy efficiency crucial for a future replacement of vapor-compression technology has been overlooked, and the intrinsic efficiency of electrocaloric materials has barely been studied. In the present dissertation, we study the efficiency of electrocaloric (EC) materials, defined as the materials efficiency: the ratio of the reversible electrocaloric heat to the reversible electrical work required to drive this heat. In this work, we study the materials efficiency of the benchmark lead scandium tantalate in different shapes (bulk ceramic and multilayer capacitors). A comparison to other caloric materials is presented in this dissertation.
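The materials efficiency defined in this abstract can be written as a simple ratio; the symbols here are generic placeholders, not necessarily the author's notation, with Q_EC the reversible electrocaloric heat and W_el the reversible electrical work required to drive it:

```latex
\eta_{\mathrm{mat}} \;=\; \frac{\lvert Q_{\mathrm{EC}} \rvert}{W_{\mathrm{el}}}
```

A larger η_mat means more caloric heat is pumped per unit of electrical work invested, which is the quantity compared across material shapes and caloric families in the dissertation.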
Our work gives more insight into the figure of merit of materials efficiency, with a view to further improving the efficiency of cooling devices.

Owiso, Owiso. Doctoral thesis (2022). The overall aim of this thesis is to investigate the potential role of regional inter-governmental organisations (RIGOs) in international criminal accountability, specifically through the establishment of criminal accountability mechanisms, and to make a case for RIGOs' active involvement. The thesis proceeds from the assumption that international criminal justice is a cosmopolitan project that demands that a tenable conception of state sovereignty guarantee humanity's fundamental values, specifically human dignity. Since cosmopolitanism emphasises the equality and unity of the human family, guaranteeing the dignity and humanity of the human family is a common interest of humanity rather than a parochial endeavour. Accountability for international crimes is one way through which human dignity can be validated and reaffirmed where such dignity has been grossly and systematically assaulted. Therefore, while accountability for international crimes is primarily the obligation of individual sovereign states, this responsibility is ultimately, residually, one of humanity as a whole, exercisable through collective action. As such, the thesis advances the argument that states, as collective representations of humanity, have a responsibility to assist in ensuring accountability for international crimes where an individual state is either genuinely unable or unwilling to do so by itself.
The thesis therefore addresses the question of whether RIGOs, as collective representations of states and their peoples, can establish international criminal accountability mechanisms. Relying on cosmopolitanism as a theoretical underpinning, the thesis examines the exercise of what can be considered elements of sovereign authority by RIGOs in pursuit of the cosmopolitan objective of accountability for international crimes. In so doing, the thesis interrogates whether there is a basis in international law for such engagement, and examines how such engagement can practically be undertaken, using two case studies: the European Union and the Kosovo Specialist Chambers and Specialist Prosecutor's Office, and the African Union and the (proposed) Hybrid Court for South Sudan. The thesis concludes that general international law does not preclude RIGOs from exercising the elements of sovereign authority necessary for the establishment of international criminal accountability mechanisms, and that the specific legal authority to engage in this regard can be determined by reference to the doctrine of attributed/conferred powers and the doctrine of implied powers in interpreting the legal instruments of RIGOs. Based on this conclusion, the thesis makes a normative case for an active role for RIGOs in the establishment of international criminal accountability mechanisms, and provides a practical step-by-step guide on possible legal approaches for the establishment of such mechanisms by RIGOs, as well as guidance on possible design models for these mechanisms.

Rida, Ahmad. Doctoral thesis (2022).
The thesis proposes a near-field communication (NF) based solution for the tire pressure monitoring system (TPMS) in heavy commercial vehicles, as an alternative to the wireless far-field (FF) communication used in conventional TPMS. Truck and tire manufacturers have stepped up efforts to develop TPMS solutions, as recent EU regulations will soon make TPMS mandatory in heavy commercial vehicles, but the dense metal content of this application environment attenuates wireless communication and hinders the development of efficient and robust TPMS solutions. The thesis covers many practical aspects and includes an extensive literature review of the state-of-the-art TPMS solutions commercially available for heavy commercial vehicles. A second literature review was conducted on NF communication and its automotive applications. The researcher then conducted a finite element analysis (FEA) to simulate the application environment, represented by the tire and wheel combination, in order to evaluate the signal propagation of the conventional TPMS and of the proposed system. The simulations demonstrated the adverse effect of the application environment on the signal propagation of conventional TPMS and showed the merit of using NF-based communication. The proposed transmitter design was built and evaluated on an actual truck wheel and tire combination in a series of laboratory tests. The proposed transmitter unit was detected with a sufficiently high signal-to-noise ratio to establish a communication channel in the presence of a limited number of nearby metal objects under laboratory conditions, which could allow for a more advanced commercial vehicle TPMS. This industry-driven project addresses a serious traffic safety issue and forms a proof of concept for the development of a complete TPMS solution.
Brunhoferova, Hana. Doctoral thesis (2022)

One of the biggest global challenges is the enormous growth of the population. A growing population entails increased production and release of anthropogenic compounds, which, due to insufficient wastewater treatment, become pollutants, more precisely micropollutants (MPs). The advanced wastewater technologies presented in this dissertation are solutions applied for the targeted elimination of MPs. Ozonation and adsorption on activated carbon, or their combination, are among the most used advanced wastewater treatment technologies in Europe; however, they are suited for the effluents of larger wastewater treatment plants (WWTPs). Therefore, an attempt has been made to test Constructed Wetlands (CWs) as an advanced wastewater treatment technology for small-to-medium-sized WWTPs, which are typical for rural areas in the catchment of the river Sûre, the geographical border between Luxembourg and Germany. The efficiency of the CWs for the removal of 27 selected compounds has been tested at different scales (laboratory to pilot) in the Interreg Greater Region project EmiSûre 2017-2021 (Développement de stratégies visant à réduire l'introduction de micropolluants dans les cours d'eau de la zone transfrontalière germano-luxembourgeoise). The results of the project confirmed the high ability of CWs to remove MPs from municipal effluents. Given this evidence, the quantification of the main mechanisms contributing to the elimination of MPs within the CWs was established as the main target of the present PhD research.
The main mechanisms have been identified as adsorption on the soil of the wetland, phytoremediation by the wetland macrophytes, and bioremediation by the wetland microorganisms. The doctoral thesis is cumulative in nature; its core consists of the following four publications:
• Publication [I] describes the use of CWs as a post-treatment step for municipal effluents.
• Publication [II] assesses the role of adsorption of the targeted MPs on the substrates used within the studied CWs and presents a characterization of the wetland substrates.
• Publication [III] describes the role of the wetland macrophytes in the phytoremediation of the targeted MPs within the studied CWs. Furthermore, it compares the different macrophyte types in varying vegetation stages.
• Publication [IV] outlines the role of the wetland microbes in the bioremediation of the targeted MPs within the studied CWs. Moreover, the wetland microbes known to be able to digest MPs or contribute to their elimination are identified and quantified.
The results suggest adsorption as the leading removal mechanism (average removal >80% for 18 out of 27 compounds), followed by bioremediation (average removal >40% for 18 out of 27 compounds) and phytoremediation (average removal <20% for 17 out of 27 compounds). The research described contributes to extending knowledge about CWs applied for the elimination of MPs from water. Some of the outcomes (deepened knowledge about how soil influences adsorption, recommendations for the adjustment of operational parameters, etc.) could be used as tools for enhancing the wetland’s treatment efficiency. The research concludes with recommendations for further investigation of the individual mechanisms (e.g., the application of artificial aeration or circulation of the reaction matrix could enhance bioremediation).
Schmit, Kristopher John. Doctoral thesis (2022)

Ghamizi, Salah. Doctoral thesis (2022)

With the heavy reliance on information technologies in every aspect of our daily lives, Machine Learning (ML) models have become a cornerstone of these technologies’ rapid growth and pervasiveness, in particular in the most critical and fundamental technologies that handle our economic systems, transportation, health, and even privacy. However, while these systems are becoming more effective, their complexity inherently decreases our ability to understand, test, and assess their dependability and trustworthiness. This problem becomes even more challenging under a multi-objective framework: when the ML model is required to learn multiple tasks together, behave under constrained inputs, or fulfill contradicting concomitant objectives. Our dissertation focuses on robust ML under limited training data, i.e., use cases where it is costly to collect additional training data and/or label it. We study this topic under the prism of three real use cases: fraud detection, pandemic forecasting, and chest X-ray diagnosis. Each use case covers one of the challenges of robust ML with limited data: (1) robustness to imperceptible perturbations, or (2) robustness to confounding variables. We provide a study of the challenges for each case and propose novel techniques to achieve robust learning. As the first contribution of this dissertation, we collaborate with BGL BNP Paribas.
We demonstrate that their overdraft and fraud detection systems are prima facie robust to adversarial attacks because of the complexity of their feature engineering and domain constraints. However, we show that gray-box attacks that take domain knowledge into account can easily break their defense. We propose CoEva2 adversarial fine-tuning, a new defense mechanism based on multi-objective evolutionary algorithms, to augment the training data and mitigate the system’s vulnerabilities. Next, we investigate how domain knowledge can protect against adversarial attacks through multi-task learning. We show that adding domain constraints in the form of additional tasks can significantly improve the robustness of models to adversarial attacks, particularly for the robot navigation use case. We propose a new set of adaptive attacks and demonstrate that adversarial training combined with such attacks can improve robustness. While the raw data available in the BGL and robot navigation use cases is vast, it is heavily cleaned, feature-engineered, and annotated by domain experts (which is expensive), so the final training data is scarce. In contrast, raw data itself is scarce when dealing with an outbreak, and designing robust ML systems to predict, forecast, and recommend mitigation policies is challenging, in particular for small countries like Luxembourg. Contrary to common techniques that forecast new cases from previous time-series data, we propose a novel surrogate-based optimization as an integrated loop: it combines a neural-network prediction of the infection rate based on mobility attributes with a model-based simulation that predicts cases and deaths. Our approach has been used by the Luxembourg government’s task force and was recognized with a best paper award at KDD2020. Our following work focuses on the challenges that confounding factors pose to the robustness and generalization of chest X-ray (CXR) classification.
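As an aside, the imperceptible-perturbation threat model discussed in this abstract can be illustrated with a minimal gradient-sign (FGSM-style) attack on a toy logistic classifier. This is a generic textbook sketch with made-up weights and inputs, not the thesis's CoEva2 or adaptive attacks:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_loss(w, b, x, y):
    # Binary cross-entropy of one example under a logistic model.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return -math.log(p) if y == 1 else -math.log(1.0 - p)

def fgsm(w, b, x, y, eps):
    # For logistic regression, d(loss)/d(x_i) = (p - y) * w_i;
    # FGSM perturbs each feature by eps in the gradient-sign direction.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]

# Made-up model and input, for illustration only.
w, b = [0.8, -1.2, 0.5], 0.1
x, y = [1.0, 0.5, -0.3], 1
x_adv = fgsm(w, b, x, y, eps=0.25)
print(bce_loss(w, b, x, y), bce_loss(w, b, x_adv, y))
```

Adversarial training, in this simplified view, augments the training set with such perturbed examples.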
We first investigate the robustness and generalization of multi-task models, then demonstrate that multi-task learning leveraging the confounding variables can significantly improve the generalization and robustness of CXR classification models. Our results suggest that task augmentation with additional knowledge (such as extraneous variables) outperforms state-of-the-art data augmentation techniques in improving test and robust performance. Overall, this dissertation provides insights into the importance of domain knowledge for the robustness and generalization of models. It shows that instead of building data-hungry ML models, particularly for critical systems, a better understanding of the system as a whole and of its domain constraints yields improved robustness and generalization performance. This dissertation also proposes theorems, algorithms, and frameworks to effectively assess and improve the robustness of ML systems for real-world cases and applications.

Lee, Jaekwon. Doctoral thesis (2022)

Real-time systems have become indispensable to human life as they are used in numerous industries, such as vehicles, medical devices, and satellite systems. These systems are very sensitive to violations of their time constraints (deadlines), which can have catastrophic consequences. To verify whether systems meet their time constraints, engineers perform schedulability analysis from the early stages and throughout development. However, obtaining precise schedulability results is challenging, because it requires estimating the worst-case execution times (WCETs) and assigning optimal priorities to tasks. Estimating WCET is an important activity at early design stages of real-time systems.
Based on such WCET estimates, engineers make design and implementation decisions to ensure that task executions always complete before their specified deadlines. In practice, however, engineers often cannot provide a precise point estimate of WCET and prefer to provide plausible WCET ranges. Task priority assignment is an important decision, as it determines the order of task executions and has a substantial impact on schedulability results. It thus requires finding optimal priority assignments so that tasks not only complete their execution but also maximize the safety margins from their deadlines. Optimal priority values increase the tolerance of real-time systems to unexpected overheads in task executions so that they can still meet their deadlines. However, finding optimal priority assignments is a hard problem, because their evaluation relies on uncertain WCET values and complex engineering constraints must be accounted for. This dissertation proposes three approaches to estimate WCET and assign optimal priorities at design stages. Combining a genetic algorithm and logistic regression, we first suggest an automated approach to infer safe WCET ranges with a probabilistic guarantee based on worst-case scheduling scenarios. We then introduce an extended approach to account for weakly hard real-time systems, using an industrial schedule simulator. We evaluate our approaches by applying them to industrial systems from different domains and to several synthetic systems. The results suggest that they can estimate probabilistically safe WCET ranges efficiently and accurately, so that the deadline constraints are likely to be satisfied with a high degree of confidence. Moreover, we propose an automated technique that aims to identify the best possible priority assignments in real-time systems. The approach deals with multiple objectives regarding safety margins and engineering constraints using a coevolutionary algorithm.
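For context, the schedulability analysis such approaches build on is, for fixed-priority systems, classic response-time analysis. The following is a minimal textbook sketch (illustrative only, not the thesis's simulator or search machinery), assuming implicit deadlines and tasks listed highest-priority first:

```python
import math

def response_times(tasks):
    """Worst-case response times for fixed-priority tasks.

    tasks: list of (C, T) pairs (worst-case execution time, period),
    ordered highest priority first; deadlines are assumed equal to
    periods. Returns each task's response time, or None if it exceeds
    its deadline (unschedulable at that priority level).
    """
    results = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            # Interference from every higher-priority task j.
            r_next = c_i + sum(math.ceil(r / t_j) * c_j
                               for c_j, t_j in tasks[:i])
            if r_next == r:
                results.append(r)
                break
            if r_next > t_i:
                results.append(None)
                break
            r = r_next
    return results

print(response_times([(1, 4), (2, 6), (3, 12)]))  # → [1, 3, 10]
```

Priority assignment then amounts to searching over task orderings that keep every response time below its deadline while maximizing the safety margins T − R.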
Evaluation with synthetic and industrial systems shows that the approach significantly outperforms both a baseline approach and solutions defined by practitioners. All the solutions in this dissertation scale to complex industrial systems for offline analysis within an acceptable time, i.e., at most 27 hours.

Ezzini, Saad. Doctoral thesis (2022)

Requirements Engineering (RE) quality control is a crucial step for a project’s success. Natural language (NL) is by far the most commonly used means for capturing requirement specifications. Despite facilitating communication, NL is prone to quality defects, one of the most notable of which is ambiguity. Ambiguous requirements can lead to misunderstandings and eventually result in a system that is different from what is intended, wasting time, money, and effort in the process. This dissertation tackles selected quality issues in NL requirements:
• Using Domain-specific Corpora for Improved Handling of Ambiguity in Requirements: Syntactic ambiguity types occurring in coordination and prepositional-phrase attachment structures are prevalent in requirements (in our document collection, as we discuss in Chapter 3, 21% and 26% of the requirements are subject to coordination and prepositional-phrase attachment ambiguity analysis, respectively). We devise an automated solution based on heuristics and patterns for improved handling of coordination and prepositional-phrase attachment ambiguity in requirements. As a prerequisite for this research, we further develop a more broadly applicable corpus generator that creates a domain-specific knowledge resource by crawling Wikipedia.
• Automated Handling of Anaphoric Ambiguity in Requirements: A Multi-solution Study: Anaphoric ambiguity is another prevalent ambiguity type in requirements. Estimates from the RE literature suggest that nearly 20% of industrial requirements contain anaphora [1, 2]. We conducted a multi-solution study of anaphoric ambiguity handling. Our study investigates six alternative solutions based on three different technologies: (i) off-the-shelf natural language processing (NLP), (ii) recent NLP methods utilizing language models, and (iii) machine learning (ML).
• AI-based Question Answering Assistant for Analyzing NL Requirements: Understanding NL requirements requires domain knowledge that is not necessarily shared by all the involved stakeholders. We develop an automated question-answering assistant that supports requirements engineers during requirements inspections and quality assurance. Our solution uses advanced information retrieval techniques and machine reading comprehension models to answer questions using the requirements specification document itself and/or an external domain-specific knowledge resource.
All the research components in this dissertation are tool-supported. Our tools are released under open-source licenses to encourage replication and reuse.

Kolla, Sri Sudha Vijay Keshav. Doctoral thesis (2022)

In the last decade, the manufacturing industry has seen a shift in the way products are produced due to the integration of digital technologies into existing manufacturing systems. This transformation is often referred to as Industry 4.0 (I4.0), which promises to deliver cost efficiency, mass customization, operational agility, traceability, and service orientation.
To realize the potential of I4.0, the integration of physical and digital elements using advanced technologies is a prerequisite. Large manufacturing companies have been embracing the I4.0 transformation swiftly. However, small and medium-sized enterprises (SMEs) face challenges in terms of the skills and capital required for a smooth digital transformation. The goal of this thesis is to understand the features of a typical manufacturing SME and map them onto existing (e.g., lean) and I4.0 manufacturing systems. The mapping is then used to develop a Self-Assessment Tool (SAT) to measure the maturity of a manufacturing entity. The SAT developed in this research has a critical SME focus; however, its scope is not limited to SMEs, and it can be used for large companies. The analysis of the maturity of manufacturing companies revealed that the managerial dimensions of the companies are more mature than the technical dimensions. Therefore, this thesis attempts to fill the gap in technical dimensions, especially Augmented Reality (AR) and the Industrial Internet of Things (IIoT), through laboratory experiments and industrial validation. A holistic method is proposed to introduce I4.0 technologies in manufacturing enterprises based on maturity assessment, observations, a technical road map, and applications. The method proposed in this research includes the SAT, which measures the maturity of a manufacturing company in five categorical domains (dimensions): Strategy, Process and Value Stream, Organization, Methods and Tools, and Personnel. These dimensions are further divided into 36 modules, which help manufacturing companies measure their maturity level in terms of lean and I4.0. The SAT was tested in 100 manufacturing enterprises in the Grande Région, consisting of a pilot study (n=20) and a maturity assessment (n=63). The observations from the assessment are then used to set up the technological road map for the research.
AR and IIoT are the two technologies associated with the least mature modules, and they are explored in depth in this thesis. A holistic method is incomplete without industrial validation. Therefore, the above-mentioned technologies are applied in two manufacturing companies for further validation of the laboratory results. These applications include (1) the application of AR for maintenance and quality inspection in the tire manufacturing industry, and (2) the application of retrofitting technology for IIoT on a production machine in an SME. With the validated assessment model and the industrial applications, this thesis presents a holistic approach to introducing I4.0 technologies in manufacturing enterprises: identifying the status of a company using maturity assessment and deriving the I4.0 roadmap for high-potential modules. The skill gap in the addressed technologies is compensated for by designing and testing prototypes in the laboratory before applying them in industry.

Barthel, Jim Jean-Pierre. Doctoral thesis (2022)

This thesis reports on four independent projects that lie at the intersection of mathematics, computer science, and cryptology:
Simultaneous Chinese Remaindering: The classical Chinese Remainder Problem asks to find all integer solutions to a given system of congruences, where each congruence is defined by one modulus and one remainder. The Simultaneous Chinese Remainder Problem is a direct generalization of its classical counterpart, where for each modulus the single remainder is replaced by a non-empty set of remainders.
The solutions of a Simultaneous Chinese Remainder Problem instance are completely defined by a set of minimal positive solutions, called primitive solutions, which are upper-bounded by the least common multiple of the considered moduli. However, contrary to its classical counterpart, which has at most one primitive solution, the Simultaneous Chinese Remainder Problem may have an exponential number of primitive solutions, so that any general-purpose solving algorithm requires exponential time. Furthermore, through a direct reduction from the 3-SAT problem, we prove first that deciding whether a solution exists is NP-complete, and second that if the existence of solutions is guaranteed, then deciding whether a solution of a particular size exists is also NP-complete. Despite these discouraging results, we studied methods to find the minimal solution of Simultaneous Chinese Remainder Problem instances, and we discovered some interesting statistical properties.
A Conjecture On Primes In Arithmetic Progressions And Geometric Intervals: Dirichlet’s theorem on primes in arithmetic progressions states that for any positive integer q and any coprime integer a, there are infinitely many primes in the arithmetic progression a + nq (n ∈ N); however, it does not indicate where those primes can be found. Linnik’s theorem predicts that the first such prime p0 can be found in the interval [0; q^L], where L denotes an absolute and explicitly computable constant. Although only L = 5 has been proven, it is widely believed that L ≤ 2. We generalize Linnik’s theorem by conjecturing that for any integers q ≥ 2, 1 ≤ a ≤ q − 1 with gcd(q, a) = 1, and t ≥ 1, there exists a prime p such that p ∈ [q^t; q^(t+1)] and p ≡ a mod q. Subsequently, we prove the conjecture for all sufficiently large exponents t, we computationally verify it for all sufficiently small moduli q, and we investigate its relation to other mathematical results, such as Carmichael’s totient function conjecture.
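For concreteness, the Simultaneous Chinese Remainder Problem described above admits a brute-force solver over one period of the system. A small illustrative sketch (the toy instance is invented here, not taken from the thesis):

```python
from math import lcm

def primitive_solutions(moduli, remainder_sets):
    """All primitive solutions of a Simultaneous Chinese Remainder
    instance, found by brute force over [0, lcm(moduli)).

    Each congruence accepts any remainder from its non-empty set; the
    number of solutions can grow exponentially in the number of
    congruences, which is why brute force is essentially unavoidable
    in general.
    """
    bound = lcm(*moduli)
    return [x for x in range(bound)
            if all(x % m in rs for m, rs in zip(moduli, remainder_sets))]

# Toy instance: x ≡ 1 or 3 (mod 4), and x ≡ 2 or 5 (mod 6).
print(primitive_solutions([4, 6], [{1, 3}, {2, 5}]))  # → [5, 11]
```

With a single remainder per modulus this collapses to the classical Chinese Remainder Theorem, which has at most one primitive solution.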
On The (M)iNTRU Assumption Over Finite Rings: The inhomogeneous NTRU (iNTRU) assumption is a recent computational hardness assumption, which claims that first adding a random low-norm error vector to a known gadget vector and then multiplying the result by a secret vector is sufficient to obfuscate the secret vector. The matrix inhomogeneous NTRU (MiNTRU) assumption essentially replaces vectors with matrices. Although these assumptions are strongly reminiscent of the well-known learning-with-errors (LWE) assumption, their hardness has not yet been studied in full detail. We provide an elementary analysis of the corresponding decision assumptions and break them in their base case using an elementary q-ary lattice reduction attack. Concretely, we restrict our study to vectors over finite integer rings, which leads to a problem that we call (M)iNTRU. Starting from a challenge vector, we construct a particular q-ary lattice that contains an unusually short vector whenever the challenge vector follows the (M)iNTRU distribution. Thereby, elementary lattice reduction allows us to distinguish a random challenge vector from a synthetically constructed one.
A Conditional Attack Against Functional Encryption Schemes: Functional encryption emerged as an ambitious cryptographic paradigm supporting function evaluations over encrypted data that reveal the result in the clear. The result consists either of a valid output or of a special error symbol. We develop a conditional selective chosen-plaintext attack against the indistinguishability security notion of functional encryption. Intuitively, indistinguishability in the public-key setting is based on the premise that no adversary can distinguish between the encryptions of two known plaintext messages. As functional encryption allows the evaluation of functions over encrypted messages, the adversary is restricted to evaluations resulting in the same output only.
To ensure consistency with other primitives, the decryption procedure of a functional encryption scheme is allowed to fail and output an error. We observe that an adversary may exploit the special role of these errors to craft challenge messages that can be used to win the indistinguishability game. Indeed, the adversary can choose the messages such that their functional evaluation leads to the common error symbol, while their intermediate computation values differ. A formal decomposition of the underlying functionality into a mathematical function and an error trigger reveals this dichotomy. Finally, we outline the impact of this observation on multiple DDH-based inner-product functional encryption schemes when restricted to bounded-norm evaluations only.

Pascoal, Túlio. Doctoral thesis (2022)

Understanding the interplay between genomics and human health is a crucial step for the advancement and development of our society. The Genome-Wide Association Study (GWAS) is one of the most popular methods for discovering correlations between genomic variations and a particular phenotype (i.e., an observable trait such as a disease). Leveraging genome data from multiple institutions worldwide is nowadays essential to produce more powerful findings by operating GWAS at a larger scale. However, this raises several security and privacy risks, not only in the computation of such statistics, but also in the public release of GWAS results. To that end, several solutions in the literature have adopted cryptographic approaches to allow secure and privacy-preserving processing of genome data for federated analysis.
However, conducting federated GWAS in a secure and privacy-preserving manner is not enough, since the public releases of GWAS results may still be vulnerable to known genomic privacy attacks, such as recovery and membership attacks. The present thesis explores solutions to enable end-to-end privacy-preserving federated GWAS, in line with data privacy regulations such as the GDPR: securing public releases of Genome-Wide Association Studies (GWASes) that (i) are dynamically updated as new genomes become available, (ii) might overlap in their genomes and in the considered genomic locations, (iii) withstand internal threats such as colluding members of the federation, and (iv) are computed in a distributed manner without shipping actual genome data. In pursuing these goals, this work created the several contributions described below. First, the thesis proposes DyPS, a Trusted Execution Environment (TEE)-based framework that reconciles efficient and secure genome data outsourcing with privacy-preserving data processing inside TEE enclaves to assess and create private releases of dynamic GWAS. In particular, DyPS states the conditions for the creation of safe dynamic releases, certifying that the theoretical complexity of the solution space that an external probabilistic polynomial-time (p.p.t.) adversary or a group of colluders (up to all-but-one parties) would need to explore when launching recovery attacks on the observation of GWAS statistics is large enough. Besides that, DyPS executes an exhaustive verification algorithm along with a likelihood-ratio test to measure the probability of identifying individuals in studies, thus also protecting individuals against membership inference attacks. Only the safe genome data (i.e., genomes and SNPs) that DyPS selects is further used for the computation and release of GWAS results, while the remaining (unsafe) data is kept secluded and protected inside the enclave until it can eventually be used.
Our results show that if dynamic releases are not properly evaluated, up to 8% of genomes could be exposed to genomic privacy attacks. Moreover, the experiments show that DyPS’ TEE-based architecture can accommodate the computational resources demanded by our algorithms and presents practical running times for larger-scale GWAS. Secondly, the thesis offers I-GWAS, which identifies the new conditions for safe releases when considering the existence of overlapping data among multiple GWASes (e.g., the same individuals participating in several studies). Indeed, it is shown that adversaries might leverage information from overlapping data to make both recovery and membership attacks feasible again (even if the releases are produced following the conditions for safe single-GWAS releases). Our experiments show that up to 28.6% of the genetic variants of participants could be inferred during recovery attacks, and that 92.3% of these variants would enable membership attacks by adversaries observing overlapping studies; such releases are withheld by I-GWAS. Lastly, the thesis presents GenDPR, which extends our protocols so that the privacy-verification algorithms can be conducted distributively among the federation members, without requiring the outsourcing of genome data across boundaries. Further, GenDPR can cope with collusion among participants while selecting the genome data that can be used to create safe releases. Additionally, GenDPR produces the same privacy guarantees as centralized architectures, i.e., it correctly identifies and selects the same data in need of protection as centralized approaches. In the end, the thesis presents a homogenized framework comprising DyPS, I-GWAS, and GenDPR simultaneously, thus offering a usable approach for conducting practical GWAS.
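The membership-inference risk screened for above can be illustrated with the textbook log-likelihood-ratio statistic over pooled allele frequencies. This is a simplified haploid sketch with invented toy numbers, not the thesis's actual test machinery:

```python
import math

def lr_membership(genome, pool_freqs, ref_freqs):
    """Log-likelihood ratio that a 0/1 genome belongs to the pool whose
    per-SNP allele frequencies are pool_freqs, versus a reference
    population with frequencies ref_freqs. Positive favours membership."""
    return sum(x * math.log(ph / p) + (1 - x) * math.log((1 - ph) / (1 - p))
               for x, ph, p in zip(genome, pool_freqs, ref_freqs))

# Invented toy data: the pool frequencies are slightly shifted toward
# the target genome, as they would be if that genome were in the pool.
genome = [1, 0, 1, 1, 0]
ref    = [0.5, 0.5, 0.5, 0.5, 0.5]
pool   = [0.6, 0.4, 0.6, 0.6, 0.4]
print(lr_membership(genome, pool, ref) > 0)  # → True
```

A release-screening policy of the kind described would withhold statistics for which this score exceeds a threshold for some candidate genome.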
The method chosen for protection is of a statistical nature: it ensures that the theoretical complexity of attacks remains high and withholds releases of statistics that would impose membership inference risks on participants, using likelihood-ratio tests, even as adversaries gain additional information over time. The thesis also relates these findings to other techniques that can be leveraged to protect releases (such as Differential Privacy). The proposed solutions leverage Intel SGX as the Trusted Execution Environment to perform selected critical operations in a performant manner; however, the work translates equally well to other trusted execution environments and other schemes, such as Homomorphic Encryption.

Minoungou, Wendkouni Nadège. Doctoral thesis (2022)

Hepatocellular carcinoma (HCC), the main form of primary liver cancer, is the second leading cause of cancer-related deaths worldwide after lung cancer. Multiple aetiologies have been associated with the development of HCC, which arises in most cases in the context of a chronically inflamed liver. HCC is in fact an inflammation-driven cancer, with the TNF and IL6 families of cytokines playing key roles in maintaining a chronic inflammatory state and promoting hepatocarcinogenesis. IL6 signals mainly through the JAK1/STAT3 signal transduction pathway and is known to play key roles in liver physiology and disease. In the interest of identifying novel players and downstream effectors of the IL6/JAK1/STAT3 signalling pathway that may contribute to the signal transduction of IL6 in liver-derived cells, we have been investigating the expression of long non-coding RNAs (lncRNAs) in response to treatment with the designer cytokine Hyper-IL6.
Indeed, lncRNAs have recently emerged as a key layer of biological regulation and have been shown to be differentially expressed in cancer, including HCC. Through the analysis of time-series transcriptomics data, we have identified hundreds of lncRNAs that are differentially expressed in HepG2, HuH7, and Hep3B hepatoma cells upon cytokine stimulation, 26 of which are common to the three cell lines tested. qPCR validation experiments have been performed for several lncRNAs, such as the liver-specific lncRNA linc-ELL2. By functionally characterising the identified clusters of IL6-regulated coding and non-coding genes in hepatoma cells, we propose, based on a guilt-by-association hypothesis, novel functions for previously poorly characterized lncRNAs and pseudogenes such as AL391422.4 or TUBA5P. Several lncRNA genes seem to be co-regulated with a protein-coding gene localized in their vicinity. For example, Hyper-IL6 increases the mRNA and protein levels of XBP1, a well-known regulator of the unfolded protein response; at the same time, the expression of the lncRNA AF086143, which is expressed bidirectionally from the same gene locus, increases. Targeted as well as genome-wide analyses of lncRNA/mRNA gene pairs indicate a possible cis-regulatory role of lncRNAs with regard to their antisense and bidirectional protein-coding counterparts. Taken together, these results provide a comprehensive characterisation of the lncRNA and pseudogene repertoire of IL6-regulated genes in hepatoma cells. Our results emphasize lncRNAs as crucial components of the gene regulatory networks affected by cytokine signalling pathways.

Malyeyev, Artem. Doctoral thesis (2022)
The present PhD thesis is devoted to developing the use of the magnetic small-angle neutron scattering (SANS) technique for analyzing the magnetic microstructures of magnetic materials. The emphasis is on three aspects: (i) the analytical development of the magnetic Guinier law; (ii) the application of the magnetic Guinier law and of the generalized Guinier-Porod model to the analysis of experimental neutron data on various magnets, such as a Nd-Fe-B nanocomposite, nanocrystalline cobalt, and Mn-Bi rare-earth-free permanent magnets; and (iii) the development of the theory of uniaxial neutron polarization analysis and its experimental testing on a soft magnetic nanocrystalline alloy. The conventional “nonmagnetic” Guinier law represents the low-q approximation for the small-angle scattering curve from an assembly of particles. It was derived for nonmagnetic particle-matrix-type systems and is routinely employed for the estimation of particle sizes in, e.g., soft-matter physics, biology, colloidal chemistry, and materials science. Here, the Guinier law is extended to magnetic SANS through the introduction of the magnetic Guinier radius, which depends on the applied magnetic field, on the magnetic interactions (exchange constant, saturation magnetization), and on the magnetic anisotropy-field radius. The latter quantity characterizes the size over which the magnetic anisotropy field is coherently aligned in the same direction. In contrast to the conventional Guinier law, the magnetic version can be applied to fully dense random-anisotropy-type ferromagnets. The range of applicability is discussed and the validity of the approach is experimentally demonstrated on a Nd-Fe-B-based ternary permanent magnet and on a nanocrystalline cobalt sample. Rare-earth-free permanent magnets in general, and Mn-Bi-based ones in particular, have received a lot of attention lately due to their application potential in electronic devices and electric motors.
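For orientation, the conventional Guinier law referred to above can be written in its standard, nonmagnetic form (the field-dependent magnetic generalization is the subject of the thesis itself):

```latex
% Conventional (nonmagnetic) Guinier approximation, valid at low q
I(q) \simeq I(0)\,\exp\!\left(-\frac{q^{2} R_{G}^{2}}{3}\right),
\qquad q\,R_{G} \lesssim 1 ,
```

where $I(q)$ is the azimuthally averaged scattering intensity and $R_G$ the radius of gyration of the particles; fitting $\ln I(q)$ against $q^{2}$ at low $q$ yields the particle size. In the magnetic version described above, $R_G$ is replaced by a magnetic Guinier radius that depends on the applied field and on the exchange and anisotropy properties of the material.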
Mn-Bi samples with three different alloy compositions were studied by means of unpolarized SANS and very small-angle neutron scattering (VSANS). It turns out that the magnetic scattering of the Mn-Bi samples is determined by long-wavelength transversal magnetization fluctuations. The neutron data are analyzed in terms of the generalized Guinier-Porod model and the distance distribution function. The results for the so-called dimensionality parameter obtained from the Guinier-Porod model indicate that the magnetic scattering of a Mn$_{45}$Bi$_{55}$ specimen has its origin in slightly shape-anisotropic structures, and the same conclusion is drawn from the distance distribution function analysis. Finally, based on Brown’s static equations of micromagnetics and the related theory of magnetic SANS, the uniaxial polarization of the scattered neutron beam of a bulk magnetic material is computed. The theoretical expressions are tested against experimental data on a soft magnetic nanocrystalline alloy, and both qualitative and quantitative correspondence is discussed. The rigorous analysis of the polarization of the scattered neutron beam establishes the framework for emerging polarized real-space techniques such as spin-echo small-angle neutron scattering (SESANS), spin-echo modulated small-angle neutron scattering (SEMSANS), and polarized neutron dark-field contrast imaging (DFI), and opens up a new avenue for magnetic neutron data analysis on nanoscale systems.

Noesen, Melanie. Doctoral thesis (2022)
This dissertation examines portfolio work in an inclusion-oriented primary school in the context of the Luxembourgish education reform, in the field of tension between the standardisation of competences and an orientation towards inclusion. The portfolio was conceived as an important instrument for implementing a competence model through the reform of performance assessment (Winter et al. 2012; MENFP 2011) and was intended to serve not only the children's learning but also the development of teaching (ibid.). The thesis therefore investigates how learning takes shape within portfolio work in a second cycle, among children at the beginning of written-language acquisition, in an inclusion-oriented Luxembourgish primary school in the context of the education reform. More specifically, the study examines how the portfolio is designed and used by the children, the teachers, and the parents. A description of the functions of portfolio work in the learning group under study is followed by an assessment of the relationship between learning (more specifically, also language learning) and portfolio work, taking into account the teachers' representations of their own practice against the background of the Luxembourgish primary school reform.

Kemp, Francoise. Doctoral thesis (2022)

Systems biology is an interdisciplinary approach that investigates complex biological systems at different levels by combining experimental and modelling approaches to understand the underlying mechanisms of health and disease. Complex systems, including biological systems, are shaped by a plethora of interactions and dynamic processes, often with the aim of ensuring the robustness of emergent system properties.
The need for interdisciplinary approaches became very evident in the COVID-19 pandemic that has spread around the globe since the end of 2019. This pandemic came with a bundle of urgent open epidemiological questions, including the infection and transmission mechanisms of the virus, its pathogenicity, and its relation to clinical symptoms. During the pandemic, mathematical modelling became an essential tool for integrating biological and healthcare data into mechanistic frameworks for projections of future developments and the assessment of different mitigation strategies. In this regard, systems biology, with its interdisciplinary approach, was a widely applied framework to support society in the COVID-19 crisis. In my thesis, I applied different mathematical modelling approaches as tools to identify underlying mechanisms of the complex dynamics of the COVID-19 pandemic, with a specific focus on the situation in Luxembourg. For this purpose, I analysed the COVID-19 pandemic in its different phases and from various perspectives, investigating mitigation strategies, consequences for the healthcare and economic systems, and pandemic preparedness in terms of early-warning signals for the re-emergence of new COVID-19 outbreaks, using extended and adapted epidemiological Susceptible-Exposed-Infectious-Recovered (SEIR) models.

Mbodj, Natago Guilé. Doctoral thesis (2022)

Metal Additive Manufacturing (MAM) offers many advantages, such as fast product manufacturing, nearly zero material waste, the prototyping of complex large parts, and the automatization of the manufacturing process in the aerospace, automotive, and other sectors.
In MAM, several parameters influence the product creation steps, which makes the process challenging. In this thesis, we model and control the deposition process for a type of MAM in which a laser beam is used to melt a metallic wire to create metal parts, called the Laser Wire Additive Manufacturing (LWAM) process. First, a novel parametric modeling approach is created. The goal of this approach is to use parametric product design features to simulate and print 3D metallic objects for LWAM. The proposed method includes the creation of a pattern and of the robot toolpath while considering several process requirements of LWAM, such as the deposition sequences and the robot system. This technique aims to develop adaptive robot toolpaths for a precise deposition process with nearly zero error in product creation. Second, a model predicting layer geometry (width and height) is proposed to improve deposition accuracy. A machine learning regression algorithm is applied to several experimental data sets to predict the bead geometry across layers. Furthermore, a neural-network-based approach is used to study the influence of different deposition parameters, namely laser power, wire-feed rate, and travel speed, on bead geometry. The experimental results show that the model has an error rate of about 2-4%. Third, a physics-based model of the bead geometry, including known process parameters and material properties, is created. The model, developed here for the first time, includes critical process parameters, the material properties, and the thermal history to describe the relationship between the layer height and different process inputs (the power, the standoff distance, the temperature, the wire-feed rate, and the travel speed). The numerical results show good agreement between the model and the experimental measurements.
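As a rough illustration of the kind of data-driven bead-geometry prediction described above, the sketch below fits an ordinary least-squares model mapping laser power, wire-feed rate and travel speed to bead width. All numbers, units and the linear relation are hypothetical placeholders, not values from the thesis (which uses machine learning and neural-network models on real experimental data).

```python
# Minimal sketch: predicting bead width from process parameters with
# ordinary least squares (pure stdlib; data and coefficients are made up).

def fit_ols(X, y):
    """Solve the normal equations (X^T X) b = X^T y by Gaussian elimination."""
    n = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    for col in range(n):                      # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef

# Hypothetical training grid: (laser power [W], wire-feed rate [m/min],
# travel speed [mm/s]) -> bead width [mm], generated from a known relation.
rows = [(p, f, s) for p in (1500, 2000, 2500) for f in (2.0, 3.0) for s in (5.0, 8.0)]
true_width = lambda p, f, s: 0.5 + 0.001 * p + 0.4 * f - 0.15 * s
X = [[1.0, p, f, s] for p, f, s in rows]
y = [true_width(p, f, s) for p, f, s in rows]

coef = fit_ols(X, y)
# Predict the width for an unseen parameter combination.
pred = sum(c * v for c, v in zip(coef, [1.0, 1800.0, 2.5, 6.0]))
print(round(pred, 3))
```

Since the synthetic data are exactly linear, the estimator recovers the generating coefficients; on real deposition data one would instead report a validation error, as the thesis does.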
Finally, a Model Predictive Controller (MPC) is designed to keep the layer-height trajectory constant, considering the constraints and the operating ranges of the process inputs. The model simulation results show acceptable tracking of the reference height.

Iglesias González, Alba. Doctoral thesis (2022)

The last century has been characterized by the increasing presence of synthetic chemicals in human surroundings and, as a consequence, the increasing exposure of individuals to a wide variety of chemical substances on a regular basis. The Lancet Commission on Pollution and Health estimated that since synthetic chemicals became available for common use at the end of the 1940s, more than 140,000 new chemicals have been produced, including five thousand used globally in massive volumes. In parallel, awareness of the adverse effects of pollutant mixtures, possibly more severe than those of single-chemical exposures, has drawn attention to the need for multi-residue analytical methods that provide the most comprehensive information on the human chemical exposome. Human biomonitoring, consisting of the measurement of pollutants in biological matrices, provides information that integrates all possible sources of exposure and is specific to the subject from whom the sample is collected. For this purpose, hair appears to be a particularly promising matrix for assessing chemical exposure, thanks to its multiple benefits: hair enables the detection of both parent chemicals and metabolites, it is suitable for investigating exposure to chemicals from different families, and it allows the detection of both persistent and non-persistent chemicals.
Moreover, contrary to fluids such as urine and blood, which only give information on short-term exposure and show great variability in chemical concentration, hair is representative of wider time windows that can easily cover several months. Children represent the most vulnerable part of the population, and exposure to pollutants at a young age has been associated with severe health effects during childhood, but also during adult life. Nevertheless, most epidemiological studies investigating exposure to pollutants are still conducted on adults, and data on children remain much more limited. The present study, named “Biomonitoring of children exposure to pollutants based on hair analysis”, investigated the relevance of hair analysis for assessing children's exposure to pollutants. In this study, 823 hair samples were collected from children and adolescents living in 9 different countries (Luxembourg, France, Spain, Uganda, Indonesia, Ecuador, Suriname, Paraguay and Uruguay), and 117 hair samples were also collected from French adults. All samples were analysed for the detection of 153 organic compounds (140 pesticides, 4 PCBs, 7 BDEs and 2 bisphenols). Moreover, the hair samples of French adults and children were also analysed for the detection of polycyclic aromatic hydrocarbons (PAHs) and their metabolites (n = 62), nicotine, cotinine and metals (n = 36). The results clearly demonstrated that children living in different geographical areas are simultaneously exposed to multiple chemicals from different chemical classes. Furthermore, the presence of persistent organic pollutants in all children, and not only in adults, suggests that exposure to these chemicals is still ongoing, although they were banned decades ago.
In the sub-group of Luxembourgish children, information collected through questionnaires in parallel with hair sample collection made it possible to identify some possible determinants of exposure, such as diet (organic vs conventional), residence area (urban vs countryside), and the presence of pets at home. Moreover, the results showed higher concentrations in younger children, and higher exposure of boys than girls to non-persistent pesticides, which could possibly be attributed to differences in metabolism, behaviour and gender-specific activities. Finally, the study also highlighted a high level of similarity in the chemical exposome between children from the same family, compared to the rest of the population. The present study strongly supports the use of hair analysis for assessing exposure to chemical pollutants, and demonstrates the relevance of multi-residue methods for investigating the exposome.

Epping, Elisabeth. Doctoral thesis (2022)

This thesis investigates and explains the development and institutionalisation of Science and Innovation Centres (SICs) as distinct instruments of science diplomacy. SICs are a unique and underexplored instrument in the science diplomacy toolbox, and they are increasingly being adopted by highly innovative countries. This research responds to a growing interest in the field. Science diplomacy is commonly understood as a distinct governmental approach that mobilises science for wider foreign policy goals, such as improving international relations. However, science diplomacy discourse is characterised by a weak empirical basis and driven by normative perspectives.
This research responds to these shortcomings and aims to lift the smokescreen of science diplomacy by providing insight into its governance, while also establishing a distinctly actor-centred perspective. To this end, two distinct SICs, Germany’s Deutsche Wissenschafts- und Innovationshäuser (DWIH) and Switzerland’s Swissnex, are closely analysed in an original comparative and longitudinal study. While SICs are just one instrument in the governmental toolbox for promoting international collaboration and competition, they are distinct due to their holistic set-up and their role as a nucleus for the wider research and innovation system they represent. Moreover, SICs appear to have the potential to create a significant impact despite their limited financial resources. This thesis takes a historical development perspective to outline how these two SICs were designed, as well as their gradual development and institutionalisation. The thesis further probes why actors participate in SICs by unpacking their differing rationales, developing a distinctly actor-centred perspective on science diplomacy. The study was designed in an inductive and exploratory way to account for the novelty of the topic; the research findings are based on the analysis of 41 interviews and a substantial collection of documents. The study finds evidence that SICs developed as a response to wider societal trends, although these trends differed between the two case studies. Moreover, the development of SICs has been characterised by aspects such as timing, contingency and critical junctures. SICs are inextricably connected to their national contexts and mirror distinct system characteristics, such as governance arrangements or the degree of actor involvement. These aspects also explain the exact shape that SICs take. Furthermore, this study finds evidence of an appropriation of SICs by key actors, in line with their organisational interests.
In the case of the DWIH, this impacted and even limited its (potential) design and ways of operating. However, the analysis of SICs’ appropriation also revealed a distinct sense of collectivity, which developed among actors in the national research and innovation ecosystem thanks to this joint instrument. The research findings reaffirm that science diplomacy is clearly driven by national interests, while further highlighting that the notion of science diplomacy and its governance (actors, rationales and instruments) can only be fully understood by analysing the national context.

Garino, Valentin. Doctoral thesis (2022)

We use recent tools from stochastic analysis (such as Stein's method and Malliavin calculus) to study the asymptotic behaviour of some functionals of a Gaussian field.

Karta, Jessica. Doctoral thesis (2022)

Dysbiosis is an imbalance in the gut microbiome that is often associated with inflammation and cancer. Several microbial species, such as Fusobacterium nucleatum, have been suggested to be involved in colorectal cancer (CRC). To date, most studies have focused on the interaction between CRC-associated bacteria and tumor cells. However, the tumor microenvironment (TME) is composed of various types of cells, among which are cancer-associated fibroblasts (CAFs), one of the most vital players in the TME. The interaction between CRC-associated bacteria and CAFs, and especially the impact of their cross-talk on tumor cells, remains largely unknown.
In this regard, this thesis investigated the interaction between a well-described and accepted CRC-associated bacterium, Fusobacterium nucleatum, and CAFs, and the subsequent effects on tumor progression in CRC. Our findings show that F. nucleatum binds to CAFs and induces phenotypic changes. F. nucleatum prompts CAFs to secrete several pro-inflammatory cytokines and membrane-associated proteases. Upon exposure to F. nucleatum, CAFs also undergo metabolic rewiring, with higher mitochondrial ROS and lactate secretion. Importantly, F. nucleatum-treated CAFs increase the migration ability of tumor cells in vitro through secreted cytokines, among which CXCL1. Furthermore, the co-injection of F. nucleatum-treated CAFs with tumor cells in vivo leads to faster tumor growth compared to the co-injection of untreated CAFs with tumor cells. Taken together, our results show that CAFs are an important player in the gut microbiome-CRC axis. Targeting the CAF-microbiome crosstalk might represent a novel therapeutic strategy for CRC.

Torelló Massana, Àlvar. Doctoral thesis (2022)

The following work investigates the development of heat pumps that exploit electrocaloric effects in Pb(Sc,Ta)O3 (PST) multilayer capacitors (MLCs). The electrocaloric effect refers to reversible thermal changes in a material upon application (and removal) of an electric field. Electrocaloric cooling is interesting because 1) it has the potential to be more efficient than competing technologies, such as vapour-compression systems, and 2) it does not require the use of greenhouse gases, which is crucial for slowing down global warming and mitigating the effects of climate change.
Continuous progress in the field of electrocalorics has led to several electrocaloric-based heat pump prototypes. Despite the different designs and working principles employed, these prototypes have struggled to sustain temperature spans as large as 10 K, discouraging their industrial development. In this work, bespoke PST-MLCs exhibiting large electrocaloric effects near room temperature were incorporated into a novel heat pump with the aim of surpassing the 10 K barrier. The experimental design of the heat pump was based on the outcome of a numerical model. After implementing some of the modifications suggested by the latter, consistent temperature spans of 13 K at 30 °C were reported, with cooling powers of 12 W/kg. Additional simulations predicted temperature spans as large as 50 K and cooling powers on the order of 1000 W/kg if a further set of plausible modifications were put in place. Similarly, these same PST-MLC samples were implemented in pyroelectric harvesters, revisiting Olsen's pioneering work from 1980. The harvested energies were found to be as large as 11.2 J, with energy densities reaching up to 4.4 J/cm3 of active material, when undergoing temperature oscillations of 100 K under applied electric fields of 140-200 kV/cm. These figures are, respectively, two and four times larger than the best values reported in the literature. The results obtained in this dissertation go beyond the state of the art and show that 1) electrocaloric heat pumps can indeed achieve temperature spans larger than 10 K, and 2) pyroelectric harvesters can generate electrical energy in the Joule range. Moreover, the numerical models indicate that there is still room for improvement, especially regarding the power of these devices. This should encourage the development of such electrocaloric- and pyroelectric-based applications in the near future.
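As a quick consistency check on the harvesting figures quoted above, using only the numbers as reported (so this is illustrative arithmetic, not data from the thesis): an energy of 11.2 J at a density of 4.4 J/cm3 implies roughly 2.5 cm3 of active material.

```python
# Illustrative arithmetic on the pyroelectric-harvesting figures quoted above.
harvested_energy_J = 11.2        # total harvested energy reported
energy_density_J_per_cm3 = 4.4   # energy per cm^3 of active material reported

# Implied volume of active PST material (energy / energy density).
active_volume_cm3 = harvested_energy_J / energy_density_J_per_cm3
print(round(active_volume_cm3, 2))  # ~2.55 cm^3 implied
```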
Salmeri, Antonino. Doctoral thesis (2022)

Few contest that space mining holds the potential to revolutionize the space sector. The utilization of space resources can reduce the costs of deep space exploration and kick off an entirely new economy in our solar system. However, whether such a revolution happens for better or for worse also depends on the enactment of appropriate regulation. Under the right framework, space mining will be able to deliver on its promise of a new era of prosperous and sustainable space exploration. But with the wrong rules (or the lack thereof), unbalanced space resource activities could destabilize the space community on a truly unprecedented scale. With companies already planning mining operations on the Moon during this decade, the regulation of space resource activities has thus become one of the most pressing and crucial topics to be addressed by the global space community. In this context, this thesis provides a first-of-its-kind, comprehensive and innovative analysis of the regulatory and enforcement options currently shaping the multi-level governance of space mining. In addition, the thesis suggests a series of correctives that can improve the system and ensure the peaceful, rational, safe, and sustainable conduct of space mining. Structurally, the thesis moves from the general to the particular and is divided into three chapters. Chapter 1 discusses the relationship between space law and international law to contextualize the specific assessment of space mining. Chapter 2 analyses the current regulatory framework applicable to space mining, considering both the international and national levels.
Finally, Chapter 3 identifies potential enforcement options, assesses them in terms of effectiveness and legitimacy, and proposes some pragmatic correctives to reinforce the governance system.

Gini, Agnese. Doctoral thesis (2022)

Sobon-Mühlenbrock, Elena Katarzyna. Doctoral thesis (2022)

The European Union has been striving to become the first climate-neutral continent by 2050. This implies an intensified transition towards sustainability. The most widely used renewable energy sources are the sun and wind, which are intermittent. Thus, large fluctuating shares in the energy network are expected within the next years. Consequently, periods may occur in which energy demand and energy supply do not match, destabilizing the electricity grid. There is therefore an urgent need to overcome this intermittency. One feasible option is to use a third renewable energy source, biomass, which can be produced in a demand-oriented manner. Hence, a flexible biogas plant running in a two-stage mode, where the first stage serves as a storage for liquid intermediates, could be a viable option for creating demand-driven and need-oriented electricity. Since vast amounts of food waste are thrown away each year (in 2015 they amounted to 88 million tonnes within the EU-28, accounting for ca. 93 TWh of energy), this substrate could be recovered energetically in the above-described process. This is a promising concept; however, it is not yet widely applied, as it faces many challenges, both technical and economic. Additionally, food waste is inhomogeneous, and its composition depends on the country and the collection season.
The motivation of this work was to contribute to a broader understanding of the two-stage anaerobic digestion process using food waste as the major substrate. At first, an innovative substitute for heterogeneous food waste was introduced and examined at two different loadings and temperature modes. It was shown that the Model Kitchen Waste (MKW) was comparable to real Kitchen Waste (KW) in both mesophilic and thermophilic mode for an organic loading in accordance with the guideline VDI 4630 (2016). For an “extreme” loading in mesophilic mode, the MKW also generated similar biogas, methane, and volatile fatty acid (VFA) patterns. Furthermore, two additional MKW versions were developed, covering a variety of different organic wastes and allowing the impact of fat content on biogas production to be analyzed. Afterwards, a semi-continuous one-stage experiment of 122 days was conducted. It was followed by an extensive semi-continuous two-stage study with a runtime of almost 1.5 years. Different loadings and hydraulic retention times were investigated in order to optimize this challenging process. Additionally, the impact of co-digestion of a lignocellulosic substrate was analyzed. It was concluded that the two-stage mode led to higher biogas and methane yields than the one-stage mode. However, the former posed challenges related to stability and process maintenance. Additionally, it was found that co-digestion of food waste and maize silage results in a methane yield that is atypical for the acidic stage. Apart from the experiments, the Anaerobic Digestion Model No. 1 (ADM1), originally developed for wastewater, was modified to suit the anaerobic digestion of food waste of different fat contents, in batch and semi-continuous mode with one and two stages. The goodness of fit was assessed by the Normalized Root Mean Square Error (NRMSE) and the coefficient of efficiency (CE).
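The two goodness-of-fit measures mentioned above can be stated compactly. The sketch below assumes the NRMSE is normalised by the range of the observations (normalisation conventions vary) and takes CE as the Nash-Sutcliffe coefficient of efficiency; the yield values are hypothetical, not data from the thesis.

```python
from math import sqrt

# Goodness-of-fit sketch: NRMSE (here normalised by the observed range;
# conventions differ) and the Nash-Sutcliffe coefficient of efficiency (CE).

def nrmse(observed, simulated):
    n = len(observed)
    rmse = sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)
    return rmse / (max(observed) - min(observed))

def nash_sutcliffe_ce(observed, simulated):
    mean_o = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot  # 1 = perfect fit; <= 0 = no better than the mean

# Hypothetical cumulative biogas yields (observed vs. model output):
obs = [10.0, 35.0, 60.0, 80.0, 90.0, 95.0]
sim = [12.0, 33.0, 58.0, 82.0, 91.0, 94.0]
print(round(nrmse(obs, sim), 4), round(nash_sutcliffe_ce(obs, sim), 4))
```

A CE close to 1 and a small NRMSE, as in this toy run, indicate a simulation tracking the measurements closely.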
For the batch mode, both temperature modes could be properly simulated at loadings conforming and not conforming to VDI 4630 (2016). For each mode, two different sets of parameters were introduced, namely for substrates of low fat content and for substrates of middle/high fat content (ArSo LF and ArSo MF, with LF standing for low fat and MF for middle fat). The models were further validated in another experiment, which also used co-digestion of lignocellulosic substances. Furthermore, the parameters estimated for the batch mode were applied to the semi-continuous experiment. This proved successful; however, due to the high amounts of butyrate (HBu) and valerate (HVa), the model was recalibrated to better predict these acids (the model developed for the one-stage semi-continuous experiment was called ArSo M LF*). This was validated on another semi-continuous reactor running in one-stage mode. Finally, the acidic stage of the two-stage mode was analyzed. The model applied to the one-stage mode fitted the data of the two-stage mode as far as the VFAs are concerned. Nevertheless, due to the vast amount of acids, it was adjusted and called ArSo M LF**.

Levin, Vladimir. Doctoral thesis (2022)

The present doctoral thesis consists of three main chapters, which can be considered independently. Each of the three chapters raises a research question, reviews the related literature, proposes a method for the analysis, and, finally, reports results and conclusions. Chapter 1 is entitled Dark Trading and Financial Markets Stability and is based on a working paper co-authored with Prof. Dr. Jorge Goncalves and Prof. Dr. Roman Kraussl.
This paper examines how the implementation of a new dark order, the Midpoint Extended Life Order (M-ELO) on Nasdaq, impacts financial market stability in terms of occurrences of mini-flash crashes in individual securities. We use high-frequency order book data and apply panel regression analysis to estimate the effect of dark-order trading activity on market stability and liquidity provision. The results suggest a predominance of a speed-bump effect of M-ELO rather than a darkness effect. We find that the introduction of M-ELO increases market stability by reducing the average number of mini-flash crashes, but its impact on market quality is mixed. Chapter 2 is entitled Dark Pools and Price Discovery in Limit Order Markets and is single-authored. This paper examines how the introduction of a dark pool impacts price discovery, market quality, and the aggregate welfare of traders. I use a four-period model in which rational and risk-neutral agents choose the order type and the venue, and I obtain the equilibrium numerically. The comparative statics on the order submission probability suggest a U-shaped order migration to the dark pool. The overall effect of dark trading on market quality and aggregate welfare is found to be positive but limited in size, and to depend on market conditions. I find mixed results for the process of price discovery: depending on the immediacy needs of traders, price discovery may change due to the presence of the dark venue. Chapter 3 is entitled Machine Learning and Market Microstructure Predictability and is another single-authored piece of work. This paper illustrates the application of machine learning to market microstructure research. I outline the most insightful microstructure measures that possess the highest predictive power and are useful for out-of-sample predictions of market features such as liquidity volatility and general market stability.
By comparing the models' performance during normal times versus crisis times, I conclude that financial markets remain efficient during both periods. Additionally, I find that high-frequency traders' activity cannot accurately forecast either of these market features.
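The panel-regression approach of Chapter 1 above can be illustrated with a minimal fixed-effects (within) estimator: observations are demeaned within each security before the common slope is estimated, which absorbs security-specific levels. The data, tickers and the assumed linear relation below are entirely hypothetical, not results from the thesis.

```python
# Minimal fixed-effects (within) panel estimator: demean each security's
# series, then regress crash counts on dark-order activity. Synthetic data only.
from collections import defaultdict

def within_slope(panel):
    """panel: list of (entity, x, y) tuples; returns the within-estimator slope."""
    groups = defaultdict(list)
    for entity, x, y in panel:
        groups[entity].append((x, y))
    num = den = 0.0
    for obs in groups.values():
        mx = sum(x for x, _ in obs) / len(obs)   # entity means absorb fixed effects
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

# Hypothetical panel: (ticker, M-ELO activity share, mini-flash-crash count),
# built so that within each security y = a_i - 2*x (entity effects a_i, slope -2).
panel = [("AAA", x, 5 - 2 * x) for x in (0.1, 0.2, 0.3)] + \
        [("BBB", x, 9 - 2 * x) for x in (0.2, 0.4, 0.6)]
slope = within_slope(panel)
print(round(slope, 6))  # recovers the common slope despite different entity levels
```

A negative estimated slope in such a regression would correspond to the stabilising effect reported above (more M-ELO activity, fewer mini-flash crashes); the real analysis of course adds controls, time effects and inference.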