References of "Doctoral thesis"

Essays on the Economics of Wellbeing and Machine Learning
Gentile, Niccolo' UL

Doctoral thesis (2022)

Model-based Specification and Analysis of Natural Language Requirements in the Financial Domain
Veizaga Campero, Alvaro Mario UL

Doctoral thesis (2022)

Software requirements form an important part of the software development process. In many software projects conducted by companies in the financial sector, analysts specify software requirements using a combination of models and natural language (NL). Neither models nor NL requirements provide a complete picture of the information in the software system, and NL is highly prone to quality issues, such as vagueness, ambiguity, and incompleteness. Poorly written requirements are difficult to communicate and reduce the opportunity to process requirements automatically, particularly the automation of tedious and error-prone tasks, such as deriving acceptance criteria (AC). AC are conditions that a system must meet to be consistent with its requirements and be accepted by its stakeholders. AC are derived by developers and testers from requirement models. To obtain a precise AC, it is necessary to reconcile the information content in NL requirements and the requirement models. In collaboration with an industrial partner from the financial domain, we first systematically developed and evaluated a controlled natural language (CNL) named Rimay to help analysts write functional requirements. We then proposed an approach that detects common syntactic and semantic errors in NL requirements. Our approach suggests Rimay patterns to fix errors and convert NL requirements into Rimay requirements. Based on our results, we propose a semiautomated approach that reconciles the content in the NL requirements with that in the requirement models. Our approach helps modelers enrich their models with information extracted from NL requirements. Finally, an existing test-specification derivation technique was applied to the enriched model to generate AC. The first contribution of this dissertation is a qualitative methodology that can be used to systematically define a CNL for specifying functional requirements. This methodology was used to create Rimay, a CNL grammar, to specify functional requirements. This CNL was derived after an extensive qualitative analysis of a large number of industrial requirements and by following a systematic process using lexical resources. An empirical evaluation of our CNL (Rimay) in a realistic setting through an industrial case study demonstrated that 88% of the requirements used in our empirical evaluation were successfully rephrased using Rimay. The second contribution of this dissertation is an automated approach that detects syntactic and semantic errors in unstructured NL requirements. We refer to these errors as smells. To this end, we first proposed a set of 10 common smells found in the NL requirements of financial applications. We then derived a set of 10 Rimay patterns as a suggestion to fix the smells. Finally, we developed an automatic approach that analyzes the syntax and semantics of NL requirements to detect any present smells and then suggests a Rimay pattern to fix the smell. We evaluated our approach using an industrial case study that obtained promising results for detecting smells in NL requirements (precision 88%) and for suggesting Rimay patterns (precision 89%). The last contribution of this dissertation was prompted by the observation that a reconciliation of the information content in the NL requirements and the associated models is necessary to obtain precise AC. To achieve this, we define a set of 13 information extraction rules that automatically extract AC-related information from NL requirements written in Rimay. 
Next, we propose a systematic method that generates recommendations for model enrichment based on the information extracted by the 13 extraction rules. Using a real case study from the financial domain, we evaluated the usefulness of the AC-related model enrichments recommended by our approach. The domain experts found that 89% of the recommended enrichments were relevant to AC but absent from the original model (a precision of 89%).
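
To make the idea of pattern-based smell detection more concrete, the following sketch shows a minimal, heuristic checker for two well-known requirement smells (vague terms and passive voice without an agent). It is purely illustrative: the smell list, the regular expressions, and the suggested rewrite templates are assumptions made for this example and are not the Rimay grammar or the tool developed in the thesis.

```python
import re

# Illustrative smell catalogue: each entry has a detection regex and a
# suggested rewrite hint (loosely inspired by CNL-style patterns).
SMELLS = {
    "vague term": {
        "pattern": re.compile(r"\b(appropriate|as needed|user-friendly|efficient|etc\.)\b", re.I),
        "suggestion": "Replace the vague term with a measurable condition "
                      "(e.g. 'within 2 seconds', 'for at most 100 users').",
    },
    "passive voice without agent": {
        "pattern": re.compile(r"\b(shall|must|will)\s+be\s+\w+ed\b(?!\s+by\b)", re.I),
        "suggestion": "Use an active sentence of the form "
                      "'<actor> shall <verb> <object> [condition]'.",
    },
}

def detect_smells(requirement: str):
    """Return a list of (smell name, matched text, suggestion) tuples."""
    findings = []
    for name, rule in SMELLS.items():
        for match in rule["pattern"].finditer(requirement):
            findings.append((name, match.group(0), rule["suggestion"]))
    return findings

if __name__ == "__main__":
    req = "The report shall be generated as needed."
    for name, text, hint in detect_smells(req):
        print(f"[{name}] '{text}' -> {hint}")
```

A real detector of the kind described in the abstract would combine syntactic and semantic NLP analysis rather than regular expressions alone.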

FATIGUE AND BREAKDOWN STUDIES OF SOLUTION DEPOSITED OXIDE FERROELECTRIC THIN FILMS
Aruchamy, Naveen UL

Doctoral thesis (2022)

Ferroelectric materials are ubiquitous in several applications and offer advantages for microelectromechanical systems (MEMS) in their thin film form. However, novel applications require ferroelectric films to be deposited on various substrates, which requires effective integration and know-how of the material response when selecting a substrate for film deposition. As substrate-induced stress can alter the ferroelectric properties of the films, knowledge of how stress changes the ferroelectric response under different actuation conditions is essential. Furthermore, the stress-dependent behavior raises the question of understanding the reliability and degradation mechanisms under cyclic electric loading. Therefore, the fatigue and breakdown characteristics of ferroelectric thin films become all the more relevant. Among ferroelectric materials, lead zirconate titanate (PZT) thin films are the most popular. However, tremendous effort is being made to find a lead-free alternative to PZT. Ferroelectric thin films can be deposited using different processing techniques. In this work, the chemical solution deposition route is adopted for depositing PZT thin films on transparent and non-transparent substrates. A correlation between the substrate-induced ferroelectric properties and the processing conditions with different electrode configurations is established. Finite element modeling is used to understand the influence of the design parameters of the co-planar interdigitated electrodes for fabricating fully transparent PZT stacks. In-plane and out-of-plane ferroelectric properties of PZT thin films in metal-insulator-metal (MIM) and interdigitated electrode (IDE) geometries, respectively, on different substrates are compared to establish the connection between the stress-induced effect and the actuation mode. It is shown that the out-of-plane polarization is high under in-plane compressive stress but is reduced by nearly a factor of four under in-plane tensile stress. In contrast, the in-plane polarization shows an unexpectedly weak stress dependence. The fatigue behavior of differently stressed PZT thin films with IDE structures is reported for the first time in this study. The results are compared to the fatigue behavior of the same films in MIM geometry. PZT films in MIM geometry, irrespective of the stress state, show a notable decrease in switchable polarization during fatigue cycling. In contrast, the films actuated with IDEs have much better fatigue resistance. The primary fatigue mechanism is identified as domain wall pinning by charged defects. The observed differences in fatigue behavior between MIM and IDE geometries are linked to the orientation of the electric field with respect to the columnar grain structure of the films. Hafnium oxide, an emerging and widely researched lead-free alternative to PZT for non-volatile ferroelectric memory applications, is also explored in this work. The breakdown properties of chemical solution-deposited ferroelectric hafnium oxide thin films are also studied. The structure-property relationship for stabilizing the ferroelectric phase in solution-deposited hafnium oxide thin films is established. Furthermore, the effect of processing conditions on the ferroelectric switching behavior and breakdown characteristics is demonstrated and correlated with possible mechanisms.

Modelling astrocytic metabolism in actual cell morphologies
Farina, Sofia UL

Doctoral thesis (2022)

The human brain is the most structurally and biochemically complex organ, and its broad spectrum of diverse functions is accompanied by a high energy demand. To meet this high energy demand, brain cells of the central nervous system are organised in a complex and balanced ecosystem, and perturbation of brain energy metabolism is known to be associated with neurodegenerative diseases such as Alzheimer's disease (AD) and Parkinson's disease. Among all cells composing this ecosystem, astrocytes contribute metabolically by producing the primary energy substrate of life, ATP, and lactate, which can be exported to neurons to support their metabolism. Astrocytes have a star-shaped morphology, allowing them to connect on one side with blood vessels to take up glucose and on the other side with neurons to provide lactate. Astrocytes may also exhibit metabolic dysfunctions and modify their morphology in response to diseases. A mechanistic understanding of the morphology-dysfunction relation is still elusive. This thesis developed and applied a mechanistic multiscale modelling approach to investigate astrocytic metabolism in physiological morphologies of healthy and diseased human subjects. The complexity of cellular systems is a significant obstacle to investigating cellular behaviour. Systems biology tackles biological unknowns by combining computational and biological investigations. In order to address the elusive connection between metabolism and morphology in astrocytes, we developed a computational model of central energy metabolism in realistic morphologies. The underlying processes are described by a reaction-diffusion system that can represent cells more realistically by considering their actual three-dimensional shape, in contrast to classical ordinary differential equation models in which cells are assumed to be spatially punctual, i.e. to have no spatial dimension. The computational model we developed thus integrates high-resolution microscopy images of astrocytes from human post-mortem brain samples and simulates glucose metabolism in different physiological human astrocytic morphologies associated with AD and healthy conditions. The first part of the thesis is dedicated to presenting a numerical approach that can handle complex morphologies. We investigate the classical finite element method (FEM) and the cut finite element method (CutFEM) for simplified metabolic models in complex geometries. Establishing our image-driven numerical method leads to the second part of this thesis, where we investigate the crucial role played by the locations of reaction sites. We demonstrate that spatial organisation and chemical diffusivity play a pivotal role in the system output. Based on these new findings, we subsequently use microscopy images of healthy and Alzheimer's diseased human astrocytes to build simulations and investigate cell metabolism. In the last part of the thesis, we consider another critical process for astrocytic functionality: calcium signalling. The energy produced by metabolism is partially used for calcium exchange between cell compartments, and calcium can in turn drive mitochondrial activity as a main ATP-generating entity. Thus, the active cross-talk between glucose metabolism and calcium signalling can significantly impact the metabolic functionality of cells and requires deeper investigation. For this purpose, we extend our established metabolic model by a calcium signalling module and investigate the coupled system in two-dimensional geometries.
Overall, the investigations showed the importance of spatially organised metabolic modelling and paved the way for a new direction of image-driven, meshless modelling of metabolism. Moreover, we show that complex morphologies play a crucial role in metabolic robustness and how the morphological changes of astrocytes under AD conditions lead to impaired energy metabolism.
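
For readers unfamiliar with reaction-diffusion modelling, the sketch below illustrates the general idea on a toy problem: a single substrate diffusing on a 2D grid while being consumed by a simple Michaelis-Menten reaction term restricted to a "reaction site" region. It is a minimal finite-difference illustration under assumed parameter values, not the FEM/CutFEM model, geometry, or metabolic network developed in the thesis.

```python
import numpy as np

# Toy 2D reaction-diffusion model: dS/dt = D * Laplacian(S) - Vmax*S/(Km+S) * mask
# All parameter values below are illustrative assumptions.
n, dx, dt = 64, 1.0, 0.1          # grid size, spacing, time step
D, Vmax, Km = 1.0, 0.5, 0.3       # diffusion coefficient and reaction kinetics

S = np.zeros((n, n))
S[:, 0] = 1.0                     # constant substrate supply at the left edge ("blood vessel")

mask = np.zeros((n, n))           # reaction sites only in a central patch
mask[n//4:3*n//4, n//4:3*n//4] = 1.0

def laplacian(u):
    """5-point finite-difference Laplacian with zero-flux (Neumann) boundaries."""
    padded = np.pad(u, 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * u) / dx**2

for step in range(2000):
    reaction = Vmax * S / (Km + S) * mask
    S += dt * (D * laplacian(S) - reaction)
    S[:, 0] = 1.0                 # re-impose the supply boundary condition

print("mean substrate in reaction region:", S[mask == 1].mean())
```

Moving the reaction mask or changing the diffusivity in such a model changes the steady-state substrate distribution, which is the kind of spatial effect the thesis investigates on real astrocyte geometries.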

CREDIT CARDS AND CASHLESS PAYMENT: BANK COMMUNICATION POLICIES IN FRANCE, GERMANY AND LUXEMBOURG (1968-2015)
Vetter, Florian UL

Doctoral thesis (2022)

"Pecunia non olet". Ironically, this Latin dictum relates strongly to the 20th and 21st centuries if one considers how banks constantly dematerialised money and changed the way society deals with deposits. By implementing quite radical changes to the concept of money, banks became an accelerating element for social and technological innovation. Our research project, situated within the field of computerisation and digitalisation, concentrates on banking activities and services from a European perspective. Banks' communication regarding credit cards and cashless payments is at the heart of this research. The study intertwines several case studies in selected European countries (i.e., Luxembourg, Germany, France). In particular, the study focuses on the following bank services: automated teller machines, bankcards (especially MasterCard and Eurocard) and home banking since the emergence of Minitel, Vidéotex, or Btx. The comparative and diachronic perspective of this study, starting from the 1960s onwards, aims at shedding light on a history which has often only been seen from an insider's perspective. It should be noted that our focus is primarily the communication strategies of banks and their related advertising campaigns for credit cards and cashless payments. This is achieved by focusing on the strategy of the banks and their economic, technical, digital, but also societal approaches. The research topic relates to contemporary history and the history of digitalisation and innovation. In this context, press, audio-visual materials, banking reports, advertising, oral history, as well as web archives serve as primary sources. Moreover, bank archives in Luxembourg, France and Germany are used to complete the study corpus. All in all, the research results help us to understand the highly complex world of banking services from an unusual research angle. The research topic therefore changes the current scientific standard of banking history by including the perspective of various actors of the European payment market as well as their perception of banking innovations over the years (1968-2015) and by analysing a European transnational corpus. Furthermore, by analysing the history of the Eurocard and its relation to MasterCard in a long-term perspective, we offer a novel approach. It helps to enrich the field of banking history, which is slowly changing and introducing different research angles, thanks to pioneering research by Bernardo Bátiz-Lazo, Sabine Effosse, David Sparks Evans, Richard Schmalensee, Lana Schwartz, Sebastian Gießmann and others. In this respect, this PhD research aims to add a milestone to historical research on banking innovation and retail banking, which is still in its early stages but is moving fast, driven forward in particular by the pioneers mentioned above.

China's financial spaces in Europe: Bank networks, investments, and currency
Balmas, Paolo UL

Doctoral thesis (2022)

Despite the vast research on China's external economic expansion, little is known about the spatial organisation and operations of the Chinese commercial and development banks that enable such expansion. This thesis by publications sheds new light on the physical presence, organisation and agency of Chinese banks in Europe. It analyses the capability of Chinese banks to create new financial spaces. I start with the assumption that socioeconomic interactions, which I ascribe to the combinations of network-place and structure-agency, construct (financial) space. I identify Luxembourg as a key place in the spatial organisation of Chinese banks in Europe and detect Chinese banks as key players in organising the mechanisms that enable China's economic expansion into Europe. To understand the implications of the presence and operations of Chinese banks in Europe, I address three intertwined overarching questions: what are Chinese banks doing in Europe? How are they spatially organised? Are they reshaping European financial spaces? To answer these questions, I designed an interdisciplinary qualitative study based on expert interviews and desk research. I selected three dimensions of Chinese financial activity in Europe: bank networks, currency and investments, which I analyse in four chapters/publications. The first two chapters analyse, respectively, the geoeconomics of the expansion of Chinese bank networks and the spatial organisation that enables mergers and acquisitions in Europe. Chapter 3 analyses how Chinese development banks make use of Luxembourg's investment fund industry to invest in (energy) infrastructures and private equity in Central and Eastern European countries. Chapter 4 analyses the investment role of money as a neglected dimension in understanding renminbi internationalisation. This chapter highlights the roles of Luxembourg and Western banks as key to investments into China's domestic financial markets, and the role of the Chinese state in governing the inflow of such investments. Findings from the four chapters show how Chinese financial spaces in Europe are co-constituted by both Chinese and European actors. I find that Chinese banks have established a wide set of networks across Europe, while their activity is still limited. This suggests that Chinese bank networks are still in an embryonic stage, although they are preparing to widen their activities in the (near) future. This strengthens Luxembourg's positionality as a key financial hub connecting China to Europe. The attractiveness of Chinese banks as future gatekeepers to the Chinese domestic financial markets suggests that they will expand their activities in Europe despite current geopolitical frictions between China and the West. Beyond contributing to the growing literature on China in Europe, this thesis contributes to the advancement of the sub-disciplines of economic and financial geography by conceptualising banks as key agents of financial space creation and shapers of global financial networks.

Scale law on energy efficiency of electrocaloric materials
Nouchokgwe Kamgue, Youri Dilan UL

Doctoral thesis (2022)

Caloric materials are suggested as energy-efficient refrigerants for future cooling devices. They could replace the greenhouse gases used for decades in our air conditioners, fridges, and heat pumps. Among the four types of caloric materials (electrocaloric, barocaloric, elastocaloric, and magnetocaloric), electrocaloric materials are particularly promising, as applying large electric fields is much simpler and cheaper than applying the other driving fields. Research in recent years has focused on looking for electrocaloric materials with high thermal responses. However, the energy efficiency that is crucial for a future replacement of vapor-compression technology has been overlooked, and the intrinsic efficiency of electrocaloric materials has barely been studied. In the present dissertation, we study the efficiency of electrocaloric (EC) materials, defined as the materials efficiency: the ratio of the reversible electrocaloric heat to the reversible electrical work required to drive this heat. In this work, we study the materials efficiency of the benchmark lead scandium tantalate in different shapes (bulk ceramics and multilayer capacitors). A comparison to other caloric materials is presented in this dissertation. Our work gives more insight into the figure of merit of materials efficiency, with the aim of further improving the efficiency of cooling devices.
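
Written out as a formula (an illustrative formalisation of the definition given in the abstract, with symbol names chosen here for clarity), the materials efficiency compares the reversibly pumped electrocaloric heat with the reversible electrical work needed to drive it:

```latex
% Materials efficiency of an electrocaloric (EC) refrigerant (illustrative notation):
%   Q_EC  : reversible electrocaloric heat exchanged per field cycle
%           (Q_EC = T |\Delta S_EC| for an isothermal field change)
%   W_rev : reversible electrical work required to drive that heat
\eta_{\mathrm{mat}} = \frac{Q_{\mathrm{EC}}}{W_{\mathrm{rev}}}
```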

Metamaterial Design and elaborative approach for efficient selective solar absorber
Khanna, Nikhar UL

Doctoral thesis (2022)

The thesis is focused on developing spectrally selective coatings (SSC) composed of multilayer cermets and a periodic array of resonating omega structures, turning them into metamaterials, while showing high thermal stability up to 1000 °C. The developed SSC is intended to be used for concentrated solar power (CSP) applications, with the aim of achieving the highest possible absorbance in the visible region of the spectrum and the highest possible reflectance in the infrared region. The thesis highlights the numerical design, the synthesis and the optical characterization of the SSC, which is approximately 500 nm thick. A bottom-up approach was adopted for the preparation of a stack with alternating layers, consisting of a distribution of titanium nitride (TiN) nanoparticles with a layer of aluminum nitride (AlN) on top. The TiN nanoparticles, laid on a silicon substrate by a wet chemical method, are coated with a conformal layer of AlN via plasma-enhanced atomic layer deposition (PE-ALD). Control of the morphology at the nanoscale is fundamental for tuning the optical behaviour of the material. For this reason, two composites were prepared: one starting from a TiN dispersion made with dry TiN powder and deionized water, and the other from a ready-made TiN dispersion. Nanostructured metamaterial-based absorbers have many benefits over conventional absorbers, such as miniaturisation, adaptability and frequency tuning. Dealing with the current challenges of producing a new metamaterial-based absorber with an optimal nanostructure design, and of synthesising it within current nano-technological limits, we were able to turn the cermets into a metamaterial. A periodic array of metallic omega structures was patterned on top of both composites I and II using e-beam lithography. Parameters such as the size of the TiN nanoparticles, the thickness of the AlN thin film and the dimensions of the omega structures were all determined by numerical simulations performed using the Wave Optics module in COMSOL Multiphysics. The work clearly compares the two kinds of composites using scanning electron microscopy, X-ray photoelectron spectroscopy (XPS) and electrical conductivity measurements. The improvement in the optical performance of the SSC after the inclusion of metallic omega structures in the uppermost layer of the two composites has been thoroughly investigated with respect to boosting light absorption. In addition, the optical performance of the two prepared composites and of the metamaterial is used as a means of validating the computational model.

A cosmopolitan international law: the authority of regional inter-governmental organisations to establish international criminal accountability mechanisms
Owiso, Owiso UL

Doctoral thesis (2022)

The overall aim of this thesis is to investigate the potential role of regional inter-governmental organisations (RIGOs) in international criminal accountability, specifically through the establishment of criminal accountability mechanisms, and to make a case for RIGOs' active involvement. The thesis proceeds from the assumption that international criminal justice is a cosmopolitan project that demands that a tenable conception of state sovereignty guarantees humanity's fundamental values, specifically human dignity. Since cosmopolitanism emphasises the equality and unity of the human family, guaranteeing the dignity and humanity of the human family is a common interest of humanity rather than a parochial endeavour. Accountability for international crimes is one way through which human dignity can be validated and reaffirmed where such dignity has been grossly and systematically assaulted. Therefore, while accountability for international crimes is primarily the obligation of individual sovereign states, this responsibility ultimately rests, residually, with humanity as a whole, exercisable through collective action. As such, the thesis advances the argument that states, as collective representations of humanity, have a responsibility to assist in ensuring accountability for international crimes where an individual state is either genuinely unable or unwilling by itself to do so. The thesis therefore addresses the question of whether RIGOs, as collective representations of states and their peoples, can establish international criminal accountability mechanisms. Relying on cosmopolitanism as a theoretical underpinning, the thesis examines the exercise of what can be considered elements of sovereign authority by RIGOs in pursuit of the cosmopolitan objective of accountability for international crimes. In so doing, the thesis interrogates whether there is a basis in international law for such engagement, and examines how such engagement can practically be undertaken, using two case studies: the European Union and the Kosovo Specialist Chambers and Specialist Prosecutor's Office, and the African Union and the (proposed) Hybrid Court for South Sudan. The thesis concludes that general international law does not preclude RIGOs from exercising the elements of sovereign authority necessary for the establishment of international criminal accountability mechanisms, and that specific legal authority to engage in this regard can then be determined by reference to the doctrine of attributed/conferred powers and the doctrine of implied powers in interpreting the legal instruments of RIGOs. Based on this conclusion, the thesis makes a normative case for an active role for RIGOs in the establishment of international criminal accountability mechanisms, and provides a practical step-by-step guide on possible legal approaches for the establishment of such mechanisms by RIGOs, as well as guidance on possible design models for these mechanisms.

APPLICATION OF NEAR FIELD TECHNOLOGY IN HEAVY COMMERCIAL VEHICLE TIRE MONITORING SYSTEM
Rida, Ahmad UL

Doctoral thesis (2022)

The thesis proposes a near-field (NF) communication based solution for the tire pressure monitoring system (TPMS) in heavy commercial vehicles, as an alternative to the wireless far-field (FF) communication used in conventional TPMS. Truck and tire manufacturers have stepped up efforts to develop TPMS solutions, as recent EU regulations will soon make TPMS mandatory in heavy commercial vehicles, but the dense metal content in this application environment attenuates wireless communication and hinders the development of efficient and robust TPMS solutions. The thesis covers many practical aspects and includes an extensive literature review on the state-of-the-art TPMS solutions commercially available for heavy commercial vehicles. A second literature review was conducted on NF communication and its automotive applications. The researcher then conducted a finite element analysis (FEA) to simulate the application environment, represented by the tire and wheel combination, in order to evaluate the signal propagation of the conventional TPMS and of the proposed system. The simulations demonstrated the adverse effect of the application environment on the signal propagation of conventional TPMS and showed the merit of using NF-based communication. The proposed transmitter design was built and evaluated on an actual truck wheel and tire combination in a series of laboratory tests. The proposed transmitter unit was detected with a sufficiently high signal-to-noise ratio to establish a communication channel in the presence of a limited number of metal objects in close proximity under laboratory conditions, which could allow for a more advanced commercial vehicle TPMS. This industry-driven project addresses a serious traffic safety issue and forms a proof of concept for the development of a complete TPMS solution.

Mechanisms of Micropollutant Elimination in Vertical Flow Constructed Wetlands
Brunhoferova, Hana UL

Doctoral thesis (2022)

One of the biggest global challenges is the enormous growth of the population. With the growing population, the production and release of anthropogenic compounds rise as well; due to insufficient wastewater treatment, these compounds become pollutants, more precisely micropollutants (MPs). The advanced wastewater treatment technologies presented in this dissertation are solutions applied for the targeted elimination of MPs. Ozonation and adsorption on activated carbon, or their combination, belong to the most used advanced wastewater treatment technologies in Europe; however, they are suited for effluents of larger wastewater treatment plants (WWTPs). Therefore, an attempt has been made to test constructed wetlands (CWs) as an advanced wastewater treatment technology for small-to-medium sized WWTPs, which are typical for rural areas in the catchment of the river Sûre, the geographical border between Luxembourg and Germany. The efficiency of the CWs for the removal of 27 selected compounds has been tested at different scales (laboratory to pilot) in the Interreg Greater Region project EmiSûre 2017-2021 (Développement de stratégies visant à réduire l'introduction de micropolluants dans les cours d'eau de la zone transfrontalière germano-luxembourgeoise). The results of the project confirmed the high ability of CWs to remove MPs from municipal effluents. The quantification of the main mechanisms contributing to the elimination of MPs within the CWs was thus established as the main target of the present PhD research, given the evidence of their high ability in the EmiSûre project. The main mechanisms have been identified as adsorption on the soil of the wetland, phytoremediation by the wetland macrophytes and bioremediation by the wetland microorganisms. The doctoral thesis is cumulative in nature; its core consists of the following four publications:
• Publication [I] describes the usage of CWs as a post-treatment step for municipal effluents.
• Publication [II] assesses the role of adsorption of the targeted MPs on the substrates used within the studied CWs and presents a characterization of the wetland substrates.
• Publication [III] describes the role of the wetland macrophytes in the phytoremediation of the targeted MPs within the studied CWs. Furthermore, it provides a comparison of the different macrophyte types in varying vegetation stadia.
• Publication [IV] outlines the role of the wetland microbes in the bioremediation of the targeted MPs within the studied CWs. Moreover, the wetland microbes known to be able to digest MPs or to contribute to the elimination of MPs are identified and quantified.
Results suggest adsorption as the leading removal mechanism (average removal of 18 out of 27 compounds >80%), followed by bioremediation (average removal of 18 out of 27 compounds >40%) and phytoremediation (average removal of 17 out of 27 compounds <20%). The research described contributes to the extension of knowledge about CWs applied for the elimination of MPs from water. Some of the outcomes (deepened knowledge about how soil influences adsorption, recommendations for the adjustment of operational parameters, etc.) could be used as a tool for enhancing the wetland's treatment efficiency. The research is concluded by recommendations for further investigations of the individual mechanisms (e.g. the application of artificial aeration or circulation of the reaction matrix could result in enhanced bioremediation).

WCET and Priority Assignment Analysis of Real-Time Systems using Search and Machine Learning
Lee, Jaekwon UL

Doctoral thesis (2022)

Real-time systems have become indispensable for human life as they are used in numerous industries, such as vehicles, medical devices, and satellite systems. These systems are very sensitive to violations of their time constraints (deadlines), which can have catastrophic consequences. To verify whether systems meet their time constraints, engineers perform schedulability analysis from early stages and throughout development. However, it is challenging to obtain precise results from schedulability analysis because of the difficulty of estimating worst-case execution times (WCETs) and assigning optimal priorities to tasks. Estimating WCETs is an important activity at early design stages of real-time systems. Based on such WCET estimates, engineers make design and implementation decisions to ensure that task executions always complete before their specified deadlines. However, in practice, engineers often cannot provide precise point estimates of WCETs and prefer to provide plausible WCET ranges. Task priority assignment is an important decision, as it determines the order of task executions and has a substantial impact on schedulability results. It thus requires finding optimal priority assignments so that tasks not only complete their execution but also maximize the safety margins from their deadlines. Optimal priority values increase the tolerance of real-time systems to unexpected overheads in task executions so that they can still meet their deadlines. However, finding optimal priority assignments is a hard problem because their evaluation relies on uncertain WCET values and complex engineering constraints must be accounted for. This dissertation proposes three approaches to estimate WCETs and assign optimal priorities at design stages. Combining a genetic algorithm and logistic regression, we first suggest an automatic approach to infer safe WCET ranges with a probabilistic guarantee based on the worst-case scheduling scenarios. We then introduce an extended approach that accounts for weakly hard real-time systems using an industrial schedule simulator. We evaluate our approaches by applying them to industrial systems from different domains and to several synthetic systems. The results suggest that they can estimate probabilistically safe WCET ranges efficiently and accurately, so that the deadline constraints are likely to be satisfied with a high degree of confidence. Moreover, we propose an automated technique that aims to identify the best possible priority assignments in real-time systems. The approach deals with multiple objectives regarding safety margins and engineering constraints using a coevolutionary algorithm. Evaluation with synthetic and industrial systems shows that the approach significantly outperforms both a baseline approach and solutions defined by practitioners. All the solutions in this dissertation scale to complex industrial systems for offline analysis within an acceptable time, i.e., at most 27 hours.
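
To give a flavour of how WCET uncertainty interacts with schedulability, the sketch below samples candidate WCETs from an engineer-provided range, labels each sample as schedulable or not using classical fixed-priority response-time analysis, and fits a logistic regression to estimate the probability of meeting all deadlines as a function of one task's WCET. The task set, the range, and the use of response-time analysis are assumptions made for this illustration; they are not the industrial systems, the simulator, or the exact search procedure used in the thesis.

```python
import math
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative task set: (WCET C, period T, deadline D); priority = list position (0 = highest).
HIGH_PRIO_TASKS = [(2.0, 10.0, 10.0), (3.0, 20.0, 20.0)]

def schedulable(tasks):
    """Classical response-time analysis for fixed-priority preemptive scheduling."""
    for i, (c_i, _, d_i) in enumerate(tasks):
        r = c_i
        while True:
            interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j, _ in tasks[:i])
            r_next = c_i + interference
            if r_next > d_i:
                return False          # deadline miss for task i
            if r_next == r:
                break                 # fixed point reached
            r = r_next
    return True

rng = np.random.default_rng(0)
samples, labels = [], []
for _ in range(2000):
    # Sample the lowest-priority task's WCET from an assumed plausible range [20, 40].
    c3 = rng.uniform(20.0, 40.0)
    tasks = HIGH_PRIO_TASKS + [(c3, 50.0, 50.0)]
    samples.append([c3])
    labels.append(int(schedulable(tasks)))

model = LogisticRegression().fit(np.array(samples), np.array(labels))
for c3 in (25.0, 31.0, 35.0):
    p = model.predict_proba([[c3]])[0, 1]
    print(f"WCET={c3:>5.1f} -> estimated P(all deadlines met) = {p:.2f}")
```

Reading the fitted model in the opposite direction (what is the largest WCET for which the estimated probability stays above a target confidence level?) gives the intuition behind inferring "safe" WCET ranges.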

Multi-objective Robust Machine Learning For Critical Systems With Scarce Data
Ghamizi, Salah UL

Doctoral thesis (2022)

With the heavy reliance on information technologies in every aspect of our daily lives, Machine Learning (ML) models have become a cornerstone of these technologies' rapid growth and pervasiveness. This is particularly true for the most critical and fundamental technologies that handle our economic systems, transportation, health, and even privacy. However, while these systems are becoming more effective, their complexity inherently decreases our ability to understand, test, and assess their dependability and trustworthiness. This problem becomes even more challenging under a multi-objective framework: when the ML model is required to learn multiple tasks together, behave under constrained inputs, or fulfill contradicting concomitant objectives. Our dissertation focuses on robust ML under limited training data, i.e., use cases where it is costly to collect additional training data and/or label it. We study this topic through the prism of three real use cases: fraud detection, pandemic forecasting, and chest X-ray diagnosis. Each use case covers one of the challenges of robust ML with limited data: (1) robustness to imperceptible perturbations, or (2) robustness to confounding variables. We provide a study of the challenges for each case and propose novel techniques to achieve robust learning. As the first contribution of this dissertation, we collaborate with BGL BNP Paribas. We demonstrate that their overdraft and fraud detection systems are prima facie robust to adversarial attacks because of the complexity of their feature engineering and domain constraints. However, we show that gray-box attacks that take domain knowledge into account can easily break their defense. We propose CoEva2 adversarial fine-tuning, a new defense mechanism based on multi-objective evolutionary algorithms, to augment the training data and mitigate the system's vulnerabilities. Next, we investigate how domain knowledge can protect against adversarial attacks through multi-task learning. We show that adding domain constraints in the form of additional tasks can significantly improve the robustness of models to adversarial attacks, particularly for the robot navigation use case. We propose a new set of adaptive attacks and demonstrate that adversarial training combined with such attacks can improve robustness. While the raw data available in the BGL and robot navigation cases is vast, it is heavily cleaned, feature-engineered, and annotated by domain experts (which is expensive), and the resulting training data is scarce. In contrast, raw data is scarce when dealing with an outbreak, and designing robust ML systems to predict, forecast, and recommend mitigation policies is challenging, particularly for small countries like Luxembourg. Contrary to common techniques that forecast new cases based on previous data in time series, we propose a novel surrogate-based optimization as an integrated loop. It combines a neural network prediction of the infection rate based on mobility attributes and a model-based simulation that predicts the cases and deaths. Our approach has been used by the Luxembourg government's task force and has been recognized with a best paper award at KDD 2020. Our following work focuses on the challenges that confounding factors pose to the robustness and generalization of chest X-ray (CXR) classification.
We first investigate the robustness and generalization of multi-task models, and then demonstrate that multi-task learning, leveraging the confounding variables, can significantly improve the generalization and robustness of CXR classification models. Our results suggest that task augmentation with additional knowledge (such as extraneous variables) outperforms state-of-the-art data augmentation techniques in improving test and robust performance. Overall, this dissertation provides insights into the importance of domain knowledge for the robustness and generalization of models. It shows that instead of building data-hungry ML models, particularly for critical systems, a better understanding of the system as a whole and of its domain constraints yields improved robustness and generalization performance. This dissertation also proposes theorems, algorithms, and frameworks to effectively assess and improve the robustness of ML systems for real-world cases and applications.
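
As a minimal illustration of task augmentation for robustness, the PyTorch sketch below shares one image encoder between a main diagnosis head and an auxiliary head that predicts a known confounder (for instance, the X-ray view position), and sums the two losses. The architecture, the auxiliary task, and the loss weighting are assumptions chosen for this example; they are not the thesis's models or datasets.

```python
import torch
import torch.nn as nn

class MultiTaskCXRNet(nn.Module):
    """Tiny shared encoder with a diagnosis head and an auxiliary confounder head."""
    def __init__(self, n_diseases=5, n_views=2, aux_weight=0.3):
        super().__init__()
        self.aux_weight = aux_weight
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.disease_head = nn.Linear(32, n_diseases)   # multi-label diagnosis
        self.view_head = nn.Linear(32, n_views)         # auxiliary confounder task

    def forward(self, x):
        z = self.encoder(x)
        return self.disease_head(z), self.view_head(z)

    def loss(self, x, disease_targets, view_targets):
        disease_logits, view_logits = self.forward(x)
        main = nn.functional.binary_cross_entropy_with_logits(disease_logits, disease_targets)
        aux = nn.functional.cross_entropy(view_logits, view_targets)
        return main + self.aux_weight * aux

# One illustrative training step on random data.
model = MultiTaskCXRNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1, 64, 64)                      # batch of fake single-channel images
y_disease = torch.randint(0, 2, (8, 5)).float()    # multi-label disease targets
y_view = torch.randint(0, 2, (8,))                 # confounder labels (e.g. AP vs PA view)
loss = model.loss(x, y_disease, y_view)
opt.zero_grad(); loss.backward(); opt.step()
print("combined loss:", float(loss))
```

The design intuition is that forcing the shared encoder to also explain the confounder prevents the diagnosis head from silently relying on it as a shortcut.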

A Holistic Methodology to Deploy Industry 4.0 in Manufacturing Enterprises
Kolla, Sri Sudha Vijay Keshav UL

Doctoral thesis (2022)

In the last decade, the manufacturing industry has seen a shift in the way products are produced due to the integration of digital technologies with existing manufacturing systems. This transformation is often referred to as Industry 4.0 (I4.0), which promises to deliver cost efficiency, mass customization, operational agility, traceability, and service orientation. To realize the potential of I4.0, the integration of physical and digital elements using advanced technologies is a prerequisite. Large manufacturing companies have been embracing the I4.0 transformation swiftly. However, Small and Medium-sized Enterprises (SMEs) face challenges in terms of the skills and capital required for a smoother digital transformation. The goal of this thesis is to understand the features of a typical manufacturing SME and map them to existing (e.g. Lean) and I4.0 manufacturing systems. The mapping is then used to develop a Self-Assessment Tool (SAT) to measure the maturity of a manufacturing entity. The SAT developed in this research has a critical SME focus. However, the scope of the SAT is not limited to SMEs and it can also be used for large companies. The analysis of the maturity of manufacturing companies revealed that the managerial dimensions of the companies are more mature than the technical dimensions. Therefore, this thesis attempts to fill the gap in the technical dimensions, especially Augmented Reality (AR) and the Industrial Internet of Things (IIoT), through laboratory experiments and industrial validation. A holistic method is proposed to introduce I4.0 technologies in manufacturing enterprises based on maturity assessment, observations, a technical road map, and applications. The method proposed in this research includes the SAT, which measures the maturity of a manufacturing company in five categorical domains (dimensions): Strategy, Process and Value Stream, Organization, Methods and Tools, and Personnel. Furthermore, these dimensions are divided into 36 modules, which help manufacturing companies measure their maturity level in terms of Lean and I4.0. The SAT was tested in 100 manufacturing enterprises in the Grande Région, consisting of a pilot study (n=20) and a maturity assessment (n=63). The observations from the assessment are then used to set up the technological road map for the research. AR and IIoT, the two technologies associated with the least mature modules, are explored in depth in this thesis. A holistic method is incomplete without industrial validation. Therefore, the above-mentioned technologies are applied in two manufacturing companies for further validation of the laboratory results. These applications include 1) the application of AR for maintenance and quality inspection in a tire manufacturing company, and 2) the application of retrofitting technology for IIoT on a production machine in an SME. With the validated assessment model and the industrial applications, this thesis overall presents a holistic approach to introducing I4.0 technologies in manufacturing enterprises. This is accomplished by identifying the status of a company using maturity assessment and deriving the I4.0 roadmap for high-potential modules. The skill gap in the addressed technologies is compensated for by designing and testing prototypes in the laboratory before applying them in industry.
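
To make the maturity-assessment idea concrete, here is a small sketch of how per-module answers could be aggregated into dimension-level and overall maturity scores. The dimension names follow the abstract; the module names, scoring scale, and equal weighting are assumptions for illustration only and do not reproduce the thesis's SAT.

```python
from statistics import mean

# Example questionnaire answers: module -> maturity level on an assumed 0-5 scale.
# Dimension names come from the abstract; modules and values are made up for illustration.
answers = {
    "Strategy": {"I4.0 vision": 3, "Investment planning": 2},
    "Process and Value Stream": {"Value stream mapping": 4, "Traceability": 2},
    "Organization": {"Cross-functional teams": 3},
    "Methods and Tools": {"Augmented Reality": 1, "IIoT / retrofitting": 1},
    "Personnel": {"Digital skills": 2, "Training programme": 3},
}

def dimension_scores(data):
    """Average module scores per dimension (equal weights assumed)."""
    return {dim: mean(mods.values()) for dim, mods in data.items()}

scores = dimension_scores(answers)
overall = mean(scores.values())
for dim, score in scores.items():
    print(f"{dim:<26} {score:.2f} / 5")
print(f"{'Overall maturity':<26} {overall:.2f} / 5")
```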

Artificial Intelligence-enabled Automation For Ambiguity Handling And Question Answering In Natural-language Requirements
Ezzini, Saad UL

Doctoral thesis (2022)

Requirements Engineering (RE) quality control is a crucial step for a project's success. Natural Language (NL) is by far the most commonly used means for capturing requirement specifications. Despite facilitating communication, NL is prone to quality defects, one of the most notable of which is ambiguity. Ambiguous requirements can lead to misunderstandings and eventually result in a system that is different from what is intended, thus wasting time, money, and effort in the process. This dissertation tackles selected quality issues in NL requirements:
• Using Domain-specific Corpora for Improved Handling of Ambiguity in Requirements: Syntactic ambiguity types occurring in coordination and prepositional-phrase attachment structures are prevalent in requirements (in our document collection, as we discuss in Chapter 3, 21% and 26% of the requirements are subject to coordination and prepositional-phrase attachment ambiguity analysis, respectively). We devise an automated solution based on heuristics and patterns for improved handling of coordination and prepositional-phrase attachment ambiguity in requirements. As a prerequisite for this research, we further develop a more broadly applicable corpus generator that creates a domain-specific knowledge resource by crawling Wikipedia.
• Automated Handling of Anaphoric Ambiguity in Requirements: A Multi-solution Study: Anaphoric ambiguity is another prevalent ambiguity type in requirements. Estimates from the RE literature suggest that nearly 20% of industrial requirements contain anaphora [1, 2]. We conducted a multi-solution study for anaphoric ambiguity handling. Our study investigates six alternative solutions based on three different technologies: (i) off-the-shelf natural language processing (NLP), (ii) recent NLP methods utilizing language models, and (iii) machine learning (ML).
• AI-based Question Answering Assistant for Analyzing NL Requirements: Understanding NL requirements requires domain knowledge that is not necessarily shared by all the involved stakeholders. We develop an automated question-answering assistant that supports requirements engineers during requirements inspections and quality assurance. Our solution uses advanced information retrieval techniques and machine reading comprehension models to answer questions from the same requirement specifications document and/or an external domain-specific knowledge resource.
All the research components in this dissertation are tool-supported. Our tools are released with open-source licenses to encourage replication and reuse.
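
To illustrate the retrieve-then-read pattern behind such a question-answering assistant, the sketch below ranks requirements with TF-IDF and then applies an off-the-shelf extractive machine reading comprehension model to the best-matching requirement. The example requirements, the chosen pretrained model, and the single-passage retrieval are assumptions for this illustration and do not correspond to the assistant built in the thesis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

# A tiny, made-up requirements "document" used as the retrieval corpus.
requirements = [
    "The system shall lock a user account after three failed login attempts.",
    "The system shall export monthly account statements as PDF files.",
    "Payment transactions above 10,000 EUR shall require a second approval.",
]

question = "After how many failed login attempts is an account locked?"

# Step 1: lexical retrieval of the most relevant requirement.
vectorizer = TfidfVectorizer().fit(requirements + [question])
req_vectors = vectorizer.transform(requirements)
scores = cosine_similarity(vectorizer.transform([question]), req_vectors)[0]
best_requirement = requirements[scores.argmax()]

# Step 2: extractive reading comprehension over the retrieved passage.
reader = pipeline("question-answering", model="deepset/roberta-base-squad2")
answer = reader(question=question, context=best_requirement)
print(best_requirement)
print("answer:", answer["answer"], f"(score {answer['score']:.2f})")
```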

TOPICS IN COMPUTATIONAL NUMBER THEORY AND CRYPTANALYSIS - On Simultaneous Chinese Remaindering, Primes, the MiNTRU Assumption, and Functional Encryption
Barthel, Jim Jean-Pierre UL

Doctoral thesis (2022)

This thesis reports on four independent projects that lie at the intersection of mathematics, computer science, and cryptology:

Simultaneous Chinese Remaindering: The classical Chinese Remainder Problem asks to find all integer solutions to a given system of congruences where each congruence is defined by one modulus and one remainder. The Simultaneous Chinese Remainder Problem is a direct generalization of its classical counterpart where, for each modulus, the single remainder is replaced by a non-empty set of remainders. The solutions of a Simultaneous Chinese Remainder Problem instance are completely defined by a set of minimal positive solutions, called primitive solutions, which are upper bounded by the lowest common multiple of the considered moduli. However, contrary to its classical counterpart, which has at most one primitive solution, the Simultaneous Chinese Remainder Problem may have an exponential number of primitive solutions, so that any general-purpose solving algorithm requires exponential time. Furthermore, through a direct reduction from the 3-SAT problem, we prove first that deciding whether a solution exists is NP-complete, and second that if the existence of solutions is guaranteed, then deciding whether a solution of a particular size exists is also NP-complete. Despite these discouraging results, we studied methods to find the minimal solution of Simultaneous Chinese Remainder Problem instances and we discovered some interesting statistical properties.

A Conjecture On Primes In Arithmetic Progressions And Geometric Intervals: Dirichlet's theorem on primes in arithmetic progressions states that for any positive integer q and any coprime integer a, there are infinitely many primes in the arithmetic progression a + nq (n ∈ N); however, it does not indicate where those primes can be found. Linnik's theorem predicts that the first such prime p0 can be found in the interval [0; q^L], where L denotes an absolute and explicitly computable constant. Albeit only L = 5 has been proven, it is widely believed that L ≤ 2. We generalize Linnik's theorem by conjecturing that for any integers q ≥ 2, 1 ≤ a ≤ q − 1 with gcd(q, a) = 1, and t ≥ 1, there exists a prime p such that p ∈ [q^t; q^(t+1)] and p ≡ a mod q. Subsequently, we prove the conjecture for all sufficiently large exponents t, we computationally verify it for all sufficiently small moduli q, and we investigate its relation to other mathematical results such as Carmichael's totient function conjecture.

On The (M)iNTRU Assumption Over Finite Rings: The inhomogeneous NTRU (iNTRU) assumption is a recent computational hardness assumption, which claims that first adding a random low-norm error vector to a known gadget vector and then multiplying the result with a secret vector is sufficient to obfuscate the considered secret vector. The matrix inhomogeneous NTRU (MiNTRU) assumption essentially replaces vectors with matrices. Albeit these assumptions strongly resemble the well-known learning-with-errors (LWE) assumption, their hardness has not been studied in full detail yet. We provide an elementary analysis of the corresponding decision assumptions and break them in their base case using an elementary q-ary lattice reduction attack. Concretely, we restrict our study to vectors over finite integer rings, which leads to a problem that we call (M)iNTRU. Starting from a challenge vector, we construct a particular q-ary lattice that contains an unusually short vector whenever the challenge vector follows the (M)iNTRU distribution. Thereby, elementary lattice reduction allows us to distinguish a random challenge vector from a synthetically constructed one.

A Conditional Attack Against Functional Encryption Schemes: Functional encryption emerged as an ambitious cryptographic paradigm supporting function evaluations over encrypted data that reveal the result in the clear. Therein, the result consists of either a valid output or a special error symbol. We develop a conditional selective chosen-plaintext attack against the indistinguishability security notion of functional encryption. Intuitively, indistinguishability in the public-key setting is based on the premise that no adversary can distinguish between the encryptions of two known plaintext messages. As functional encryption allows us to evaluate functions over encrypted messages, the adversary is restricted to evaluations resulting in the same output only. To ensure consistency with other primitives, the decryption procedure of a functional encryption scheme is allowed to fail and output an error. We observe that an adversary may exploit the special role of these errors to craft challenge messages that can be used to win the indistinguishability game. Indeed, the adversary can choose the messages such that their functional evaluation leads to the common error symbol, but their intermediate computation values differ. A formal decomposition of the underlying functionality into a mathematical function and an error trigger reveals this dichotomy. Finally, we outline the impact of this observation on multiple DDH-based inner-product functional encryption schemes when we restrict them to bounded-norm evaluations only.
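
As a small, self-contained illustration of the Simultaneous Chinese Remainder Problem described above, the sketch below enumerates the primitive solutions of a toy instance by solving one classical CRT instance for every combination of remainders. As the abstract notes, the number of combinations grows exponentially with the number of moduli, so this brute-force approach is only viable for tiny instances; it is not the method studied in the thesis.

```python
from itertools import product
from math import gcd, lcm

def crt(moduli, remainders):
    """Classical CRT for pairwise coprime moduli: return x with x ≡ r_i (mod m_i)."""
    m_total, x = 1, 0
    for m, r in zip(moduli, remainders):
        # Solve x + m_total * k ≡ r (mod m) for k, using the inverse of m_total mod m.
        k = ((r - x) * pow(m_total, -1, m)) % m
        x += m_total * k
        m_total *= m
    return x % m_total

def simultaneous_crt(moduli, remainder_sets):
    """All primitive solutions: one classical CRT instance per remainder combination."""
    assert all(gcd(a, b) == 1 for i, a in enumerate(moduli) for b in moduli[i + 1:])
    solutions = {crt(moduli, combo) for combo in product(*remainder_sets)}
    return sorted(solutions), lcm(*moduli)

moduli = [3, 5, 7]
remainder_sets = [{1, 2}, {0, 3}, {2}]       # a non-empty set of remainders per modulus
primitives, bound = simultaneous_crt(moduli, remainder_sets)
print(f"primitive solutions (all < lcm = {bound}):", primitives)
```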

Secure, privacy-preserving and practical collaborative Genome-Wide Association Studies
Pascoal, Túlio UL

Doctoral thesis (2022)

Understanding the interplay between genomics and human health is a crucial step for the advancement and development of our society. Genome-Wide Association Study (GWAS) is one of the most popular methods for discovering associations between genomic variations and a particular phenotype (i.e., an observable trait such as a disease). Leveraging genome data from multiple institutions worldwide is nowadays essential to produce more powerful findings by operating GWAS at a larger scale. However, this raises several security and privacy risks, not only in the computation of such statistics, but also in the public release of GWAS results. To that end, several solutions in the literature have adopted cryptographic approaches to allow secure and privacy-preserving processing of genome data for federated analysis. However, conducting federated GWAS in a secure and privacy-preserving manner is not enough, since the public releases of GWAS results might be vulnerable to known genomic privacy attacks, such as recovery and membership attacks. The present thesis explores possible solutions to enable end-to-end privacy-preserving federated GWAS, in line with data privacy regulations such as the GDPR, in order to secure the public release of the results of Genome-Wide Association Studies (GWASes) that are dynamically updated as new genomes become available, that might overlap in their genomes and in the genomic locations considered, that can withstand internal threats such as colluding members of the federation, and that are computed in a distributed manner without shipping actual genome data. In pursuing these goals, this work makes several contributions, described below. First, the thesis proposes DyPS, a Trusted Execution Environment (TEE)-based framework that reconciles efficient and secure genome data outsourcing with privacy-preserving data processing inside TEE enclaves to assess and create private releases of dynamic GWAS. In particular, DyPS establishes the conditions for the creation of safe dynamic releases, certifying that the solution space an external probabilistic polynomial-time (p.p.t.) adversary, or a group of colluders (up to all-but-one parties), would need to explore when launching recovery attacks on the observed GWAS statistics is sufficiently large. In addition, DyPS executes an exhaustive verification algorithm along with a likelihood-ratio test to measure the probability of identifying individuals in studies, thus also protecting individuals against membership inference attacks. Only the genome data (i.e., genomes and SNPs) that DyPS deems safe are used for the computation and release of GWAS results, while the remaining (unsafe) data are kept secluded and protected inside the enclave until they can eventually be used. Our results show that if dynamic releases are not properly evaluated, up to 8% of genomes could be exposed to genomic privacy attacks. Moreover, the experiments show that DyPS’ TEE-based architecture can accommodate the computational resources demanded by our algorithms and present practical running times for larger-scale GWAS. Second, the thesis offers I-GWAS, which identifies new conditions for safe releases when overlapping data exist among multiple GWASes (e.g., the same individuals participating in several studies). Indeed, it is shown that adversaries might leverage information from overlapping data to make both recovery and membership attacks feasible again (even if the releases are produced following the conditions for safe single-GWAS releases). Our experiments show that up to 28.6% of genetic variants of participants could be inferred during recovery attacks, and 92.3% of these variants would enable membership attacks by adversaries observing overlapping studies; such releases are withheld by I-GWAS. Lastly, the thesis presents GenDPR, which extends our protocols so that the privacy-verification algorithms can be conducted in a distributed manner among the federation members without requiring genome data to be shipped across boundaries. Further, GenDPR can also cope with collusion among participants while selecting genome data that can be used to create safe releases. Additionally, GenDPR produces the same privacy guarantees as centralized architectures, i.e., it correctly identifies and selects the same data in need of protection as centralized approaches do. In the end, the thesis combines DyPS, I-GWAS and GenDPR into a unified framework, offering a usable approach for conducting practical GWAS. The chosen protection method is statistical in nature: it ensures that the theoretical complexity of attacks remains high and withholds, using likelihood-ratio tests, releases of statistics that would expose participants to membership inference risks, even as adversaries gain additional information over time. The thesis also relates these findings to techniques that can be leveraged to protect releases, such as Differential Privacy. The proposed solutions leverage Intel SGX as the Trusted Execution Environment to perform selected critical operations in a performant manner; however, the work translates equally well to other trusted execution environments and to other schemes, such as Homomorphic Encryption.
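As an illustration of the kind of statistic involved in membership inference on released GWAS results (an illustrative sketch, not code from DyPS), the following Python snippet computes a per-genome log-likelihood-ratio score comparing released study allele frequencies against public reference frequencies; all data and names here are hypothetical.

```python
import numpy as np

def membership_llr(genotype, study_freq, ref_freq, eps=1e-6):
    """Log-likelihood ratio that a (haploid, 0/1-coded) genome belongs to the
    study pool whose per-SNP allele frequencies were publicly released.
    Larger scores favour membership; a release could be withheld if any
    participant's score exceeds a threshold set for a target false-positive rate."""
    p_hat = np.clip(study_freq, eps, 1 - eps)   # released (study) frequencies
    p_ref = np.clip(ref_freq, eps, 1 - eps)     # public reference frequencies
    x = np.asarray(genotype)
    return np.sum(x * np.log(p_hat / p_ref)
                  + (1 - x) * np.log((1 - p_hat) / (1 - p_ref)))

# Hypothetical example with 1000 SNPs
rng = np.random.default_rng(0)
ref = rng.uniform(0.05, 0.95, size=1000)
study = np.clip(ref + rng.normal(0, 0.02, size=1000), 0.01, 0.99)
target = (rng.uniform(size=1000) < study).astype(int)  # a genome drawn from the study
print(membership_llr(target, study, ref))
```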

Detailed reference viewed: 135 (13 UL)
Full Text
See detailInterleukin-6 signalling and long non-coding RNAs in liver cancer
Minoungou, Wendkouni Nadège UL

Doctoral thesis (2022)

Hepatocellular carcinoma (HCC), the main form of primary liver cancer, is the second leading cause of cancer-related deaths worldwide after lung cancer. Multiple aetiologies have been associated with the development of HCC, which arises in most cases in the context of a chronically inflamed liver. HCC is in fact an inflammation-driven cancer, with the TNF and IL6 families of cytokines playing key roles in maintaining a chronic inflammatory state that promotes hepatocarcinogenesis. IL6 signals mainly through the JAK1/STAT3 signal transduction pathway and is known to play key roles in liver physiology and disease. To identify novel players and downstream effectors of the IL6/JAK1/STAT3 signalling pathway that may contribute to IL6 signal transduction in liver-derived cells, we have been investigating the expression of long non-coding RNAs (lncRNAs) in response to treatment with the designer cytokine Hyper-IL6. Indeed, lncRNAs have recently emerged as a key layer of biological regulation and have been shown to be differentially expressed in cancer, including HCC. Upon analysis of time series transcriptomics data, we identified hundreds of lncRNAs that are differentially expressed in HepG2, HuH7, and Hep3B hepatoma cells upon cytokine stimulation, 26 of which are common to the three cell lines tested. qPCR validation experiments have been performed for several lncRNAs, such as the liver-specific lncRNA linc-ELL2. By functionally characterising identified clusters of IL6-regulated coding and non-coding genes in hepatoma cells, we propose, based on a guilt-by-association hypothesis, novel functions for previously poorly characterized lncRNAs and pseudogenes such as AL391422.4 or TUBA5P. Several lncRNA genes seem to be co-regulated with a protein-coding gene localized in their vicinity. For example, Hyper-IL6 increases the mRNA and protein levels of XBP1, a well-known regulator of the unfolded protein response; at the same time, the expression of the lncRNA AF086143, which is transcribed from the same gene locus in a bidirectional manner, also increases. Both the targeted and the genome-wide analysis of lncRNA/mRNA gene pairs indicate a possible cis-regulatory role of lncRNAs with regard to their antisense and bidirectional protein-coding counterparts. Taken together, these results provide a comprehensive characterisation of the lncRNA and pseudogene repertoire of IL6-regulated genes in hepatoma cells. Our results emphasize lncRNAs as crucial components of the gene regulatory networks affected by cytokine signalling pathways.
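A purely illustrative sketch of the intersection step described above (finding lncRNAs regulated in all three cell lines); the data, thresholds, and column names below are synthetic stand-ins, not the thesis' actual pipeline.

```python
import numpy as np
import pandas as pd

# Synthetic stand-ins for per-cell-line differential expression tables
# (in a real workflow these would come from, e.g., DESeq2 or limma results).
rng = np.random.default_rng(0)
genes = [f"lnc_{i}" for i in range(200)]

def fake_de_table():
    return pd.DataFrame({"gene_id": genes,
                         "biotype": "lncRNA",
                         "padj": rng.uniform(0, 0.2, len(genes)),
                         "log2FC": rng.normal(0, 2, len(genes))})

de_tables = {line: fake_de_table() for line in ["HepG2", "HuH7", "Hep3B"]}

# Keep significantly regulated lncRNAs per cell line, then intersect the sets.
sig_sets = []
for line, de in de_tables.items():
    sig = de[(de["padj"] < 0.05) & (de["log2FC"].abs() > 1)]
    sig_sets.append(set(sig.loc[sig["biotype"] == "lncRNA", "gene_id"]))

common = set.intersection(*sig_sets)
print(len(common), "lncRNAs differentially expressed in all three cell lines")
```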

Detailed reference viewed: 53 (16 UL)
Full Text
See detailMagnetic Guinier Law and Uniaxial Polarization Analysis in Small Angle Neutron Scattering
Malyeyev, Artem UL

Doctoral thesis (2022)

The present PhD thesis is devoted to developing the use of the magnetic small-angle neutron scattering (SANS) technique for analyzing the magnetic microstructures of magnetic materials. The emphasis is on three aspects: (i) the analytical development of the magnetic Guinier law; (ii) the application of the magnetic Guinier law and of the generalized Guinier-Porod model to the analysis of experimental neutron data on various magnets, such as a Nd-Fe-B nanocomposite, nanocrystalline cobalt, and Mn-Bi rare-earth-free permanent magnets; (iii) the development of the theory of uniaxial neutron polarization analysis and its experimental testing on a soft magnetic nanocrystalline alloy. The conventional “nonmagnetic” Guinier law represents the low-q approximation for the small-angle scattering curve from an assembly of particles. It has been derived for nonmagnetic particle-matrix-type systems and is routinely employed for the estimation of particle sizes in, e.g., soft-matter physics, biology, colloidal chemistry, and materials science. Here, the extension of the Guinier law to magnetic SANS is provided through the introduction of the magnetic Guinier radius, which depends on the applied magnetic field, on the magnetic interactions (exchange constant, saturation magnetization), and on the magnetic anisotropy-field radius. The latter quantity characterizes the size over which the magnetic anisotropy field is coherently aligned in the same direction. In contrast to the conventional Guinier law, the magnetic version can be applied to fully dense random-anisotropy-type ferromagnets. The range of applicability is discussed and the validity of the approach is experimentally demonstrated on a Nd-Fe-B-based ternary permanent magnet and on a nanocrystalline cobalt sample. Rare-earth-free permanent magnets in general, and the Mn-Bi-based ones in particular, have received a lot of attention lately due to their application potential in electronic devices and electric motors. Mn-Bi samples with three different alloy compositions were studied by means of unpolarized SANS and by very small-angle neutron scattering (VSANS). It turns out that the magnetic scattering of the Mn-Bi samples is determined by long-wavelength transverse magnetization fluctuations. The neutron data are analyzed in terms of the generalized Guinier-Porod model and the distance distribution function. The results for the so-called dimensionality parameter obtained from the Guinier-Porod model indicate that the magnetic scattering of a Mn$_{45}$Bi$_{55}$ specimen has its origin in slightly shape-anisotropic structures, and the same conclusion is drawn from the distance distribution function analysis. Finally, based on Brown’s static equations of micromagnetics and the related theory of magnetic SANS, the uniaxial polarization of the scattered neutron beam of a bulk magnetic material is computed. The theoretical expressions are tested against experimental data on a soft magnetic nanocrystalline alloy, and both qualitative and quantitative correspondence is discussed. The rigorous analysis of the polarization of the scattered neutron beam establishes the framework for the emerging polarized real-space techniques such as spin-echo small-angle neutron scattering (SESANS), spin-echo modulated small-angle neutron scattering (SEMSANS), and polarized neutron dark-field contrast imaging (DFI), and opens up a new avenue for magnetic neutron data analysis on nanoscaled systems.
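For orientation only, the standard textbook form of the conventional Guinier law mentioned above is shown below; the magnetic version developed in the thesis replaces the geometric radius of gyration with a field-dependent magnetic Guinier radius (the exact magnetic expression is not reproduced here).

```latex
% Conventional (nonmagnetic) Guinier law, valid for q R_G \lesssim 1:
I(q) \simeq I(0)\,\exp\!\left(-\frac{q^{2} R_{G}^{2}}{3}\right),
\qquad
\ln I(q) \simeq \ln I(0) - \frac{q^{2} R_{G}^{2}}{3},
% so that a linear fit of ln I versus q^2 at low q yields the radius of gyration R_G.
% In the magnetic analogue developed in the thesis, R_G is replaced by a magnetic
% Guinier radius that depends on the applied field, the exchange constant, the
% saturation magnetization, and the anisotropy-field radius.
```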

Detailed reference viewed: 76 (13 UL)
Full Text
See detailPortfolioarbeit und Lernen – eine qualitative Studie in einer inklusionsorientierten Grundschule im Kontext der luxemburgischen Bildungsreform
Noesen, Melanie UL

Doctoral thesis (2022)

The dissertation addresses portfolio work in an inclusion-oriented primary school within the framework of the Luxembourgish education reform, in the field of tension between the standardisation of competences and an orientation towards inclusion. The portfolio was conceived as an important instrument for implementing a competence model through the reform of performance assessment (Winter et al. 2012; MENFP 2011) and was intended to serve not only the children's learning but also the development of teaching (ibid.). The thesis therefore examines how learning takes shape within portfolio work in a second cycle, with children at the beginning of written language acquisition, in an inclusion-oriented Luxembourgish primary school in the context of the education reform. More specifically, the study asks how the portfolio is designed and used by the children, the teachers, and the parents. A description of the functions of portfolio work in the learning group under study is followed by an assessment of the relationship between learning (more specifically, also language learning) and portfolio work, taking into account the teachers' representations of their own practice against the background of the Luxembourgish primary school reform.

Detailed reference viewed: 43 (2 UL)
Full Text
See detailModeling and Control of Laser Wire Additive Manufacturing
Mbodj, Natago Guilé UL

Doctoral thesis (2022)

Metal Additive Manufacturing (MAM) offers many advantages such as fast product manufacturing, nearly zero material waste, prototyping of complex large parts and the automatization of the manufacturing process in the aerospace, automotive and other sectors. In MAM, several parameters influence the product creation steps, making the process challenging. In this thesis, we model and control the deposition process for a type of MAM in which a laser beam melts a metallic wire to create metal parts, called the Laser Wire Additive Manufacturing (LWAM) process. First, a novel parametric modeling approach is created. The goal of this approach is to use parametric product design features to simulate and print 3D metallic objects for LWAM. The proposed method includes pattern and robot toolpath creation while considering several process requirements of LWAM, such as the deposition sequences and the robot system. This technique aims to develop adaptive robot toolpaths for a precise deposition process with nearly zero error in product creation. Second, a layer geometry (width and height) prediction model to improve deposition accuracy is proposed. A machine learning regression algorithm is applied to experimental data to predict the bead geometry across layers. Furthermore, a neural network-based approach was used to study the influence of different deposition parameters, namely laser power, wire-feed rate and travel speed, on the bead geometry. The experimental results show that the model has an error rate of approximately 2-4%. Third, a physics-based model of the bead geometry, including known process parameters and material properties, was created. For the first time, the model includes critical process parameters, material properties and the thermal history to describe the relationship between the layer height and different process inputs (i.e., the power, the standoff distance, the temperature, the wire-feed rate and the travel speed). The numerical results show good agreement between the model and the experimental measurements. Finally, a Model Predictive Controller (MPC) was designed to keep the layer height trajectory constant, considering the constraints and operating ranges of the process inputs. The simulation results show acceptable tracking of the reference height.
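A minimal sketch of the kind of data-driven bead-geometry regression described above (not the thesis' actual model or data): a small neural network maps laser power, wire-feed rate and travel speed to bead width and height; all numbers and coefficients are synthetic assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical process inputs: laser power [W], wire-feed rate [m/min], travel speed [mm/s]
rng = np.random.default_rng(1)
X = rng.uniform([800, 1.0, 2.0], [2000, 5.0, 10.0], size=(500, 3))
# Synthetic bead width/height [mm] standing in for measured layer geometry
width = 0.002 * X[:, 0] + 0.4 * X[:, 1] - 0.15 * X[:, 2] + rng.normal(0, 0.05, 500)
height = 0.001 * X[:, 0] + 0.5 * X[:, 1] - 0.20 * X[:, 2] + rng.normal(0, 0.05, 500)
Y = np.column_stack([width, height])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=5000, random_state=0))
model.fit(X_tr, Y_tr)
# Mean relative prediction error on held-out samples
print("relative error:", np.mean(np.abs(model.predict(X_te) - Y_te) / np.abs(Y_te)))
```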

Detailed reference viewed: 179 (6 UL)
Full Text
See detailModelling complex systems in the context of the COVID-19 pandemics
Kemp, Francoise UL

Doctoral thesis (2022)

Systems biology is an interdisciplinary approach investigating complex biological systems at different levels by combining experimental and modelling approaches to understand underlying mechanisms of health and disease. Complex systems, including biological systems, are shaped by a plethora of interactions and dynamic processes, often with the aim of ensuring the robustness of emergent system properties. The need for interdisciplinary approaches became very evident in the recent COVID-19 pandemic, which has spread around the globe since the end of 2019. This pandemic came with a host of urgent open epidemiological questions, including the infection and transmission mechanisms of the virus, its pathogenicity, and its relation to clinical symptoms. During the pandemic, mathematical modelling became an essential tool to integrate biological and healthcare data into mechanistic frameworks for projections of future developments and the assessment of different mitigation strategies. In this regard, systems biology, with its interdisciplinary approach, was a widely applied framework to support society in the COVID-19 crisis. In my thesis, I applied different mathematical modelling approaches as a tool to identify underlying mechanisms of the complex dynamics of the COVID-19 pandemic, with a specific focus on the situation in Luxembourg. For this purpose, I analysed the COVID-19 pandemic in its different phases and from various perspectives, investigating mitigation strategies, consequences for the healthcare and economic systems, and pandemic preparedness in terms of early-warning signals for the re-emergence of new COVID-19 outbreaks, using extended and adapted epidemiological Susceptible-Exposed-Infectious-Recovered (SEIR) models.
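For readers unfamiliar with the model class named above, the following is a minimal sketch of a basic SEIR model (not the thesis' extended or calibrated model); the parameter values are illustrative assumptions only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma, N):
    """Basic SEIR compartmental model (no vaccination, demography, or variants)."""
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

# Hypothetical parameters, not the calibrated values used in the thesis
N = 640_000                            # roughly the population of Luxembourg
beta, sigma, gamma = 0.4, 1 / 5.2, 1 / 7   # transmission rate, 1/incubation, 1/infectious period
y0 = [N - 10, 0, 10, 0]
sol = solve_ivp(seir, (0, 180), y0, args=(beta, sigma, gamma, N), dense_output=True)
print("peak active infections ~", int(sol.y[2].max()))
```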

Detailed reference viewed: 65 (7 UL)
See detailDocteur
Iglesias González, Alba UL

Doctoral thesis (2022)

The last century has been characterized by the increasing presence of synthetic chemicals in human surroundings and, as a consequence, by the increasing exposure of individuals to a wide variety of chemical substances on a regular basis. The Lancet Commission on Pollution and Health estimated that since synthetic chemicals started to be available for common use at the end of the 1940s, more than 140,000 new chemicals have been produced, including five thousand used globally in massive volumes. In parallel, awareness of the adverse effects of pollutant mixtures, possibly more severe than single-chemical exposures, has drawn attention to the need for multi-residue analytical methods to obtain the most comprehensive information on the human chemical exposome. Human biomonitoring, which consists in measuring pollutants in biological matrices, provides information that integrates all possible sources of exposure and is specific to the subject from whom the sample is collected. For this purpose, hair appears to be a particularly promising matrix for assessing chemical exposure thanks to its multiple benefits: hair enables the detection of both parent chemicals and metabolites, it is suitable for investigating exposure to chemicals from different families, and it allows the detection of persistent and non-persistent chemicals. Moreover, contrary to fluids such as urine and blood, which only give information on short-term exposure and show great variability in chemical concentration, hair is representative of wider time windows that can easily cover several months. Children represent the most vulnerable part of the population, and exposure to pollutants at young ages has been associated with severe health effects during childhood but also during adult life. Nevertheless, most epidemiological studies investigating exposure to pollutants are still conducted on adults, and data on children remain much more limited. The present study, named “Biomonitoring of children exposure to pollutants based on hair analysis”, investigated the relevance of hair analysis for assessing children's exposure to pollutants. In this study, 823 hair samples were collected from children and adolescents living in 9 different countries (Luxembourg, France, Spain, Uganda, Indonesia, Ecuador, Suriname, Paraguay and Uruguay), and 117 hair samples were also collected from French adults. All samples were analysed for the detection of 153 organic compounds (140 pesticides, 4 PCBs, 7 BDEs and 2 bisphenols). Moreover, the hair samples of French adults and children were also analysed for the detection of polycyclic aromatic hydrocarbons (PAH) and their metabolites (n = 62), nicotine, cotinine and metals (n = 36). The results obtained here clearly demonstrated that children living in different geographical areas are simultaneously exposed to multiple chemicals from different chemical classes. Furthermore, the presence of persistent organic pollutants in all children, and not only in adults, suggests that exposure to these chemicals is still ongoing, although these chemicals were banned decades ago. In the sub-group of Luxembourgish children, information collected through questionnaires in parallel with hair sample collection made it possible to identify some possible determinants of exposure, such as diet (organic vs conventional), residence area (urban vs countryside), and the presence of pets at home. Moreover, results showed higher concentration levels in younger children and higher exposure of boys than girls to non-persistent pesticides, which could possibly be attributed to differences in metabolism, behaviour and gender-specific activities. Finally, the study also highlighted a high level of similarity in the chemical exposome between children from the same family compared with the rest of the population. The present study strongly supports the use of hair analysis for assessing exposure to chemical pollutants and demonstrates the relevance of multi-residue methods for investigating the exposome.

Detailed reference viewed: 92 (3 UL)
Full Text
See detailLimit theorems with Malliavin calculus and Stein's method
Garino, Valentin UL

Doctoral thesis (2022)

We use recent tools from stochastic analysis (such as Stein's method and Malliavin calculus) to study the asymptotic behaviour of some functionals of a Gaussian field.
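To give a flavour of the classical results on which this line of work builds (a textbook statement, not the thesis' own contribution), the fourth moment theorem of Nualart and Peccati can be stated as follows.

```latex
% Fourth moment theorem (Nualart-Peccati): for random variables F_n living in a
% fixed Wiener chaos of order q >= 2 with E[F_n^2] -> 1, one has
F_n \xrightarrow{\ \mathrm{law}\ } \mathcal{N}(0,1)
\quad\Longleftrightarrow\quad
\mathbb{E}\!\left[F_n^{4}\right] \longrightarrow 3 ,
% and combining Stein's method with Malliavin calculus yields quantitative
% (e.g. total-variation) bounds on the distance to the Gaussian limit.
```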

Detailed reference viewed: 61 (8 UL)
Full Text
See detailExploring the Institutionalisation of Science Diplomacy: A Comparison of German and Swiss Science and Innovation Centres
Epping, Elisabeth UL

Doctoral thesis (2022)

This thesis investigates and explains the development and institutionalisation of Science and Innovation Centres (SICs) as distinct instruments of science diplomacy. SICs are a unique and underexplored instrument in the science diplomacy toolbox, and they are increasingly being adopted by highly innovative countries. This research responds to a growing interest in the field. Science diplomacy is commonly understood as a distinct governmental approach that mobilises science for wider foreign policy goals, such as improving international relations. However, science diplomacy discourse is characterised by a weak empirical basis and driven by normative perspectives. This research responds to these shortcomings and aims to lift the smokescreen of science diplomacy by providing an insight into its governance while also establishing a distinctly actor-centred perspective. In order to achieve this, two distinct SICs, Germany’s Deutsche Wissenschafts- und Innovationshäuser (DWIH) and Switzerland’s Swissnex, are closely analysed in an original comparative and longitudinal study. While SICs are just one instrument in the governmental toolbox for promoting international collaboration and competition, they are distinct due to their holistic set-up and their role as a nucleus for the wider research and innovation system they represent. Moreover, SICs appear to have the potential to create a significant impact, despite their limited financial resources. This thesis takes a historical development perspective to outline how these two SICs were designed, as well as their gradual development and institutionalisation. The thesis further probes why actors participate in SICs by unpacking their differing rationales, developing a distinctly actor-centred perspective on science diplomacy. This study has been designed in an inductive and exploratory way to account for the novelty of the topic; the research findings are based on the analysis of 41 interviews and a substantial collection of documents. The study finds evidence that SICs developed as a response to wider societal trends, although these trends differed for the two case studies. Moreover, the development of SICs has been characterised by aspects such as timing, contingency and critical junctures. SICs are inextricably connected to their national contexts and mirror distinct system characteristics, such as governance arrangements or the degree of actor involvement. These aspects were also seen as explaining the exact shape that SICs take. Furthermore, this study finds evidence of an appropriation of SICs by key actors, in line with their organisational interests. In the case of the DWIH, this impacted and even limited its (potential) design and ways of operating. However, the analysis of SICs’ appropriation also revealed a distinct sense of collectivity, which developed among actors in the national research and innovation ecosystem due to this joint instrument. The research findings reaffirm that science diplomacy is clearly driven by national interests, while further highlighting that the notion of science diplomacy and its governance (actors, rationales and instruments) can only be fully understood by analysing the national context.

Detailed reference viewed: 154 (5 UL)
See detailDeciphering the role of colorectal cancer-associated bacteria in the fibroblast-tumor cell interaction
Karta, Jessica UL

Doctoral thesis (2022)

Dysbiosis is an imbalance in the gut microbiome that is often associated with inflammation and cancer. Several microbial species, such as Fusobacterium nucleatum, have been suggested to be involved in colorectal cancer (CRC). To date, most studies have focused on the interaction between CRC-associated bacteria and tumor cells. However, the tumor microenvironment (TME) is composed of various cell types, among which are cancer-associated fibroblasts (CAFs), one of the most vital players in the TME. The interaction between CRC-associated bacteria and CAFs, and especially the impact of their cross-talk on tumor cells, remains largely unknown. In this regard, this thesis investigated the interaction between a well-described and accepted CRC-associated bacterium, Fusobacterium nucleatum, and CAFs, and its subsequent effects on tumor progression in CRC. Our findings show that F. nucleatum binds to CAFs and induces phenotypic changes. F. nucleatum induces CAFs to secrete several pro-inflammatory cytokines and membrane-associated proteases. Upon exposure to F. nucleatum, CAFs also undergo metabolic rewiring, with higher mitochondrial ROS and lactate secretion. Importantly, F. nucleatum-treated CAFs increase the migration ability of tumor cells in vitro through secreted cytokines, among them CXCL1. Furthermore, the co-injection of F. nucleatum-treated CAFs with tumor cells in vivo leads to faster tumor growth than the co-injection of untreated CAFs with tumor cells. Taken together, our results show that CAFs are an important player in the gut microbiome-CRC axis. Targeting the CAF-microbiome crosstalk might represent a novel therapeutic strategy for CRC.

Detailed reference viewed: 100 (16 UL)
Full Text
See detailElectrocaloric coolers and pyroelectric energy harvesters based on multilayer capacitors of Pb(Sc0.5Ta0.5)O3
Torelló Massana, Àlvar UL

Doctoral thesis (2022)

The following work investigates the development of heat pumps that exploit electrocaloric effects in Pb(Sc,Ta)O3 (PST) multilayer capacitors (MLCs). The electrocaloric effect refers to reversible thermal changes in a material upon application (and removal) of an electric field. Electrocaloric cooling is interesting because 1) it has the potential to be more efficient than competing technologies, such as vapour-compression systems, and 2) it does not require the use of greenhouse gases, which is crucial for slowing down global warming and mitigating the effects of climate change. The continuous progress in the field of electrocalorics has promoted the creation of several electrocaloric-based heat pump prototypes. Despite the different designs and working principles utilized, these prototypes have struggled to maintain temperature variations as large as 10 K, discouraging their industrial development. In this work, bespoke PST-MLCs exhibiting large electrocaloric effects near room temperature were incorporated into a novel heat pump with the aim of surpassing the 10 K barrier. The experimental design of the heat pump was based on the outcome of a numerical model. After implementing some of the modifications suggested by the latter, consistent temperature spans of 13 K at 30 °C were reported, with cooling powers of 12 W/kg. Additional simulations predicted temperature spans as large as 50 K and cooling powers on the order of 1000 W/kg if a new set of plausible modifications were put in place. Similarly, these same PST-MLC samples were implemented in pyroelectric harvesters, revisiting Olsen's pioneering work from 1980. The harvested energies were found to be as large as 11.2 J, with energy densities reaching up to 4.4 J/cm3 of active material, when undergoing temperature oscillations of 100 K under applied electric fields of 140-200 kV/cm. These figures are, respectively, two and four times larger than the best values reported in the literature. The results obtained in this dissertation are beyond the state of the art and show that 1) electrocaloric heat pumps can indeed achieve temperature spans larger than 10 K, and 2) pyroelectric harvesters can generate electrical energy in the joule range. Moreover, numerical models indicated that there is still room for improvement, especially when it comes to the power of these devices. This should encourage the development of these kinds of electrocaloric- and pyroelectric-based applications in the near future.

Detailed reference viewed: 122 (6 UL)
Full Text
See detailThe Multi-Level System of Space Mining: Regulatory Aspects and Enforcement Options
Salmeri, Antonino UL

Doctoral thesis (2022)

Few contest that space mining holds the potential to revolutionize the space sector. The utilization of space resources can reduce the costs of deep space exploration and kick off an entirely new economy in our solar system. However, whether such a revolution will be for better or for worse also depends on the enactment of appropriate regulation. Under the right framework, space mining will be able to deliver on its promise of a new era of prosperous and sustainable space exploration. But with the wrong rules (or lack thereof), unbalanced space resource activities can destabilize the space community on a truly unprecedented scale. With companies planning mining operations on the Moon already during this decade, the regulation of space resource activities has thus become one of the most pressing and crucial topics to be addressed by the global space community. In this context, this thesis provides a first-of-its-kind, comprehensive and innovative analysis of the regulatory and enforcement options currently shaping the multi-level governance of space mining. In addition to this, the thesis also suggests a series of correctives that can improve the system and ensure the peaceful, rational, safe, and sustainable conduct of space mining. Structurally, the thesis moves from the general to the particular and is divided into three chapters. Chapter 1 discusses the relationship between space law and international law to contextualize the specific assessment of space mining. Chapter 2 analyses the current regulatory framework applicable to space mining, considering both the international and national levels. Finally, Chapter 3 identifies potential enforcement options, assesses them in terms of effectiveness and legitimacy, and further proposes some pragmatic correctives to reinforce the governance system.

Detailed reference viewed: 152 (18 UL)
See detailMULTI-STAGE PROCESS FOR A HIGHER FLEXIBILITY OF BIOGAS PLANTS WITH (CO-) FERMENTATION OF WASTE – OPTIMISATION AND MODELLING
Sobon-Mühlenbrock, Elena Katarzyna UL

Doctoral thesis (2022)

The European Union has been striving to become the first climate-neutral continent by 2050. This implies an intensified transition towards sustainability. The most widely used renewable energy sources are the sun and wind, which are intermittent. Thus, large fluctuating shares in the energy network are expected within the next years. Consequently, periods may occur in which energy demand and energy supply do not match, leading to destabilization of the electricity grid. Therefore, there is an urgent need to overcome this intermittency. One feasible option is to use a third renewable energy source, biomass, which can be produced in a demand-oriented way. Hence, a flexible biogas plant running in a two-stage mode, where the first stage serves as a storage for liquid intermediates, could be a viable option to create demand-driven and need-oriented electricity. Since vast amounts of food waste are thrown away each year (in 2015 they amounted to 88 million tonnes within the EU-28, accounting for ca. 93 TWh of energy), this substrate could be recovered energetically in the above-described process. This is a promising concept, which is, however, not widely applied as it faces many challenges, both technical and economic. Additionally, food waste is inhomogeneous, and its composition depends on the country and the collecting season. The motivation of this work was to contribute to a broader understanding of the two-stage anaerobic digestion process using food waste as the major substrate. At first, an innovative substitute for heterogeneous food waste was introduced and examined at two different loadings and temperature modes. It was shown that the Model Kitchen Waste (MKW) was comparable to real Kitchen Waste (KW) in mesophilic and thermophilic mode for an organic loading in accordance with the guideline VDI 4630 (2016). For an “extreme” loading and mesophilic mode, the MKW generated similar biogas, methane, and volatile fatty acid (VFA) patterns as well. Furthermore, another two MKW versions were developed, allowing a variety of different organic wastes to be covered and the impact of fat content on biogas production to be analyzed. Afterwards, a semi-continuous one-stage experiment of 122 days was conducted. It was followed by an extensive semi-continuous two-stage study with almost 1.5 years of runtime. Different loadings and hydraulic retention times were investigated in order to optimize this challenging process. Additionally, the impact of co-digestion of a lignocellulosic substrate was analyzed. It was concluded that the two-stage mode led to higher biogas and methane yields than the one-stage mode. However, the former posed challenges related to stability and process maintenance. Additionally, it was found that co-digestion of food waste and maize silage results in a methane yield atypical for the acidic stage. Apart from the experiments, the Anaerobic Digestion Model No. 1 (ADM1), originally developed for wastewater, was modified so that it would suit the anaerobic digestion of food waste of different fat contents, in batch and semi-continuous mode consisting of one and two stages. The goodness of fit was assessed by the Normalized Root Mean Square Error (NRMSE) and the coefficient of efficiency (CE). For the batch mode, two temperature modes could be properly simulated at loadings both conforming and not conforming to VDI 4630 (2016). For each mode, two different sets of parameters were introduced, namely for substrates of low fat content and for substrates of middle/high fat content (ArSo LF and ArSo MF, with LF standing for low fat and MF for middle fat). The models could be further validated in another experiment, also using co-digestion of lignocellulosic substances. Further, the parameters estimated for the batch mode were applied to the semi-continuous experiment. This proved successful; however, due to high amounts of butyrate (HBu) and valerate (HVa), the model underwent calibration so that it could better predict the acids (the model developed for the one-stage semi-continuous experiment was called ArSo M LF*). This could be validated on another semi-continuous reactor running in one-stage mode. Finally, the acidic stage of the two-stage mode was analyzed. The model applied to the one-stage mode fitted the data of the two-stage mode as far as the VFAs are concerned. Nevertheless, due to the vast amount of acids, it was adjusted and called ArSo M LF**.
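For reference, the two goodness-of-fit measures named above can be computed as in the following sketch; the normalization used for the NRMSE (here, the observed range) is an assumption, since several variants exist, and the example data are hypothetical.

```python
import numpy as np

def nrmse(observed, simulated):
    """Root mean square error normalized by the range of the observations
    (normalization by the mean is another common convention)."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    rmse = np.sqrt(np.mean((observed - simulated) ** 2))
    return rmse / (observed.max() - observed.min())

def coefficient_of_efficiency(observed, simulated):
    """Nash-Sutcliffe coefficient of efficiency: 1 is a perfect fit, 0 means the
    model is no better than the mean of the observations."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical cumulative biogas yields over a batch test
obs = np.array([0, 120, 260, 350, 410, 445, 460])
sim = np.array([0, 110, 270, 345, 400, 450, 465])
print(nrmse(obs, sim), coefficient_of_efficiency(obs, sim))
```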

Detailed reference viewed: 46 (2 UL)
Full Text
See detailEssays on Market Microstructure and Financial Markets Stability
Levin, Vladimir UL

Doctoral thesis (2022)

The present doctoral thesis consists of three main chapters, which can be read independently. Each of the three chapters raises a research question, reviews the related literature, proposes a method for the analysis, and, finally, reports results and conclusions. Chapter 1 is entitled Dark Trading and Financial Markets Stability and is based on a working paper co-authored with Prof. Dr. Jorge Goncalves and Prof. Dr. Roman Kraussl. This paper examines how the implementation of a new dark order, the Midpoint Extended Life Order (M-ELO) on Nasdaq, impacts financial markets stability in terms of occurrences of mini-flash crashes in individual securities. We use high-frequency order book data and apply panel regression analysis to estimate the effect of dark order trading activity on market stability and liquidity provision. The results suggest a predominance of a speed bump effect of M-ELO rather than a darkness effect. We find that the introduction of M-ELO increases market stability by reducing the average number of mini-flash crashes, but its impact on market quality is mixed. Chapter 2 is entitled Dark Pools and Price Discovery in Limit Order Markets and is a single-authored work. This paper examines how the introduction of a dark pool impacts price discovery, market quality, and the aggregate welfare of traders. I use a four-period model where rational and risk-neutral agents choose the order type and the venue, and I obtain the equilibrium numerically. The comparative statics on the order submission probability suggest a U-shaped order migration to the dark pool. The overall effect of dark trading on market quality and aggregate welfare is found to be positive but limited in size and dependent on market conditions. I find mixed results for the process of price discovery. Depending on the immediacy needs of traders, price discovery may change due to the presence of the dark venue. Chapter 3 is entitled Machine Learning and Market Microstructure Predictability and is another single-authored piece of work. This paper illustrates the application of machine learning to market microstructure research. I outline the most insightful microstructure measures that possess the highest predictive power and are useful for out-of-sample predictions of such market features as liquidity volatility and general market stability. By comparing the models' performance during normal times versus crisis times, I come to the conclusion that financial markets remain efficient during both periods. Additionally, I find that high-frequency traders' activity is not able to accurately forecast either of these market features.
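To make the panel regression step of Chapter 1 concrete, here is a minimal sketch (not the paper's actual specification or data) regressing per-security, per-day mini-flash-crash counts on a dark-order activity measure with two-way fixed effects; all variable names and numbers are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical security-day panel: crash counts and M-ELO share of trading volume
rng = np.random.default_rng(2)
securities = [f"S{i}" for i in range(20)]
days = pd.date_range("2019-03-01", periods=60)
df = pd.DataFrame([(s, d) for s in securities for d in days], columns=["security", "day"])
df["melo_share"] = rng.uniform(0, 0.05, len(df))
df["volatility"] = rng.uniform(0.01, 0.05, len(df))
df["crashes"] = rng.poisson(3 + 20 * df["volatility"] - 15 * df["melo_share"])

# Two-way fixed effects via dummies; a stabilizing "speed bump" effect would show
# up as a negative coefficient on melo_share. Standard errors clustered by security.
fit = smf.ols("crashes ~ melo_share + volatility + C(security) + C(day)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": pd.factorize(df["security"])[0]})
print(fit.params["melo_share"])
```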

Detailed reference viewed: 86 (3 UL)
Full Text
See detailReductions of algebraic numbers and Artin's conjecture on primitive roots
Sgobba, Pietro UL

Doctoral thesis (2022)

Detailed reference viewed: 75 (12 UL)
Full Text
See detailNon-Orthogonal Multiple Access for Next-Generation Satellite Systems: Flexibility Exploitation and Resource Optimization
Wang, Anyue UL

Doctoral thesis (2022)

In conventional satellite communication systems, onboard resource management follows pre-design approaches with limited flexibility. On the one hand, this can simplify the satellite payload design. On the other hand, such limited flexibility hardly fits the scenario of irregular traffic and dynamic demands in practice. As a consequence, the efficiency of resource utilization could be deteriorated, evidenced by mismatches between offered capacity and requested traffic in practical operations. To overcome this common issue, exploiting multi-dimension flexibilities and developing advanced resource management approaches are of importance for next-generation high-throughput satellites (HTS). Non-orthogonal multiple access (NOMA), as one of the promising new radio techniques for future mobile communication systems, has proved its advantages in terrestrial communication systems. Towards future satellite systems, NOMA has received considerable attention because it can enhance power-domain flexibility in resource management and achieve higher spectral efficiency than orthogonal multiple access (OMA). From ground to space, terrestrial-based NOMA schemes may not be directly applied due to distinctive features of satellite systems, e.g., channel characteristics and limited onboard capabilities, etc. To investigate the potential synergies of NOMA in satellite systems, we are motivated to enrich this line of studies in this dissertation. We aim at resolving the following questions: 1) How to optimize resource management in NOMA-enabled satellite systems and how much performance gain can NOMA bring compared to conventional schemes? 2) For complicated resource management, how to accelerate the decision-making procedure and achieve a good tradeoff between complexity reduction and performance improvement? 3) What are the mutual impacts among multiple domains of resource optimization, and how to boost the underlying synergies of NOMA and exploit flexibilities in other domains? The main contributions of the dissertation are organized in the following four chapters: First, we design an optimization framework to enable efficient resource allocation in general NOMA-enabled multi-beam satellite systems. We investigate joint optimization of power allocation, decoding orders, and terminal-timeslot assignment to improve the max-min fairness of the offered-capacity-to-requested-traffic ratio (OCTR). To solve the mixed-integer non-convex programming (MINCP) problem, we develop an optimal fast-convergence algorithmic framework and a heuristic scheme, which outperform conventional OMA in matching capacity to demand. Second, to accelerate the decision-making procedure in resource optimization, we attempt to solve optimization problems for satellite-NOMA from a machine-learning perspective and reveal the pros and cons of learning and optimization techniques. For complicated resource optimization problems in satellite-NOMA, we introduce deep neural networks (DNN) to accelerate decision making and design learning-assisted optimization schemes to jointly optimize power allocation and terminal-timeslot assignment. The proposed learning-optimization schemes achieve a good trade-off between complexity and performance. Third, from a time-domain perspective, beam hopping (BH) is promising to mitigate the capacity-demand mismatches and inter-beam interference by selectively and sequentially illuminating suited beams over timeslots. 
Motivated by this, we investigate the synergy and mutual influence of NOMA and BH for satellite systems to jointly exploit power- and time-domain flexibilities. We jointly optimize power allocation, beam scheduling, and terminal-timeslot assignment to minimize the capacity-demand gap. The globally optimal solution may not be attainable due to the NP-hardness of the problem. We develop a bounding scheme to tightly gauge the global optimum and propose a suboptimal algorithm to enable efficient resource assignment. Numerical results demonstrate the synergy of combining NOMA and BH, as well as their individual performance gains compared to the benchmarks. Fourth, from a spatial-domain perspective, adaptive beam patterns can adjust the beam coverage to serve irregular traffic demand and alleviate co-channel interference, motivating us to investigate joint resource optimization for satellite systems with flexibilities in the power and spatial domains. We formulate a joint optimization problem of power allocation, beam pattern selection, and terminal association, which is an MINCP. To tackle the integer variables and non-convexity, we design an algorithmic framework and a low-complexity scheme based on the framework. Numerical results show the advantages of jointly optimizing NOMA and beam pattern selection compared to conventional schemes. The dissertation concludes with the main findings and insights on future work.
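As orientation for the objective used throughout the dissertation, the following is a simplified skeleton of a max-min OCTR problem; it omits the decoding orders, scheduling and SIC details of the full model, so it should be read only as a schematic.

```latex
% Simplified skeleton of a max-min offered-capacity-to-requested-traffic (OCTR) problem:
\max_{\mathbf{p},\,\mathbf{x}} \ \min_{k} \ \frac{C_k(\mathbf{p},\mathbf{x})}{D_k}
\quad \text{s.t.} \quad
\sum_{k} p_k \le P_{\mathrm{tot}}, \qquad p_k \ge 0, \qquad x_{k,t} \in \{0,1\},
% where D_k is the traffic demand of terminal k, C_k the capacity offered to it
% (under NOMA, computed with successive interference cancellation), p_k the
% allocated power, and x_{k,t} the terminal-timeslot assignment variables;
% the integer assignment variables and the non-convex capacity expressions are
% what make the full problem a mixed-integer non-convex program (MINCP).
```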

Detailed reference viewed: 129 (12 UL)
Full Text
See detailMoral Decision-Making in Video Games
Holl, Elisabeth UL

Doctoral thesis (2022)

The present dissertation focuses on moral decision-making in single player video games. The thesis comprises four manuscripts: a theoretical book chapter (Melzer & Holl, 2021), a qualitative focus group study (Holl et al., 2020), a quantitative case study on the video game Detroit: Become Human (Holl & Melzer, 2021), and results from a large experimental laboratory study (Holl et al., 2022). With more than 2.6 billion players worldwide (Entertainment Software Association, 2018), gaming has become increasingly present in society. In addition to this growing interest, technological advances allow for more complex narratives and deeper character design. Thus, meaningful and morally laden storylines have become increasingly popular in recent years, both in popular AAA (e.g., Detroit: Become Human, The Last of Us 2) and smaller Indie titles (e.g., Papers please, Undertale). At the same time, scholars suggested that not only hedonic but also eudaimonic experiences are an essential part of (gaming) entertainment (Daneels, Bowman, et al., 2021; Oliver et al., 2015; Wirth et al., 2012). This dissertation explores in greater detail one aspect of eudaimonic gameplay, namely single player games that feature meaningful moral decision-making. Prior research on morality and gaming has relied on a variety of theoretical concepts, such as moral disengagement (Bandura, 1990; Klimmt et al., 2008) or moral foundations and intuitions (Haidt, 2001; Haidt & Joseph, 2007; Tamborini, 2013). Thus, the first task of the dissertation was to establish a previously missing model of moral processing in video games that unifies existing theories (cf. chapter 5.13; Melzer & Holl, 2021). Furthermore, the model proposes factors (e.g., moral disengagement cues, limited cognitive capacities/time pressure) promoting or hampering moral engagement while playing, thus fostering moral versus strategic processing. The model not only integrates relevant theoretical publications but was also designed using data collected in focus groups with frequent gamers (Holl et al., 2020). These qualitative results showed that moral gameplay is no longer a niche. Furthermore, players expressed that they deliberately chose between hedonic and eudaimonic gaming depending on their mood and motivation. Lastly, players mentioned several factors influencing their emotional and moral engagement while playing (e.g., identification, framing). To test parts of the proposed theoretical model, the game Detroit: Become Human, which has been praised for its emotional storytelling and meaningful choices (Pallavicini et al., 2020), was investigated in a case study (Holl & Melzer, 2021). Extensive coding of large-scale online data revealed that 73% of in-game decisions in Detroit: Become Human were morally relevant, with a high prevalence of situations relating to harm/care- and authority-based morality. Overall, players preferred to choose moral options over immoral options. This tendency to act “good” was even more pronounced under time pressure and when non-human characters were involved. Furthermore, behavioral variations were found depending on what character was played. To test the findings of the case study in greater detail, and to also gather individual data in an experimental setup, Holl et al. (2022) conducted a laboratory study. A total of 101 participants played several chapters of Detroit: Become Human featuring up to 13 moral decisions after being randomly assigned to one of three conditions (i.e., playing a morally vs. immorally framed character vs. no framing/control). As expected, players again preferred to act morally. Contrary to expectations, character framing did not affect decision-making or physiological responses (i.e., heart rate variability). However, time pressure again increased the likelihood of moral decision-making. Unfortunately, anticipated effects of personality traits (i.e., trait moral disengagement, empathy) were inconclusive, both regarding the outcome of decision-making and participants’ perceived guilt after playing. In summary, the work of this dissertation further underlines the relevance of eudaimonic entertainment. Studying moral decision-making in games may provide insights for moral decision-making in general. Additionally, the presented results have the potential to defuse the heated debate over violent gaming. Novel insights are gained using a mixed-methods approach combining qualitative data with quantitative data from a large-scale case study of worldwide user behavior and an experimental setup.

Detailed reference viewed: 144 (21 UL)
Full Text
See detailmmWave Cognitive Radar: Adaptive Waveform Design and Implementation
Raei, Ehsan UL

Doctoral thesis (2022)

Detailed reference viewed: 75 (7 UL)
Full Text
See detailGrain boundaries and potassium post-deposition treatments in chalcopyrite solar cells
Martin Lanzoni, Evandro UL

Doctoral thesis (2022)

Over the last years, alkali post-deposition treatments (PDT) have been identified as the main driver for the continuous improvements in the power conversion efficiency (PCE) of Cu(In,Ga)Se2 (CIGSe) solar cells. All the alkali elements, from sodium to cesium, have shown beneficial optoelectronic effects, with many reports linking the improvements to grain boundary (GB) passivation. The most common process for alkali incorporation into the CIGS absorber is based on the thermal evaporation of alkali fluorides in a selenium atmosphere. Besides the demonstrated improvements in performance, disentangling the individual contributions of the PDTs to the GBs, the surface, and the bulk is very challenging because of the many concurrent chemical reactions and diffusion processes. This thesis aims to investigate how pure metallic potassium interacts with CIGSe epitaxially grown on GaAs (100) and on multi-crystalline GaAs. Surface-sensitive Kelvin probe force microscopy (KPFM) and X-ray photoelectron spectroscopy (XPS) measurements are used to analyze, in situ, changes in workfunction and composition before and after each deposition step. Inert gas transfer systems and ultrahigh vacuum (UHV) are used to preserve the pristine surface properties of the CIGSe. An in-depth understanding of how different KPFM operation modes and environments influence the measured workfunction is discussed in detail in this thesis. It is shown that AM-KPFM, the most common KPFM operation mode, leads to misinterpretations of the measured workfunction at GBs on rough samples. Frequency-modulation KPFM (FM-KPFM), on the other hand, turns out to be the most suitable KPFM mode to investigate GB band bending. Pure metallic potassium evaporation on CIGSe epitaxially grown on GaAs (100) leads to diffusion of K from the surface down to the CIGS/GaAs interface even in the absence of GBs. Evaporation of metallic K is performed using a metallic dispenser, in which the evaporation rate can be controlled to deposit a few monolayers of K. The deposition is done in UHV, and an annealing step is used to diffuse K from the surface into the bulk. Pure metallic potassium is also evaporated on CIGSe epitaxially grown on a multicrystalline GaAs substrate, where well-defined GBs are present. Negligible workfunction changes at the GBs were observed. XPS shows a strong Cu depletion after K deposition followed by annealing. Interestingly, the amount of K on the absorber surface after the K deposition and subsequent annealing is almost equal to the amount of Cu that diffused into the bulk, suggesting a 1:1 exchange mechanism and no KInSe2 secondary phase.
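For readers unfamiliar with how KPFM relates to the workfunction maps discussed above, the standard relation is sketched below; the sign convention varies between instruments and is therefore only indicative, and this is a textbook relation rather than a result of the thesis.

```latex
% KPFM measures the contact potential difference (CPD) between tip and sample,
% which is linked to the workfunction difference by
e\,V_{\mathrm{CPD}} \;=\; \Phi_{\mathrm{tip}} - \Phi_{\mathrm{sample}},
% so that maps of V_CPD recorded before and after potassium deposition translate
% into maps of workfunction changes, e.g. band bending at grain boundaries.
```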

Detailed reference viewed: 69 (10 UL)
Full Text
See detailDigital Forensics AI: on Practicality, Optimality, and Interpretability of Digital Evidence Mining Techniques
Solanke, Abiodun Abdullahi UL

Doctoral thesis (2022)

Digital forensics as a field has progressed alongside technological advancements over the years, just as digital devices have become more robust and sophisticated. However, criminals and attackers have devised means of exploiting the vulnerabilities or sophistication of these devices to carry out malicious activities in unprecedented ways. Their belief is that electronic crimes can be committed without identities being revealed or trails being established. Several applications of artificial intelligence (AI) have demonstrated interesting and promising solutions to seemingly intractable societal challenges. This thesis aims to advance the concept of applying AI techniques in digital forensic investigation. Our approach involves experimenting with a complex case scenario in which suspects corresponded by e-mail and suspiciously deleted certain communications, presumably to conceal evidence. The purpose is to demonstrate the efficacy of Artificial Neural Networks (ANN) in learning and detecting communication patterns over time, and then predicting the possibility of missing communication(s) along with potential topics of discussion. To do this, we developed a novel approach and included other existing models. The accuracy of our results is evaluated, and their performance on previously unseen data is measured. Second, we proposed conceptualizing the term “Digital Forensics AI” (DFAI) to formalize the application of AI in digital forensics. The objective is to highlight the instruments that facilitate the best evidential outcomes and presentation mechanisms that are adaptable to the probabilistic output of AI models. Finally, we strengthened the case for applying AI in digital forensics by recommending methodologies and approaches for bridging trust gaps through the development of interpretable models that facilitate the admissibility of digital evidence in legal proceedings.
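As a toy illustration of the general idea (not the thesis' architecture, data, or topic model), the sketch below trains a small neural network on synthetic weekly e-mail counts to flag windows where expected traffic is absent; every parameter and variable name is hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic weekly e-mail counts between one pair of suspects; the task is to
# predict from recent history whether the next week should contain traffic, so
# that an observed empty week becomes a candidate "deleted communication".
rng = np.random.default_rng(3)
weeks = 500
counts = rng.poisson(lam=np.where(np.arange(weeks) % 4 == 0, 6, 1))

lags = 8
X = np.array([counts[i - lags:i] for i in range(lags, weeks)])
y = (counts[lags:] > 0).astype(int)          # was there any exchange that week?

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
expected = clf.predict(X_te)
# Weeks predicted to contain traffic but observed empty would be flagged for review.
print("suspicious gaps:", int(np.sum((expected == 1) & (y_te == 0))))
```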

Detailed reference viewed: 55 (4 UL)
Full Text
See detailThe Interpretation of UN Security Council Resolutions
di Gianfrancesco, Laura UL

Doctoral thesis (2022)

Detailed reference viewed: 58 (5 UL)
Full Text
See detailMALDI-TOF-Enabled Subtyping and Antimicrobial Screening of the Food- and Waterborne Pathogen Campylobacter
Feucherolles, Maureen UL

Doctoral thesis (2022)

For decades, antimicrobial resistance has been considered a global, long-lasting challenge. If no action is taken, antimicrobial resistance-related diseases could cause up to 10 million deaths each year by 2050, and 24 million people might be pushed into extreme poverty. The ever-increasing spread and cross-transmission of drug-resistant foodborne pathogens such as Campylobacter spp. between reservoirs, such as humans, animals and the environment, are of concern. Indeed, because of the over-exposure to and overuse of antibiotics in food-producing animals, the latter could carry multidrug-resistant Campylobacter that could be transmitted to humans via food sources or through direct animal contact. One of the solutions to tackle antimicrobial resistance is the development of rapid diagnostic tests to swiftly detect resistances in routine laboratories. By detecting AMR earlier, adapted antibiotherapy might be administered promptly, shifting from empirical to evidence-based practices and conserving the effectiveness of antimicrobials. MALDI-TOF MS, a cost- and time-efficient technique already implemented in routine laboratories for the identification of microorganisms based on expressed protein profiles, has been successfully applied for bacterial typing and for the detection of specific AMR peaks in a research context. In line with the development of rapid diagnostic tests, MALDI-TOF MS therefore appears to be an ideal candidate for a powerful and promising “one-fits-all” diagnostic tool. Therefore, the present study aimed to gain more insights into the ability of MALDI-TOF MS protein-based signals to reflect the AMR and genetic diversity of Campylobacter spp. The groundwork of this research consisted of the phenotypic and genotypic characterization of a One-Health Campylobacter collection. Then, isolates were submitted to protein extraction for MALDI-TOF MS analysis. Firstly, mass spectra were investigated to screen for AMR to different classes of antibiotics and to retrieve putative biomarkers related to already known AMR mechanisms. The second part evaluated the ability of MALDI-TOF MS to cluster mass spectra according to the genetic relatedness of isolates and compared the results congruently to reference genomic-based methods. MALDI-TOF MS protein profiles combined with machine learning displayed promising results for predicting susceptibility and resistance to ciprofloxacin and tetracycline in Campylobacter. Additionally, MALDI-TOF MS C. jejuni protein clusters were highly concordant with conventional DNA-based typing methods, such as MLST and cgMLST, when a similarity cut-off of 94% was applied. A similar discriminatory power between 2-20 kDa expressed protein profiles and cgMLST profiles was underlined as well. Finally, putative biomarkers either linked to known or unknown AMR mechanisms, or to the genetic population structure of Campylobacter, were identified. Overall, a single spectrum based on bacterial expressed proteins could be used for species identification, AMR screening and potentially as a complete pre-screening for daily surveillance, including genetic diversity and source attribution after further analysis.
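To make the spectra-plus-machine-learning step concrete, here is a minimal sketch (not the thesis' pipeline) that classifies binned spectrum intensities into resistant versus susceptible and reads candidate biomarker bins off the feature importances; the data, bin layout and the "resistance peak" are entirely synthetic assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for MALDI-TOF spectra: intensities binned over 2-20 kDa
# (here ~1800 bins of 10 Da), with labels for ciprofloxacin resistance.
rng = np.random.default_rng(4)
n_isolates, n_bins = 200, 1800
X = rng.gamma(shape=2.0, scale=1.0, size=(n_isolates, n_bins))
y = rng.integers(0, 2, n_isolates)       # 0 = susceptible, 1 = resistant
X[y == 1, 700] += 3.0                    # hypothetical resistance-associated peak

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Feature importances point to candidate m/z bins, i.e. putative AMR biomarkers.
clf.fit(X, y)
top_bins = np.argsort(clf.feature_importances_)[-5:]
print("candidate biomarker bins:", top_bins)
```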

Detailed reference viewed: 216 (4 UL)
Full Text
See detailThe European Approach to Open Science and Research Data
Paseri, Ludovica UL

Doctoral thesis (2022)

This dissertation proposes an analysis of the governance of the European scientific research, focusing on the emergence of the Open Science paradigm. The paradigm of Open Science indicates a new way of ... [more ▼]

This dissertation proposes an analysis of the governance of the European scientific research, focusing on the emergence of the Open Science paradigm. The paradigm of Open Science indicates a new way of doing science, oriented towards the openness of every phase of the scientific research process, and able to take full advantage of digital Information and Communication Technologies (ICTs). The emergence of this paradigm is relatively recent, but in the last couple of years it has become increasingly relevant. The European institutions have expressed a clear intention to embrace the Open Science paradigm, with several interventions and policies on this matter. Among many, consider, for example, the project of the European Open Science Cloud (EOSC), a federated and trusted environment for the access and sharing of research data and services for the benefit of European researchers; or the establishment of the new research funding programme, the Horizon Europe programme, laid down in EU Regulation 2021/695, which links research funding to the adoption of Open Science tenets. This dissertation examines the European approach to Open Science, providing a conceptual framework for the multiple interventions of the European institutions in the field of Open Science, as well as addressing the major legal challenges that the implementation of this new paradigm is generating. To this aim, the study first investigates the notion of Open Science, in order to understand what specifically falls under the umbrella of this broad term: a definition is proposed that takes into account all its dimensions, together with an analysis of the human and fundamental rights framework in which Open Science is grounded. After that, the inquiry addresses the legal challenges related to the openness of research data, in light of the European legislative framework on Open Data. This also requires drawing attention to the European data protection framework, analysing the impact of the General Data Protection Regulation (GDPR) in the context of Open Science. The last part of the study is devoted to the infrastructural dimension of the Open Science paradigm, exploring the digital infrastructures that are increasingly an integral part of the scientific research process. In particular, the focus is on a specific type of computational infrastructure, namely the High Performance Computing (HPC) facility. The adoption of HPC for research is analysed both from the European perspective, investigating the EuroHPC project, and from the local perspective, through the case study of the HPC facility of the University of Luxembourg, namely the ULHPC. This dissertation intends to underline the relevance of a legal coordination approach between all actors and phases of the scientific research process in order to develop and implement the Open Science paradigm, while adhering to the underlying human and fundamental rights. [less ▲]

Detailed reference viewed: 162 (6 UL)
Full Text
See detailMethods and tools for analysis and management of risks and regulatory compliance in the healthcare sector: the Hospital at Home – HaH
Amantea, Ilaria Angela UL

Doctoral thesis (2022)

Changing or creating a new organization means creating a new process. Each process involves many risks that need to be identified and managed. The main risks considered here are procedural risks and legal ... [more ▼]

Changing or creating a new organization means creating a new process. Each process involves many risks that need to be identified and managed. The main risks considered here are procedural risks and legal risks. The former relate to the errors that may occur during processes, while the latter relate to the compliance of processes with regulations. Managing these risks therefore implies proposing changes to the processes that lead to the desired result: an optimized process. In order to manage and optimize a company in the best possible way, the organizational aspect, risk management and legal compliance should not only be taken into account but analyzed simultaneously, with the aim of finding the right balance that satisfies them all. This is exactly the aim of this thesis: to provide methods and tools to balance these three aspects. To enable this type of optimization, ICT support is used. This work is not intended to be a computer science or law thesis but an interdisciplinary one. Most of the work done so far is vertical and confined to a specific domain. The particularity and aim of this thesis is not so much to carry out an in-depth analysis of a particular aspect, but rather to combine several important aspects, normally analyzed separately, which nevertheless have an impact on and influence each other. To carry out this kind of interdisciplinary analysis, the knowledge bases of both areas were involved, and the combination and collaboration of experts from the various fields was necessary. Although the methodology described is generic and can be applied to all sectors, a particular use case was chosen to show its application. The case study considered is a new type of healthcare service that allows patients with acute disease to be hospitalized at home. This provides the possibility to perform experiments using a real hospital database. [less ▲]

Detailed reference viewed: 64 (6 UL)
See detailINTERTWINED DESTINIES & STRENGTHENED TIES: “A COLÔNIA LUXEMBURGUESA”. A PARTICIPATORY TRANSMEDIA PROJECT ON STEEL-FRAMED MIGRATION FROM LUXEMBOURG TO BRAZIL (1921-2022)
Santana, Dominique UL

Doctoral thesis (2022)

A Colônia Luxemburguesa unveils a century of steel-framed migration between Luxembourg and Brazil. This transmedia documentary delves into intersecting stories from different angles and across different ... [more ▼]

A Colônia Luxemburguesa unveils a century of steel-framed migration between Luxembourg and Brazil. This transmedia documentary delves into intersecting stories from different angles and across different platforms – an interactive and participatory experience to draw a multifaceted portrait of a curious Colônia forged by steel. www.colonia.lu [less ▲]

Detailed reference viewed: 82 (6 UL)
Full Text
See detailHybrid Artificial Intelligence to extract patterns and rules from argumentative and legal texts
Liga, Davide UL

Doctoral thesis (2022)

This Thesis is composed of a selection of studies realized between 2019 and 2022, whose aim is to find working methodologies of Artificial Intelligence (AI) and Machine Learning for the detection and ... [more ▼]

This Thesis is composed of a selection of studies realized between 2019 and 2022, whose aim is to find working methodologies of Artificial Intelligence (AI) and Machine Learning for the detection and classification of patterns and rules in argumentative and legal texts. We define our approach as “hybrid”, since different methods have been employed, combining symbolic AI (which involves “top-down” structured knowledge) and sub-symbolic AI (which involves “bottom-up” data-driven knowledge). The first group of these works was dedicated to the classification of argumentative patterns. Following the Waltonian model of argument (according to which arguments are composed of a set of premises and a conclusion), and the theory of Argumentation Schemes, this group of studies focused on the detection of argumentative evidence of support and opposition. More precisely, the aim of these first works was to show that argumentative patterns of opposition and support could be classified at fine-grained levels without resorting to highly engineered features. To show this, we first employed methodologies based on Tree Kernel classifiers and TF-IDF. In these experiments, we explored different combinations of Tree Kernel calculation and different data structures (i.e., different tree structures). Some of these combinations employ a hybrid approach where the calculation of similarity among trees is influenced not only by the tree structures but also by a semantic layer (e.g. those using “smoothed” trees and “compositional” trees). After the encouraging results of this first phase, we explored the use of a new methodology that was deeply changing the NLP landscape in those very years, fostered and promoted by actors such as Google: Transfer Learning and the use of pre-trained language models. These newer methodologies markedly improved our previous results and provided us with stronger NLP tools. Using Transfer Learning, we were also able to perform a Sequence Labelling task for the recognition of the exact span of argumentative components (i.e. claims and premises), which is crucial to connect the sphere of natural language to the sphere of logic. The last part of this work was dedicated to showing how to use Transfer Learning for the detection of rules and deontic modalities. In this case, we explored a hybrid approach that combines structured knowledge coming from two LegalXML formats (i.e., Akoma Ntoso and LegalRuleML) with sub-symbolic knowledge coming from pre-trained (and then fine-tuned) neural architectures. [less ▲]
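Of the two families of methods mentioned above, the TF-IDF side is easy to illustrate; Tree Kernels require dedicated kernel implementations and are not shown. The sketch below is a minimal, hypothetical baseline for classifying sentences as evidence of support or opposition, with made-up example sentences rather than the corpora used in the thesis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy sentences standing in for argumentative text; real data and labels differ.
texts = [
    "this result confirms the initial claim",
    "the evidence clearly supports the proposal",
    "however, the data contradicts the stated conclusion",
    "this finding undermines the earlier argument",
]
labels = ["support", "support", "opposition", "opposition"]

# TF-IDF features plus a linear SVM: a simple baseline with no engineered features.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)
print(model.predict(["these observations support the hypothesis"]))
```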

Detailed reference viewed: 30 (6 UL)
Full Text
See detailEnvironmental performance assessment of an innovative modular construction concept composed of a permanent structure and flexible modular units
Rakotonjanahary, Tahiana Roland Michaël UL

Doctoral thesis (2022)

To face the challenges of global warming, the building sector is currently undergoing a noticeable revolution. Buildings are tending to consume less energy, use more renewable energy sources, be built ... [more ▼]

To face the challenges of global warming, the building sector is currently undergoing a noticeable revolution. Buildings are tending to consume less energy, use more renewable energy sources, be built with eco-friendly materials, and generate less waste during their construction and end-of-life stage. Yet they could be more resilient, that is, capable of quickly responding to the housing demand, which may fluctuate in time and in space. Innovative concepts therefore need to be developed to allow buildings to expand and/or shrink. Modular buildings could be a solution that combines these criteria, since they offer a faster construction process, provide better construction quality, reduce construction waste and are potentially flexible. Frames of modular units can be made of metal, timber, concrete, or mixed materials, but lightweight structures do not always allow erecting high-rise buildings and generally present a higher risk of overheating and/or overcooling. To reconcile these pros and cons, a building typology called Slab was designed by a group of architects jointly with the team of the Eco-Construction for Sustainable Development (ECON4SD) research project. The Slab building is an innovative modular building concept based on plug-in architecture, composed of a permanent concrete structure into which relocatable timber modular units slot. With respect to flexibility, the Slab building was designed to adapt to any orientation and location in Luxembourg. This doctoral thesis mainly deals with the environmental performance assessment of the Slab building but also involves the development of an energy concept for it. In this regard, the minimum required wall thicknesses of the Slab building’s modules were determined in compliance with the Luxembourg standard, although the current regulation does not yet cover flexible buildings. In this process, two module variants were designed; the first fulfils the passive house requirements, which match the AAA energy class requirements, and the second complies with the current building code requirements, also known as the requirements for building permit application, which in principle correspond to low-energy house requirements. Calculations showed that a 40 cm wall thickness is sufficient to fulfil both requirements. The environmental performance assessment focused on the appraisal of the specific CO2 footprint, which considers on the one hand the operational energy and on the other hand the building materials. The operational energy of the modules was determined by carrying out energy balance calculations with the LuxEeB-Tool software, considering worst-case and best-case scenarios. In addition, a method was developed to estimate the space heating demand and CO2 emissions of module aggregations, which can have different configurations over time. The method proposed in this thesis was established for the Slab building but could potentially be applicable to other flexible buildings. A comparative study of the CO2 footprint considering the embodied and operational energy showed that there is no environmental benefit in having the modules comply with the passive house requirements in the worst-case scenario (window facing north and high wind exposure). A thermal comfort assessment was also carried out by running DTS in the TRNSYS software to check the necessity of active cooling.
Simulations showed that, with adequate solar shading and reinforced natural ventilation by window opening, the summertime overheating risk could be avoided for the normal residential use scenario for both module variants. Finally, the LCA of the Slab building consisted, on the one hand, of optimizing its life cycle and, on the other hand, of comparing its specific CO2 footprint with benchmarks. The LCA, based on a 100-year lifetime, concluded that the total specific CO2 footprint of the Slab building for a low module occupancy rate is lower than that of the Slab building bis, a building designed on the basis of the Slab building. The latter would be built according to a conventional construction method and would thereby not provide the same level of flexibility as the Slab building. However, for a high module occupancy rate, the Slab building does not perform environmentally better than the Slab building bis. Some solutions could be proposed to further reduce the specific CO2 footprint of the Slab building, but these would impact the architectural aspect or even the functionalities of the Slab building. [less ▲]
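To make the notion of a specific CO2 footprint concrete, the toy calculation below combines embodied and operational emissions over the assessment period and normalises them per square metre and year. All figures and the exact normalisation are placeholders, not values or the method from the thesis.

```python
# Illustrative specific CO2 footprint: embodied plus operational emissions,
# normalised per m2 of floor area and per year. Numbers are hypothetical.
lifetime_years = 100                 # assessment period, as in the thesis's LCA
floor_area_m2 = 1_000                # hypothetical heated floor area
embodied_co2_kg = 450_000            # hypothetical emissions from materials and construction
annual_operational_co2_kg = 6_000    # hypothetical emissions from operational energy

total_co2_kg = embodied_co2_kg + annual_operational_co2_kg * lifetime_years
specific_footprint = total_co2_kg / (floor_area_m2 * lifetime_years)
print(f"specific CO2 footprint: {specific_footprint:.1f} kg CO2 / (m2 * a)")
```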

Detailed reference viewed: 81 (7 UL)
Full Text
See detailEssays on the Economics of Migration, Inequalities, and Culture
Maleeva, Victoria UL

Doctoral thesis (2022)

The present doctoral thesis consists of three chapters of self-contained works about the economics of migration, inequalities, and culture. In the first chapter, I introduce the outline of the thesis and ... [more ▼]

The present doctoral thesis consists of three chapters of self-contained works about the economics of migration, inequalities, and culture. In the first chapter, I introduce the outline of the thesis and briefly discuss the research questions of each chapter. The second chapter explores the effects of mass migration on individual attitudes towards migrants. Using several data sources for the mass migration of Ukrainians to Poland between 2014 and 2016, this chapter focuses on how a massive exogenous increase in the stock of migrant residents and migrant co-workers affects the perception of migrants. Using both an IV methodology and a difference-in-difference analysis, I test two hypotheses, labor market competition and contact theory, and find some evidence favoring the second. First, the difference-in-difference analysis shows that Poles become more welcoming to migrants in regions with more job opportunities for migrants. Second, I find that an increase in the size of the migrant group affects attitudes towards migrants positively inside a group of natives with similar demographic and job-skill characteristics. The third chapter explores how poverty can be explained by marital status and gender using the RLMS-HSE household survey. Employing longitudinal data from the Russian National Survey (RLMS-HSE) from 2004 to 2019, this research shows that divorced women exhibit lower poverty levels than divorced men. The result remains qualitatively invariant when considering a theoretical probability of divorce for married couples that takes into account the age of the partners, labor force participation, and education. A higher probability of divorce positively impacts only men's poverty level. Investigating an inter-related dynamic model of poverty and labor market participation, we find that divorced women work more than divorced men, which is why divorce hits husbands harder than wives. In the fourth chapter of the thesis, we study the effect of past exposure to communist indoctrination at an early age (9-14 years) on a set of attitudes that were crucial in the communist ideology aiming to create the "new communist man/woman". We focus on the indoctrination received by children during their pioneering years. School pupils automatically became pioneers when they reached 3rd or 4th grade. The purpose of the pioneer years was to educate Soviet children to be loyal to the ideals of communism and the Party. We use a regression discontinuity design exploiting the discontinuity in the exposure to pioneering years due to the fall of the USSR in 1991, implying a strong association that hints at causality. We find robust evidence that having been a pioneer has long-lasting effects on interpersonal trust, life satisfaction, fertility, income, and the perception of one's own economic rank. Overall, these results suggest that past pioneers show a higher level of optimism than non-pioneers. Finally, we look for gender differences because various forms of emulation campaigns were used to promote the desired virtues of the new communist woman. However, we find no evidence of an effect of exposure to communism on women. The indoctrination seems to have left more substantial effects on men. [less ▲]
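The difference-in-difference logic used in the second chapter can be illustrated with a generic two-group, two-period regression. The sketch below uses simulated data and hypothetical variable names, not the chapter's actual specification or controls.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated toy panel: attitudes toward migrants in regions that did or did not
# receive a large inflow, observed before and after the inflow period.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "exposed": rng.integers(0, 2, n),   # region received a large migrant inflow
    "post": rng.integers(0, 2, n),      # observation after the inflow started
})
df["attitude"] = (0.3 * df["exposed"] * df["post"]
                  + 0.1 * df["post"] + rng.normal(0, 1, n))

# Difference-in-differences: the coefficient on exposed:post is the effect of interest.
did = smf.ols("attitude ~ exposed * post", data=df).fit()
print(did.params["exposed:post"])
```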

Detailed reference viewed: 72 (8 UL)
Full Text
See detailEconomics of Migration, Inequalities, and Culture
Maleeva, Victoria UL

Doctoral thesis (2022)

The present doctoral thesis consists of three chapters of self-contained works about the economics of migration, inequalities, and culture. In the first chapter, I introduce the thesis outline and discuss ... [more ▼]

The present doctoral thesis consists of three chapters of self-contained works about the economics of migration, inequalities, and culture. In the first chapter, I introduce the thesis outline and discuss each chapter's research questions. The second chapter explores the effects of mass migration on individual attitudes towards migrants. Using several data sources for the mass migration of Ukrainians to Poland between 2014 and 2016, this chapter focuses on how a massive exogenous increase in the stock of migrant residents and migrant co-workers affects the perception of migrants. Using both an IV methodology and a difference-in-difference analysis, I test two hypotheses, labor market competition and contact theory, and find some evidence favoring the second. First, the difference-in-difference analysis shows that Poles become more welcoming to migrants in regions with more job opportunities for migrants. Second, I find that an increase in the size of the migrant group affects attitudes towards migrants positively, inside a group of natives with similar demographic and job-skill characteristics. The third chapter explores how poverty can be explained by marital status and gender, using the RLMS-HSE household survey. Employing longitudinal data from the Russian National Survey (RLMS-HSE) from 2004 to 2019, this research shows that divorced women exhibit lower poverty levels than divorced men. The result remains qualitatively invariant when considering a theoretical probability of divorce for married couples that takes into account the age of the partners, labor force participation, and education. A higher probability of divorce positively impacts only men's poverty level. Investigating an inter-related dynamic model of poverty and labor market participation, we find that divorced women work more than divorced men, which is why divorce hits husbands harder than wives. In the fourth chapter of the thesis, we study the effect of past exposure to communist indoctrination at an early age (9-14 years) on a set of attitudes that were crucial in the communist ideology aiming to create the "new communist man/woman". We focus on the indoctrination received by children during their pioneering years. School pupils automatically became pioneers when they reached 3rd or 4th grade. The purpose of the pioneer years was to educate Soviet children to be loyal to the ideals of communism and the Party. We use a regression discontinuity design exploiting the discontinuity in the exposure to pioneering years due to the fall of the USSR in 1991, implying a strong association that hints at causality. We find robust evidence that having been a pioneer has long-lasting effects on interpersonal trust, life satisfaction, fertility, income, and the perception of one's own economic rank. Overall, these results suggest that past pioneers show a higher level of optimism than non-pioneers. Finally, we look for gender differences because various forms of emulation campaigns were used to promote the desired virtues of the new communist woman. However, we find no evidence of an effect of exposure to communism on women. The indoctrination seems to have had more substantial effects on men. [less ▲]

Detailed reference viewed: 78 (4 UL)
Full Text
See detailFirst-principles investigation of ferroelectricity and related properties of HfO2
Dutta, Sangita UL

Doctoral thesis (2022)

Nonvolatile memories are in increasing demand as the world moves toward information digitization. The ferroelectric materials offer a promising alternative for this. Since the existing perovskite ... [more ▼]

Nonvolatile memories are in increasing demand as the world moves toward information digitization. The ferroelectric materials offer a promising alternative for this. Since the existing perovskite materials have various flaws, including incompatibility with complementary metal-oxide-semiconductor (CMOS) processes in memory applications, the discovery of new optimized ferroelectric (FE) thin films was necessary. In 2011, the disclosure of ferroelectricity in hafnia (HfO$_2$) reignited interest in ferroelectric memory devices because this material is well integrated with CMOS technology. Although ferroelectricity in HfO$_2$ was reported a decade ago, researchers are still enthralled by this material's properties as well as its possible applications. The ferroelectricity in HfO$_2$ has been attributed to the orthorhombic phase with space group $Pca2_1$. This phase is believed to be a metastable phase of the system. Many experimental and theoretical research groups have joined the effort to understand the root causes of the stability of this ferroelectric phase of HfO$_2$ by considering the role of surface energy effects, chemical dopants, local strain, and oxygen vacancies. However, the understanding has not been conclusive. In the first part of this work, we present our first-principles results, predicting a situation where the ferroelectric phase becomes the thermodynamic ground state in the presence of ordered dopants forming layers. While the main focus has been on understanding and optimizing the ferroelectricity in HfO$_2$, the electro-mechanical response of the system has garnered comparatively less attention. The recent discovery of the negative longitudinal piezoelectric effect in HfO$_2$ has challenged our thinking about piezoelectricity, which was molded by what we know about ferroelectric perovskites. In this work, we discuss the atomistic underpinnings behind the negative longitudinal piezoelectric effect in HfO$_2$. We also discuss the behavior of the longitudinal piezoelectric coefficient ($e_{33}$) under the application of epitaxial strain, where we find that $e_{33}$ changes sign even though the polarization does not switch. Aside from a basic understanding of the piezoelectric characteristics of HfO$_2$, the application aspect is also worth considering. The piezoelectric properties of the material can be tuned to meet the needs of applications. In this work, we describe our findings on how the piezoelectric characteristics of the material change as a function of isovalent dopants. [less ▲]

Detailed reference viewed: 107 (2 UL)
Full Text
See detailIs universal healthcare truly universal? Socioeconomic and migrant inequalities in healthcare
Paccoud, Ivana UL

Doctoral thesis (2022)

Through the principle of Universal Healthcare Coverage, many governments across Europe and beyond seek to ensure that all people have equal access to good quality healthcare services, without facing a ... [more ▼]

Through the principle of Universal Healthcare Coverage, many governments across Europe and beyond seek to ensure that all people have equal access to good quality healthcare services, without facing a financial burden. Despite this, studies have highlighted persistent migrant and socio-economic inequalities in the use of healthcare services and personal health records. Understanding the complex mechanisms that produce and maintain social inequalities in the effective use of healthcare services is thus an important step towards advancing equity in healthcare. This thesis draws on Bourdieu's forms of capital (cultural, social, economic, and symbolic) to conceptualise and empirically test social inequalities related to healthcare. In doing so, it investigates the factors contributing to socioeconomic and migrant inequalities in the use, navigation and optimisation of healthcare services as well as personal health records. The three studies that make up this thesis empirically test these ideas through statistical modelling on population-based datasets as well as through the analysis of two cross-sectional surveys in Luxembourg and the Greater Region. The first study draws on the fifth wave of the Survey of Health, Ageing and Retirement in Europe (SHARE). It used cluster analysis and regression models to explain how the unequal distribution of material and non-material capitals acquired in childhood shapes health practices, leading to different levels of healthcare utilisation in later life. The results suggest that, although related, both material and non-material capitals independently contribute to health practices associated with the use of healthcare services. The second study used data from a cross-sectional survey to investigate inequalities in the navigation and optimisation of healthcare services, taking into consideration the interplay between perceived racial discrimination and socioeconomic position. It revealed disparities between individuals born in Eastern Europe and the Global South and those born in Luxembourg, which were explained by the experience of racial discrimination. It also found that the impact of discrimination on both health service navigation and optimisation was reduced after accounting for social capital. The last study used data from a cross-sectional survey developed as part of a collaborative project (INTERREG-APPS) to examine the socioeconomic and behavioural determinants of the intention to use personal health records in the Greater Region of Luxembourg (Baumann et al., 2020). This study found that people's desire for and actual access to electronic personal health records are determined by different socioeconomic factors, while educational inequalities in the intention to regularly use personal health records were explained by behavioural factors. Taken together, the findings presented in this thesis show the value of mobilising Bourdieu's theoretical framework to understand the mechanisms through which social inequalities in healthcare develop. In addition, they show the importance of considering racial discrimination when examining migrant and racial/ethnic differences in health. [less ▲]
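The "cluster analysis and regression models" design used in the first study can be sketched in a few lines: group respondents by capital indicators, then relate cluster membership to healthcare use. The variables and data below are invented for illustration and do not reflect the SHARE variables actually used.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.cluster import KMeans

# Toy "cluster then regress" design with hypothetical childhood-capital proxies.
rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "books_at_home": rng.integers(0, 200, n),      # non-material capital proxy
    "rooms_per_person": rng.uniform(0.3, 2.0, n),  # material capital proxy
    "doctor_visits": rng.poisson(3, n),            # later-life healthcare use
})
df["capital_cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    df[["books_at_home", "rooms_per_person"]]
)
model = smf.poisson("doctor_visits ~ C(capital_cluster)", data=df).fit()
print(model.params)
```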

Detailed reference viewed: 79 (14 UL)
Full Text
See detailFRACTAL DIMENSION AND POINT-WISE PROPERTIES OF TRAJECTORIES OF FRACTIONAL PROCESSES
Daw, Lara UL

Doctoral thesis (2022)

The topics of this thesis lie at the interface of probability theory with dimensional and harmonic analysis, accentuating the geometric properties of random paths of Gaussian and non-Gaussian ... [more ▼]

The topics of this thesis lie at the interface of probability theory with dimensional and harmonic analysis, accentuating the geometric properties of random paths of Gaussian and non-Gaussian stochastic processes. This line of research has been growing rapidly in recent years, yielding clear local and global properties for the random paths associated with various stochastic processes such as Brownian and fractional Brownian motion. In this thesis, we start by studying the level sets associated with fractional Brownian motion using the macroscopic Hausdorff dimension. Then, as a preliminary step, we establish some technical points regarding the distribution of the Rosenblatt process for the purpose of studying various geometric properties of its random paths. First, we obtain results concerning the Hausdorff (both classical and macroscopic), packing and intermediate dimensions, and the logarithmic and pixel densities of the image, level and sojourn time sets associated with sample paths of the Rosenblatt process. Second, we study the pointwise regularity of the generalized Rosenblatt process and prove the existence of three kinds of local behavior: slow, ordinary and rapid points. In the last chapter, we illustrate several methods to estimate the macroscopic Hausdorff dimension, which played a key role in our results. In particular, we develop potential-theoretic methods. Then, relying on these, we show that the macroscopic Hausdorff dimension of the projection of a set E ⊂ R^2 onto almost all straight lines passing through the origin in R^2 depends only on E, that is, it is almost surely independent of the choice of straight line. [less ▲]
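For readability, the projection result described in the last sentence can be restated compactly as follows; the notation (Dim_H for the macroscopic Hausdorff dimension, proj_θ for the orthogonal projection onto the line through the origin with direction θ) is introduced here only for illustration.

```latex
\[
  \text{For } E \subset \mathbb{R}^2:\qquad
  \mathrm{Dim}_{\mathrm{H}}\!\big(\mathrm{proj}_{\theta}(E)\big) \;=\; c(E)
  \quad\text{for almost every } \theta \in [0,\pi),
\]
% i.e. the macroscopic Hausdorff dimension of the projection depends only on E,
% not on the choice of straight line.
```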

Detailed reference viewed: 94 (11 UL)
Full Text
See detailLengths and intersections of curves on surfaces
Vo, Thi Hanh UL

Doctoral thesis (2022)

Detailed reference viewed: 97 (21 UL)
Full Text
See detailNext Generation Mutation Testing: Continuous, Predictive, and ML-enabled
Ma, Wei UL

Doctoral thesis (2022)

Software has been an essential part of human life, and it substantially improves productivity and enriches our lives. However, flaws in software can lead to tragedies, e.g. the failure of the Mariner 1 ... [more ▼]

Software has been an essential part of human life, and it substantially improves productivity and enriches our lives. However, flaws in software can lead to tragedies, e.g. the failure of the Mariner 1 spacecraft in 1962. Modern software systems are much different from earlier ones. The issue becomes even more severe as the complexity of software systems grows and Artificial Intelligence (AI) models are integrated into software (e.g., the Tesla Deaths Report). Testing such modern AI-enabled software systems is challenging. Due to new requirements, software systems evolve and change frequently, and AI models suffer from non-determinism. The non-determinism of AI models is related to many factors, e.g., optimization algorithms, numerical issues, labelling thresholds, data of the same object collected under different conditions, or changes in the backend libraries. We have witnessed many new testing techniques emerge to guarantee the trustworthiness of modern software systems. Coverage-based testing is one early technique to test Deep Learning (DL) systems by analyzing neuron values statistically, e.g., Neuron Coverage (NC). In recent years, Mutation Testing has drawn much attention. Coverage-based testing metrics can be misleading and easily fooled by tests generated to satisfy coverage requirements merely by executing code lines. A test suite with one hundred percent coverage may detect no flaw in the software. On the contrary, Mutation Testing is a robust approach to approximating the quality of a test suite. Mutation Testing is a technique based on detecting artificial defects injected through many crafted code perturbations (i.e., mutants) to assess and improve the quality of a test suite. The behaviour of a mutant is likely to be located on the border between correctness and non-correctness, since the code perturbation is usually tiny. Through mutation testing, the border behaviour of the subject under test can be explored well, which leads to high software quality. It has been generalized to test software systems integrated with DL systems, e.g., image classification systems and autonomous driving systems. However, the application of Mutation Testing encounters some obstacles. One main challenge is that Mutation Testing is resource-intensive. Its large resource consumption makes it ill-suited to modern software development, where the code evolves frequently, often every day. This dissertation studies how to apply Mutation Testing to modern software systems, exploring and exploiting the uses and innovations of Mutation Testing when it meets AI algorithms, i.e., how to employ Mutation Testing for modern software systems under test. AI algorithms can improve Mutation Testing for modern software systems, and at the same time, Mutation Testing can effectively test modern software integrated with DL models. First, this dissertation adapts Mutation Testing to modern software development, namely Continuous Integration. Most software development teams currently employ Continuous Integration (CI) as the pipeline in which changes happen frequently. It is problematic to adopt Mutation Testing in Continuous Integration because of its high cost. At the same time, traditional Mutation Testing is not a good test metric for code changes, as it is designed for the software as a whole. We adapt Mutation Testing to test these program changes by proposing commit-relevant mutants.
This type of mutant affects the changed program behaviours and represents the commit-relevant test requirements. We use C and Java benchmarks to validate our proposal. The experimental results indicate that commit-relevant mutants can effectively enhance the testing of code changes. Second, based on the aforementioned work, we introduce MuDelta, an AI approach that identifies commit-relevant mutants, i.e., mutants that interact with the code change. MuDelta uses manually designed features that require expert knowledge and leverages a combined scheme of static code characteristics as data features. Our evaluation results indicate that commit-based mutation testing is suitable and promising for evolving software systems. Third, this dissertation proposes a new approach, GraphCode2Vec, to learn general software code representations. Recent works utilize natural language models to embed code into vector representations. Code embedding is a keystone in the application of machine learning to several Software Engineering (SE) tasks. Its goal is to extract universal features automatically. GraphCode2Vec considers program syntax and semantics simultaneously by combining code analysis and Graph Neural Networks (GNNs). We evaluate our approach on the mutation testing task and three other tasks (method name prediction, solution classification, and overfitted patch classification). GraphCode2Vec is better than or comparable to the state-of-the-art code embedding models. We also perform an ablation study and a probing analysis to give insights into GraphCode2Vec. Finally, this dissertation studies Mutation Testing to select test data for deep learning systems. Since deep learning systems play an essential role in different fields, the safety of DL systems takes centre stage. Such DL systems are much different from traditional software systems, and existing testing techniques are not sufficient to guarantee the reliability of deep learning systems. It is well known that DL systems usually require extensive data for learning; it is therefore important to select data for training and testing DL systems. A good dataset can help DL models achieve good performance. There are several metrics to guide the choice of data to test DL systems. We compare a set of test selection metrics for DL systems. Our results show that uncertainty-based metrics are effective at identifying misclassified data. These metrics also improve classification accuracy faster when retraining DL systems. In summary, this dissertation shows the usage of Mutation Testing in the artificial intelligence era. The first, second and third contributions concern Mutation Testing helping to test modern software in CI. The fourth contribution is a study on selecting training and testing data for DL systems. Mutation Testing is an excellent technique for testing modern software systems. At the same time, AI algorithms can alleviate the main challenges of Mutation Testing in practice by reducing its resource cost. [less ▲]
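The core loop of mutation testing (inject a small artificial defect, then check whether the test suite notices) can be made concrete with a toy example. The sketch below applies one classic mutation operator (replacing + with -) to a small Python function and reports whether a test "kills" the mutant; it is only an illustration, not the tooling used in the dissertation.

```python
import ast

# Subject under test, as source code.
src = "def price(a, b):\n    return a + b\n"

class AddToSub(ast.NodeTransformer):
    """Mutation operator: replace '+' with '-'."""
    def visit_BinOp(self, node):
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

def load(tree):
    ns = {}
    exec(compile(tree, "<subject>", "exec"), ns)
    return ns["price"]

original = load(ast.parse(src))
mutant = load(ast.fix_missing_locations(AddToSub().visit(ast.parse(src))))

# A test kills the mutant if it passes on the original but fails on the mutant.
tests = [lambda f: f(2, 3) == 5, lambda f: f(0, 0) == 0]
killed = any(t(original) and not t(mutant) for t in tests)
print("mutant killed:", killed)  # True: the first test detects the injected fault
```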

Detailed reference viewed: 222 (8 UL)
Full Text
See detailINTERROGATING INTRA-TUMORAL HETEROGENEITY AND TREATMENT RESISTANCE IN GLIOBLASTOMA PATIENT-DERIVED XENOGRAFT MODELS USING SINGLE-CELL RNA SEQUENCING
Yabo, Yahaya Abubakar UL

Doctoral thesis (2022)

Despite available treatment options for glioblastoma (GBM), GBM has one of the poorest prognoses, resists treatment, and recurs aggressively in the majority of cases. Intra-tumoral heterogeneity and ... [more ▼]

Despite available treatment options for glioblastoma (GBM), GBM has one of the poorest prognoses, resists treatment, and recurs aggressively in the majority of cases. Intra-tumoral heterogeneity and phenotypic plasticity are major factors contributing to treatment resistance and underlie tumor escape in GBM. Several potential therapeutic agents showing promising effects against GBMs at the preclinical level have failed to translate into effective therapies for GBM patients. This is partly attributed to the inadequacy of preclinical models to fully recapitulate the complex biology of human GBMs. This project aimed to characterize the transcriptomic heterogeneity and understand the dynamic GBM ecosystem in patient-derived xenograft (PDOX) models at the single-cell level. To achieve this aim, I established cell purification and cryopreservation protocols that enable the generation of high-quality single-cell RNA-seq data from PDOX models, including longitudinal and treated PDOXs. Different computational strategies were used to interrogate the transcriptomic features as well as the interactions between GBM cells and the surrounding microenvironment. This work critically analyzed and discussed key components contributing to intra-tumoral heterogeneity and phenotypic plasticity within the GBM ecosystem and their potential contributions to treatment resistance. Here, we provide evidence that PDOX models retain histopathologic and transcriptomic features of parental human GBMs. PDOX models were further shown to recapitulate major tumor microenvironment (TME) components identified in human GBMs. Cells within the GBM ecosystem were shown to display GBM-specific transcriptomic features, indicating active TME crosstalk in PDOX models. Tumor-associated microglia/macrophages were shown to be heterogeneous and to display the most prominent transcriptomic adaptations following crosstalk with GBM cells. The myeloid cells in PDOXs and human GBM displayed a microglia-derived TAM signature. Notably, GBM-educated microglia display immunologic features of migration, phagocytosis, and antigen presentation, indicating a functional role of microglia in the GBM TME. Taking advantage of a cohort of longitudinal PDOXs and treated PDOX models, I demonstrated the utility of PDOX models in elucidating longitudinal changes in GBM. We show that temozolomide treatment leads to transcriptomic adaptation not only of the GBM tumor cells but also of adjacent TME components. Overall, this work further highlights the importance and clinical relevance of PDOX models for the testing of novel therapeutics, including immunotherapies targeting certain TME components in GBM. [less ▲]

Detailed reference viewed: 54 (3 UL)
Full Text
See detailMachine Learning-Based Efficient Resource Scheduling for Future Wireless Communication Networks
Yuan, Yaxiong UL

Doctoral thesis (2022)

The next-generation mobile communication system, e.g., 6G communication system, is envisioned to support unprecedented performance requirements such as exponentially increasing data requests ... [more ▼]

The next-generation mobile communication system, e.g., 6G communication system, is envisioned to support unprecedented performance requirements such as exponentially increasing data requests, heterogeneous service demands, and massive connectivity. When these challenging tasks meet the scarcity of wireless resources, efficient resource management becomes crucial. Conventionally, optimization algorithms, either optimal or suboptimal, are the main approaches for solving resource allocation problems. However, the efficiency of these iterative optimization algorithms can degrade significantly when the problems become large or difficult, e.g., non-convex or combinatorial optimization problems. Over the past few years, machine learning (ML), as an emerging approach in the toolbox, has been widely investigated to accelerate the decision-making process. Since the application of ML-based approaches to complex resource management problems is still at an early stage, many open issues and challenges need to be addressed on the way to maturity and practical application. The motivation and objective of this dissertation lie in investigating and providing answers to the following research questions: 1) How to overcome the shortcomings of extensively adopted end-to-end learning in addressing resource management problems, and which types of features are suited to be learned if supervised learning is applied? 2) What are the limitations and benefits when widely-used deep reinforcement learning (DRL) approaches are used to address constrained and combinatorial optimization problems in wireless networks, and are there tailored solutions to overcome the inherent drawbacks? 3) How to enable ML-based approaches to adapt in a timely manner to dynamic and complex wireless environments? 4) How to enlarge the performance gains when the paradigm shifts from centralized learning to distributed learning? The main contributions are organized in the following four research works. Firstly, from a supervised-learning perspective, we address common issues, e.g., unsatisfactory prediction performance and resulting infeasible solutions, when end-to-end learning approaches are applied to resource scheduling problems. Based on the analysis of optimal results, we design suited-to-learn features for a class of resource scheduling problems and develop combined learning-and-optimization approaches to enable time-efficient and energy-efficient resource scheduling in multi-antenna systems. The original optimization problems are mixed-integer programming problems with high-dimensional decision vectors. The optimal solution requires exponential complexity due to the inherent difficulties of the problems. Towards an efficient and competitive solution, we apply a fully-connected deep neural network (DNN) and a convolutional neural network (CNN) to learn the designed features. The predicted information can effectively reduce the large search space and accelerate the optimization process. Compared to conventional optimization and pure ML algorithms, the proposed method achieves a good trade-off between quality and complexity. Secondly, we address typical issues when DRL is adopted to deal with combinatorial and non-convex scheduling problems. The original problem is to provide energy-saving solutions via resource scheduling in energy-constrained networks. An optimal algorithm and a golden section search suboptimal approach are developed to serve as offline benchmarks.
For online operations, we propose an actor-critic-based deep stochastic online scheduling (AC-DSOS) algorithm. Compared to supervised learning, DRL is suitable for dynamic environments and capable of making decisions based on the current state without an offline training phase. However, for this specific constrained scheduling problem, conventional DRL may not be able to handle two major issues: an exponentially increasing action space and infeasible actions. The proposed AC-DSOS is developed to overcome these drawbacks. In simulations, AC-DSOS is able to provide feasible solutions and save more energy compared to conventional DRL algorithms. Compared to the offline benchmarks, AC-DSOS reduces the computational time from the second level to the millisecond level. Thirdly, the dissertation pays attention to the performance of ML-based approaches in highly dynamic and complex environments. Most ML models are trained on collected data or the observed environment. They may not be able to respond in a timely manner to large variations of the environment, such as dramatically fluctuating channel states or bursty data demands. In this work, we develop ML-based approaches in a time-varying satellite-terrestrial network and address two practical issues. The first is how to efficiently schedule resources to serve the massive number of connected users, such that more data and users can be delivered/served. The second is how to make the algorithmic solution more resilient in adapting to the time-varying wireless environments. We propose an enhanced meta-critic learning (EMCL) algorithm, combining a DRL model with a meta-learning technique, where the meta-learning can acquire meta-knowledge from different tasks and quickly adapt to new tasks. The results demonstrate EMCL's effectiveness and fast-response capabilities in over-loaded systems and in adapting to dynamic environments, compared to previous actor-critic and meta-learning methods. Fourthly, the dissertation focuses on reducing the energy consumption of federated learning (FL) in mobile edge computing. The power supply and computation capabilities are typically limited in edge devices; thus, energy becomes a critical issue in FL. We propose a joint sparsification and resource optimization scheme (JSRO) to jointly reduce computational and transmission energy. In the first part of JSRO, we introduce sparsity and adopt sparse or binary neural networks (SNN or BNN) as the learning model to complete the local training tasks at the devices. Compared to a fully-connected DNN, the computational operations can be significantly reduced, thus requiring less energy and less data to be transmitted to the central node. In the second part, we develop an efficient scheduling scheme to minimize the overall transmission energy by optimizing wireless resources and learning parameters. We develop an enhanced FL algorithm in JSRO, i.e., non-smoothness-and-constraints stochastic gradient descent, to handle the non-smoothness and constraints of SNNs and BNNs, and provide convergence guarantees. Finally, we conclude the thesis with the main findings and insights on future research directions. [less ▲]
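The actor-critic machinery behind AC-DSOS-style schedulers can be illustrated generically. The sketch below is a one-step temporal-difference actor-critic update for a discrete action space, with placeholder dimensions and reward; it makes no claim to reproduce the dissertation's algorithm, constraint handling, or reward design.

```python
import torch
import torch.nn as nn

# Generic one-step actor-critic update for a discrete scheduling decision.
# State dimension, action count, network sizes and rewards are placeholders.
state_dim, n_actions = 8, 4
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
critic = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

def ac_update(state, action, reward, next_state, gamma=0.99):
    """Critic evaluates the state; the actor follows the TD advantage."""
    v = critic(state)
    td_target = reward + gamma * critic(next_state).detach()
    advantage = (td_target - v).detach()
    log_prob = torch.log_softmax(actor(state), dim=-1)[action]
    loss = (-log_prob * advantage + (td_target - v).pow(2)).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Example call with random tensors standing in for one scheduler state transition.
s, s_next = torch.randn(state_dim), torch.randn(state_dim)
ac_update(s, action=2, reward=1.0, next_state=s_next)
```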

Detailed reference viewed: 195 (21 UL)
Full Text
See detailINVESTIGATING NEUROINFLAMMATION IN SPORADIC AND LRRK2-ASSOCIATED PARKINSON'S DISEASE
Badanjak, Katja UL

Doctoral thesis (2022)

Inflammatory responses are evolutionarily conserved reactions to pathogens, injury, or any form of a serious perturbation of a human organism. These mechanisms evolved together with us and, although ... [more ▼]

Inflammatory responses are evolutionarily conserved reactions to pathogens, injury, or any form of a serious perturbation of a human organism. These mechanisms evolved together with us and, although capable of adapting to some extent, innate responses are gravely impacted by the prolonged human lifespan. Better sanitary measures, health systems, and food and medicine supply have prolonged human life expectancy to ~72 years. Aging is characterized by prolonged, chronic (often low-grade) inflammation. With tissue and cellular defense mechanisms becoming dysfunctional over time, this inflammation becomes detrimental and destructive to the human body. Aging is a major risk factor for Parkinson’s disease (PD), a movement disorder characterized by the loss of dopaminergic neurons. Even though the disease is predominantly idiopathic, genetic cases are contributing to a better understanding of the underlying cellular and neuropathological mechanisms. In comparison to neuronal demise, the contribution of microglia (the immune cells of the brain) to PD is relatively understudied. While microglia were initially studied in post-mortem tissue derived from PD patients, novel in vitro technologies, such as induced pluripotent stem cells (iPSCs), now permit the generation of specific cell types of interest in order to study disease mechanisms. We derived microglia from iPSCs of patients and healthy or isogenic controls to explore (shared) pathological immune responses in LRRK2-PD and idiopathic PD. Our findings suggest a significant involvement of microglia in the pathogenesis of PD and highlight potential therapeutic targets for alleviating overactive immune responses. [less ▲]

Detailed reference viewed: 194 (52 UL)
Full Text
See detailCharacterization of the surface properties of polycrystalline Cu(In,Ga)Se2 using a combination of scanning probe microscopy and X-ray photoelectron spectroscopy
Kameni Boumenou, Christian UL

Doctoral thesis (2022)

Polycrystalline Cu(In,Ga)Se2 (CIGSe) exhibits excellent properties for high power conversion efficiency (PCE) thin film solar cells. In recent years, photovoltaic cells made from CIGSe reached a PCE of 23 ... [more ▼]

Polycrystalline Cu(In,Ga)Se2 (CIGSe) exhibits excellent properties for high power conversion efficiency (PCE) thin film solar cells. In recent years, photovoltaic cells made from CIGSe reached a PCE of 23.4%, surpassing that of multicrystalline silicon photovoltaic cells. Nevertheless, the changes in surface composition and electronic properties of the absorbers after various solution-based surface treatments are still under intensive investigation and are widely discussed in the literature. In this thesis, the front and rear surface properties, as well as the impact of post-deposition treatments (PDT), of CIGSe absorbers with different elemental compositions were analyzed by scanning tunneling microscopy and spectroscopy, Kelvin probe force microscopy, and X-ray photoelectron spectroscopy. I show that potassium cyanide (KCN) etching substantially reduces the Cu content at the surface of Cu-rich absorbers. The reduction of the Cu content is accompanied by the formation of a large number of defects at the surface. Scanning tunneling spectroscopy measurements showed that most of these defects could be passivated with Cd ions. A semiconducting surface and no changes in the density of states were measured across the grain boundaries. In addition to the defect passivation, an increase in surface band bending was observed, due to the substitution of Cu vacancies by Cd ions, which act as shallow donor defects. As in the case of the front surface, the analyses carried out on the back surface of Cu-rich absorbers showed that a detrimental CuxSe secondary phase was also formed at the interface between the MoSe2 layer and the CISe absorber after growth. This CuxSe secondary phase at the back contact was not present in Cu-poor absorbers. Regarding the alkali-metal post-treated absorbers, I show that the enlarged surface bandgap often reported for CIGSe absorbers after PDT is only present after H2O rinsing. After ammonia (NH4OH) washing, which is always applied before buffer layer deposition, all the high-bandgap precipitates disappeared and an increased amount of an ordered vacancy compound was observed. The thesis thereby gives a comprehensive overview of CIGSe surfaces after various chemical and post-deposition treatments. [less ▲]

Detailed reference viewed: 58 (8 UL)
Full Text
See detailUnderstanding and explaining cross-border mobility: a free will / predisposition approach
Nonnenmacher, Lucas UL

Doctoral thesis (2022)

This dissertation investigates the drivers of cross-border mobility from a multidisciplinary perspective. Both qualitative and quantitative methodologies are used in order to understand and explain why ... [more ▼]

This dissertation investigates the drivers of cross-border mobility from a multidisciplinary perspective. Both qualitative and quantitative methodologies are used in order to understand and explain why workers cross borders. The major contribution of this dissertation is to highlight new determinants of cross-border mobility, such as previous migration experience and health state. These drivers have been disregarded in the literature in the past. Moreover, this dissertation validates the motivations of workers as a relevant driver of cross-border mobility and provides a state of play of the situation of cross-border workers in Europe, with a specific focus on French cross-border workers. Firstly, this dissertation provides a review of the explanations of cross-border mobility in the existing literature. Secondly, this dissertation analyses the subjective drivers of cross-border mobility using a qualitative dataset composed of 30 interviews of French workers in Luxembourg collected between January 2018 and May 2019. Results highlight that cross-border workers motivate their decision to commute abroad with financial, professional and personal reasons. Furthermore, the motivations of cross-border workers vary with respect to their socioeconomic profile. Based on these empirical findings, a model of cross-border labour supply was designed. Thirdly, this dissertation assesses the association between migration capital and cross-border mobility using the French part of the European Labour Force Survey, called the Enquête Emploi, between 2010 and 2018. Results indicate that migrants commute abroad more than non-migrants and are also more likely to do so. Migrant children are more likely to commute abroad, suggesting that the capacity to deal with distance and borders can be transmitted across generations. Migration capital is a relevant predictor of commuting behaviour, since the higher the capital endowment, the higher the likelihood of commuting abroad. Additional findings can be mentioned. Internal migration does not increase the likelihood of commuting abroad. Acquired migration experience is more useful than inherited migration experience for engaging in cross-border mobility. Fourthly, this dissertation examines health disparities between cross-border workers and non-cross-border workers using the Enquête Emploi between 2013 and 2018. Results suggest a healthy cross-border phenomenon, the existence of major health disparities among cross-border workers, and the rejection of the spillover phenomenon for this specific population. Finally, this dissertation concludes that cross-border mobility is a complex phenomenon that is still only partially explained, probably because of the lack of harmonised datasets about cross-border workers within the EU. Further research on cross-border mobility is needed to better understand this population, especially in public health, where everything remains to be done. [less ▲]

Detailed reference viewed: 80 (6 UL)
Full Text
See detailCognitive Pain Modulation in Young and Older Adults: Understanding the Role of Individual Differences in Frontal Functioning
Rischer, Katharina Miriam UL

Doctoral thesis (2022)

Cognitive pain modulation is integral to our quality of life and deeply interwoven with the success of pain treatments but is also characterized by large interindividual variations. Emerging evidence ... [more ▼]

Cognitive pain modulation is integral to our quality of life and deeply interwoven with the success of pain treatments but is also characterized by large interindividual variations. Emerging evidence suggests that one of the driving factors behind these variations is individual differences in frontal functioning. Further evidence indicates that pain-related cognitions, and possibly also emotional distress, may influence the efficacy of pain modulation. The central aim of this project was to assess the role of individual differences in frontal functioning in cognitive pain modulation, with a specific focus on older adults. With respect to this, we also wanted to assess whether individual differences in frontal functions could explain conflicting previous results on age-related changes in the efficacy of cognitive pain modulation. In addition, we wanted to address the role of negative pain-related mindsets and emotional distress in the efficacy of cognitive pain modulation. We tested these research questions across four different studies using two prime paradigms of cognitive pain modulation, namely distraction from pain and placebo analgesia. In Study I, we assessed the role of individual differences in executive functions, emotional distress, and pain-related cognitions in modulating heat pain thresholds in healthy young adults in virtual reality environments with different levels of cognitive load. We found that emotional distress and visuo-spatial short-term memory significantly predicted how participants responded to the low vs high load environment. In Study II, we investigated the role of different forms of cognitive inhibition abilities and negative pain-related cognitions in modulating the efficacy of distraction from (heat) pain by cognitive demand in healthy young adults. We found a significant influence of better cognitive inhibition and selective attention abilities on the size of the distraction effect; however, this association was moderated by the participant’s level of pain catastrophizing, i.e., high pain catastrophizers showed an especially strong association. In Study III, we tested potential age-related differences in distraction from pain in a group of young and older adults while simultaneously acquiring functional brain images. We found no age-related changes at the behavioural level, but a slightly reduced neural distraction effect in older adults. The neural distraction effect size in older adults was furthermore significantly positively related to better cognitive inhibition abilities. In Study IV, we explored potential age-related differences in placebo analgesia in a group of young and older adults (who were partly re-recruited from Study III) while recording their brain activity with an electroencephalogram. Results revealed no age-related differences in the magnitude of the behavioural or electrophysiological placebo response, but older adults showed a neural signature of the placebo effect that was distinct from that of young participants. Regression analyses revealed that executive functions that showed an age-related decline (as established via group comparisons) were significant predictors of the behavioural placebo response.
We furthermore found that better executive functions significantly moderated the association between age group and placebo response magnitude: older adults with better executive functions showed a larger placebo response than young adults whereas worse executive functions were associated with a smaller placebo response, possibly explaining why we found no significant difference at the group level. In summary, all studies provide converging evidence that differences in cognitive functions can significantly affect the efficacy of cognitive pain modulation. Although older adults showed a significant decline in most cognitive functions that we assessed, we found no systematic reduction in the efficacy of cognitive pain modulation (except for a slight reduction in the neural distraction effect size). Closer inspection of the data revealed that older adults may have engaged compensatory mechanisms that enabled them to experience the same (or even higher) level of pain relief as younger adults. We furthermore found evidence for the notion that pain-related cognitions and emotional distress may affect how individuals respond to cognitive pain modulation although this association was less systematic than for cognitive functions. Overall, the present thesis adds to the emerging body of evidence highlighting the importance of executive functions, as indicators of frontal functioning, in cognitive pain modulation. [less ▲]

Detailed reference viewed: 64 (10 UL)
See detailThree Essays in Narrative Risk Disclosure Tone, Meta-analysis and Cost Asymmetry
Hajikhanov, Nijat UL

Doctoral thesis (2022)


The thesis is divided into the following three chapters: Chapter 1 analyzes firms’ tone in risk disclosure using a sample of listed firms in the European Economic Area from 2002 to 2016. Firstly, the findings show that firms, on average, use more negative than positive words in risk disclosure. This linguistic negativity bias has increased over time, suggesting that efforts to discourage companies’ propensity for overly positive risk disclosure have potentially been effective. Secondly, this negativity bias in tone increases more when firms receive bad news than it decreases when they receive good news. The chapter refers to this phenomenon as ‘conditional risk disclosure tone conservatism’. Thirdly, we show that risk tone conservatism and stock price crash risk are negatively associated within a certain range of accounting conservatism. Chapter 2 aims to advance the understanding of generic firm characteristics and to provide a meta-analysis of the relationship between generic firm characteristics and stock price crash risk. It analyzes the existing findings on the relationship between firm size, investor heterogeneity, growth, leverage, financial performance, volatility, earnings management and crash risk across 99 prior empirical studies. In addition, it investigates the potential covariates that moderate the variation in the results. Meta-analysis is used to investigate and aggregate the association between generic firm characteristics and stock price crash risk. Meta-regression analyses are conducted to examine whether potential moderators affect this association. Findings indicate that firm size, investor heterogeneity, and growth opportunities have a significant positive association with crash risk. However, leverage has a significant negative relationship with crash risk. Meta-regression results show that the variation in the firm characteristics and crash risk relationship is moderated by the measurement of the generic determinants, publication status, citations, journal ranking, the inclusion of countries, the financial sector and the crisis period in the sample of studies, and the author’s country, position, and gender. Chapter 3 shows that Communist Party Committee (CPC) involvement in corporate governance is a determinant of the asymmetric behavior of selling, general, and administrative (SG&A) costs in Chinese state-owned enterprises (SOEs). SOEs under direct CPC control show a higher level of asymmetric cost behavior. In addition, the moderating effect of regional institutional quality on the relationship between CPC involvement and cost asymmetry is examined. Results indicate that firms located in regions with strong market-based institutions exhibit a stronger association between direct CPC control and cost asymmetry; thus, the CPC counteracts pressure from markets to cut costs. This chapter contributes to the cost asymmetry literature by introducing a new political determinant that is specific to the growing Chinese market, direct CPC control.

Detailed reference viewed: 107 (7 UL)
Full Text
See detailWhen does finance win? A set-theoretic analysis of the conditions of European financial interest groups' lobbying success on post-crisis bank capital requirements
Commain, Sébastien Romain Jean-Louis UL

Doctoral thesis (2022)


Acknowledging the failure of the existing regulatory framework after the global financial crisis of 2008, world leaders vowed to reform financial regulation to strengthen stability and restore trust. The reform of bank capital requirements was a major item on this agenda: the Group of Twenty (G20) entrusted the reform to the Basel Committee on Banking Supervision (BCBS), whose so-called "Basel framework" constitutes the global standard for the prudential regulation of banking activities. While scholars have highlighted the important concessions that were made to financial interests in this reform, a series of demanding new policy tools—which were strongly opposed by financial industry representatives—were also introduced into the new Basel III framework. This dissertation explores this empirical puzzle and seeks to identify under what conditions European financial interests’ lobbying on the reform of capital requirements was successful, and whether these successes constitute cases of interest group influence. Defining influence as a situation where a proposed reform evolves during the decision-making process (policy shift) in the direction advocated by an actor (lobbying success) and where that evolution is caused by the actor’s lobbying activity vis-à-vis the proposed reform (causal path), this dissertation considers influence as a multilevel concept, which can be considered present if and only if all three of its components—policy shift, lobbying success and a causal path—are also present. In other words, policy shift, lobbying success and causal path are the three individually necessary and jointly sufficient conditions for influence, which this study investigates in turn in the case of post-crisis bank capital requirements. The presence or absence of a policy shift is assessed qualitatively by comparing, for twenty-nine policy issues contained in the Basel III framework, the initial BCBS reform proposals with the rules finally enacted at the international and European level. The positions of financial and non-financial interest groups on each of these twenty-nine issues are then determined—through a quantitative text analysis of the position papers submitted by interest groups to BCBS and European Commission consultations on Basel III and the CRD and CRR—to establish whether the identified policy shift on a given issue constitutes a case of lobbying success for the interest group. Finally, using fuzzy-set Qualitative Comparative Analysis (fsQCA) to compare in a systematic manner cases in which success is observed and cases where it is absent, I uncover the configurations of conditions sufficient to produce successful lobbying and those sufficient to produce the absence of success, configurations which I then interpret in terms of causal mechanisms. Strong collective action is found, in several forms, to underpin the causal mechanisms producing successful lobbying. The observed sufficient configurations of conditions, however, suggest that the causal mechanisms producing success also include key contextual factors that are beyond the control of financial interest groups. The absence of these enabling contextual factors is shown, conversely, to lead to the absence of success. This dissertation contributes to the existing academic literature in several ways. Empirically, first, it adds to the scholarship on bank capital requirements at the international and European level, using novel data to reassess, after the completion of the Basel III reform, the extent to which the final framework meets the initial ambitions. Methodologically, second, this dissertation employs a range of new methods and techniques to take on the challenges of measuring lobbying success and identifying multiple pathways to influence, two fundamental issues for empirical studies of interest group influence. Theoretically, third, the combinatorial approach used here to explore the conditions of lobbying success permits an examination of multiple conjunctural causation patterns in interest group influence.
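
The fsQCA step mentioned above rests on the standard fuzzy-set measures of consistency (is the configuration a subset of the outcome?) and coverage (how much of the outcome does the configuration account for?). The following minimal Python sketch illustrates these two measures; the condition names and membership scores are invented toy values, not data from the study.

    # Minimal sketch of fuzzy-set consistency and coverage, the measures underlying fsQCA.
    # Membership scores below are invented toy values, not data from the dissertation.
    import numpy as np

    def consistency(X, Y):
        """Degree to which membership in configuration X is a subset of outcome Y."""
        return np.minimum(X, Y).sum() / X.sum()

    def coverage(X, Y):
        """Degree to which outcome Y is accounted for by configuration X."""
        return np.minimum(X, Y).sum() / Y.sum()

    # toy fuzzy membership scores for five cases in two conditions and the outcome
    collective_action = np.array([0.9, 0.8, 0.2, 0.7, 0.1])
    favourable_context = np.array([0.8, 0.9, 0.3, 0.4, 0.2])
    lobbying_success = np.array([0.9, 0.7, 0.1, 0.5, 0.3])

    # fuzzy intersection (logical AND) of the two conditions
    config = np.minimum(collective_action, favourable_context)
    print(f"consistency: {consistency(config, lobbying_success):.2f}")
    print(f"coverage:    {coverage(config, lobbying_success):.2f}")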

Detailed reference viewed: 107 (12 UL)
Full Text
See detailSYSTEMS METHODS FOR ANALYSIS OF HETEROGENEOUS GLIOBLASTOMA DATASETS TOWARDS ELUCIDATION OF INTER-TUMOURAL RESISTANCE PATHWAYS AND NEW THERAPEUTIC TARGETS
Tching Chi Yen, Romain Mana Hiao Woun UL

Doctoral thesis (2022)


This PhD thesis describes an endeavour to compile the literature on key Glioblastoma molecular mechanisms into a directed network following Disease Maps standards, to analyse its topology, and to compare the results with a quantitative analysis of multi-omics datasets in order to investigate Glioblastoma resistance mechanisms. The work also included the implementation of data management good practices and procedures.

Detailed reference viewed: 34 (4 UL)
Full Text
See detailInvestigation in reusable composite flooring systems in steel and concrete based on composite behaviour by friction
Fodor, Jovan UL

Doctoral thesis (2022)


Steel-concrete composite systems have proved to be a very efficient structural solution in terms of material consumption and mechanical response for the construction of structural floor systems, whether in industrial and residential buildings or, especially, in car parks. However, their contemporary application, which relies on welded headed studs as a means to provide the shear connection between the steel section and the concrete chord, renders the system unable to be disassembled (in the best case, its steel and concrete parts are recycled). Considering the ongoing shift from linear to circular economic models and the application of the 3R principle (Reduce, Reuse & Recycle), such systems are unable to further improve their environmental and economic efficiency through reuse schemes. The central task of this research is the development and verification of new demountable shear connector solutions that could allow modularity and demountability (hence reusability) of steel-concrete composite floor systems while retaining their inherent structural advantages. Based on previous investigations of demountable shear connector systems (primarily bolted solutions) and investigations of mechanical components that were not strictly related to shear connectors, four demountable shear connector devices were developed. With the drawbacks of the earlier solutions in mind, adequate detailing and structural measures were applied, and the ease of assembly and disassembly was demonstrated on the constructed prototypes. Afterwards, the mechanical properties of the devised demountable connector systems were investigated thoroughly through an experimental campaign (push tests) and numerical investigation. Based on the experimental and numerical results of the shear connector behaviour, it is concluded that the proposed shear connector device Type B possesses adequate strength and stiffness and may be considered ductile in accordance with EN 1994-1-1, allowing for the application of existing design strategies in accordance with the same design code. The force-slip behaviour of the proposed shear connector is explained and an adequate analytical model is proposed. Based on the force-slip behaviour model, the applicability of the shear connector is verified on a range of composite beams representing the demountable floor.

Detailed reference viewed: 157 (14 UL)
Full Text
See detailDIGITAL TWIN FRAMEWORK FOR HUMAN-ROBOT INTERACTION BY MEANS OF INDUSTRY 4.0 ENABLING TECHNOLOGIES
Gallala, Abir UL

Doctoral thesis (2022)


The introduction of Industry 4.0 technologies has reshaped traditional manufacturing. Although technologies such as IoT, CPS, AI and collaborative and autonomous robots are already widely present in industrial environments, and although the main objective of Industry 4.0 is to implement a better connected, more flexible and smarter industrial environment, some aspects still need to be better integrated and implemented. Among these aspects are human-robot interaction, collaborative robot programming and simulation, which still need many improvements in order to fit into the new smart environments where cobots and humans work together in hybrid teams. This research envisions the future of robot programming and robot simulation in industrial environments where humans and robots work side by side in hybrid teams. The main objective of this work was to build and demonstrate a new digital twin-based framework designed to enhance human-robot interaction, robot programming and real-time simulation in the real environment. The proposed approach had to provide a flexible, real-time, service-based framework for both vertical and horizontal integration. It also needed to offer intuitive and human-friendly usage for any unskilled worker. This dissertation introduces the six main steps of the proposed digital twin framework for human-robot interaction, which was adapted and modified from the common 5-C architectural design of CPSs. Its flexible architecture allows a robust integration of new devices, systems or APIs. Since this framework was initially designed for human-robot interaction, its capabilities were demonstrated through a use case study and implementation. The first three C-steps of the method (Connect, Collect and Combine) should be initiated at the beginning but executed only once during each process life-cycle. Connection establishment between the physical and digital worlds is guaranteed in step one. Data collection from physical devices is done in step two. Combining both worlds in one scene and synchronization between the twin models is accomplished in step three. Data analysis, algorithm generation and motion planning are processed in step four. Then, in step five, a simulation of the motions generated by the digital model is visualized through mixed reality interfaces while enabling user interaction. At the end, after approval, robot movements are generated and actions are executed by the physical twin. All along the six steps, a horizontal technological architecture is used. First, an IoT Gateway infrastructure was established to maintain the real-time data exchange between the system’s different components. Then, an MR-based immersive interface was developed through several phases to enable digital world set-up, visualization, simulation and interaction using human gestures. Meanwhile, a broker was implemented to handle diverse tasks, mainly motion planning and AI-based object pose estimation. The broker is also responsible for the integration of new elements. In the end, the implemented system met the main objectives of the proposed research methodology, which are:
• Intuitive robot programming: any unskilled worker can program the robot thanks to the human-friendly interface and the autonomous assistance capabilities of the robot while it estimates poses and plans motions.
• Realistic simulation: a simulation performed in the real environment with unpredicted real conditions and objects.
• Flexible system integration: it is easy to integrate new devices and features thanks to the broker master interface that connects all separate elements with their diverse interfaces and platforms.
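
To make the role of the broker and the topic-based data exchange more concrete, the following minimal Python sketch shows a tiny in-process publish/subscribe hub keeping a digital model in sync with pose updates from the physical side. It is an illustrative stand-in only: the thesis relies on an IoT gateway infrastructure, and the topic name and payload used here are invented.

    # Tiny publish/subscribe broker decoupling the physical robot (producer)
    # from the digital twin (consumer). In-process stand-in for illustration;
    # topic name and payload are hypothetical.
    from collections import defaultdict
    from typing import Any, Callable, Dict, List

    class Broker:
        """Minimal topic-based publish/subscribe hub."""
        def __init__(self) -> None:
            self._subscribers: Dict[str, List[Callable[[Any], None]]] = defaultdict(list)

        def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
            self._subscribers[topic].append(callback)

        def publish(self, topic: str, payload: Any) -> None:
            for callback in self._subscribers[topic]:
                callback(payload)

    broker = Broker()

    # the digital twin keeps its model in sync with the physical robot's pose
    digital_twin_state = {}
    broker.subscribe("cell1/robot/pose",
                     lambda pose: digital_twin_state.update(pose=pose))

    # the physical side (or the gateway) publishes a pose update
    broker.publish("cell1/robot/pose", {"x": 0.42, "y": 0.10, "rz": 90.0})
    print(digital_twin_state)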

Detailed reference viewed: 47 (1 UL)
Full Text
See detailSmart cloud collocation: a unified workflow from CAD to enhanced solutions
Jacquemin, Thibault Augustin Marie UL

Doctoral thesis (2022)


Computer Aided Design (CAD) software packages are used in industry to design mechanical systems. Calculations are then often performed using simulation software packages to improve the quality of the design. To reduce development time and cost, companies and research centers have been trying to ease the integration of the computation phase into the design phase. Collocation methods have the potential to ease such integration thanks to their meshless nature. The geometry discretization step, which is a key element of every computational method, is simplified compared to mesh-based methods such as the finite element method. We propose in this thesis a unified workflow that allows the solution of engineering problems defined by partial differential equations (PDEs) directly from input CAD files. The scheme is based on point collocation methods and on proposed techniques to enhance the solution. We introduce the idea of “smart clouds”. Smart clouds refer to point cloud discretizations that are aware of the exact CAD geometry, are appropriate for solving a defined problem using a point collocation method, and contain information used to improve the solution locally. We introduce a unified node selection algorithm based on a generalization of the visibility criterion. The proposed algorithm leads to a significant reduction of the error for concave problems and does not have any drawback for convex problems. Point collocation methods rely on many parameters. We select in this thesis parameters for the Generalized Finite Difference (GFD) method and the Discretization-Corrected Particle Strength Exchange (DC PSE) method that we deem appropriate for most problems from the field of linear elasticity. We also show that solution improvement techniques based on the use of Voronoi diagrams or on a stabilization of the PDE do not lead to a reduction of the error for all of the considered benchmark problems. These methods should therefore be used with care. We propose two types of a posteriori error indicators that both succeed in identifying the areas of the domain where the error is greatest: a ZZ-type and a residual-type error indicator. We couple these indicators to an h-adaptive refinement scheme and show that the approach is effective. Finally, we show the performance of Algebraic Multigrid (AMG) preconditioners on the solution of linear systems compared to other preconditioning/solution methods. This family of preconditioners requires the selection of a large number of parameters, and we assess the impact of some of them on the solution time for a 3D problem from the field of linear elasticity. Despite the performance of AMG preconditioners, ILU preconditioners may be preferred thanks to their ease of use and their robustness in leading to a converged solution.
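
As a concrete illustration of the point collocation idea, the following minimal Python sketch solves a 1D Poisson problem on scattered nodes with a GFD-style Taylor/least-squares stencil. The node distribution, stencil size and test problem are illustrative choices and are not taken from the thesis.

    # GFD-style point collocation sketch: solve u''(x) = f(x) on (0, 1) with
    # u(0) = u(1) = 0 on scattered nodes. Stencil size and node count are
    # illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 41
    x = np.sort(np.concatenate(([0.0, 1.0], rng.uniform(0.0, 1.0, n - 2))))
    f = lambda t: -np.pi**2 * np.sin(np.pi * t)      # exact solution: sin(pi x)

    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        if x[i] in (0.0, 1.0):                       # Dirichlet boundary nodes
            A[i, i] = 1.0
            continue
        m = 4                                        # nearest neighbours per stencil
        idx = np.argsort(np.abs(x - x[i]))[1:m + 1]
        h = x[idx] - x[i]
        # Taylor expansion: u(x_j) - u(x_i) ~ h * u'(x_i) + h^2/2 * u''(x_i)
        T = np.column_stack((h, 0.5 * h**2))
        W = np.linalg.pinv(T)                        # least-squares derivative weights
        w2 = W[1]                                    # row reproducing u''(x_i)
        A[i, idx] += w2
        A[i, i] -= w2.sum()
        b[i] = f(x[i])

    u = np.linalg.solve(A, b)
    print(f"max nodal error: {np.max(np.abs(u - np.sin(np.pi * x))):.2e}")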

Detailed reference viewed: 80 (3 UL)
Full Text
See detailTowards a Unified and Robust Data-Driven Approach. A Digital Transformation of Production Plants in the Age of Industry 4.0
Benedick, Paul-Lou UL

Doctoral thesis (2022)


Nowadays, industrial companies are engaging in their global transition toward the fourth industrial revolution (the so-called Industry 4.0). The main objective is to increase the Overall Equipment Effectiveness (OEE) by collecting, storing and analyzing production data. Several challenges have to be tackled to propose a unified data-driven approach to rely on, from low-layer data collection on the machine production lines using Operational Technologies (OT) to the monitoring and, more importantly, the analysis of the data using Information Technologies (IT). This is all the more important for companies with decades of existence - such as Cebi Luxembourg S.A., our partner in a Research, Development and Innovation project subsidised by the Ministry of the Economy in Luxembourg - that need to upgrade their on-site technologies and move towards new business models. Artificial Intelligence (AI) now attracts real interest from industrial actors and has become a cornerstone technology for helping humans in decision-making and data-analysis tasks, thanks to the huge amount of (sensor-based) univariate time series available on the production floor. However, such an amount of data is not sufficient for AI to work properly and to make the right decisions; good data quality is also required. Indeed, good theoretical performance and high accuracy can be obtained when models are trained and tested in isolation, but AI models may still provide degraded performance in real industrial conditions. In that context, the problem is twofold:
• Industrial production systems are vertically-oriented closed systems, which makes their communication and cooperation with each other, and intrinsically the data collection, difficult.
• Industrial companies are used to implementing deterministic processes. Introducing AI - which can be classified as stochastic - into industry requires a full understanding of the potential deviation of the models in order to be aware of their domain of validity.
This dissertation proposes a unified strategy for digitizing an industrial system and methods for evaluating the performance and robustness of AI models that can be used in such data-driven production plants. In the first part of the dissertation, we propose a three-step strategy to digitize an industrial system, called TRIDENT, that enables industrial actors to implement data collection on production lines and, ultimately, to monitor the production plant in real time. Such a strategy has been implemented and evaluated in a pilot case study at Cebi Luxembourg S.A. Three protocols (OPC-UA, MQTT and O-MI/O-DF) are used to investigate their impact on the real-time performance. The results show that, even if these protocols show some disparity in terms of performance, they are suitable for an industrial deployment. This strategy has now been extended and implemented by our partner - Cebi Luxembourg S.A. - in its production environment. In the second part of the thesis, we investigate the robustness of AI models in industrial settings and propose a systematic approach to evaluate their robustness under perturbations. Assuming that i) real perturbations - in particular on the data collection - cannot be recorded or generated in a real industrial environment (as that could lead to production stops) and ii) a model should not be implemented before evaluating its potential deviations, limits or weaknesses, our approach is based on the artificial injection of perturbations into the data sets and is evaluated on state-of-the-art classifiers (both Machine Learning and Deep Learning) and data sets (in particular, public sensor-based univariate time series). First, we propose a coarse-grained study with two artificial perturbations - called the swapping effect and the dropping effect - in which simple random algorithms are used. This already highlights a great disparity in the models’ robustness under such perturbations, of which industrial actors need to be aware. Second, we propose a fine-grained study in which, instead of randomly testing some parameter values, we use Genetic Algorithms to search for the models’ limits. To do so, we define a multi-objective optimisation problem with a fitness function that maximises the impact of the perturbations (i.e. decreasing the model’s accuracy the most) while minimising the changes in the time series (with regard to our two parameters). This can be seen as an adversarial case, where the goal is not to exploit these weaknesses in a malicious way but to be aware of them. Based on such a study, methods for making the model more robust and/or for observing such behaviour on the infrastructure could be investigated and implemented if needed. The tool developed in this latter study is therefore ready to be used in a real industrial case, where data sets and perturbations can be fitted to the scenario.
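
The following minimal Python sketch illustrates how such artificial perturbations can be injected into univariate time series before re-scoring a trained classifier. The abstract does not define the swapping and dropping effects precisely, so the two functions below are illustrative interpretations (swapping randomly chosen adjacent samples; holding the previous value to mimic lost samples), and the scoring helper assumes a scikit-learn-style model coming from an existing pipeline.

    # Illustrative "swapping" and "dropping" style perturbations for univariate
    # time series; the exact definitions in the thesis may differ.
    import numpy as np

    def swap_effect(series, n_swaps, rng):
        """Swap n_swaps randomly chosen pairs of adjacent samples."""
        out = series.copy()
        idx = rng.integers(0, len(out) - 1, size=n_swaps)
        out[idx], out[idx + 1] = out[idx + 1].copy(), out[idx].copy()
        return out

    def drop_effect(series, drop_prob, rng):
        """Simulate lost samples by holding the previous value."""
        out = series.copy()
        for t in range(1, len(out)):
            if rng.random() < drop_prob:
                out[t] = out[t - 1]
        return out

    def robustness_gap(model, X_test, y_test, perturb):
        """Accuracy on clean data minus accuracy on perturbed data."""
        clean = model.score(X_test, y_test)
        X_pert = np.vstack([perturb(x) for x in X_test])
        return clean - model.score(X_pert, y_test)

    # usage sketch (assumes `model`, `X_test`, `y_test` come from an existing pipeline):
    # rng = np.random.default_rng(42)
    # gap = robustness_gap(model, X_test, y_test, lambda x: swap_effect(x, 10, rng))
    # print(f"accuracy drop under swapping: {gap:.3f}")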

Detailed reference viewed: 135 (19 UL)
Full Text
See detailData Analysis for Insurance: Recommendation System Based on a Multivariate Hawkes Process
Lesage, Laurent UL

Doctoral thesis (2022)


The objective of the thesis is to build a recommendation system for insurance. Observations of customers’ behaviour and evolution in the insurance context suggest that customers modify their insurance cover when a significant event happens in their life. In order to take into account the influence of life events (e.g. marriage, birth, change of job) on customers’ selection of insurance cover, we model the recommendation system with a Multivariate Hawkes Process (MHP), which includes several specific features aimed at computing relevant recommendations for customers of a Luxembourgish insurance company. Several of these features are intended to propose a personalized background intensity for each customer thanks to a Machine Learning model, to use triggering functions suited to insurance data, or to overcome flaws in real-world data by adding a specific penalization term to the objective function. We define a complete framework for Multivariate Hawkes Processes with a Gamma density excitation function (i.e. estimation, simulation, goodness-of-fit) and we demonstrate some mathematical properties (i.e. expectation, variance) of the transient regime of the process. Our recommendation system has been back-tested over a full year. Observations of the model parameters and the results of this back-test show that taking life events into account through a Multivariate Hawkes Process allows us to improve significantly the accuracy of recommendations. The thesis is presented in five chapters. Chapter 1 explains how the background intensity of the Multivariate Hawkes Process is computed thanks to a Machine Learning algorithm, so that each customer receives a personalized recommendation. Chapter 1 presents an extended version of the method presented in [1], in which the method is used to make the algorithm explainable. Chapter 2 presents a Multivariate Hawkes Process framework for computing the dependency between the propensity to accept a recommendation and the occurrence of life events: definitions, notations, simulation, estimation, properties, etc. Chapter 3 presents several results of the recommendation system: estimated parameters of the model, effects of the contributions, back-testing of the model’s accuracy, etc. Chapter 4 presents the implementation of our work as an R package. Chapter 5 concludes on the contributions and the perspectives opened by the thesis.
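
For readers unfamiliar with the model family, the following minimal Python sketch evaluates the conditional intensity of a multivariate Hawkes process with a Gamma-density excitation kernel, the building block used throughout the thesis. The dimensions, parameter values and event times are illustrative, not the thesis calibration.

    # Conditional intensity of a multivariate Hawkes process with a Gamma-density
    # excitation kernel. All numbers below are illustrative toy values.
    import numpy as np
    from scipy.stats import gamma

    def intensity(t, history, mu, alpha, shape, scale):
        """lambda_i(t) = mu_i + sum_j sum_{t_k^j < t} alpha[i, j] * Gamma_pdf(t - t_k^j)."""
        lam = mu.copy()
        for j, times in enumerate(history):           # history[j]: past event times of type j
            past = np.asarray([s for s in times if s < t])
            if past.size:
                kernel = gamma.pdf(t - past, a=shape, scale=scale).sum()
                lam += alpha[:, j] * kernel
        return lam

    # two event types: e.g. "life event" (0) and "cover modification" (1)
    mu = np.array([0.05, 0.02])                       # background intensities
    alpha = np.array([[0.0, 0.0],                     # life events treated as exogenous here
                      [0.8, 0.3]])                    # life events excite cover modifications
    history = [[2.0, 10.0], [11.5]]                   # observed event times per type
    print(intensity(12.0, history, mu, alpha, shape=2.0, scale=1.5))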

Detailed reference viewed: 62 (5 UL)
Full Text
See detailUser Experience Design for Cybersecurity & Privacy: addressing user misperceptions of system security and privacy
Stojkovski, Borce UL

Doctoral thesis (2022)


The increasing magnitude and sophistication of malicious cyber activities by various threat actors poses major risks to our increasingly digitized and inter-connected societies. However, threats can also come from non-malicious users who are assigned overly complex security or privacy-related tasks, who are not motivated to comply with security policies, or who lack the capability to make good security decisions. This thesis posits that UX design methods and practices are necessary to complement security and privacy engineering practices in order to (1) identify and address user misperceptions of system security and privacy; and (2) inform the design of secure systems that are useful and appealing from end-users’ perspective. The first research objective of this thesis is to provide new empirical accounts of UX aspects in three distinct contexts that encompass security and privacy considerations, namely: cyber threat intelligence, secure and private communication, and digital health technology. The second objective is to contribute empirically to the growing research domain of mental models in security and privacy by investigating user perceptions and misperceptions in the afore-mentioned contexts. Our third objective is to explore and propose methodological approaches to incorporating users’ perceptions and misperceptions into the socio-technical security analyses of systems. Qualitative and quantitative user research methods with experts as well as end users of the applications and systems under investigation were used to achieve the first two objectives. To achieve the third objective, we also employed simulation and computational methods. Cyber Threat Intelligence (CTI sharing platforms): Reporting on a number of user studies conducted over a period of two years, this thesis offers a unique contribution towards understanding the constraining and enabling factors of security information sharing within one of the leading CTI sharing platforms, called MISP. Further, we propose a conceptual workflow and toolchain that would seek to detect user (mis)perceptions of key tasks in the context of CTI sharing, such as verifying whether users have an accurate comprehension of how far information travels when shared in a CTI sharing platform, and we discuss the benefits of our socio-technical approach as a potential security analysis tool, simulation tool, or educational / training support tool. Secure & Private Communication (Secure Email): We propose and describe multi-layered user journeys, a conceptual framework that serves to capture the interaction of a user with a system as she performs certain goals, along with the associated user beliefs and perceptions about specific security or privacy-related aspects of that system. We instantiate the framework within a use case, a recently introduced secure email system called p≡p, and demonstrate how the approach can be used to detect misperceptions of security and privacy by comparing user opinions and behavior against system values and objective technical guarantees offered by the system. We further present two sets of user studies focusing on the usability and effectiveness of p≡p’s security and privacy indicators and their traffic-light-inspired metaphor for representing different privacy states and guarantees. Digital Health Technology (Contact Tracing Apps): Considering human factors when exploring the adoption as well as the security and privacy aspects of COVID-19 contact tracing apps is a timely societal challenge, as the effectiveness and utility of these apps highly depend on their widespread adoption by the general population. We present the findings of eight focus groups on the factors that impact people’s decisions to adopt, or not to adopt, a contact tracing app, conducted with participants living in France and Germany. We report how our participants perceived the benefits, drawbacks, and threat model of the contact tracing apps in their respective countries, and discuss the similarities and differences between and within the study groups. Finally, we consolidate the findings from these studies and discuss future challenges and directions for UX design methods and practices in cybersecurity and digital privacy.

Detailed reference viewed: 554 (11 UL)
Full Text
See detailMetal-oxide nanostructures for low-power gas sensors
Bhusari, Rutuja Dilip UL

Doctoral thesis (2022)


For gas sensing applications, metal oxide (MOx) nanostructures have demonstrated attractive properties due to their large surface-to-volume ratio, combined with the possibility to use multiple materials and multi-functional properties. For MOx chemiresistive gas sensors, the temperature-activated interaction of atmospheric oxygen with the MOx surface plays a major role in the sensor kinetics, as it leads to oxygen adsorption-desorption reactions that eventually affect the gas sensing performance. Thus, MOx sensors are operated at high temperatures to achieve the desired sensitivity. This high-temperature operation of MOx sensors limits their application in explosive gas detection, reduces the sensor lifetime and increases power consumption. To overcome these drawbacks of MOx sensors, researchers have proposed the use of heterostructures and light activation as alternatives. In this thesis, we aim to develop low-power MOx sensors using these solutions. We show the template-free, bottom-up synthesis and shape control of copper hydroxide-based nanostructures grown in the liquid phase, which act as templates for the formation of CuO nanostructures. Precise control over the pH of the solution and the reaction temperature led to the intended tuning of the morphology and chemical composition of the nanostructures. We reflect on the rationale behind this change in shape and material, as the CuO nanostructures are further used in a heterostructure. We discuss the synthesis and characterisation of CuO bundles and Cu2O truncated cubes, the former of which lead to very interesting gas sensing properties and applications. Devices made from networks of CuO bundles are investigated for their electrical and oxygen adsorption-desorption properties as gas sensors. It was observed that the sensor has a faster response and recovery in the as-deposited condition in comparison to the annealed sensor. A detailed inspection of the response and recovery curves enabled us to derive parameters such as time constants, reaction constants and diffusion coefficients for the CuO bundles, an analysis that is rarely performed on p-type materials. Investigation of the derived parameters, the role of network junctions and a hydroxylated CuO surface leads us to discuss hypotheses for the contributing processes. CuO bundles show conduction transients upon exposure to the reducing gas H2 and a temperature-based inversion of the response upon exposure to the reducing gas CO, which has not been reported in the literature for CuO exposed to H2 and/or CO. Armed with this fundamental knowledge of gas sensing, we choose ZnO, an n-type transducer material, and CuO, a p-type material with a lower band gap and higher absorption in the visible range, to synthesise a heterostructure. However, the sol-gel syntheses of ZnO and CuO nanostructures have different reaction parameters, such as temperature and pH, and the two materials do not show a natural affinity to grow on each other. These challenges are overcome by implementing a stepped synthesis procedure to fabricate a heterostructure with Cu-based nanoplatelets on ZnO nanorods, also referred to as the CuO@ZnO heterostructure in this thesis. We finally demonstrate the electrical and functional characterisation of the CuO@ZnO heterostructure. The heterostructure responds differently to the tested gases compared to its constituent nanostructure, ZnO nanorods, and a reference CuO nanostructure, CuO bundles. This is an unexpected result, as heterostructures usually show a response type similar to that of their base material but with an enhanced sensor response. We present a possible e-nose application that can differentiate qualitatively between CO, NO2 and ethanol using the heterostructure, ZnO nanorods and CuO bundles together.

Detailed reference viewed: 101 (5 UL)
See detailCOMBINED HEATING AND VENTILATION SYSTEMS FOR LOW-ENERGY RESIDENTIAL BUILDINGS; OPTIMIZATION OF ENERGY EFFICIENCY AND USER COMFORT
Shirani, Arsalan UL

Doctoral thesis (2022)


Combined heating and ventilation systems are applied here in highly energy-efficient residential buildings to save construction cost. Combining a Heat Pump with a Heat Recovery Ventilation system to heat and cool the building offers faster response times, a smaller footprint and an increased cooling capacity compared to floor heating systems. As a result, such systems are expected to have a larger market share in the future. The available research on Ventilation Based Heating systems focuses mostly on comparing Exhaust Air Heat Pumps with conventional systems in energy-efficient buildings. The majority of published research neglects the usual presence of electrical backup heaters as well as the need to develop and use an adapted and optimized control strategy for such systems. This work compares the energy efficiency of common Ventilation Based Heating concepts, including Exhaust Air Heat Pumps, with conventional floor heating systems using a single-room control strategy to achieve similar user comfort. The comparison is carried out in a simulation environment in order to optimize the systems under exactly reproducible boundary conditions. Additionally, two field tests were performed to achieve a better understanding and validation of the simulation models. The measured data were used to model the dynamic behavior of the Exhaust Air Heat Pump and the air distribution system. These field tests revealed that the overall run time and heating output of the heat pump were much lower than expected. This was the motivation to investigate and optimize the heat pump and electric heater control strategy. It could be demonstrated that the applied control strategy has a significant impact on the overall performance of the system. The suggested control strategy was tested and validated in a third field measurement. Based on the knowledge gained from the system simulation tool and the conducted field tests, an improved second concept for Ventilation Based Heating systems was defined with three optimization steps. It could be demonstrated that using the suggested methodologies in the hardware and software of such a system can significantly improve its overall efficiency. However, Ventilation Based Heating systems cannot compete with floor heating systems in terms of total system energy efficiency, due to the necessity of electrical backup heaters and the higher supply temperatures.

Detailed reference viewed: 102 (13 UL)
Full Text
See detailIL-6 Signaling and long non-coding RNAs in liver cancer
Chowdhary, Anshika UL

Doctoral thesis (2022)

Detailed reference viewed: 49 (4 UL)
Full Text
See detailEvaluation von Synergieeffekten zentraler Speichersysteme in Niederspannungsnetzen durch integrative Modellbildung
Zugschwert, Christina UL

Doctoral thesis (2022)


Security of supply, affordability, and sustainability form the pillars of a new energy policy oriented towards renewable generation and decarbonization. However, the dynamics of power generation due to the increasing share of renewable energies cause temporal and local discrepancies between generation and consumption. The resulting energy transport between grid sections and different voltage levels causes additional load flows. To ensure grid stability, the grid operator provides system services and grid extension measures. With the help of energy storage systems with grid-serving control and placement strategies, the flexibility of the electricity supply can be increased. In addition, a large amount of renewable energy can be used locally while maintaining grid stability. A centralized installation approach focusing on single grid sections, instead of many decentralized home storage units, offers economic and environmental advantages. Furthermore, the operating strategy can be optimized thanks to the global view of the grid operator and thus be adapted to local conditions. This research evaluates synergy effects of central storage systems by integrative computational analysis using a rural low-voltage grid section in Luxembourg. Three linked simulation levels are used to calculate operating strategies, storage dimensioning and placement based on 15-minute smart meter data. The operating strategy is developed within a power system simulation and is used to control a parameterizable simulation model of a vanadium redox flow battery. The operating strategy focuses on reducing the maximum power flow at the transformer and on reactive power compensation to maintain voltage stability. A future photovoltaic scenario is adopted by doubling the status quo photovoltaic generation. The simultaneous optimization of storage utilization and power reduction at the transformer yields the storage design parameters, power and capacity. Storage placement is determined by the system boundary and the resulting data selection. A final sensitivity analysis evaluates an optimized storage placement while enhancing the voltage profiles. The results of this work are a differentiated active- and reactive-power operating strategy, automated calculation algorithms to determine control parameters, optimized battery design parameters, and a methodical approach to transferring the calculation algorithms to further grid sections.
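
As an illustration of the transformer-oriented operating strategy described above, the following minimal Python sketch applies a simple peak-shaving dispatch to a toy 15-minute load profile. The power limit, battery sizing and efficiency values are illustrative assumptions, not parameters from the thesis.

    # Simple transformer peak-shaving dispatch on 15-minute data; all sizing and
    # efficiency values are illustrative assumptions.
    import numpy as np

    def peak_shaving_dispatch(net_load_kw, p_max_kw, e_cap_kwh, p_limit_kw,
                              dt_h=0.25, eta=0.9):
        """Discharge above the transformer limit, recharge below it."""
        soc_kwh = 0.5 * e_cap_kwh
        transformer_kw = np.zeros(len(net_load_kw))
        for i, load in enumerate(net_load_kw):
            if load > p_limit_kw:                       # discharge to cap the peak
                p_bat = min(load - p_limit_kw, p_max_kw, soc_kwh / dt_h)
                soc_kwh -= p_bat * dt_h
            else:                                       # recharge with spare capacity
                p_bat = -min(p_limit_kw - load, p_max_kw,
                             (e_cap_kwh - soc_kwh) / (eta * dt_h))
                soc_kwh -= p_bat * dt_h * eta
            transformer_kw[i] = load - p_bat
        return transformer_kw

    # toy daily profile (96 quarter-hour values) with an evening peak
    t = np.arange(96)
    load = 40 + 30 * np.exp(-((t - 72) / 6.0) ** 2)
    shaved = peak_shaving_dispatch(load, p_max_kw=20, e_cap_kwh=60, p_limit_kw=50)
    print(f"peak before: {load.max():.1f} kW, after: {shaved.max():.1f} kW")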

Detailed reference viewed: 115 (5 UL)
Full Text
See detailEXCESSIVE MICROBIAL MUCIN FORAGING INDUCED BY DIETARY FIBER DEPRIVATION MODULATES SUSCEPTIBILITY TO INFECTIOUS AND AUTOIMMUNE DISEASES
Wolter, Mathis UL

Doctoral thesis (2022)


The gastrointestinal (GI) mucus layer is a protective and lubricating hydrogel of polymer-forming glycoproteins that covers our intestinal epithelium. This mucus layer serves as an interface between the intestinal epithelium and the environment as well as a first line of defense against potentially harmful microorganisms. While the GI mucus layer closer to the gut epithelium is highly condensed and acts as a physical barrier against invading microorganisms, further away from the epithelium, proteolytic degradation makes it looser. This looser part of the mucus layer serves as an attachment site and a nutrient source for some commensal gut bacteria. The molecular mechanisms that drive the mucus–microbe interactions are emerging and are important for understanding the functional role of the gut microbiome in health and disease. Previous work by my research group showed that a dietary fiber-deprived gut microbiota erodes the colonic mucus barrier and enhances susceptibility to the mucosal pathogen Citrobacter rodentium, a mouse model for human Escherichia coli infections. In this PhD thesis, I studied the role of the gut mucus layer in the context of various other infectious and autoimmune diseases by inducing the natural erosion of the mucus layer through dietary fiber deprivation. In order to unravel the mechanistic details of the intricate interactions between diet, the mucus layer and the gut microbiome, I leveraged our previously established gnotobiotic mouse model hosting a synthetic human gut microbiota of 14 fully characterized commensal bacteria (14SM). I employed three different types of infectious diseases for the following reasons: 1) an attaching and effacing (A/E) pathogen (C. rodentium), to better understand which commensal bacteria aid in enhancing pathogen susceptibility when a fiber-deprived gut microbiota erodes the mucus barrier; 2) human intracellular pathogens (Listeria monocytogenes and Salmonella Typhimurium), to investigate whether, as for the A/E pathogen, erosion of the mucus layer could affect the infection dynamics; and 3) a mouse nematode parasite – Trichuris muris, which is a model for the human parasite Trichuris trichiura – to study how changes in the mucin–microbiome interactions drive the worm infection, as mucins play an important role in worm expulsion. In my thesis, I used various combinations of the 14SM, by dropping out individual or all mucin-degrading bacteria from the microbial community, to show that, in the face of reduced dietary fiber, the commensal gut bacterium Akkermansia muciniphila is responsible for enhancing susceptibility to C. rodentium, most likely by eroding the protective gut mucus layer. For my experiments with intracellular pathogens (L. monocytogenes and S. Typhimurium), I found that dietary fiber deprivation provided protection against infection by both L. monocytogenes and S. Typhimurium. This protective effect against the pathogens was driven directly by diet and not by the microbial erosion of the mucus layer, since a similar protective effect was observed in both gnotobiotic and germ-free mice. Finally, for the helminth model, I showed that the elevated microbial mucin foraging induced by fiber deprivation promotes clearance of the parasitic worm by shifting the host immune response from a susceptible, Th1 type to a resistant, Th2 type. In the context of autoimmune disease, I focused on inflammatory bowel disease (IBD). Although IBD results from genetic predisposition, the contribution of environmental triggers is thought to be crucial. Diet–gut microbiota interactions are considered to be an important environmental trigger, but the precise mechanisms are unknown. As a model for IBD, I employed IL-10-/- mice, which are known to spontaneously develop IBD-like colitis under conventional conditions. Using our 14SM gnotobiotic mouse model, I showed that, in a genetically susceptible host, microbiota-mediated erosion of the mucus layer following dietary fiber deprivation is sufficient to induce lethal colitis. Furthermore, my results show that this effect was clearly dependent on the interaction of all three factors: microbiome, diet and genetic susceptibility. Leaving out any one of these factors eliminated the lethal phenotype. The novel findings arising from my PhD thesis will help the scientific community to enhance our understanding of the functional role of mucolytic bacteria and the GI mucus layer in shaping our health. Overall, given the reduced consumption of dietary fiber in industrialized countries compared to developing countries, my results have profound implications for potential treatment and prevention strategies that leverage diet to engineer the gut microbiome, especially in the context of personalized medicine.

Detailed reference viewed: 90 (6 UL)
Full Text
See detailSEMKIS: A CONTRIBUTION TO SOFTWARE ENGINEERING METHODOLOGIES FOR NEURAL NETWORK DEVELOPMENT
Jahic, Benjamin UL

Doctoral thesis (2022)


Today, there is a high demand for neural network-based software systems that support humans in their daily activities. Neural networks are computer programs that simulate the behaviour of simplified human brains. These neural networks can be deployed on various devices (e.g. cars, phones, medical devices) in many domains (e.g. the automotive industry, medicine). To meet the high demand, software engineers require methods and tools to engineer these software systems for their customers. Neural networks acquire their recognition skills (e.g. recognising voice or image content) from large datasets during a training process. Therefore, neural network engineering (NNE) should not only be about designing and implementing neural network models, but also about dataset engineering (DSE). In the literature, there are no software engineering methodologies supporting DSE with precise dataset selection criteria for improving neural networks. Most traditional approaches focus only on improving the neural network’s architecture or follow crafted approaches based on augmenting datasets with randomly gathered data. Moreover, they do not consider a comparative evaluation of the neural network’s recognition skills and the customer’s requirements for building appropriate datasets. In this thesis, we introduce a software engineering methodology (called SEMKIS), supported by a tool, for engineering datasets with precise data selection criteria to improve neural networks. Our method mainly considers the improvement of neural networks through the augmentation of datasets with synthetic data. SEMKIS has been designed as a rigorous iterative process for guiding software engineers during their neural network-based projects. The SEMKIS process is composed of many activities covering different development phases: requirements specification; dataset and neural network engineering; recognition skills specification; and dataset augmentation with synthesized data. We introduce the notion of key-properties, used throughout the process in cooperation with a customer, to describe the recognition skills. We define a domain-specific language (called the SEMKIS-DSL) for the specification of the requirements and recognition skills. The SEMKIS-DSL grammar has been designed to support a comparative evaluation of the customer’s requirements against the key-properties. We define a method for interpreting the specification and defining a dataset augmentation. Lastly, we apply the SEMKIS process to a complete case study on the recognition of a meter counter. Our experiment shows a successful application of our process to a concrete example.

Detailed reference viewed: 265 (31 UL)
Full Text
See detailARGUMENT MINING AND ITS APPLICATIONS IN POLITICAL DEBATES
Haddadan, Shohreh UL

Doctoral thesis (2022)


Presidential debates are significant moments in the history of presidential campaigns. In these debates, candidates are challenged to discuss the main contemporary and historical issues in the country and attempt to persuade voters in their favour. These debates offer legitimate ground for argumentative analysis to investigate the argument structure and strategy of political discourse. The recent advances in machine learning and Natural Language Processing (NLP) algorithms with the rise of deep learning have revolutionized many natural language applications, and argument analysis from textual resources is no exception. This dissertation targets argument mining from political debate data, a genre rife with arguments put forward by politicians to convince the general public to vote for them and to discourage them from being swayed by the other candidates. The main contributions of the thesis are: i) the creation, release and reliability assessment of a valuable resource for argumentation research; ii) the implementation of a complete argument mining pipeline applying cutting-edge technologies in NLP research; iii) the launch of a demo tool for the argumentative analysis of political debates. The original dataset is composed of the transcripts of 41 presidential election debates in the U.S. from 1960 to 2016. Besides argument extraction from political debates, this research also aims at investigating the practical applications of argument structure extraction, such as fallacious argument classification and argument retrieval. In order to apply supervised machine learning and NLP methods to the data, an extensive annotation study was conducted on the data, leading to the creation of a unique dataset with argument structures composed of argument components (i.e., claims and premises) and argument relations (i.e., support and attack). This dataset also includes another annotation layer with six fallacious argument categories and 14 sub-categories annotated on the debates. The final dataset is annotated with 32,296 argument components (i.e., 16,982 claims and 15,314 premises), 25,012 relations (i.e., 3,723 attacks and 21,289 supports), and 1,628 fallacious arguments. As the methodological approach, a complete argument mining pipeline is designed and implemented, composed of the two main stages of argument component detection and argument relation prediction. Each stage takes advantage of various NLP models outperforming standard baselines in the area, with an average F-score of 0.63 for argument component classification and 0.68 for argument relation classification. Additionally, DISPUTool, an online tool for argumentative analysis, is developed as a proof of concept. DISPUTool incorporates two main functionalities: firstly, it provides the possibility of exploring the arguments that exist in the dataset; secondly, it allows for extracting arguments from text segments inserted by the user, leveraging the embedded trained model.
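
To give a concrete picture of the first pipeline stage, the following minimal Python sketch classifies sentences as claim, premise or non-argumentative with a TF-IDF plus logistic-regression baseline. The thesis relies on stronger neural NLP models; this baseline and the toy sentences are purely illustrative.

    # Illustrative baseline for argument component classification (claim / premise / other).
    # Toy training sentences and labels; not from the thesis dataset.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_sentences = [
        "We must cut taxes to restore growth.",                 # claim
        "Unemployment fell by two points last year.",           # premise
        "Thank you, and good evening to our hosts.",            # non-argumentative
        "Our healthcare plan will lower costs.",                # claim
        "The deficit doubled under the last administration.",   # premise
        "Let me turn to the next question.",                    # non-argumentative
    ]
    train_labels = ["claim", "premise", "other", "claim", "premise", "other"]

    # word n-gram features feeding a linear classifier
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=1),
        LogisticRegression(max_iter=1000),
    )
    model.fit(train_sentences, train_labels)
    print(model.predict(["Crime has risen in every major city."]))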

Detailed reference viewed: 120 (7 UL)
Full Text
See detailAvoiding the Inappropriate: The European Commission and Sanctions under EU Fiscal Policy Coordination
Sacher, Martin UL

Doctoral thesis (2022)


Since the beginning of the European Economic and Monetary Union, fiscal non-compliance has been subject to the potential imposition of sanctions. However, the extent to which punitive action should be automatic – rather than political – is a point of constant discussion among European Union decision-makers. The most recent reform of the Stability and Growth Pact, in the aftermath of the European Sovereign Debt Crisis, has attempted to make sanctions more automatic and has created the possibility to trigger them at earlier stages of the surveillance procedure. With this in mind, the reform has enhanced the powers and autonomy of the European Commission in the application of the new rules. Despite the reinforcement of punitive provisions, the Commission has so far refrained from proposing the imposition of sanctions. Against this background, this thesis aims to answer the question of how we can best explain that the European Commission does not propose financial sanctions in response to Member State non-compliance with the Pact’s fiscal objectives. The thesis draws upon four post-crisis cases in which sanctions for fiscal non-compliance might have been imposed – Belgium in 2013, France in 2015, Portugal and Spain in 2016, and Italy in 2018. The thesis uses theory-testing process-tracing methods and applies an adaptation of normative institutionalism that takes into account strategic actor behaviour. Based on this theoretical and methodological framework, it is argued that the normative-strategic minimum enforcement mechanism explains the Commission’s behaviour. Given that the imposition of sanctions is perceived as inappropriate in the cases at hand, Commission actors strategically refrain from applying the enforcement provisions to their full extent.

Detailed reference viewed: 104 (8 UL)
Full Text
See detailMACHINE LEARNING IN THE DESIGN SPACE EXPLORATION OF TSN NETWORKS
Mai, Tieu Long UL

Doctoral thesis (2022)

Real-time systems are systems with specific timing requirements. They are critical systems that play an important role in modern societies, be it, for instance, control systems in factories or automobiles. In recent years, Ethernet has increasingly been adopted as the layer-2 protocol in real-time systems. Indeed, the adoption of Ethernet provides many benefits, including COTS and cost-effective components, high data rates and flexible topologies. The main drawback of Ethernet is that it does not offer out-of-the-box mechanisms to guarantee timing and reliability constraints. This is the reason why time-sensitive networking (TSN) mechanisms have been introduced to provide Quality of Service (QoS) on top of Ethernet and satisfy the requirements of real-time communication in critical systems. The promise of Ethernet TSN is the possibility to use a single network for different criticality levels, e.g., critical control traffic and infotainment traffic sharing the same network resources. This thesis is about the design of Ethernet TSN networks, and specifically about techniques that help quantify the extent to which a network can support current and future communication needs. The context of this work is the increasing use of design-space exploration (DSE) in industry to master the complexity of designing (e.g., in terms of architectural and technological choices) and configuring a TSN network. One of the main steps in DSE is performing schedulability analysis to conclude about the feasibility of a network configuration, i.e., whether all traffic streams satisfy their timing constraints. This step can take weeks of computation for a large set of candidate solutions with the simplest TSN mechanisms, while more complicated TSN mechanisms require even longer. This thesis explores the use of Artificial Intelligence (AI) techniques to assist in the design of TSN networks by speeding up the DSE. Specifically, the thesis proposes the use of machine learning (ML) as an alternative approach to schedulability analysis. The application of ML involves two steps. In the first step, ML algorithms are trained with a large set of TSN configurations labeled as feasible or non-feasible. Due to their pattern recognition ability, ML algorithms can then predict the feasibility of unseen configurations with good accuracy. Importantly, the execution time of an ML model is only a fraction of that of conventional schedulability analysis and remains constant regardless of the complexity of the network configuration. Several contributions make up the body of the thesis. In the first contribution, we observe that the topology and the traffic of a TSN network can be used to derive simple features that are relevant to the network's feasibility. Therefore, standard and simple ML algorithms such as k-Nearest Neighbors are used to take these features as inputs and predict the feasibility of TSN networks. This study suggests that ML algorithms can provide a viable alternative to conventional schedulability analysis due to their fast execution time and high prediction accuracy. A hybrid approach combining ML and schedulability analyses is also introduced to control the prediction uncertainty. In subsequent studies, we aim at further automating the feasibility prediction of TSN networks with the Graph Neural Network (GNN) model. The GNN takes as input the raw data from the TSN configurations and encodes them as graphs. Synthetic features are generated by the GNN, thus eliminating the manual feature-selection step. More importantly, the GNN model can generalize to a wide range of topologies and traffic patterns, in contrast to the standard ML algorithms tested before, which only work with a fixed topology. An ensemble of individual GNN models shows high prediction accuracy on many test cases containing realistic automotive topologies. We also explore possibilities to improve the performance of the GNN with more advanced deep learning techniques. In particular, semi-supervised learning and self-supervised learning are experimented with. Although these learning paradigms provide modest improvements, we consider them promising techniques due to their ability to leverage the massive amount of unlabeled training data. While this thesis focuses on the feasibility prediction of TSN configurations, AI techniques have great potential to automate other tasks in real-time systems. A natural follow-up to this thesis is to apply GNN models to multiple TSN mechanisms and predict which mechanism can provide the best scheduling solution for a given configuration. Although distinct ML models are needed for each TSN mechanism, this research direction is promising, as TSN mechanisms may share similar feasibility features, so transfer learning techniques can be applied to facilitate the training process. Furthermore, GNNs can be used as a core block in deep reinforcement learning to find a feasible priority assignment for TSN configurations. This thesis aims to make a contribution towards DSE of TSN networks with AI.
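To illustrate the feature-based feasibility prediction described above, the sketch below trains a k-Nearest Neighbors classifier on a few hand-crafted configuration features. The feature names and values are invented for illustration; the thesis derives its features from real topologies and traffic sets, and its models and datasets are far larger.

```python
# Minimal sketch: predicting TSN configuration feasibility with k-NN.
# Illustrative only: features and labels are invented placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row: [number of streams, max link utilisation, max hop count, min deadline (ms)]
X = np.array([
    [ 50, 0.35, 4, 10.0],
    [120, 0.80, 6,  2.0],
    [ 80, 0.55, 5,  5.0],
    [200, 0.95, 7,  1.0],
])
y = np.array([1, 0, 1, 0])  # 1 = feasible (all deadlines met), 0 = non-feasible

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)

# Predicted feasibility of an unseen candidate configuration; in a hybrid scheme,
# low-confidence predictions would be handed over to exact schedulability analysis.
candidate = np.array([[100, 0.60, 5, 4.0]])
print(knn.predict(candidate), knn.predict_proba(candidate))
```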

Detailed reference viewed: 249 (8 UL)
Full Text
See detailOptical measurement under space conditions
Bremer, Mats UL

Doctoral thesis (2022)

Interest in space by governmental and private institutions has increased significantly in recent years. Quality control plays an extremely important role in space travel, as possible defects can cause enormous damage. The present work deals with a possible method to improve existing quality control procedures for space flight. With the help of a 3D scanner, different components are measured and evaluated under space conditions. In particular, the linear thermal expansions are analyzed. The work has shown that the developed procedure works for metallic materials. For composites and joints between different materials, promising approaches were demonstrated, but these could not be validated within the scope of this work. Components made of pure carbon fiber material cannot be evaluated with the technical equipment used.
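For context, linear thermal expansion analyses of the kind mentioned above presumably rest on the standard expansion relation; the notation below is generic and not quoted from the thesis.

```latex
% Standard linear thermal expansion relation (assumed background, not from the thesis):
% a component of initial length L_0 subjected to a temperature change \Delta T expands by
\Delta L = \alpha \, L_0 \, \Delta T ,
% so comparing scanned geometries at two temperatures yields an estimate of the
% expansion coefficient \alpha \approx \Delta L / (L_0 \, \Delta T).
```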

Detailed reference viewed: 72 (5 UL)
Full Text
See detailDiscussing the Past: The Production of Historical Knowledge on Wikipedia
Apostolopoulos, Petros UL

Doctoral thesis (2022)

This dissertation investigates how historical knowledge is produced in one of the most central digital communities of knowledge, Wikipedia. In 2001, the American Internet entrepreneur Jimmy Wales founded the online encyclopedia, its main concept being that “anyone can edit any page at any time.” This concept also allowed Wikipedia to function as a common and public space for personal reflection. Wikipedia provides this opportunity through the portal of “talk,” as each Wikipedia entry has its own “talk” area. This study explores how historical knowledge is produced on Wikipedia. The project is based on multiple methodologies, ranging from qualitative analysis of history-related Wikipedia pages and a survey of Wikipedia editors to quantitative analysis of participatory practices within the Wikipedia community. The main argument is that Wikipedia allows people to discuss the past and to express their opinions and emotions about history and its significance in the present and the future through the portal of “talk” that Wikipedia provides. Wikipedia offers a public and digital space for personal engagement and reflection on the production of historical knowledge. Wikipedia users develop multiple relations with the past, take part in discussions and debates about history and its representation, and in that way produce historical knowledge. This does not mean that all Wikipedia users have the same role and power in the production of historical knowledge. Historical knowledge is not just a product of collaboration and public discussion but also a result of hierarchy and power. That explains why so much discussion behind the main articles leads to so little editing. Wikipedia allows all its users to discuss the editing process of a Wikipedia article and express their own historical understandings on the article's “talk page,” but only a few of them, the most experienced editors, can make their contributions part of the main entry.

Detailed reference viewed: 48 (2 UL)
Full Text
See detailEffects of an e-learning platform to improve primary care physicians’ response to domestic violence
Gomez Bravo, Raquel UL

Doctoral thesis (2022)

Family Violence (FV) is a broad term that includes different types of violence and abuse that occur within the family, such as domestic violence (DV) or Intimate Partner Violence (IPV), Child Abuse or neglect (CA), Elder Abuse (EA) and Female Genital Mutilation (FGM), inter alia. The most prevalent is DV or IPV, declared to be one of the most serious human rights violations even prior to the aggravated situation brought about by COVID-19 in 2020. Margaret Chan, the World Health Organization (WHO) Director-General in 2013, called it a global public health issue of epidemic proportions, but COVID-19 has transformed it into a pandemic as well, earning it the dubious distinction of being the “shadow pandemic.” Governments’ responses to stop the spread of the infection have forced many families to stay at home, triggering or aggravating cases of IPV and abuse. The prevalence of IPV around the world has been estimated at 30%, although this percentage varies depending on the region or country, ranging from 20% in the Western Pacific to 33% in the WHO South-East Asia region. Unfortunately, the United Nations Population Fund (UNFPA) predicts that IPV will increase by 20% during the pandemic. Despite the epidemic proportions of this healthcare problem and its enormous consequences at all levels (social well-being, physical and mental health), not only for individuals but also for their families, IPV remains largely underdiagnosed. Although victims tend to use healthcare services more and trust healthcare professionals (HCPs) enough to disclose abuse, they do not do so unless professionals specifically ask. Nevertheless, one of the most common barriers that prevent HCPs from enquiring is that they do not feel adequately trained to tackle it. Although research on the effectiveness of IPV training for HCPs suggests that it improves their knowledge, attitudes, self-perceived readiness to approach this problem, and actual response, this topic has not yet been formally included in curricula. The WHO and the National Institute for Health have published guidelines for health services and recommendations to facilitate the development and implementation of effective training, underlining the need to improve HCP education. The overall objectives of this thesis are to describe current training provisions on FV in the European Region (FAVICUE) and to investigate the effects of a digital education intervention in improving primary care physicians’ response to DV (E-DOVER).

Detailed reference viewed: 36 (7 UL)
See detailClient Protection in Digital Financial Services: A Comparative Legal Analysis of Deposit-Based Lending and Peer-to-Peer Lending in the European Union and Indonesia
Dewi, Tsany Ratna UL

Doctoral thesis (2022)

As the Digital Financial Services (DFS)-based lending sector has gained unprecedented importance, demands for client protection have risen. Despite the growing role of DFS-based lending in both emerging and advanced economies, financial law researchers have paid little attention to assessing the risks of DFS-based lending, particularly those connected with violations of client rights and welfare. Fiduciary risk, insolvency risk, information risk and technology risk, and their respective legal mitigation, are the focus of this dissertation. This doctoral research examines how regulation should cope with the risks that have occurred in various fintech lending sectors in different institutional and cultural contexts. The work analyzes which risks are effectively mitigated by existing regulations, which gaps may exist in the client protection frameworks of either of the two regions, which specific regulation may improve client protection, and whether DFS-based lending providers in regions with a particular regulatory framework serve their clients better.

Detailed reference viewed: 34 (5 UL)
Full Text
See detailTHERMODYNAMICS OF CHEMICAL ENGINES: A CHEMICAL REACTION NETWORK APPROACH
Penocchio, Emanuele UL

Doctoral thesis (2022)

Chemical processes in closed systems inevitably relax to equilibrium. Energy can be employed to counteract this tendency and drive reactions against their spontaneous direction. This nonequilibrium driving is implemented in open systems, of which living organisms provide the most spectacular examples. In recent years, experiments in supramolecular chemistry, photochemistry and electrochemistry demonstrated that, by opening synthetic systems to matter and/or energy exchanges with the environment, artificial systems with life-like behaviours can be realized and used to convert energy inputs of different nature into work at both the nanoscopic and the macroscopic level. However, a firm grasp of the thermodynamics of these chemical engines is still lacking. In this thesis, we provide it by leveraging the most recent developments in the thermodynamic description of deterministic chemical reaction networks. As main theoretical results, we extend the current theory to encompass nonideal and light-driven systems, thus providing the fundamental tools to treat electrochemical and photochemical systems in addition to chemically driven ones. We also expand the scope of information thermodynamics to bipartite chemical reaction networks characterized by macroscopic non-normalized concentration distributions evolving in time with nonlinear dynamics. This framework potentially applies to almost every synthetic chemical engine realized until now, and to many models of biological systems too. Here, we undertake the thermodynamic analysis of some paradigmatic examples in the field of artificial chemical engines: a model of chemically driven self-assembly, an experimental chemically driven molecular motor, and an experimental photochemical bimolecular pump. The thesis provides a thermodynamic level of understanding of chemical engines that is general, complements previous analyses based on kinetics and stochastic thermodynamics, and has practical implications for designing and improving synthetic systems, regardless of the particular type of powering or chemical structure.
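For orientation, analyses of this kind typically build on the standard thermodynamic formulation of deterministic chemical reaction networks sketched below; the notation is generic and not taken from the thesis.

```latex
% Deterministic chemical reaction network: species concentrations z evolve under the
% stoichiometric matrix S and net reaction fluxes j(z) = j^{+}(z) - j^{-}(z):
\frac{\mathrm{d} z}{\mathrm{d} t} = S \, j(z) .
% For mass-action kinetics the entropy production rate is non-negative,
T \,\dot{\Sigma} \;=\; R T \sum_{\rho} \bigl( j^{+}_{\rho} - j^{-}_{\rho} \bigr)
\ln \frac{j^{+}_{\rho}}{j^{-}_{\rho}} \;\geq\; 0 ,
% and vanishes only at equilibrium; chemostats or light keep it strictly positive
% in the driven open systems that such chemical engines exemplify.
```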

Detailed reference viewed: 99 (9 UL)
Full Text
See detailMicrobiome reservoirs of antimicrobial resistance
de Nies, Laura UL

Doctoral thesis (2022)

Antimicrobial resistance (AMR) presents a global threat to public health due to the inability to comprehensively treat bacterial infections. Emerging resistant bacteria residing within human, animal and environmental reservoirs may spread from one to the other, at both local and global levels. Consequently, AMR has the potential to rapidly become pandemic, no longer constrained by either geographical or human-animal borders. Therefore, to enhance our understanding of the dissemination of AMR, we systematically resolved different reservoirs of antimicrobial resistance, leveraging animal, environmental and human samples, to provide a One Health perspective. To identify antimicrobial resistance genes (ARGs) and compare their identity and prevalence across different microbial reservoirs, we developed the PathoFact pipeline, which also contextualizes ARG localization on mobile genetic elements (MGEs). This methodology was applied to several metagenomic datasets covering microbiomes of infants, laboratory mice, a wastewater treatment plant (WWTP) and biofilms from glacier-fed streams (GFS). Investigating the infant gut resistome, we found that the abundance of ARGs against (semi-)synthetic agents was increased in infants born via caesarean section compared to those born via vaginal delivery. Additionally, we identified MGEs encoding ARGs such as glycopeptide, diaminopyrimidine and multidrug resistance at an early age. MGEs are often pivotal in the accumulation and dissemination of AMR within a microbial population. Therefore, we assessed the effect of selective pressure on the evolution and subsequent dissemination of AMR within the commensal gut microbiome, utilizing a mouse model. While plasmids and phages were found to contribute to the spread of AMR, integrons represented the primary factors mediating AMR in the antibiotic-treated mice. In addition to the above-described studies, we investigated the environmental resistome, comprising both an urban environment, i.e., the WWTP, and a natural environment, the GFS biofilms. Utilizing a multi-omics approach, we investigated the WWTP resistome over a 1.5-year time series and found that a core group of fifteen AMR categories was always present. Additionally, we found a significant difference between AMR categories encoded on phages and those on plasmids, indicating that these MGEs contributed differentially to the dissemination of AMR. The GFS biofilms, on the other hand, represent pristine environments with limited anthropogenic influences. There, we found that eukaryotes, as well as prokaryotes, may serve as AMR reservoirs owing to their potential for encoding ARGs. In addition to our identification of biosynthetic gene clusters encoding antibacterial secondary metabolites, our findings highlight the constant intra- and inter-domain competition and the underlying mechanisms influencing microbial survival in GFS epilithic biofilms. In general, we observed that overall AMR abundances were highest in the human and animal microbial reservoirs, while the environmental reservoirs demonstrated a higher diversity of ARG subtypes. Additionally, we identified human-associated, MGE-derived ARGs in all three components of the One Health triad, indicating possible transmission routes for AMR dissemination. In summary, this work provides a comprehensive assessment of the prevalence of antimicrobial resistance and its dissemination mechanisms in human, animal and environmental reservoirs.
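As a rough illustration of contextualizing ARGs on mobile genetic elements, the sketch below joins two invented per-contig annotation tables. It is not the PathoFact implementation; all column names and rows are placeholders.

```python
# Minimal sketch: joining ARG and MGE annotations per assembled contig.
# Illustrative only; not the actual PathoFact pipeline or its output format.
import pandas as pd

args = pd.DataFrame({
    "contig": ["c1", "c2", "c3"],
    "arg_category": ["glycopeptide", "multidrug", "diaminopyrimidine"],
})
mges = pd.DataFrame({
    "contig": ["c1", "c3"],
    "mge_type": ["plasmid", "phage"],
})

# ARGs without a matching MGE are treated as chromosomally encoded in this toy example.
localised = args.merge(mges, on="contig", how="left").fillna({"mge_type": "chromosome"})
print(localised.groupby(["mge_type", "arg_category"]).size())
```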

Detailed reference viewed: 147 (19 UL)
See detailOlder people's use of assistive technologies and alternative means to cope with health-related declines
Abri, Diana

Doctoral thesis (2022)

Health-related declines are risk factors for the autonomous living and quality of life of older people. Yet, older people have various action possibilities (e.g., use of technical and personal support, health services, environmental adaptation) which may reduce these risks. Unfortunately, the use rates of these options are far from perfect, so that their positive effects are not sufficiently realized. With this in mind, the main goal of this dissertation was to gain a better understanding of the factors that influence older people's use of the various action possibilities to cope with illness, functional decline, activity limitations, and participation restrictions. Five studies using different but complementary methods pursued this goal. Study I provided a systematic review of 12 empirical studies on the effectiveness and use of self-care assistive technologies (ATs). It found that self-care ATs are effective with respect to reduced care hours and increased independence. The use of such technologies was associated with three kinds of factors: personal, contextual, and technological device factors. Study II conducted a systematic review of 23 theoretical models of AT use from an action-theoretical perspective on lifespan development. It revealed that these models considered a limited range of internal and external context factors of AT use. None of them included perceived discrepancies between the actual and desired developmental situation, goals to reduce these discrepancies, action alternatives to AT use, or decision-making about AT use by other persons or joint decision-making. Study III contained a qualitative meta-synthesis of 7 qualitative studies on subjective reasons for the use or non-use of ATs. It thus considered a branch of research independent of those covered by Studies I and II, and it used a different method. It found 25 subjective reasons referring to users' beliefs and desires, of which 18 were not contained in AT use models. However, they could be included in more comprehensive models to increase their predictive value. Study IV focused on the construction of an "Actional Model of Older People's Coping with Health-Related Declines" for explaining the use of 8 major action possibilities (7 beyond ATs). Its development followed a recent theory construction methodology and recent principles for constructing a practically useful theory. It integrated results from Studies I, II and III as well as from other relevant literature. Central explanatory variables are perceived discrepancies between actual and desired development, discrepancy reduction and prevention goals, and internal as well as external context factors. Outcome variables are the 8 courses of action and their results. Study V examined the views of experts (professional caregivers of older people) regarding central components of the Actional Model of Older People's Coping with Health-Related Declines developed in Study IV. Theory-generating expert interviews were conducted to further clarify key components of the model. The results led to their further specification, such as the contents of discrepancy reduction and prevention goals, further motivating and demotivating goals, and external context factors acting as barriers to and facilitators of their use. The findings of the dissertation are discussed with respect to the advancement of empirical, theoretical and methodological knowledge. Implications for future research and for the improvement of practical applications in gerontological case management and developmental counselling are highlighted.

Detailed reference viewed: 49 (6 UL)
Full Text
See detailANALYSIS OF NEURODEVELOPMENTAL DEFECTS IN HUMAN MIDBRAIN ORGANOIDS FROM GBA-N370S PARKINSON'S DISEASE PATIENTS
Rosety, Isabel UL

Doctoral thesis (2022)

With increasing prevalence, Parkinson’s disease presents a major challenge for medical research and public health. Despite years of investigation, significant knowledge gaps exist and Parkinson’s disease (PD) etiology remains unclear. A recent concept in the field is that neurodevelopmental aspects might contribute to the pathogenesis of neurodegenerative diseases such as PD. Our hypothesis is that mutations in PD-linked genes have an impact on the cells’ homeostasis at the neural precursor stage, giving rise to vulnerable dopaminergic (DA) neurons and thereby increasing the degree of susceptibility to neurodegeneration with aging. In order to investigate this, we used a human midbrain organoid (hMO) model generated from iPSC-derived neural precursor cells. As part of the optimization of the model, we treated the organoids with the neurotoxin 6-OHDA to develop a neurotoxin-induced PD model and set up a high-content imaging pipeline coupled with machine learning classification to predict neurotoxicity. We then used these tools on PD patient-derived hMOs in order to investigate our main hypothesis. First, we focused on PD patients carrying a heterozygous mutation in the GBA gene. We developed a genome-scale metabolic model that predicted significant differences in lipid metabolism between patients and controls. We then validated these observations by performing a comprehensive lipidomics analysis, confirming a dysregulated lipidome in mutant hMOs. Moreover, GBA-PD hMOs displayed PD-relevant phenotypes, impaired DA differentiation and an increased population of neural progenitor cells (NPCs) in cell cycle arrest, confirming the presence of neurodevelopmental defects. To further investigate the neurodevelopmental component of PD, we used patient-derived cell lines carrying PINK1 mutations. PINK1-PD neural precursors presented differences in their energetic profile, imbalanced proliferation, apoptosis and mitophagy, and an impaired differentiation efficiency to DA neurons compared to controls. Correction of the PINK1 point mutation improved the metabolic properties and neuronal firing rates as well as rescuing the differentiation phenotype. We performed a drug screen using repurposed drugs as well as novel compounds to evaluate their potential to rescue the observed developmental phenotype. Treatment with 2-hydroxypropyl-β-cyclodextrin increased the autophagy and mitophagy capacity of neurons, which was accompanied by improved dopaminergic differentiation of patient-specific neurons in midbrain organoids, and showed neuroprotective effects in an MPTP-treated mouse PD model. In conclusion, PD has a neurodevelopmental component that increases susceptibility to the pathology. Thus, our findings suggest that hMOs are suitable for revealing early PD pathomechanisms and constitute a powerful tool for advanced therapy development.
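To illustrate the kind of machine learning classification coupled to high-content imaging mentioned above, the sketch below trains a feature-based classifier on invented per-organoid image readouts. The feature names, values and labels are placeholders; the thesis' actual pipeline and features may differ substantially.

```python
# Minimal sketch: neurotoxicity prediction from image-derived organoid features.
# Illustrative only; features and labels are invented placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [TH+ area fraction, nuclei count, mean neurite length (a.u.)]
X = np.array([
    [0.12, 950, 34.0],
    [0.03, 400, 11.0],
    [0.10, 880, 30.5],
    [0.02, 350,  9.0],
])
y = np.array([0, 1, 0, 1])  # 1 = 6-OHDA-treated (neurotoxic condition), 0 = control

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
print(clf.predict([[0.05, 500, 15.0]]))
```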

Detailed reference viewed: 55 (5 UL)
Full Text
See detailA posteriori error estimation for finite element approximations of fractional Laplacian problems and applications to poro–elasticity
Bulle, Raphaël UL

Doctoral thesis (2022)

This manuscript is concerned with a posteriori error estimation for the finite element discretization of standard and fractional partial differential equations, as well as an application of fractional calculus to the modeling of the human meniscus by poro-elasticity equations. In the introduction, we give an overview of the literature on a posteriori error estimation in finite element methods and on adaptive refinement methods. We emphasize the state of the art of the Bank–Weiser a posteriori error estimation method and of convergence results for adaptive refinement methods. Then, we move to fractional partial differential equations. We present some of the most common discretization methods for equations based on the fractional Laplacian operator. We review some results on a priori error estimation for the finite element discretization of these equations and give the state of the art of a posteriori error estimation. Finally, we review the literature on the use of Caputo's fractional derivative in applications, focusing on anomalous diffusion and poro-elasticity. The rest of the manuscript is organized as follows. Chapter 1 is concerned with a proof of the reliability of the Bank–Weiser estimator for three-dimensional problems, extending a result from the literature. In Chapter 2 we present a numerical study of the Bank–Weiser estimator, provide a novel implementation of the estimator in the FEniCS finite element software and apply it to a variety of elliptic equations as well as to goal-oriented error estimation. In Chapter 3 we derive a novel a posteriori estimator for the L2 error induced by the finite element discretization of equations based on the fractional Laplacian operator. In Chapter 4 we present new theoretical results on the convergence of a rational approximation method, with consequences for the approximation of fractional norms, as well as a priori error estimation results for the finite element discretization of fractional equations. Finally, in Chapter 5 we provide an application of fractional calculus to the study of the human meniscus via poro-elasticity equations.
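As background for the fractional problems discussed above, the spectral definition of the fractional Laplacian on a bounded domain is sketched below; the notation is generic and not necessarily the one used in the thesis.

```latex
% Spectral fractional Laplacian on a bounded domain \Omega (assumed standard background):
% with Dirichlet eigenpairs (\lambda_k, \varphi_k) of -\Delta,
(-\Delta)^{s} u \;=\; \sum_{k \ge 1} \lambda_k^{\,s} \, (u, \varphi_k)_{L^2(\Omega)} \, \varphi_k ,
\qquad 0 < s < 1 .
% Rational approximation methods replace \lambda^{s} by a rational function of \lambda,
% turning the fractional problem into a weighted sum of standard (integer-order)
% elliptic solves that a finite element method can handle.
```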

Detailed reference viewed: 140 (15 UL)