Sikk, Kaarel. Doctoral thesis (2023)

Despite a research history spanning more than a century, settlement patterns still hold a promise to contribute to theories of large-scale processes in human history. Mostly, they have been presented as passive imprints of past human activities, and the spatial interactions they shape have not been studied as a driving force of historical processes. While archaeological knowledge has been used to construct geographical theories of the evolution of settlement, gaps in this knowledge remain, and no theoretical framework has yet been adopted to explore settlement patterns as spatial systems emerging from the micro-choices of small population units. The goal of this thesis is to propose a conceptual model of adaptive settlement systems based on the complex adaptive systems framework. The model frames settlement system formation as an adaptive system containing spatial features, information flows and decision-making population units (agents), with cross-scale feedback loops forming between the location choices of individuals and the space modified by their aggregated choices. The aim of the model is to find new ways of interpreting archaeological locational data, as well as a closer theoretical integration of micro-level choices and meso-level settlement structures. The thesis is divided into five chapters. The first chapter is dedicated to the conceptualisation of the general model based on the existing literature, and shows that settlement systems are inherently complex adaptive systems and therefore require the tools of complexity science for causal explanations. The following chapters explore both empirical and simulated settlement patterns, each dedicated to studying selected information flows and feedbacks in the context of the whole system. The second and third chapters explore the case study of Stone Age settlement in Estonia, comparing residential location choice principles across different periods. In chapter 2, the relation between environmental conditions and residential choice is explored statistically. The results confirm that the relation is significant but varies between different archaeological phenomena. In the third chapter, hunter-fisher-gatherer and early agrarian Corded Ware settlement systems are compared spatially using inductive models. The results indicate a large difference in their perception of the landscape regarding suitability for habitation, leading to the conclusion that early agrarian land use significantly extended land-use potential and provided a competitive spatial benefit. In addition to spatial differences, model performance is compared and the difference is discussed in the context of the proposed adaptive settlement system model. The last two chapters present theoretical agent-based simulation experiments intended to study the effects discussed in relation to environmental model performance and environmental determinism in general. In the fourth chapter, the central place foraging model is embedded in the proposed model and resource depletion, as an environmental modification mechanism, is explored. The study excluded the possibility that mobility itself would lead to the modelling effects discussed in the previous chapter.
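To give a flavour of the mechanism explored in the fourth chapter, the following minimal sketch implements a central place foraging loop with resource depletion (a purely illustrative toy, not the thesis model; the grid size, depletion rate and relocation rule are invented):

```python
import random

# Toy central place foraging loop: an agent forages around a camp,
# depletes cell resources, and relocates when local returns drop.
random.seed(1)
SIZE, RADIUS, MOVE_THRESHOLD = 20, 3, 0.2
resources = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]
camp = (SIZE // 2, SIZE // 2)

def neighbourhood(pos):
    x0, y0 = pos
    return [(x, y) for x in range(SIZE) for y in range(SIZE)
            if abs(x - x0) <= RADIUS and abs(y - y0) <= RADIUS]

for step in range(200):
    cells = neighbourhood(camp)
    best = max(cells, key=lambda c: resources[c[0]][c[1]])
    gained = resources[best[0]][best[1]]
    resources[best[0]][best[1]] *= 0.5          # depletion: foraging halves the cell
    if gained < MOVE_THRESHOLD:                 # local depletion triggers relocation
        camp = max(((x, y) for x in range(SIZE) for y in range(SIZE)),
                   key=lambda c: resources[c[0]][c[1]])

print("final camp:", camp)
```

The cross-scale feedback appears because each relocation responds to a landscape already modified by the agent's own aggregated past choices.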
The purpose of the last chapter is to disentangle the complex relations between social and human-environment interactions. The study exposed the non-linear spatial effects that expected population density can have on the system, and the general robustness of environmental inductive models in archaeology to randomness and social effects. The model indicates that social interactions between individuals lead to the formation of a group agency which is determined by the environment, even if individual cognitions consider the environment insignificant. It also indicates that the spatial configuration of the environment has a certain influence on population clustering, therefore providing a potential pathway to population aggregation. These empirical and theoretical results show the new insights provided by the complex adaptive systems framework. Some of the results, including the explanation of the empirical results, required the conceptual model to provide a framework of interpretation.

Fernandez de Henestrosa, Martha. Doctoral thesis (2023)

The experience of work-related stress and ill-health is of major concern for employers and organizations in the European Union (EU-OSHA, 2022). From an occupational health perspective, work-related stress is expected to result from employees’ exposure to certain psychological and social characteristics of work, so-called job demands, which are presumed to lead to diminished health and performance among employees (Eurofound & EU-OSHA, 2014). To explain associations between such job characteristics and employees’ health, scholars have generally relied on prominent theoretical frameworks, such as the Job Demands-Resources model (JD-R; Bakker & Demerouti, 2017). Although the JD-R model has successfully contributed to the prediction of work-related health and motivational outcomes in the past years, it has also left a number of unresolved issues (Bakker & Demerouti, 2017). Drawing on multiple theoretical frameworks, the present dissertation examined in greater detail inconsistencies related to the nature and functioning of job demands, and in doing so aimed to provide a further understanding of work-related demands and their psychological effects. The present thesis encompasses three published articles and addresses open research avenues with regard to (i) the categorization of job demands, (ii) the cognitive appraisal of job demands, and (iii) organizational determinants of demand appraisal. A review of the occupational health literature revealed that most research conducted on the categorization of work-related demands continues to apply a twofold differentiation of job demands (i.e., the Challenge-Hindrance framework; Cavanaugh et al., 2000). However, a more differentiated approach to distinguishing between different types of job demands has recently been introduced (i.e., the Challenge-Hindrance-Threat framework; Tuckey et al., 2015). Few studies have been conducted on Tuckey et al.'s (2015) extended dimensionality of workplace stressors (e.g., Espedido & Searle, 2018), and not much is known about how job threats, job hindrances and job challenges relate to well-being outcomes once job resources (i.e., motivational aspects of the job) are taken into account.
Therefore, the first aim of the present dissertation (i.e., Article 1) was to examine Tuckey et al.'s (2015) expanded dimensionality of workplace stressors within the JD-R framework by analyzing job threats, job hindrances and job challenges alongside job resources. Results from a heterogeneous occupational sample of employees in Luxembourg supported the distinctiveness of Tuckey et al.'s (2015) threefold differentiation of job demands based on their associations with well-being outcomes, while accounting for the effects of job resources. Results further corroborated the health-impairing nature of job threats, job hindrances and job challenges, and supported the motivational nature of job resources. Contrary to expectations, job challenges did not relate to employees’ experiences of vigor. Prior research has often relied on classification frameworks to categorize job demands a priori and to explain their effects (Mellupe, 2020). However, a more recent stream of research has moved towards the examination of employees’ subjective evaluations (i.e., appraisals) of work-related demands (LePine, 2022). Although the first studies examining employees’ appraisal of job demands have yielded promising insights into the nature of work-related demands, research does not yet consider appraisal in a systematic manner (Li et al., 2019). In addition, scholars have exclusively considered primary appraisal (i.e., the motivational relevance of the stressor) when examining job demands and their associated effects, thereby ignoring the notion of secondary appraisal (i.e., individuals’ assessment of their capacity to cope with the stressor; Lazarus & Folkman, 1984; Podsakoff et al., 2023). To address these limitations, and taking into account previous findings on the well-being of nurses during the COVID-19 pandemic (e.g., Mo et al., 2020), the second aim of the present dissertation (i.e., Article 2) was to examine how nursing professionals appraised job demands during the health crisis, and to analyze the predictive contribution of nurses’ secondary appraisal of job demands in predicting their proximal affective responses. Results from a sample of nursing professionals working in Luxembourg indicated that secondary appraisal was the most important predictor of nurses’ affective states. In addition, negative affective states were predicted by threat appraisals and job demands (i.e., time pressure, emotional demands), whereas positive affect was predicted by challenge appraisals of emotional and physical demands. Results further showed that emotional and physical demands were exclusively appraised as threatening, whereas time pressure was associated with both challenge and threat appraisals. Lastly, a literature review revealed that not much is known about which organizational factors might contribute to employees’ demand appraisals (LePine, 2022). Therefore, the third aim of the present dissertation (i.e., Article 3) was to investigate matching job resources as determinants and possible boundary conditions of nurses’ demand appraisals. Results showed that corresponding job resources predicted challenge appraisals of job demands. Regarding the prediction of threat appraisal, with the exception of social support, all proposed job resources were significantly associated with nurses’ threat appraisal of the corresponding job demands. Contrary to expectations, job resources did not moderate the associations between matching job demands and their respective challenge/threat appraisals.
In sum, the findings of the present dissertation highlight the importance of (i) adopting a threefold understanding of job demands (i.e., challenges, hindrances, threats) while taking job resources into account, (ii) considering secondary appraisals alongside job demands and their primary appraisals, and (iii) considering matching job resources as organizational determinants of challenge and threat appraisals. These findings may serve to guide occupational health intervention strategies.

Fahmy, Hazem. Doctoral thesis (2023)

Garg, Aayush. Doctoral thesis (2023)

Software testing is a quality control activity that, in addition to finding flaws or bugs, provides confidence in the software’s correctness. The quality of the developed software depends on the strength of its test suite, and mutation testing has been shown to effectively guide improvements to that strength. Mutation is a test adequacy criterion in which test requirements are represented by mutants. Mutants are slight syntactic modifications of the original program that aim to introduce semantic deviations (from the original program), requiring testers to design tests that kill these mutants, i.e., that distinguish the observable behavior of a mutant from that of the original program. This process of designing tests to kill mutants is performed iteratively for the entire mutant set, which augments the test suite and hence improves its strength. Although mutation testing is empirically validated, a key issue is that its application is expensive due to the large number of low-utility mutants that it introduces. Some mutants cannot even be killed, as they are functionally equivalent to the original program. To reduce the application cost, it is imperative to limit the number of mutants to those that are actually useful. Since identifying such mutants requires manual analysis and test executions, there is a lack of an effective solution to the problem, and it remains unclear how to mutate and test code efficiently. On the other hand, with the advancement of deep learning, several recent works in the literature have focused on applying it to source code to automate many nontrivial tasks, including bug fixing, producing code comments, code completion, and program repair. The increasing utilization of deep learning is due to a combination of factors. The first is the vast availability of data to learn from, specifically source code in open-source repositories. The second is the availability of inexpensive hardware able to efficiently run deep learning infrastructures. The third, and the most compelling, is its ability to automatically learn the categorization of data by learning the code context through its hidden-layer architecture, making it especially proficient at identifying features. Thus, we explore the possibility of employing deep learning to identify only useful mutants, in order to achieve a good trade-off between the invested effort and test effectiveness. Hence, as our first contribution, this dissertation proposes Cerebro, a deep learning approach to statically select subsuming mutants based on the mutants’ surrounding code context.
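To make the notions of a mutant and of "killing" it concrete, here is a toy example (invented for illustration; not taken from the dissertation):

```python
def is_adult(age):            # original program
    return age >= 18

def is_adult_mutant(age):     # mutant: relational operator ">=" mutated to ">"
    return age > 18

def test_boundary():
    # This test kills the mutant: the original returns True for age 18,
    # while the mutant returns False, so the behaviours are distinguished.
    assert is_adult(18) is True

test_boundary()
print("original:", is_adult(18), "| mutant:", is_adult_mutant(18))
```

Designing such a boundary test in response to the mutant is exactly the kind of test-suite augmentation that mutation testing drives.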
As subsuming mutants reside at the top of the subsumption hierarchy, test cases designed to kill only this minimal subset of mutants kill all the remaining mutants. Our evaluation of Cerebro demonstrates that it preserves the benefits of mutation testing while limiting the application cost, i.e., reducing all cost factors such as equivalent mutants, mutant executions, and the mutants requiring analysis. Apart from improving test suite strength, mutation testing has been proven useful in inferring software specifications. Software specifications aim at describing the software’s intended behavior and can be used to distinguish correct from incorrect software behaviors. Specification inference techniques aim at inferring assertions by generating and filtering candidate assertions through dynamic test executions and mutation testing. Due to the large number of mutants introduced during mutation testing, such techniques are also computationally expensive, hence establishing a need for the selection of the mutants best suited to assertion inference. We refer to such mutants as Assertion Inferring Mutants. In our analysis, we find that assertion inferring mutants are significantly different from subsuming mutants. Thus, we explored the employability of deep learning to identify assertion inferring mutants. Hence, as our second contribution, this dissertation proposes Seeker, a deep learning approach to statically select Assertion Inferring Mutants. Our evaluation demonstrates that Seeker enables an assertion inference capability comparable to full mutation analysis while significantly limiting the execution cost. In addition to testing software in general, a few works in the literature attempt to employ mutation testing to tackle security-related issues, due to the fault-based nature of the technique. These works propose mutation operators that convert non-vulnerable code into vulnerable code by mimicking common security bugs. However, these pattern-based approaches have two major limitations. Firstly, the design of security-specific mutation operators is not trivial: it requires manual analysis and comprehension of the vulnerability classes. Secondly, these mutation operators can alter the program semantics in a manner that is not convincing for developers and is perceived as unrealistic, thereby hindering the usability of the method. On the other hand, with the release of powerful language models trained on large code corpora, e.g., CodeBERT, a new family of mutation testing tools has arisen with the promise of generating natural mutants. We study the extent to which the mutants produced by language models can semantically mimic the behavior of vulnerabilities, i.e., vulnerability-mimicking mutants. Test cases designed to fail on these mutants will also tackle the mimicked vulnerabilities. In our analysis, we found that only a very small subset of mutants is vulnerability-mimicking; however, this set mimics more than half of the vulnerabilities in our dataset. Due to the absence of any defined features to identify vulnerability-mimicking mutants, as our third contribution, this dissertation introduces Mystique, a deep learning approach that automatically extracts features to identify vulnerability-mimicking mutants. Despite their scarcity, Mystique predicts vulnerability-mimicking mutants with a high prediction performance, demonstrating that their features can be automatically learned by deep learning models to statically predict them, without the need to invest any effort in defining features.
Since our vulnerability-mimicking mutants cannot mimic all the vulnerabilities, we recognize that these mutants are not a complete representation of all vulnerabilities, and that there remains a need for actual vulnerability prediction approaches. Although many such approaches exist in the literature, their performance is limited by a few factors. Firstly, vulnerabilities are fewer in number than software bugs, limiting the information one can learn from and thus affecting prediction performance. Secondly, the existing approaches learn from both vulnerable and supposedly non-vulnerable components. This introduces unavoidable noise into the training data, i.e., components with no reported vulnerability are considered non-vulnerable during training, and hence results in existing approaches performing poorly. We employed deep learning to automatically capture features related to vulnerabilities and explored whether we can avoid learning from supposedly non-vulnerable components. Hence, as our final contribution, this dissertation proposes TROVON, a deep learning approach that learns only from components known to be vulnerable, thereby making no assumptions and bypassing the key problem faced by previous techniques. Our comparison of TROVON with existing techniques on security-critical open-source systems with historical vulnerabilities reported in the National Vulnerability Database (NVD) demonstrates that its prediction capability significantly outperforms the existing techniques.

Ünsal, Alper. Doctoral thesis (2023)

The overarching theme of this PhD thesis is human mobility and its externalities, particularly in the context of labour and health economics. Through rigorous modelling and analysis, the three chapters of the thesis demonstrate the potential benefits of policies that regulate human mobility. In the first chapter, I examine how language training can improve the functioning of the labour market, with a particular focus on immigrants with high skills who face language barriers. I argue that fully funding the cost of language acquisition for migrants can bring significant benefits to the economy and to migrants, but may marginally worsen the labour market performance of low-skilled natives. Using a search and matching framework with two-dimensional skill heterogeneity, I model the effects of a language acquisition subsidy on migrants' labour market integration and its impact on natives' labour market performance. My study finds that subsidizing language acquisition costs may increase the GDP of the German economy by approximately ten billion dollars, by decreasing the aggregate unemployment rate and skill mismatch rate and by increasing the share of job vacancies requiring high generic skills. The second chapter explores the challenges involved in devising social contact limitation policies as a means of controlling infectious disease transmission. Using an economic-epidemiological model of COVID-19 transmission, I evaluate the effectiveness of different intervention strategies and their consequences for public health, social welfare and economic outcomes.
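To give an idea of what such an economic-epidemiological model couples together, the following minimal SIR-type sketch links a contact-limitation lever to both infections and economic output (all parameter values are invented; the thesis model is far richer, covering welfare and mental well-being):

```python
# Minimal SIR model with a contact-limitation policy lever.
# All parameter values are invented for illustration.
def simulate(stringency, beta=0.3, gamma=0.1, days=365):
    s, i, r = 0.999, 0.001, 0.0
    output = 0.0
    for _ in range(days):
        contacts = 1.0 - stringency            # stricter policy -> fewer contacts
        new_inf = beta * contacts * s * i
        s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
        # Daily output shrinks with restrictions and with current sickness.
        output += (1.0 - 0.5 * stringency) * (1.0 - i)
    return r, output

for stringency in (0.0, 0.3, 0.6):
    attack_rate, econ = simulate(stringency)
    print(f"stringency={stringency:.1f}  ever infected={attack_rate:.1%}  output={econ:.0f}")
```

Even in this toy version the central trade-off is visible: tighter restrictions reduce total infections while directly lowering output.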
The findings emphasize the importance of responsiveness in implementing social contact limitations, rather than solely focusing on their stringency, and suggest that early interventions lead to the lowest losses in the economy and in mental well-being for a given number of life losses. The study has broader implications for managing the societal impact of infectious diseases, and highlights the need to continue refining our understanding of these trade-offs and developing adaptable models and policy tools to safeguard public health while minimizing social and economic consequences. Overall, the study offers a robust and versatile framework for understanding and navigating the challenges posed by public health crises and pandemics. The third chapter builds on the economic-epidemiological model developed in Chapter 2 to analyze the multifaceted effects of vaccine hesitancy on controlling the spread of infectious diseases, with a particular focus on the COVID-19 pandemic in Belgium. The study utilizes actual vaccination rates by age group until June 2021 and simulates the following months by incorporating realistic properties such as temporary immunity, age-specific vaccine hesitancy rates, daily vaccination capacity, and the vaccine efficacy rate. The baseline scenario, with an overall vaccine hesitancy rate of 27.1%, indicates that current vaccination rates in Belgium are sufficient to control the spread of COVID-19 without imposing social contact limitations. However, hypothetical scenarios with higher disease transmission rates demonstrate the high costs of vaccine hesitancy, resulting in significant losses in labour supply, mental well-being, and lives. Throughout this thesis, I have described the costs and benefits induced by mobility, and shown that mobility policies create winners and losers. In Chapter 1, subsidizing the cost of language acquisition for migrants can bring significant benefits to the economy and to migrants, but may marginally worsen the labour market performance of low-skilled natives. In Chapter 2, stringent policies alleviate health losses, but they impact economic activity and mental health. In Chapter 3, the health externalities generated by human interactions impose a potential trade-off between values, namely the freedom to move and the freedom to choose whether to get vaccinated. In each of these chapters, I quantify these trade-offs. Another important insight from this thesis is the need to incorporate behavioural aspects into macro models evaluating the consequences of policies related to human mobility; in this thesis, these aspects include individual investments in language training, decision-making on infection avoidance, social contacts, labour supply, and vaccination decisions. Accounting for such behaviour can lead to more effective policies that balance the interests of various stakeholders. Overall, this thesis contributes to the literature on human mobility by highlighting the potential benefits and challenges associated with it, and the need for nuanced and responsive policymaking that takes into account behavioural aspects and externalities. The insights gained from this thesis can be relevant for future research in economics on topics related to human mobility, public health, and labour market integration.
Valieva, Farida. Doctoral thesis (2023)

This dissertation starts with an overview of the recent and ongoing efforts to achieve greater convergence in national banking supervision within the European Single Supervisory Mechanism (SSM). However, the persistence of distinct national preferences on banking supervision has resulted in ongoing differences in the practice of banking supervision at the national level. More specifically, the supervision of Less Significant Institutions (LSIs) has remained under the direct control of national supervisors and, to a certain extent, under national law, thus leaving a significant ongoing margin of manoeuvre in supervision. This dissertation examines the consequences of this margin of manoeuvre left to national supervisors, despite strong convergence pressures through post-financial-crisis EU institutional developments. The analysis focuses on the national supervision of LSIs. The main research question guiding this work is, therefore: under what conditions do pre-existing national institutional configurations continue to determine the trajectory of national supervisory practice in the context of European-level convergence pressures (through the European Banking Authority and the SSM)? To answer this question, I use a four-part analytical framework based on, first, Europeanisation, which provides insight into top-down processes of integration; second, Historical Institutionalism, which provides an understanding of path dependency from earlier policy decisions shaping national supervisory institutions and practice; third, the Epistemic Communities approach; and fourth, the Transnational Policy Network framework. Based on this combined analytical framework, I formulate the following hypothesis: the more discretion is exercised by the national supervisor in relation to its government, the more likely the adoption of policies and practices that result in greater convergence with the rules and practices developed at the EU / Banking Union level. To test this hypothesis, I start with a broad assessment of the provisions that provide a margin of manoeuvre to national authorities, specifically the options and national discretions (ONDs) explicitly granted to national authorities — member state governments or supervisors — in EU capital requirements legislation: the CRD IV/V and CRR I/II. This assessment provides an initial confirmation of my hypothesis, showing a greater degree of convergence in cases where national supervisors benefit from full discretion with no intervention from national governments. I then test the hypothesis on a typical case where NCAs can exercise discretion, the Supervisory Review and Evaluation Process (SREP), and a typical case with national government intervention that limits supervisory discretion, Non-Performing Loans (NPLs). Through an analysis of the French and German national cases with regard to SREP and NPLs, I conclude that the convergence of prudential supervision within the SSM was largely observed in cases where the national supervisor benefited from discretion, as a result of cooperation opportunities and socialisation processes.
Pavlikova, Polina. Doctoral thesis (2023)

This thesis analyses the texts of authors who developed the capacity to regularly apply different systems of expression in their creative writing/art. These writers/artists generally experiment with poetic/prosaic or verbal/visual forms, or with several languages. One of the artistic results that they can produce, consciously or unconsciously, is a “twin text,” the object of the current study. The objective of the present research is to establish the concept of twin texts as a cultural supranational phenomenon, propose a method to identify and study them, and suggest reasons why an author creates them systematically. Twin texts are two or more “texts” (verbal and/or visual) that are linked to each other on the thematic level by repeating their images or plots, or by showing the same characters, often in a contradictory way. Twin texts can be the result of a non-monolingual situation and a non-identified artistic position of an author. For these reasons, we find twin texts among émigré and translingual writers, “non-professionals” (prose writers creating poems or artists applying verbal forms of expression, and vice versa), and other writers/artists whose identity can be defined as “multiple”. By elaborating a series of images, the author of twin texts is placed in a situation described by Mikhail Bakhtin as polyphonic. By standing above the meanings his/her works contain, the author of twin texts establishes a space for dialogue; in other words, the writer/artist is not above the text, but above all the texts, the “avant-texte”, to borrow this term from genetic criticism.

Choudhury, Diptaishik. Doctoral thesis (2023)

Duflo, Gabriel Valentin. Doctoral thesis (2023)

The paradigm of learning to optimise relies on the following principle: instead of designing an algorithm to solve a problem, we design an algorithm which automates the design of such a solver. The initial idea was to alleviate the limitations stated by the No Free Lunch theorem by producing an algorithm whose efficiency is less dependent on known instances of the problem to tackle. Hyper-heuristics constitute the main learning-to-optimise techniques. These rely on a high-level algorithm performing a search over a space of low-level heuristics to tackle a given problem. Because the latter search space is problem-dependent, the vast majority of hyper-heuristics are designed to tackle a specific problem. Due to this lack of generality, existing works fully redesign hyper-heuristics when tackling a new problem, despite the fact that they may share a similar structure. In this dissertation, we tackle this challenge by proposing a generic way of learning to optimise any problem; a minimal sketch of the hyper-heuristic principle just described follows below.
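The sketch below is a toy selection hyper-heuristic, not ALGO itself: the high-level loop learns scores for two low-level permutation heuristics and favours the one that keeps improving the incumbent solution (the objective and heuristics are invented for illustration):

```python
import random

# Toy selection hyper-heuristic: a high-level loop learns which low-level
# heuristic improves a permutation most, and picks heuristics by score.
random.seed(0)

def swap(sol):                       # low-level heuristic 1
    s = sol[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def reverse_segment(sol):            # low-level heuristic 2
    s = sol[:]
    i, j = sorted(random.sample(range(len(s)), 2))
    return s[:i] + s[i:j + 1][::-1] + s[j + 1:]

def cost(sol):                       # toy objective: how unsorted the list is
    return sum(abs(v - i) for i, v in enumerate(sol))

heuristics = {swap: 1.0, reverse_segment: 1.0}
solution = random.sample(range(20), 20)

for _ in range(2000):
    # Pick a heuristic proportionally to its learned score, apply, reinforce.
    h = random.choices(list(heuristics), weights=list(heuristics.values()))[0]
    candidate = h(solution)
    if cost(candidate) < cost(solution):
        solution = candidate
        heuristics[h] += 1.0         # reward heuristics that keep improving

print("final cost:", cost(solution),
      "scores:", {f.__name__: s for f, s in heuristics.items()})
```

ALGO generalises this idea, notably by abstracting the problem itself behind a graph structure so that the same learner can be applied across problems.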
To this end, this thesis introduces three main contributions: (i) an analysis of the formal functioning of learning-to-optimise techniques; (ii) a generic hyper-heuristic model, named Algorithm Learner for Graph Optimisation problems (ALGO), which constitutes the central point of this work; and (iii) a real-world use case where we use our generic hyper-heuristic to automate the design of behaviours within a swarm of drones. In the first part, we provide a formalism for optimisation and learning concepts, which we use to describe the large body of knowledge that combines two layers of optimisation and/or learning. We then put an emphasis on approaches that use learning to improve an optimisation process, i.e., that aim at learning to optimise. In the second part, we present ALGO, our generic hyper-heuristic model. We explain how we abstract from a given problem through a graph structure, so that ALGO can be used to tackle any optimisation problem, and we detail the steps to follow in order to use ALGO on a given problem. We also present the modularity of ALGO, with inner components that a user can implement. The second part ends with a validation of our model, i.e., using ALGO to tackle a classical optimisation problem. In the third part, we use ALGO to tackle the problem of area surveillance with a swarm of drones. We demonstrate that ALGO constitutes a novel and efficient way to automate the design of solutions to such a distributed and multi-objective problem.

Reicher, Ruth. Doctoral thesis (2023)

Grasso, Giuseppe. Doctoral thesis (2023)

Lothritz, Cedric. Doctoral thesis (2023)

The Grand Duchy of Luxembourg is a small country in Western Europe which, despite its size, is an important global financial centre. Due to its highly multilingual population, and the fact that one of its national languages, Luxembourgish, is regarded as a low-resource language, this country lends itself naturally to a wide variety of interesting research opportunities in the domain of Natural Language Processing (NLP). This thesis discusses and addresses challenges with regard to domain-specific and language-specific NLP, using the unique linguistic situation in Luxembourg as an elaborate case study. We focus on three main topics: (I) NLP challenges present in the financial domain, specifically handling personal names in sensitive documents, (II) NLP challenges related to multilingualism, and (III) NLP challenges for low-resource languages, with Luxembourgish as the language of interest. With regard to NLP challenges in the financial domain, we address the challenge of finding and anonymising names in documents. Firstly, an empirical study on the usefulness of Transformer-based deep learning models is presented for the task of Fine-Grained Named Entity Recognition. This empirical study was conducted across a wide array of domains, including the financial domain. We show that Transformer-based models, and in particular BERT models, yield the best performance for this task. We furthermore show that the performance also depends strongly on the domain itself, regardless of the choice of model.
The automatic detection of names in text documents in turn facilitates their anonymisation. However, anonymisation can distort data and have a negative effect on models built on that data. We investigate the impact of the anonymisation of personal names on the performance of deep learning models trained on a large number of NLP tasks. Based on our experiments, we establish which anonymisation strategy should be used to guarantee accurate NLP models. With regard to NLP challenges related to multilingualism, we address the need for polyglot conversational AI in a multilingual environment such as Luxembourg. The trade-off between a single multilingual chatbot and multiple monolingual chatbots trained on Intent Classification and Slot Filling for the banking domain is evaluated in an empirical study. Furthermore, we publish a quadrilingual, parallel dataset that we built specifically for this study, and which can be used to train a client support assistant for the banking domain. With regard to NLP challenges for the Luxembourgish language, we predominantly address the lack of a suitable language model and of datasets for NLP tasks in Luxembourgish. First, we present the most impactful contribution of this PhD thesis: the first BERT model for the Luxembourgish language, which we name LuxemBERT. We explore a novel data augmentation technique based on partially and systematically translating texts to Luxembourgish from a closely related language, in order to artificially increase the training data used to build our LuxemBERT model. Furthermore, we create datasets for a variety of downstream NLP tasks in Luxembourgish to evaluate the performance of LuxemBERT. We use these datasets to show that LuxemBERT outperforms mBERT, the de facto state-of-the-art model for Luxembourgish. Finally, we compare different approaches to pre-training BERT models for Luxembourgish. Specifically, we investigate whether it is preferable to pre-train a BERT model from scratch or to continue pre-training an already existing pre-trained model on new data. To this end, we further pre-train the multilingual mBERT model and the German GottBERT model on the Luxembourgish dataset that we used to pre-train LuxemBERT, and compare all models in terms of performance and robustness. We make all our language models as well as the datasets available to the NLP community.

Chiara, Pier Giorgio. Doctoral thesis (2023)

The thesis aims to present a comprehensive and holistic overview of cybersecurity and privacy & data protection aspects related to IoT resource-constrained devices. Chapter 1 introduces the current technical landscape by providing a working definition and an architecture taxonomy of ‘Internet of Things’ and ‘resource-constrained devices’, coupled with a threat landscape where each specific attack is linked to a layer of the taxonomy. Chapter 2 lays down the theoretical foundations for an interdisciplinary approach and a unified, holistic vision of cybersecurity, safety and privacy justified by the ‘IoT revolution’, through the so-called infraethical perspective.
Chapter 3 investigates whether, and to what extent, the fast-evolving European cybersecurity regulatory framework addresses the security challenges brought about by the IoT, by allocating legal responsibilities to the right parties. Chapters 4 and 5 focus, on the other hand, on ‘privacy’, understood by proxy to include EU data protection. In particular, Chapter 4 addresses three legal challenges that ubiquitous IoT data and metadata processing poses to the EU privacy and data protection legal frameworks, i.e., the ePrivacy Directive and the GDPR. Chapter 5 casts light on the risk management tool enshrined in EU data protection law, that is, the Data Protection Impact Assessment (DPIA), and proposes an original DPIA methodology for connected devices, building on the model of the CNIL (the French data protection authority).

Mc Ardle, Arron. Doctoral thesis (2023)

Leterre, Gabrielle Céline Giliane. Doctoral thesis (2023)

As space activities continue to develop and increase in number, so do environmental concerns in outer space. For decades now, humanity has continuously sent satellites into Earth orbits without caring for the potential environmental consequences in outer space. Ultimately, these actions have proven to raise issues regarding the sustainability of the activity; issues which are now being addressed legally. Satellites were the first venture of humanity into space, and it is fair to admit we did not know better at the time. We do now. With the development of new types of space missions, such as space resources-related activities, it is safe to assume that new serious environmental problems will arise as well. Based on previous experience both on Earth and in outer space, it is logical, but also imperative, to question the environmental impact of these space resources activities and to consider legal solutions to promote and facilitate their sustainability. Accordingly, this research assesses the applicability of existing rules and mechanisms promoting environmental protection and sustainability in outer space to the case of the exploitation of space resources. To that end, an array of mechanisms is considered, such as the framework of the UN Space Treaties, international environmental law, non-legally binding instruments such as the space debris mitigation guidelines and COSPAR’s planetary protection policy, as well as national space legislation. Ultimately, this work aims at drafting a roadmap for the environmentally sustainable exploitation of space resources from a legal standpoint. It recommends the adoption of a mix of interdisciplinary approaches which balances an effective national approach with international guiding rules.

Khanfir, Ahmed. Doctoral thesis (2023)

Artificial faults have been proven useful for ensuring software quality, enabling the simulation of a system’s behaviour in erroneous situations, and thereby evaluating its robustness and its impact on the surrounding components in the presence of faults.
Similarly, by introducing these faults in the testing phase, they can serve as a proxy to measure the fault revelation capability and thoroughness of current test suites, and they provide developers with testing objectives, as writing tests to detect them helps reveal and prevent eventual similar real faults. This approach – mutation testing – has gained increasing attention among researchers and practitioners since its appearance in the 1970s. It typically operates by introducing small syntactic transformations (using mutation operators) into the target program, aiming to produce multiple faulty versions of it (mutants). These operators are generally created based on the grammar rules of the target programming language and then tuned through empirical studies in order to reduce the redundancy and noise among the induced mutants. Having limited knowledge of the program context or of the relevant locations to mutate, these patterns are applied in a brute-force manner over the full code base of the program, producing numerous mutants and overwhelming developers with a costly overhead of test executions and mutant analysis effort. For this reason, although proven useful in multiple software engineering applications, the adoption of mutation testing remains limited in practice. Another key challenge of mutation testing is the misrepresentation of real bugs by the induced artificial faults; this can make the results of any application relying on it questionable or inaccurate. To tackle this challenge, researchers have proposed new fault-seeding techniques that aim at mimicking real faults by leveraging the knowledge base of previous faults to inject new ones. Although these techniques produce promising results, they do not solve the high-cost issue, or even exacerbate it by generating more mutants with their extended pattern sets. Along the same lines of research, we start addressing the aforementioned challenges – regarding the cost of the injection campaign and the representativeness of the artificial faults – by proposing IBIR, a targeted fault injection approach which aims at mimicking real faulty behaviours. To do so, IBIR uses information retrieved from bug reports (to select relevant code locations to mutate) and fault patterns created by inverting fix patterns, which have been introduced and tuned based on real bug fixes mined from different repositories. We implemented this approach and showed that it outperforms the fault injection performed by traditional mutation testing in terms of semantic similarity with the originally targeted fault (described in the bug report), when applied at either the project or the class level of granularity, and that it provides better, statistically significant estimations of test effectiveness (fault detection). Additionally, when injecting only 10 faults, IBIR couples with more real bugs than mutation testing does when injecting 1000 faults. Although effective in emulating real faults, IBIR’s approach depends strongly on the quality and existence of bug reports, which, when absent, can reduce its performance to that of traditional mutation testing approaches. In the absence of such priors, and with the same objective of injecting few but relevant faults, we suggest accounting for the project’s context and the actual developers’ code distribution to generate more “natural” mutants, in the sense that they are understandable and more likely to occur.
To this end, we propose the usage of code from real programs as a knowledge base for injecting faults, instead of the language grammar or knowledge of previous bugs such as bug reports and bug fixes. Particularly, we leverage the code knowledge and the capability of pre-trained generative language models (i.e., CodeBERT) to capture the code context and predict developer-like code alternatives, in order to produce a few faults in diverse locations of the input program. This way, the development and maintenance of the approach does not require any major effort, such as creating or inferring fault patterns or training a model to learn how to inject faults. In fact, to inject relevant faults in a given program, our approach masks tokens (one at a time) from its code base and uses the model to predict them, then considers the inaccurate predictions as probable developer-like mistakes, forming the output mutant set. Our results show that these mutants induce test suites with higher fault detection capability, in terms of effectiveness and cost-efficiency, than conventional mutation testing. Next, we turn our interest to the code comprehension of pre-trained language models, particularly their capability to capture the naturalness aspect of code. This measure has proven very useful for distinguishing unusual code, which can be a symptom of code smell, low readability, bugginess, bug-proneness, etc., thereby indicating relevant locations requiring prior attention from developers. Code naturalness is typically predicted using statistical language models such as n-grams, to approximate how surprising a piece of code is, based on the fact that code, in small snippets, is repetitive. Although powerful, training such models on a large code corpus can be tedious, time-consuming and sensitive to the code patterns (and practices) encountered during training. Consequently, these models are often trained on a small corpus and thus only estimate naturalness relative to a specific style of programming or type of project. To overcome these issues, we propose the use of pre-trained generative language models to infer code naturalness. Thus, we suggest inferring naturalness by masking (omitting) code tokens, one at a time, from code sequences, and checking the model’s ability to predict them. We implement this workflow, named CodeBERT-NT, and evaluate its capability to prioritize buggy lines over non-buggy ones when ranking code based on its naturalness. Our results show that our approach outperforms both random-uniform and complexity-based ranking techniques, and yields results comparable to n-gram models, even though those are trained in an intra-project fashion. Finally, we provide the implementation of tools and libraries enabling code naturalness measurement and fault injection by the different approaches, and we provide the required resources to compare their effectiveness in emulating real faults and guiding testing towards higher fault detection. This includes the source code of our proposed approaches and the replication packages of our conducted studies.
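The masking workflow described above can be sketched compactly. The snippet below illustrates the general idea only, not the thesis tooling; it assumes the `transformers` library and the publicly available `microsoft/codebert-base-mlm` checkpoint:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative sketch: mask each token of a code line and let a masked
# language model recover it. Low confidence in the real token means less
# "natural" code; confident wrong predictions are candidate developer-like
# mutants.
name = "microsoft/codebert-base-mlm"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

def mask_and_predict(code_line):
    ids = tokenizer(code_line, return_tensors="pt")["input_ids"][0]
    truth_probs, mutant_candidates = [], []
    for pos in range(1, len(ids) - 1):            # skip <s> and </s>
        masked = ids.clone()
        masked[pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, pos]
        probs = torch.softmax(logits, dim=-1)
        truth_probs.append(probs[ids[pos]].item())
        best = int(probs.argmax())
        if best != int(ids[pos]):                 # "inaccurate" prediction
            mutant_candidates.append(
                (tokenizer.decode([int(ids[pos])]), tokenizer.decode([best])))
    naturalness = sum(truth_probs) / len(truth_probs)
    return naturalness, mutant_candidates

score, mutants = mask_and_predict("if (a < b) { return a; }")
print(f"naturalness ~ {score:.3f}", "| candidate replacements:", mutants[:5])
```

Ranking code lines by such a score, lowest first, captures in spirit the naturalness-based prioritisation discussed above.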
Santaló, Carlos. Doctoral thesis (2023)

Regulation (EU) 655/2014, establishing a European Account Preservation Order (‘EAPO Regulation’), introduced the very first cross-border interim measure at the European Union level. As its name indicates, it permits the direct cross-border attachment of funds in bank accounts. Furthermore, it contains a specific mechanism to search for the debtor’s bank accounts, which does not exist in all EU Member States. Although the EAPO is a self-standing procedure, it combines uniform standards with other aspects of the procedure that depend on national law. This dissertation studies and compares the application of the EAPO Regulation in three jurisdictions: Germany, Luxembourg, and Spain. It aims to understand the incorporation of the EAPO procedure within a national civil procedural system, and the impact national law has on the EAPO procedure as a whole. The comparative approach identifies the different ways in which Member States apply the EAPO procedure. This comparative analysis follows both a theoretical and an empirical approach. The empirical approach relies on qualitative and quantitative data on the functioning of the EAPO obtained from stakeholders, case law databases, and institutional statistics. The empirical side of the research seeks to identify specific issues courts and practitioners encounter in real-life EAPO cases, and whether such issues are the same or differ depending on the jurisdiction where the EAPO is applied. Relying on the outcome of the comparative-empirical analysis, specific policy-making recommendations are designed, intended to improve the application of the EAPO at the EU and national levels.

Vitello, Piergiorgio. Doctoral thesis (2023)

The importance of data in transportation research has been widely recognized, since it plays a crucial role in understanding and analyzing the movement of people, identifying inefficiencies in transportation systems, and developing strategies to improve mobility services. This use of data, known as mobility analysis, involves collecting and analyzing data on transport infrastructure and services, traffic flows, demand, and travel behavior. However, traditional data sources have limitations. The widespread use of mobile devices, such as smartphones, has enabled the use of Information and Communications Technology (ICT) to improve data sources for mobility analysis. Mobile crowdsensing (MCS) is a paradigm that uses data from smart devices to provide researchers with more detailed and real-time insights into mobility patterns and behaviors. However, this new data also poses challenges, such as the need to fuse it with other types of information to obtain mobility insights. In this thesis, the primary source of data examined and leveraged is the popularity index of local businesses and points of interest from Google Popular Times (GPT) data. This data has significant potential for mobility analysis, as it overcomes limitations of traditional mobility data such as restricted availability and the lack of reflection of demand for secondary activities. The main objective of this thesis is to investigate how crowdsourced data can contribute to reducing the limitations of traditional mobility datasets.
This is achieved by developing new tools and methodologies to utilize crowdsourced data in mobility analysis. The thesis first examines the potential of GPT as a source of information on the attractiveness of secondary activities. A data-driven approach is used to identify features that impact the popularity of local businesses and to classify their attractiveness based on these features. Secondly, the thesis evaluates the possible use of GPT as a source for estimating mobility patterns. A tool is created that uses the crowdedness of a station to estimate transit demand information and to map the precise volume and temporal dynamics of entrances and exits at the station level. Thirdly, the thesis investigates the possibility of leveraging the popularity of activities around stations to estimate flows into and out of stations. A method is proposed to profile stations based on the dynamic information of activities in their catchment areas; from these data, machine learning techniques are used to estimate transit flows at the station level (a small illustrative sketch follows below). Finally, the study concludes by exploring the possibility of exploiting crowdsourced data not only to extract mobility insights under normal conditions, but also to extract mobility trends during anomalous events. To this end, we focus on analyzing the recovery of mobility during the first outbreak of COVID-19 in different European cities.
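As a rough sketch of that third step (synthetic data and invented feature names; not the thesis pipeline), one could regress station entrances on the popularity of activity categories in the catchment area:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for GPT-style features: hourly popularity of activity
# categories around a station -> hourly station entrances.
rng = np.random.default_rng(0)
n = 1000
popularity = rng.uniform(0, 100, size=(n, 3))     # e.g. retail, food, offices
hour = rng.integers(0, 24, size=(n, 1))
X = np.hstack([popularity, hour])
# Invented ground truth: flows driven mostly by offices and an evening peak.
y = (2.0 * popularity[:, 2] + 0.5 * popularity[:, 1]
     + 30 * (hour[:, 0] == 18) + rng.normal(0, 5, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out hours:", round(model.score(X_te, y_te), 3))
```

The point of the sketch is only the shape of the task: dynamic popularity features in, station-level flow estimates out.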
Yolacan, Taygun Firat. Doctoral thesis (2023)

Steel-concrete hybrid building systems offer sustainable and effective structural solutions for multi-story and high-rise buildings, considering that steel is a completely recyclable material and that the most advantageous mechanical properties of steel and concrete can be used simultaneously against the effects of tension and compression stress resultants. Nevertheless, only a small percentage of multi-story buildings and a small number of high-rise structures are actually constructed using steel-concrete hybrid building technologies. This is mostly a result of general contractors’ orientation toward completing construction projects using traditional reinforced-concrete construction techniques. They therefore generally do not employ a sufficient and competent workforce to execute labor-intensive and complex on-site manufacturing activities, such as the welding of fin plates and the pre-tensioning of high-strength bolts required to assemble the steel beams and reinforced-concrete columns and walls of steel-concrete hybrid building systems. In order to reduce labor-intensive on-site tasks, general construction contractors typically resort to conventional construction approaches using only reinforced-concrete building systems. As a result, the structural and environmental benefits of steel-concrete hybrid building systems have not been widely adopted by the construction industry. This research project proposes three novel structural joint configurations for beam-to-column joints of steel-concrete hybrid building systems: a cutting-edge saw-tooth interface mechanical interlock bolted connection, a bolt-less plug-in connection, and a grouted joint detail.

The proposed joint configurations eliminate on-site welding and enable the accommodation of construction and manufacturing tolerances in three spatial directions, to achieve fast erection strategies for the construction of steel-concrete hybrid building systems. The outcomes of the research project therefore make it possible for general construction contractors to use their existing workforce to complete construction tasks for steel-concrete hybrid building systems without the requirement of specialized tools or training. In this study, a total of six separate experimental test campaigns were established to determine the load-deformation behaviours of the proposed joint configurations and to identify their load-bearing components. To show that the suggested joint configurations are appropriate for mass production without the utilization of special equipment or machinery, the experimental test prototypes were produced in partnership with commercial producers. The experimental test campaigns were simulated with numerical models, by means of advanced computer-aided finite element analyses, to identify the ultimate deformation limits of the proposed joint components and to clarify their progressive failure mechanisms under quasi-static loading conditions. A set of analytical resistance models was developed to estimate the load-bearing capacities of the proposed joint configurations, based on the failure modes identified through observations made during the experimental tests and in accordance with the output of the numerical simulations. Based on the analytical expressions, the most significant, in other words the basic, variables impacting the load-bearing capacities of the proposed joint configurations were identified. Additionally, the load-deformation behaviours of the proposed joint configurations were further investigated with numerical parametric studies, parametrizing the basic variables to understand their impact. To verify the accuracy of the analytical resistance models, their estimations were compared with the output of the numerical parametric studies. Based on the distribution of the estimations of the analytical expressions against the output of the numerical parametric studies, characteristic and design partial safety factors were established according to EN 1990, Annex D, for the analytical resistance models of the saw-tooth interface mechanical interlock bolted connection and the bolt-less plug-in connection. The estimations of the analytical resistance model of the grouted joint detail were also compared with the output of a numerical parametric study, but no partial safety factor was established for this joint detail.

Ros Cuellar, Julia. Doctoral thesis (2023)

van Zweel, Karl Nicolaus. Doctoral thesis (2023)

The general scope of the PhD research project falls within the framework of developing integrated catchment hydro-biogeochemical theories in the context of the Critical Zone (CZ).
Ros Cuellar, Julia. Doctoral thesis (2023).

van Zweel, Karl Nicolaus. Doctoral thesis (2023).
The general scope of the PhD research project falls within the framework of developing integrated catchment hydro-biogeochemical theories in the context of the Critical Zone (CZ). Significant advances in the understanding of water transit time theory, subsurface structure controls, and the quantification of catchment-scale weathering rates have resulted in the convergence of classical biogeochemical and hydrological theories. This convergence could pave the way for a more mechanistic understanding of the CZ, although many challenges still exist. Perhaps the most difficult of all is a unifying hydro-biogeochemical theory that can compare catchments across gradients of climate, geology, and vegetation. Understanding the processes driving the evolution of chemical tracers as they move through space and time is of cardinal importance for validating mixing hypotheses and for determining the residence time of water in the CZ. The specific aim of the study is to investigate which physical and biogeochemical processes drive variations in observable endmembers in stream discharge as a function of the hydrological state at the headwater catchment scale. This requires looking beyond what can be observed in the stream, towards what this thesis calls "unseen flowlines". The Weierbach Experimental Catchment (WEC) in Luxembourg provides a unique opportunity to study these processes, with an extensive biweekly groundwater chemistry dataset spanning more than ten years. Additionally, WEC has been the subject of numerous published works in the domain of CZ science, adding to an already detailed hydrological and geochemical understanding of the system. Multivariate analysis techniques were used to identify the unseen flowlines in the catchment. Together with the existing hydrological perceptual model and a geochemical modelling approach, these flowlines were rigorously investigated to understand which processes drive their respective manifestations in the system. The existing perceptual model for WEC was updated with the new findings and tested on 27 flood events to assess whether it could adequately explain the c-Q (concentration-discharge) behaviour observed during these periods. The novelty of the study lies in using both data-driven modelling approaches and geochemical process-based modelling to look beyond what can be observed in the near-stream environment of headwaters.
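Concentration-discharge (c-Q) behaviour of the kind tested on the 27 flood events is commonly summarised by a power law c = a * Q^b fitted in log-log space, where b near zero indicates chemostatic behaviour and b < 0 indicates dilution. A minimal sketch with synthetic data (not the WEC chemistry dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
Q = rng.lognormal(mean=0.0, sigma=0.8, size=200)          # discharge (arbitrary units)
c = 2.5 * Q ** -0.3 * rng.lognormal(0.0, 0.1, size=200)   # synthetic tracer concentration

# Fit log c = log a + b log Q
slope, intercept = np.polyfit(np.log(Q), np.log(c), 1)
print(f"b = {slope:.2f}, a = {np.exp(intercept):.2f}")
```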
Perney, Antoine Eric Paul Pascal. Doctoral thesis (2023).
This work is part of the reconstruction of data from images. Its purpose is to develop methods to generate a CAD-type surface (B-Spline or NURBS). Obtaining a mathematical representation of the surface of a solid body from point clouds, images, or tetrahedral meshes is a fundamental task of 21st-century digital engineering, where simulators interact with real systems. In this thesis, we develop new algorithms for the reconstruction of CAD geometry. First, in order to determine a NURBS surface, a control network, i.e., a quadrangular mesh, is required. A control network is determined using the eigenfunctions of a graph Laplacian problem and discrete Morse theory. The surface obtained from this mesh is not a priori optimal, which is why an optimization algorithm is introduced: it adjusts the NURBS surface to the triangulation and thus best approximates the geometry of the object. Then, a model selection is carried out. To do so, a regression model is set up to compare the candidate surfaces with the 3D images, and a surface is chosen using an information criterion. With these steps established, we no longer consider only the noise of the data, but also that of the solution: using a sampling method, a probabilistic distribution of surfaces is determined. Finally, as a perspective, constraints are applied to the graph Laplacian problem in order to align the NURBS patches along a given curve, for example in the case of an object with a marked edge. The methods developed are robust and do not depend on the topology of the desired 3D object, that is to say, the algorithm works on a wide range of shapes. We apply the developed methodologies in the biomedical field, with examples of vertebrae and femurs. This makes it possible to have the scanned object, a bone, and the implant in the same "format", so that the implant can be adjusted more easily.
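The model-selection step described above can be illustrated in one dimension: candidate fits of increasing flexibility are scored with an information criterion that penalises the number of parameters. A sketch using polynomial fits as stand-ins for the NURBS surfaces and the Akaike information criterion (an assumption; the thesis does not name the criterion it uses):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 120)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)  # noisy "scan" profile

def aic(k, rss, n):
    # Akaike information criterion for Gaussian residuals: n*ln(RSS/n) + 2k
    return n * np.log(rss / n) + 2 * k

best = None
for degree in range(1, 10):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    score = aic(degree + 1, rss, x.size)          # k = number of coefficients
    if best is None or score < best[0]:
        best = (score, degree)

print(f"AIC-selected degree: {best[1]}")
```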
Muwanigwa, Mudiwa Nathasia. Doctoral thesis (2023).
Neurodegenerative diseases are one of the leading causes of disability and mortality, affecting millions of people worldwide. Parkinson's disease (PD) is the second most common neurodegenerative disease globally, and while it was first described over 200 years ago, curative treatments remain elusive. One of the main challenges in developing effective therapeutic strategies for PD is that the complex molecular pathophysiology of the disease has not been well recapitulated in classically used animal model systems, while studies using post-mortem tissue from patients only represent the end point of disease. Human-derived brain organoid models have revolutionized the field of neurological disease modeling, as they are able to recapitulate key cellular and physiological features reminiscent of the human brain. This thesis describes the use of human midbrain organoids (hMO) to model and gain a deeper understanding of genetic forms of PD. In the first manuscript, patient-specific hMO harboring a triplication in the SNCA gene (3xSNCA hMO) were able to recapitulate the key neuropathological hallmarks of PD. We observed the progressive loss and dysfunction of midbrain dopaminergic neurons in 3xSNCA hMO, and the accumulation of pathological α-synuclein, including elevated levels of pS129 α-synuclein and the presence of α-synuclein aggregates. We also identified a phenotype indicative of senescence in the 3xSNCA hMO, a mechanism that has recently gained more attention as a driving factor in PD pathogenesis and progression. The second manuscript investigated the pathogenic role of LRRK2-G2019S in astrocytes using a combination of post-mortem brain tissue, induced pluripotent stem cell (iPSC) derived astrocytes, and hMO. The iPSC-derived astrocytes and organoids recapitulated the phenotypes seen in the post-mortem tissue, emphasizing the validity of these models in reflecting the in vivo situation. Interestingly, single-cell RNA sequencing of the hMO revealed that astrocytes from the LRRK2-G2019S organoids showed a senescent-like phenotype. Thus, this thesis highlights the relevance of senescence as a converging mechanism in PD. Finally, it explores the future development of organoid models as they are combined with technologies such as microfluidic devices, as in Manuscript III, to improve their complexity and reproducibility. Ultimately, this will lead to the development of more representative models that can better recapitulate PD as well as other neurodegenerative disorders.

Setti Junior, Paulo de Tarso. Doctoral thesis (2023).
Understanding, quantifying and monitoring soil moisture is important for many applications, e.g., agriculture, weather forecasting, the occurrence of heatwaves, droughts and floods, and human health. At a large scale, satellite microwave remote sensing has been used to retrieve soil moisture information. Surface water has also been detected and monitored through remote sensing orbital platforms equipped with passive microwave, radar, and optical sensors. The use of reflected L-band Global Navigation Satellite System (GNSS) signals represents an emerging remote sensing concept for retrieving geophysical parameters. In GNSS Reflectometry (GNSS-R) these signals are repurposed to infer properties of the surface from which they reflect, as they are sensitive to variations in biogeophysical parameters. NASA's Cyclone GNSS (CYGNSS) is the first mission fully dedicated to spaceborne GNSS-R; the eight-satellite constellation measures Global Positioning System (GPS) reflected L1 (1575.42 MHz) signals. Spire Global, Inc. has also started developing its own GNSS-R mission, with four satellites currently in orbit. In this thesis we propose and validate a method to retrieve large-scale near-surface soil moisture and a method to map and monitor inundations using spaceborne GNSS-R. Our soil moisture model is based on the assumption that variations in surface reflectivity are linearly related to variations in soil moisture, and uses a new method to normalize the observations with respect to the angle of incidence. The normalization method accounts for the spatially varying effects of coherent and incoherent scattering. We found a median unbiased root-mean-square error (ubRMSE) of 0.042 cm³ cm⁻³ when comparing our method to two years of Soil Moisture Active Passive (SMAP) data, and a median ubRMSE of 0.059 cm³ cm⁻³ compared to the observations of 207 in-situ stations. Our results also showed an improved temporal resolution compared to sensors traditionally used for this purpose. Assessing Spire and CYGNSS data over a region in south-east Australia, we observed similar behavior in terms of surface reflectivity and sensitivity to soil moisture. As Spire satellites collect data from multiple GNSS constellations, we found that it is important to differentiate the observations when calibrating a soil moisture model. The inundation mapping method that we propose is based on a track-wise approach: when the reflections are classified track by track, the influence of the angle of incidence and of the GNSS transmitted power is minimized or eliminated. With CYGNSS data we produced more than four years of monthly surface water maps over the Amazon River basin and the Pantanal wetlands complex with a spatial resolution of 4.5 km. With GNSS-R we could overcome some of the limitations of optical and microwave remote sensing methods for inundation mapping. We used a set of metrics commonly used to evaluate classification performance to assess our product, and discussed the differences and similarities with other products.
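Two ingredients of the soil moisture retrieval are easy to make concrete: the ubRMSE metric used for validation, and the assumed linear link between surface reflectivity and soil moisture. A self-contained sketch with synthetic values (the angle-of-incidence normalisation is omitted):

```python
import numpy as np

def ubrmse(estimate, reference):
    # Unbiased RMSE: the RMSE computed after removing the mean bias
    diff = estimate - reference
    return np.sqrt(np.mean((diff - diff.mean()) ** 2))

rng = np.random.default_rng(7)
sm_ref = rng.uniform(0.05, 0.40, 500)                     # "reference" soil moisture
gamma = 0.1 + 0.8 * sm_ref + rng.normal(0, 0.03, 500)     # synthetic surface reflectivity

# Linear calibration: reflectivity variations map linearly to soil moisture
slope, intercept = np.polyfit(gamma, sm_ref, 1)
sm_est = slope * gamma + intercept
print(f"ubRMSE = {ubrmse(sm_est, sm_ref):.3f} cm3/cm3")
```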
Gerardy, Marie Olivia Philippine. Doctoral thesis (2023).

Deregnoncourt, Marine. Doctoral thesis (2023).
Entitled "The Figures of Intimacy and Extimacy: a Reflection on Marina Hands and Eric Ruf's Acting in Jean Racine's Phèdre and Paul Claudel's Partage de midi", this PhD dissertation addresses the concepts of "intimacy" and "extimacy" as witnessed through Marina Hands and Eric Ruf's vocal and scenic acting in Patrice Chereau's and Yves Beaunesne's respective productions of Racine's and Claudel's works. In these two productions both actors manage to suggest "extimacy" with their body language, and to render "intimacy" thanks to their "singing" diction of Racine's alexandrine and Claudel's free verse. Throughout the dissertation, we display perpetual links between Racine and Claudel, as suggested by the singular acting of these two performers. The problematic axis of the dissertation is thus the following: how does the combination of the "extimacy" of Marina Hands and Eric Ruf's body language and the "intimacy" of their "singing" diction reveal the musicality of Racine's and Claudel's languages? We shall see that "intimacy" turns out to be "extimacy" on stage, and that the two concepts are nothing but two sides of the same coin. "Intimacy" is the constant object of both Patrice Chereau's and Yves Beaunesne's research, to the extent that it constitutes the essence of their artistic creations. In order to become "extimacy", "intimacy" has to be mediated by the actors' bodies if it is to serve the text actually heard on stage. The union between body and text is therefore a central issue.

Roberts, Juliet Mary. Doctoral thesis (2023).

Wei, Yufei. Doctoral thesis (2023).

Sauveur, Renaldo. Doctoral thesis (2023).
In the field of hydro-geodesy, ill-posed inverse problems are very common. These problems need to be regularized to find a stabilized solution. Usually, two regularization methods are used, Tikhonov regularization and Truncated Singular Value Decomposition (TSVD), together with common regularization parameter choice methods such as the L-curve or Generalized Cross Validation (GCV). This study aims to test the capacity of the Least Squares Collocation (LSC) method to estimate terrestrial water storage variations as an original approach. First, for the forward model, we calculated the hydrological crustal loading deformation on the island of Haiti by convolving Farrell's (1972) Green's functions with the surface mass loading from the Global Land Data Assimilation System (GLDAS). Then, a dense synthetic Global Navigation Satellite System (GNSS) network is used with the LSC method to estimate the Terrestrial Water Storage (TWS) variations for the inverse problem. LSC is a natural way to stabilize an ill-posed inverse problem. Unlike the Tikhonov or TSVD regularization methods, LSC stabilizes the inverse problem by including more physical information, introduced through a covariance function characterizing the observations, the parameters, and the functional link between them. One of the advantages of the LSC method is that it does not require any regularization parameter. First, we showed that, for the island of Haiti, the near field can extend up to 24° around a GNSS station. Secondly, we proved that the hydrology-induced vertical deformation is part of the GNSS vertical displacement over the island. Finally, we demonstrated that LSC may be used as a method to estimate TWS variations in dense GNSS network areas.
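Least Squares Collocation predicts the signal at new points from noisy observations using covariance information alone, with no regularization parameter: s_hat = C_st (C_tt + C_nn)^-1 l. A one-dimensional toy sketch, assuming a Gaussian covariance model (the actual covariance function for TWS estimation would differ):

```python
import numpy as np

def gauss_cov(x1, x2, variance=1.0, corr_len=0.5):
    # Simple Gaussian covariance model between two point sets
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-(d / corr_len) ** 2)

rng = np.random.default_rng(3)
x_obs = rng.uniform(0.0, 10.0, 30)          # "GNSS station" positions (1-D toy)
signal = np.sin(x_obs)                      # true loading signal at the stations
l = signal + rng.normal(0.0, 0.1, 30)       # noisy observations

x_new = np.linspace(0.0, 10.0, 200)         # prediction points
C_tt = gauss_cov(x_obs, x_obs)              # signal covariance between observations
C_nn = 0.1 ** 2 * np.eye(30)                # noise covariance
C_st = gauss_cov(x_new, x_obs)              # signal/observation cross-covariance

s_hat = C_st @ np.linalg.solve(C_tt + C_nn, l)   # LSC prediction
```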
Singh, Kartikeya. Doctoral thesis (2023).
Cell-cell communication plays a significant role in shaping the functionality of cells. Communication between cells is also responsible for maintaining the physiological state of cells and tissue. It is therefore important to study the different ways in which cell-cell communication impacts the functional state of cells, since alterations in cell-cell communication can contribute to the development of disease conditions. In this thesis, we present two computational tools and a study exploring different aspects of cell-cell communication. In the first manuscript, FunRes was developed to leverage a cell-cell communication model to investigate functional heterogeneity in cell types and to characterize cell states based on the integration of inter- and intra-cellular signalling. The tool utilizes combinations of receptors and transcription factors (TFs), based on the reconstructed cell-cell communication network, to split cell types into functional states. It was applied to the Tabula Muris Senis atlas to discover functional cell states in both young and old mouse datasets. In addition, we compared our tool with state-of-the-art tools and validated the cell states using available markers from the literature. Secondly, we studied the evolution of gene expression in developing astrocytes under normal and inflammatory conditions. We characterized these cells using both transcriptional and chromatin accessibility data, which were integrated to reconstruct gene regulatory networks (GRNs) specific to each condition and timepoint. The GRNs were then topologically analyzed to identify key regulators of the developmental process under both normal and inflammatory conditions. In the final manuscript, we developed a computational tool that identified regulators of allergy and tolerance in a mouse model. The tool works by first reconstructing the cell-cell communication network and then analyzing the network for feedback loops. These feedback loops are important as they contribute to the sustenance of the tissue's state, and their identification allows the discovery of important molecules through comparative analysis of the loops between various conditions. In summary, this thesis encompasses various ways of cellular regulation using cell-cell communication in a tissue. These studies contribute to a better understanding of the role cell-cell communication plays in health and disease, along with the identification of therapeutic targets to design novel strategies against diseases.
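The feedback loops mentioned above are directed cycles in the reconstructed cell-cell communication network, so loop detection reduces to cycle enumeration. A toy sketch with networkx and hypothetical cell-type names (not the actual tool):

```python
import networkx as nx

# Toy directed communication network: edges stand for ligand-receptor
# interactions between cell types (hypothetical names, illustration only)
G = nx.DiGraph()
G.add_edges_from([
    ("Tcell", "Macrophage"),
    ("Macrophage", "Epithelial"),
    ("Epithelial", "Tcell"),      # closes a 3-node feedback loop
    ("Macrophage", "Fibroblast"),
])

# Feedback loops are directed cycles in the communication graph
loops = list(nx.simple_cycles(G))
print(loops)   # e.g. [['Tcell', 'Macrophage', 'Epithelial']]
```

Comparing which loops appear or disappear between two conditions then points to the molecules sustaining each tissue state.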
Hubai, Andrii. Doctoral thesis (2023).

Chakrapani, Neera. Doctoral thesis (2023).
Red meat allergy, also known as α-Gal allergy, is a delayed allergic response occurring upon consumption of mammalian meat and by-products. Patients report eating meat without any problems for several years before developing the allergy. Although children can develop red meat allergy, it is more prevalent in adults. In addition to the delayed onset of reactions, immediate hypersensitivity is reported in case of contact with the allergen via the intravenous route. Galactose-α-1,3-galactose (α-Gal) is the first highly allergenic carbohydrate identified to cause allergy across the world. In general, carbohydrates exhibit low immunogenicity and are not capable of inducing a strong immune response on their own. Although the α-Gal epitope is present in conjugation with both proteins and lipids, due to the generally accepted role of proteins in allergy, glycoproteins from mammalian food sources were characterized first. However, a unique feature of α-Gal allergy is the delayed occurrence of allergic symptoms upon ingestion of mammalian meat, and an allergenic role of glycolipids has been proposed to explain these delayed responses. A second important feature of the disease is that the development of specific IgE to α-Gal has been associated with bites from various tick species, depending on the geographical region. In this tick-mediated allergy, an intriguing factor is the absence in ticks of an α-1,3-GalT gene, coding for an enzyme capable of α-Gal synthesis, which raises questions about the source and identity of the sensitizing molecule within ticks, the immune responses to tick bites, and the effect of increased exposure. In this study, we sought to elucidate the origin of sensitization to α-Gal by investigating a cohort of individuals exposed to recurrent tick bites and by exploring the proteome of ticks in a longitudinal study. Furthermore, we analysed the allergenicity of glycoproteins and glycolipids in order to determine the food components responsible for the delayed onset of symptoms. The aim of Chapter I was to determine IgG profiles and the prevalence of sensitization to α-Gal in a high-risk cohort of forestry employees from Luxembourg. The aim of Chapter II was to analyse the presence of host blood in Ixodes ricinus after moulting and upon prolonged starvation, in order to support or reject the host blood transmission hypothesis. The aim of Chapter III was to investigate and compare the allergenicity of glycolipids and glycoproteins to understand their role in the allergic response; moreover, we analysed the stability of glycoproteins and compared extracts from different food sources. This chapter is in the form of a published article. In Chapter IV, I attempted to create mutant models with specified α-Gal glycosylation in order to study the role of the spatial distribution of α-Gal in IgE cross-linking and effector cell activation.
Hemedan, Ahmed. Doctoral thesis (2023).
Interpretation of omics data is needed to form meaningful hypotheses about disease mechanisms. Pathway databases give an overview of disease-related processes, while mathematical models give qualitative and quantitative insights into their complexity. Similarly to pathway databases, mathematical models are stored and shared on dedicated platforms. Moreover, community-driven initiatives such as disease maps encode disease-specific mechanisms in both computable and diagrammatic form, using dedicated tools for diagram biocuration and visualisation. To investigate the dynamic properties of complex disease mechanisms, computationally readable content can be used as a scaffold for building dynamic models in an automated fashion. The dynamic properties of a disease are extremely complex, and more research is required to better understand the complexity of molecular mechanisms, which may advance personalized medicine in the future. In this study, Parkinson's disease (PD) is analyzed as an example of a complex disorder. PD is associated with complex genetic and environmental causes and comorbidities that need to be analysed in a systematic way to better understand the progression of its different subtypes. Studying PD as a multifactorial disease requires deconvoluting the multiple and overlapping changes to identify the driving neurodegenerative mechanisms. Integrated systems analysis and modelling enable us to study different aspects of a disease, such as progression, diagnosis, and response to therapeutics. Modelling such complex processes depends on the scope, and may vary with the nature of the process (e.g., signalling vs. metabolic); experimental design and the resulting data also influence model structure and analysis. Boolean modelling is proposed here to analyse the complexity of PD mechanisms. Boolean models (BMs) are qualitative rather than quantitative and, unlike Petri nets or models based on ordinary differential equations (ODEs), do not require detailed kinetic information. Boolean modelling represents a logical formalism in which the variables take binary values of one (ON) or zero (OFF), making it a plausible approach in cases where quantitative details and kinetic parameters are not available. Boolean modelling is well validated in clinical and translational medicine research. In this project, the PD map was translated into BMs in an automated fashion using different methods, so that the complexity of disease pathways can be analysed by simulating the effect of genomic burden on omics data. To make sure that the BMs accurately represent the biological system, validation was performed by simulating models at different scales of complexity, and the behaviour of the models was compared with the expected behaviour based on validated biological knowledge. The TCA cycle was used as an example of a well-studied simple network. Different scales of complex signalling networks were used, including the Wnt-PI3k/AKT pathway and T-cell differentiation models. As a result, matched and mismatched behaviours were identified, allowing the models to be modified to better represent disease mechanisms. The BMs were stratified by integrating omics data from multiple disease cohorts; in particular, the miRNA datasets from the Parkinson's Progression Markers Initiative (PPMI) study were analysed. PPMI provides an important resource for the investigation of potential biomarkers and therapeutic targets for PD. Such stratification allowed studying disease heterogeneity and specific responses to molecular perturbations. The results can support research hypotheses, diagnose a condition, and maximize the benefit of a treatment. Furthermore, the challenges and limitations associated with Boolean modelling in general were discussed, as well as those specific to the current study. Based on the results, there are different ways to improve Boolean modelling applications. Modellers can perform exploratory investigations, gathering the associated information about the model from literature and data resources. Missing details can be inferred by integrating omics data, which identifies missing components and optimises model accuracy. Accurate and computable models improve the efficiency of simulations and the resulting analysis of their controllability. In parallel, the maintenance of model repositories and the sharing of models in easily interoperable formats are also important.
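The Boolean formalism described here is simple to state: every node is ON or OFF, a logic rule computes its next value from the current state, and repeated synchronous updates drive the system into attractors. A minimal sketch with toy rules (not the PD map):

```python
# Minimal synchronous Boolean network: each node is ON (1) or OFF (0) and is
# updated by a logic rule over the current state (toy rules, illustration only)
rules = {
    "A": lambda s: s["C"],                 # A is activated by C
    "B": lambda s: s["A"] and not s["C"],  # B needs A present and C absent
    "C": lambda s: not s["B"],             # C is inhibited by B
}

def step(state):
    # Synchronous update: all nodes read the same current state
    return {node: int(bool(rule(state))) for node, rule in rules.items()}

state = {"A": 0, "B": 0, "C": 1}
for _ in range(6):                         # iterate until an attractor repeats
    state = step(state)
    print(state)
```

Perturbation experiments, e.g. clamping a node to OFF to mimic a knock-out, then amount to overriding one entry of the state at every step.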
Verstichel-Boulanger, Eolia Emilienne Muriel. Doctoral thesis (2023).
Amélie Nothomb has been present on the literary scene for 31 years, and yet in France there is a gap in academic research on her work. This thesis addresses the still problematic place reserved for women authors in the French literary field and in French literary criticism, and the obstacles to their legitimation and consecration posed by their gender; obstacles that are reinforced if they belong not to the centre but to the margins of the francophone world, and even more so if their work achieves undeniable commercial success, with the effect that critics question, when they do not simply deny, the legitimacy and literary quality of their books. Amélie Nothomb is thus subject to a triple marginality, reinforced by the difficulty of classifying her work.

Ansarinia, Morteza. Doctoral thesis (2023).
Cognitive control is essential to human cognitive functioning, as it allows us to adapt and respond to a wide range of situations and environments. The possibility of enhancing cognitive control in a way that transfers to real-life situations could greatly benefit individuals and society. However, the lack of a formal, quantitative definition of cognitive control has limited progress in developing effective cognitive control training programs.
To address this issue, the first part of the thesis focuses on gaining clarity on what cognitive control is and how to measure it. This is accomplished through a large-scale text analysis that integrates cognitive control tasks and related constructs into a cohesive knowledge graph. This knowledge graph provides a more quantitative definition of cognitive control based on previous research, which can be used to guide future work. The second part of the thesis aims at furthering a computational understanding of cognitive control, in particular studying which features of the task (i.e., the environment) and which features of the cognitive system (i.e., the agent) determine cognitive control, its functioning, and its generalization. The thesis first presents CogEnv, a virtual cognitive assessment environment where artificial agents (e.g., reinforcement learning agents) can be directly compared to humans in a variety of cognitive tests. It then presents CogPonder, a novel computational method for general cognitive control that is relevant for research on both humans and artificial agents. The proposed framework is a flexible, differentiable, end-to-end deep learning model that separates the act of control from the controlled act, and can be trained to perform the same cognitive tests that are used in cognitive psychology to assess humans. Together, the proposed cognitive environment and agent architecture offer unique new opportunities to enable and accelerate the study of human and artificial agents in an interoperable framework. Research on training cognition with complex tasks, such as video games, may benefit from and contribute to this broad view of cognitive control. The final part of the thesis presents a profile of cognitive control and its generalization based on cognitive training studies, in particular how it may be improved by action video game training. More specifically, we contrasted the brain connectivity profiles of people who are either habitual action video game players or do not play video games at all, focusing on brain networks that have been associated with cognitive control. Our results show that cognitive control emerges from a distributed set of brain networks rather than from individual specialized networks, supporting the view that action video gaming may have a broad, general impact on cognitive control. These results also have practical value for cognitive scientists studying cognitive control, as they imply that action video game training may offer new ways to test cognitive control theories in a causal way. Taken together, the current work explores a variety of approaches from within the cognitive science disciplines to contribute in novel ways to the fascinating and long tradition of research on cognitive control. In the age of ubiquitous computing and large datasets, bridging the gap between behavior, brain, and computation has the potential to fundamentally transform our understanding of the human mind and inspire the development of intelligent artificial agents.

Fouillet, Thibault. Doctoral thesis (2023).
The capacity of small powers to think strategically remains a limited field of interest in historical thinking and international relations.
Thus, beyond the debate concerning the capacity of small states to be full-fledged actors in the international system, there appears to be a denial of the capacity of small powers for conceptualization and doctrinal innovation, despite the historical recurrence of the victory of the weak over the strong. Yet small powers are by nature more sensitive to threats, due to their limited response capabilities, and are therefore more inclined to rationalize their action over the long term in order to develop national (military, economic, diplomatic, etc.) and international (alliances, international organizations, etc.) mechanisms for containing these threats. This thesis therefore looks at how small powers construct strategic thinking in the face of perceived threats, and at the means used to try to contain them. The aim is to study the mechanisms by which small powers establish a Grand Strategy (transcribed in the form of doctrines) to deal with the security dilemmas they face. To this end, three case studies were analyzed (Luxembourg, Singapore, Lithuania), chosen for the diversity of their strategic and historical contexts, offering a variety of security dilemmas. The Grand Strategy being in essence a conceptual construction with a prospective and applicative aim, both a theoretical and a practical methodology (using immediate history and wargaming) was implemented. Two sets of lessons can be drawn from this thesis. The first is methodological, confirming the interest of doctrinal studies as a field of strategic reflection and establishing wargaming as a prospective tool suited to fundamental research. The second is conceptual, allowing a better understanding of the capacity of small powers to create great and efficient strategies; because of their conceptual dynamism, these must be taken into account within the strategic genealogy, and can teach lessons even to great powers.

Heuser, Svenja. Doctoral thesis (2023).
This work examines the practice of reading aloud in the interactional context of adult participants engaging in an interface-mediated collaborative game activity. Using a conversation-analytic approach to video data from user studies, empirical cases of reading aloud are presented. It is shown how participants multimodally co-organise reading aloud in interaction to provide access to game text in a game that is unfamiliar to them. With reading aloud, participants meet the interactional challenge of making game text audibly accessible when it is not visually accessible to all participants alike. This practice is not only conducted for one another but with one another in a truly joint fashion, working as a continuer to accomplish the unfamiliar game.

Daoudi, Nadia. Doctoral thesis (2023).
Android offers plenty of services to mobile users and has gained significant popularity worldwide.
The success of Android has attracted not only more mobile users but also malware authors. Indeed, attackers target Android markets to spread their malicious apps and infect users' devices. The consequences vary from displaying annoying ads to gaining financial benefits from users. To counter the threat posed by Android malware, Machine Learning has been leveraged as a promising technique to automatically detect malware. The literature on Android malware detection abounds with ML-based approaches designed to discriminate malware from legitimate samples. These techniques generally rely on manually engineered features extracted from the apps' artefacts. Reported to be highly effective, Android malware detection approaches seem to be the magical solution to stop the proliferation of malware. Unfortunately, the gap between the promised and the actual detection performance is far from negligible: despite the excellent detection performance painted in the literature, detection reports show that Android malware is still spreading and infecting mobile users. In this thesis, we investigate the reasons that impede state-of-the-art Android malware detection approaches from containing the spread of Android malware, and we propose solutions and directions to boost their detection performance. In the first part of this thesis, we focus on revisiting the state of the art in Android malware detection. Specifically, we conduct a comprehensive study to assess the reproducibility of state-of-the-art Android malware detectors. We consider research papers published at 16 major venues over a period of ten years and report our reproduction outcome. We also discuss the different obstacles to reproducibility and how they can be overcome. Then, we perform an exploratory analysis of a state-of-the-art malware detector, DREBIN, to gain an in-depth understanding of its inner working. Our study provides insights into the quality of DREBIN's features and their effectiveness in discriminating Android malware. In the second part of this thesis, we investigate novel features for Android malware detection that do not involve manual engineering. Specifically, we propose an Android malware detection approach, DexRay, that relies on features extracted automatically from the apps. We convert the raw bytecode of the app DEX files into an image and train a 1-dimensional convolutional neural network to automatically learn the relevant features. Our approach stands out for the simplicity of its design choices and its high detection performance, which make it a foundational framework for further developing this domain. In the third part, we attempt to push the frontier of Android malware detection by enhancing the detection performance of the state of the art. We show, through a large-scale evaluation of four state-of-the-art malware detectors, that their detection performance is highly dependent on the experimental dataset. To address this issue, we investigate the added value of combining their features and predictions using 22 combination methods. While the combination does not improve the detection performance reported by the individual approaches, it maintains the highest detection performance independently of the dataset. We further propose a novel technique, Guided Retraining, that boosts the detection performance of state-of-the-art Android malware detectors. Guided Retraining uses contrastive learning to learn a better representation of the difficult samples in order to improve their prediction.
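The DexRay idea, bytes in, verdict out, can be sketched compactly: the DEX bytes are flattened into a fixed-length "image" vector and fed to a 1-dimensional CNN. The layer sizes below are illustrative assumptions, not the architecture from the thesis:

```python
import torch
import torch.nn as nn

# Toy stand-in for the DexRay pipeline: raw DEX bytes, normalised to [0, 1]
# and resized to a fixed-length vector, are classified by a small 1-D CNN.
class ByteCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=12, stride=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=12, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),     # collapse to one feature per channel
        )
        self.classifier = nn.Linear(32, 1)   # malware vs goodware logit

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

bytes_as_image = torch.rand(8, 1, 16384)   # a batch of normalised byte vectors
logits = ByteCNN()(bytes_as_image)         # shape: (8, 1)
print(logits.shape)
```

Training against labelled goodware/malware then proceeds with a standard binary cross-entropy loss; no hand-crafted features enter the pipeline.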
Florent, Perrine Julie. Doctoral thesis (2023).
Despite recent technological developments (e.g., field-deployable instruments operating at high temporal frequencies), experimental hydrology is a discipline that remains measurement-limited. From this perspective, trans-disciplinary approaches may create valuable opportunities to enlarge the number of tools available for investigating hydrological processes. Tracing experiments are usually performed in order to investigate water flow pathways and water sources in underground areas. Since the 19th century, researchers have worked with hydrological tracers to do this. Among them, fluorescent dyes and isotopes are the most commonly used to follow the water flow, while others, such as salts or bacteriophages, are employed as tracers additional to those mentioned above. Bacteriophages are the least known of all, but they have been studied since the 1960s as hydrological tracers, especially in karstic environments. The purpose here is to evaluate the potential of bacteriophages naturally occurring in soils to serve as a new environmental tracer of hydrological processes. We hypothesize that such viral particles can be a promising tool in water tracing experiments, since they are safe for ecosystems. In both hydrology and virology, knowledge regarding the fate of bacteriophages within the pedosphere is still limited. Their study would not only allow proposing potential new candidates to enlarge the set of available hydrological tracers, but also improve current knowledge about soil bacteriophage communities and their interactions with certain environmental factors. For this purpose, we aim at describing the bacteriophage communities occurring in the soil through shotgun metagenomics analysis. These viruses are widely spread in the pedosphere, and we assume that they have specific signatures according to the type of soil. Bacteriophage populations will then be investigated in the soil water to analyse the dis/similarities between the two communities, as well as their dynamics as a function of precipitation events. This way, given a relatively high abundance in soil and soil water and a capacity to be mobilised, good bacteriophage candidates could be selected as hydrological tracers.

Ceci, Jean-Marc. Doctoral thesis (2023).
The "droit et littérature" ("law and literature") movement has only ever been defined through the prism of particular variants that fail to account for the original link uniting these two discourses. On the contrary, these variants smother this link and cast a shadow over it that prevents it from being brought to light. Our research aims to fill this absence of an original definition. It proposes a generic definition of the link under the term "présence inactive" ("inactive presence") in order to solve this problem.
Chen, Juntong. Doctoral thesis (2023).

Gubenko, Alla. Doctoral thesis (2023).
Arguably, embodiment is the most neglected aspect of cognitive psychology and creativity research. Whereas most existing theoretical frameworks are inspired by, or implicitly imply, a "cognition as a computer" metaphor, depicting creative thought as disembodied idea generation and the processing of amodal symbols, this thesis proposes that "cognition as a robot" may be a better metaphor for understanding how creative cognition operates. In this thesis, I compare and investigate human creative cognition in relation to embodied artificial agents that have to learn to navigate and act in complex and changing material and social environments from a set of multimodal streams (e.g., vision, haptics). Instead of relying on divergent thinking or associative accounts of creativity, I attempt to elaborate an embodied and action-oriented vision of creativity grounded in the 4E cognition paradigm. Situated at the intersection of the psychology of creativity, technology, and embodied cognitive science, the thesis attempts to synthesize disparate lines of work and look at the complex problem of human creativity through interdisciplinary lenses. In this perspective, the study of creativity is no longer the prerogative of social scientists but a collective and synergistic endeavor of psychologists, engineers, designers, and computer scientists.

Bellomo, Nicolas. Doctoral thesis (2023).
Climate change due to the increase in GHG emissions and the energy crisis due to the scarcity of fossil fuels are ever-growing issues for the planet and for countries. The decarbonization and sustainability of the energy sector is one of the top priorities for achieving a resilient system. Hydrogen has been considered for decades as an alternative to fossil fuels, and the time has now come to develop a hydrogen-based economy. Fuel cells are devices that convert the chemical energy of hydrogen into electrical energy and are one of the main components considered for the hydrogen economy. However, much is yet to be achieved to make their manufacturing as cheap and as efficient as possible. Chemical vapour deposition (CVD) is a technique used to synthesize solid materials from gaseous precursors which, compared with wet chemistry, has the advantages of reducing production waste, being cheap, producing pure solid materials, and being easily scalable. In this thesis we investigated the possibility of using CVD to produce two major components of fuel cells, namely the gas diffusion layer and the proton exchange membrane. The results were highly promising regarding the elaboration of gas diffusion layers, and a CVD prototype was assembled to make the highly complex copolymerization of proton exchange membranes a reality, with promising initial results.
Govzmann, Alisa. Doctoral thesis (2023).

Samhi, Jordan. Doctoral thesis (2023).
In general, software is unreliable. Its behavior can deviate from users' expectations because of bugs, vulnerabilities, or even malicious code. Manually vetting software is a challenging, tedious, and highly costly task that does not scale. To alleviate excessive costs and analysts' burdens, automated static analysis techniques have been proposed by both the research and practitioner communities, making static analysis a central topic in software engineering. In the meantime, mobile apps have grown considerably in importance. Today, most humans carry software in their pockets, with the Android operating system leading the market. Millions of apps have been proposed to the public so far, targeting a wide range of activities such as games, health, banking, and GPS. Hence, Android apps collect and manipulate a considerable amount of sensitive information, which puts users' security and privacy at risk. Consequently, it is paramount to ensure that apps distributed through public channels (e.g., Google Play) are free from malicious code, and the research and practitioner communities have put much effort into devising new automated techniques to vet Android apps against malicious activities over the last decade. Analyzing Android apps is, however, challenging. On the one hand, the Android framework proposes constructs that can be used to evade dynamic analysis by triggering the malicious code only under certain circumstances, e.g., if the device is not an emulator and is currently connected to power; hence, dynamic analyses can easily be fooled by malicious developers who make some code fragments difficult to reach. On the other hand, static analyses are challenged by Android-specific constructs that limit the coverage of off-the-shelf static analyzers. The research community has already addressed some of these constructs, including inter-component communication and lifecycle methods. However, other constructs, such as implicit calls (i.e., when the Android framework asynchronously triggers a method in the app code), make some app code fragments unreachable to static analyzers, even though these fragments are executed when the app runs. Altogether, many parts of an app's code are unanalyzable: they are either not reachable by dynamic analyses or not covered by static analyzers. In this manuscript, we describe our contributions to the research effort from two angles: ① statically detecting malicious code that is difficult for dynamic analyzers to access because it is triggered only under specific circumstances; and ② statically analyzing code not accessible to existing static analyzers, to improve the comprehensiveness of app analyses. More precisely, in Part I, we first present a replication study of a state-of-the-art static logic bomb detector to better expose its limitations. We then introduce a novel hybrid approach for detecting suspicious hidden sensitive operations towards triaging logic bombs. We finally detail the construction of a dataset of Android apps automatically infected with logic bombs. In Part II, we present our work to improve the comprehensiveness of static analysis of Android apps. More specifically, we first show how we contributed to accounting for atypical inter-component communication in Android apps. Then, we present a novel approach to unify the bytecode and native code in Android apps, to account for the multi-language trend in app development. Finally, we present our work on resolving conditional implicit calls in Android apps to improve static and dynamic analyzers.
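The implicit-call problem can be pictured on a call graph: the body of a framework-triggered callback has no explicit caller, so it looks unreachable until an edge modelling the framework's behaviour is added. A toy sketch with hypothetical method names, not the thesis tooling:

```python
import networkx as nx

# Toy call graph: onCreate reaches registration code, but the callback body
# has no explicit caller in the app code.
cg = nx.DiGraph()
cg.add_edges_from([
    ("onCreate", "registerListener"),
    ("onSensorChanged", "leakLocation"),   # callback body, looks like dead code
])
print(nx.has_path(cg, "onCreate", "leakLocation"))   # False: invisible to analysis

# Model the framework behaviour: registering the listener implies the
# framework may later invoke the callback asynchronously.
cg.add_edge("registerListener", "onSensorChanged")
print(nx.has_path(cg, "onCreate", "leakLocation"))   # True: now analysable
```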
Clees, Elisabeth. Doctoral thesis (2023).
Life in residential child and youth welfare facilities is a great challenge for the children concerned. This paper explains factors that contribute to an increase or decrease in the subjectively perceived well-being of children and adolescents in residential institutions. Qualitative content analysis was chosen to analyze the data collected in Luxembourg. The study shows that children's well-being is particularly influenced by structural conditions and concepts, by fellow residents, and by (pedagogical) professionals. Another result points to the presence of different forms of violence and to the danger of (re-)traumatization of children and young people within the institutions of inpatient child and youth welfare in Luxembourg.

Tedgue Beltrao, Gabriel. Doctoral thesis (2023).
Vital signs are a group of biological indicators that show the status of the body's life-sustaining functions. They provide an objective measurement of the essential physiological functions of a living organism, and their assessment is the critical first step in any clinical evaluation. Monitoring vital sign information provides valuable insight into the patient's condition, including how they are responding to medical treatment and, more importantly, whether they are deteriorating. However, conventional contact-based devices are inappropriate for long-term continuous monitoring: besides mobility restrictions and stress, they can cause discomfort and epidermal damage, and even lead to pressure necrosis. The contactless monitoring of vital signs using radar devices, by contrast, has several advantages. Radar signals can penetrate different materials and are not affected by skin pigmentation or external light conditions. Additionally, these devices preserve privacy, can be low-cost, and transmit no more power than a mobile phone. Despite recent advances, accurate contactless vital sign monitoring is still challenging in practical scenarios. The challenge stems from the fact that when we breathe, or when the heart beats, the tiny induced motion of the chest wall surface can be smaller than one millimeter, which means that the vital sign information can easily be lost in the background noise, or even masked by additional body movements of the monitored subject. This thesis aims to propose innovative signal processing solutions to enable the contactless monitoring of vital signs in practical scenarios. Its main contributions are threefold: a new algorithm for recovering chest wall movements from radar signals; a novel random body movement and interference mitigation technique; and a simple, yet robust and accurate, adaptive estimation framework. These contributions were tested under different operational conditions and scenarios, spanning ideal simulation settings, real data collected while imitating common working conditions in an office environment, and a complete validation with premature babies in a critical care environment. The proposed algorithms were able to precisely recover the chest wall motion, effectively reducing the interfering effects of random body movements and allowing clear identification of different breathing patterns. This capability is the first step toward frequency estimation and early non-invasive diagnosis of cardiorespiratory problems. In addition, most of the time, the adaptive estimation framework provided breathing and heart rate estimates within the predefined error intervals, and was capable of tracking the reference values in different scenarios. Our findings shed light on the strengths and limitations of this technology and lay the foundation for future studies toward a complete contactless solution for vital sign monitoring.
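A common baseline for recovering chest wall motion from a continuous-wave radar, and a reasonable stand-in for the first contribution listed above, is phase demodulation: displacement is proportional to the unwrapped phase of the received I/Q signal, x(t) = λ/(4π) · φ(t). A synthetic sketch (the thesis algorithms go well beyond this baseline):

```python
import numpy as np

fs = 100.0                                   # sample rate (Hz)
t = np.arange(0.0, 30.0, 1.0 / fs)
wavelength = 5e-3                            # e.g. a ~60 GHz radar, in metres

# Synthetic chest wall motion: breathing (~0.25 Hz) plus heartbeat (~1.2 Hz)
x = 2e-3 * np.sin(2 * np.pi * 0.25 * t) + 1e-4 * np.sin(2 * np.pi * 1.2 * t)

# Ideal I/Q signal of a single reflector; the phase encodes the range
phase = 4 * np.pi * x / wavelength
iq = np.exp(1j * phase) + 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# Recover displacement from the unwrapped phase
x_hat = wavelength / (4 * np.pi) * np.unwrap(np.angle(iq))
print(f"recovered peak-to-peak motion: {np.ptp(x_hat) * 1e3:.2f} mm")
```

Band-pass filtering the recovered displacement then separates the breathing and heartbeat components before rate estimation.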
Taye, Alemayehu Demissew. Doctoral thesis (2023).

Braband, Matthias. Doctoral thesis (2023).
Global warming forces the automotive industry to reduce real driving emissions and thus its CO2 footprint. Besides maximizing the individual efficiency of powertrain components, there is also energy-saving potential in the choice of driving strategy. Model predictive control based advanced driver assistance systems that reduce energy consumption during driving have therefore gained significant interest in the literature. However, this results in a complex control system with many parameter dependencies that can affect the energy efficiency of the vehicle, and most of these parameters are subject to uncertainties. Thus, the important questions remain how these parameter uncertainties affect the energy efficiency of the system, and how a driver assistance system should be designed to be robust against these uncertainties. To answer these questions, this thesis applies variance-based sensitivity analyses to design an appropriate driver assistance system and to quantify the influences of the uncertain system and controller parameters. First, a detailed vehicle and powertrain model of a battery electric vehicle is developed and verified on component test benches. The parameter uncertainties and their sensitivities are investigated on typical urban and interurban commuter routes using quantitative variance-based sensitivity analysis methods. Based on these findings, an economic nonlinear model predictive eco-cruise control is derived which takes the identified parameter dependencies into account. The developed economic nonlinear model predictive control system is evaluated on artificial drive cycles and compared to a linear model predictive control approach as often outlined in the literature. Afterwards, the closed-loop control system, consisting of the developed economic nonlinear model predictive controller and the detailed vehicle model, is analyzed on typical urban and interurban commuter routes using variance-based sensitivity analysis, and the findings and parameter dependencies are outlined and discussed. It has been shown that vehicle parameters as well as controller parameters impact the energy consumption and the driving time of the vehicle, and that if the parameters identified as influential are optimized, an average energy-saving potential of 10.5% exists on the investigated routes, at the cost of only a 0.7% increase in driving time.
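Variance-based sensitivity analysis attributes a share of the output variance to each input: the first-order index is S_i = Var(E[Y|X_i]) / Var(Y). A crude binning estimator on a toy consumption model (illustrative only, not the thesis vehicle model):

```python
import numpy as np

def first_order_sobol(x, y, bins=20):
    # Crude binning estimator of S_i = Var(E[Y | X_i]) / Var(Y)
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    cond_sizes = np.array([(idx == b).sum() for b in range(bins)])
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=cond_sizes)
    return var_cond / y.var()

rng = np.random.default_rng(5)
mass = rng.uniform(1500.0, 2200.0, 20000)     # vehicle mass (kg)
cd_a = rng.uniform(0.55, 0.80, 20000)         # drag area (m^2)
speed = rng.uniform(12.0, 25.0, 20000)        # mean speed (m/s)

# Toy energy consumption model (for illustration only)
energy = 0.01 * mass + 35.0 * cd_a * speed ** 2 / 1000.0

for name, x in [("mass", mass), ("cd_a", cd_a), ("speed", speed)]:
    print(name, round(first_order_sobol(x, energy), 2))
```

Because drag area and speed interact in the toy model, their first-order indices do not sum to one; higher-order (total) indices would capture the remainder.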
Messias de Jesus Rufino Ribeiro, Mariana. Doctoral thesis (2023).

Forastiere, Danilo. Doctoral thesis (2022).

Delbrouck, Catherine Anne Lucie. Doctoral thesis (2022).
Metabolic rewiring is essential to enable cancer onset and progression. One important metabolic pathway that is often hijacked by cancer cells is the one-carbon (1C) cycle, in which the third carbon of serine is oxidized to formate. It was previously shown that formate production in cancer cells often exceeds the anabolic demand, resulting in formate overflow. Furthermore, extracellular formate was described to promote the in vitro invasiveness of glioblastoma (GBM) cells. Nevertheless, the mechanism underlying formate-induced invasion remains elusive. In the present study, we aimed to characterize formate-induced invasion in greater detail. First, we studied the generalizability of formate-induced invasion in different GBM models as well as in different breast cancer models, applying different in vitro assays, such as the Boyden chamber assay, to probe the impact of formate on different cancer cell lines. We then studied the in vivo relevance and the pro-invasive properties of formate in physiological models, using different ex vivo and in vivo models. Lastly, we investigated the mechanism underlying the formate-dependent pro-invasive phenotype, applying a variety of biochemical as well as cellular assays. We underline that formate specifically promotes invasion, and not migration, in different cancer types. Furthermore, we now demonstrate that inhibition of formate overflow results in decreased invasiveness of GBM cells ex vivo and in vivo. Using breast cancer models, we also obtain first evidence that formate not only promotes local cancer cell invasion but also metastasis formation in vivo, suggesting that locally increased formate concentrations within the tumour microenvironment promote cancer cell motility and dissemination. Mechanistically, we uncover a previously undescribed interplay in which formate acts as a trigger to alter fatty acid metabolism, which in turn affects cancer cell invasiveness and metastatic potential via matrix metalloproteinase (MMP) release. Gaining a better mechanistic understanding of formate overflow, and of how formate promotes invasion in cancer, may contribute to preventing cancer cell dissemination, one of the main reasons for cancer-related mortality.
Lai, Adelene
Doctoral thesis (2022)
In most societies, using chemical products has become a part of daily life. Worldwide, over 350,000 chemicals have been registered for use in, e.g., daily household consumption, industrial processes, and agriculture. However, despite the benefits chemicals may bring to society, their usage, production, and disposal, which lead to their eventual release into the environment, have multiple implications. Anthropogenic chemicals have been detected in myriad ecosystems all over the planet, as well as in the tissues of wildlife and humans. The potential consequences of such chemical pollution are not fully understood, but links to the onset of human disease and threats to biodiversity have been attributed to the presence of chemicals in our environment. Mitigating the potential negative effects of chemicals typically involves regulatory steps and multiple stakeholders. One key aspect thereof is environmental monitoring, which consists of environmental sampling, measurement, data analysis, and reporting. In recent years, advancements in Liquid Chromatography-High Resolution Mass Spectrometry (LC-HRMS), open chemical databases, and software have enabled researchers to identify known (e.g., pesticides) as well as unknown environmental chemicals, commonly referred to as suspect or non-target compounds. However, identifying unknown chemicals, particularly non-targets, remains extremely challenging because of the lack of a priori knowledge of the analytes: all that is available are their mass spectrometry signals. In fact, the number of unknown features in a typical mass spectrum of an environmental sample is in the range of thousands to tens of thousands, which therefore requires feature prioritisation before identification within a suitable workflow. In this dissertation work, collaborations with two regulatory authorities responsible for environmental monitoring sought to identify relevant unknown compounds in the environment, specifically by developing computational workflows for unknown identification in LC-HRMS data. The first collaboration culminated in Publication A, a joint project with the Zürcher Amt für Wasser, Energie und Luft. Environmental samples taken from wastewater treatment plant sites in Switzerland were retrospectively analysed using a pre-screening workflow that prioritised features suitable for non-target identification. For this purpose, a multi-step Quality Control algorithm that checks the quality of mass spectral data in terms of peak intensities, alignment, and signal-to-noise ratio was developed and used within pre-screening. This algorithm was incorporated into the R package Shinyscreen. Features that were prioritised by pre-screening then underwent identification using the in silico fragmentation tool MetFrag. To obtain these identifications, MetFrag was coupled to various open chemical information resources such as spectral databases like MassBank Europe and MassBank of North America, as well as suspect lists from the NORMAN Suspect List Exchange and the CompTox Chemicals Dashboard database. One confirmed and twenty-one tentative compound identifications were achieved and reported according to an established confidence level scheme.
Comprehensive data interpretation and detailed communication of MetFrag's results were performed as a means of formulating evidence-based recommendations that may inform future environmental monitoring campaigns. Building on the pre-screening and identification workflow developed in Publication A, Publication B resulted from a collaboration with the Luxembourgish Administration de la gestion de l'eau that sought to identify, and where possible quantify, unknown chemicals in Luxembourgish surface waters. More specifically, surface water samples collected as part of a two-year national monitoring campaign were measured using LC-HRMS and screened for pharmaceutical parent compounds and their transformation products. Compared to pharmaceutical compound information, which is publicly available from local authorities (and was used in the suspect list), information on transformation products is relatively scarce. Therefore, new approaches were developed in this work to mine data from the PubChem database as well as from the literature in order to formulate a suspect list containing pharmaceutical transformation products in addition to their parent compounds. Overall, 94 pharmaceuticals and 14 transformation products were identified, of which 88 and 2, respectively, were confirmed identifications. The spatio-temporal occurrence and distribution of these compounds throughout the Luxembourgish environment were analysed using advanced data visualisations that highlighted patterns in certain regions and time periods of high incidence. These findings may support future chemicals management measures, particularly in environmental monitoring. Another challenging aspect of managing chemicals is that they mostly exist as complex mixtures, within the environment as well as in chemical products. Substances of Unknown or Variable composition, Complex reaction products or Biological materials (UVCBs) make up 20-40% of international chemical registries and include chlorinated paraffins, polymer mixtures, petroleum fractions, and essential oils. However, little is known about their chemical identities and/or compositions, which poses formidable obstacles to assessing their environmental fate and toxicity, let alone identifying them in the environment. Publication C addresses the challenges of UVCBs by taking an interdisciplinary approach in reviewing the literature, incorporating considerations of their chemical representations, toxicity, environmental fate, exposure, and regulatory approaches. Improved substance registration requirements, grouping techniques to simplify assessment, and the use of Mixture InChI to represent UVCBs in a findable, accessible, interoperable, and reusable (FAIR) way in databases are amongst the key recommendations of this work. A specific type of UVCB, mixtures of homologous compounds, are commonly detected in environmental samples, including many High Production Volume (HPV) compounds such as surfactants. Compounds forming homologous series are related by a common core fragment and a repeating chemical subunit, and can be represented using general formulae (e.g., CnF2n+1COOH) and/or Markush structures. However, a significant identification bottleneck is the inability to match their characteristic analytical signals in LC-HRMS data with chemicals in databases; while comb-like elution patterns and constant differences in mass-to-charge ratio indicate the presence of homologous series in samples, most chemical databases do not contain annotated homologous series.
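As a purely illustrative sketch of the idea of detecting homologous series from constant mass spacing, the toy code below groups a list of measured masses by a repeating CH2 unit (about 14.01565 Da); the tolerance and peak values are invented, and this naive chain-building is not the OngLai algorithm introduced next.

```python
# Toy illustration: group masses separated by a repeating CH2 unit
# (~14.01565 Da) into candidate homologous series. This is a naive
# chain-building sketch, not the OngLai implementation.
CH2 = 14.01565   # monoisotopic mass of a CH2 repeating unit, in Da
TOL = 0.003      # mass tolerance in Da (assumed instrument accuracy)

def find_series(masses, min_members=3):
    masses = sorted(masses)
    used, series = set(), []
    for m in masses:
        if m in used:
            continue
        chain = [m]
        while True:  # extend the chain by one CH2 step at a time
            nxt = next((x for x in masses
                        if abs(x - (chain[-1] + CH2)) <= TOL), None)
            if nxt is None:
                break
            chain.append(nxt)
        if len(chain) >= min_members:
            series.append(chain)
            used.update(chain)
    return series

# Example: two interleaved CH2 series plus an unrelated mass.
peaks = [200.107, 214.123, 228.139, 242.154, 305.222, 319.238, 333.253, 400.0]
for s in find_series(peaks):
    print([round(x, 3) for x in s])
```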
To address this gap, Publication D introduces a cheminformatics algorithm, OngLai, to detect homologous series within compound datasets. OngLai, openly implemented in Python using the RDKit, detects homologous series based on two inputs: a list of compounds and the chemical structure of a repeating unit. OngLai was applied to three open datasets from environmental chemistry, exposomics, and natural products, in which thousands of homologous series with a CH2 repeating unit were detected. Classification of homologous series in compound datasets is expected to advance their analytical detection in samples. Overall, the work in this dissertation contributed to the advancement of identifying and managing unknown chemicals in the environment using cheminformatics and computational approaches. All work conducted followed Open Science and FAIR data principles: all code, datasets, analyses, and results generated, including the final peer-reviewed publications, are openly available to the public. These efforts are intended to spur further developments in unknown chemical identification and management towards protecting the environment and human health.

Adeleye, Damilola
Doctoral thesis (2022)
Cu(In,Ga)S2 is a chalcopyrite material suitable as the higher-bandgap top cell in tandem applications in next-generation multijunction solar cells. This is owed primarily to the tunability of its bandgap from 1.5 eV in CuInS2 to 2.45 eV in CuGaS2, and to its relative stability over time. Currently, a major hindrance to the potential use of Cu(In,Ga)S2 in a tandem capacity remains its deficient single-junction device performance, in the form of low open-circuit voltage (VOC) and low efficiency. Aside from interfacial recombination, which leads to losses in the completed Cu(In,Ga)S2 solar cell, deficiencies stem from the low optoelectronic quality of the Cu(In,Ga)S2 absorber, quantified by the quasi-Fermi level splitting (QFLS), which serves as the upper limit of the VOC achievable by a solar cell device. In this thesis, the QFLS is compared with the theoretical VOC in the radiative limit (SQ-VOC), and the "SQ-VOC deficit" is defined as the difference between SQ-VOC and QFLS, providing a comparable measure of the optoelectronic deficiency in the absorber material. In contrast to the counterpart Cu(In,Ga)Se2 absorber, which has produced highly efficient solar cell devices, the Cu(In,Ga)S2 absorber still suffers from a high SQ-VOC deficit. However, the SQ-VOC deficit in Cu(In,Ga)S2 can be reduced by growing the absorbers under Cu-deficient conditions. For the effective use of Cu(In,Ga)S2 as the top cell in tandem with Si or Cu(In,Ga)Se2 as the bottom cell, an optimum bandgap of 1.6-1.7 eV is required, and this is realized in absorbers with Ga content up to a [Ga]/([Ga]+[In]) ratio of 0.30-0.35. However, increasing Ga in Cu-poor Cu(In,Ga)S2 poses a challenge to the structural and optoelectronic quality of the absorber: segregated Ga phases with a steep Ga/bandgap gradient form, limiting the quality of the Cu(In,Ga)S2 absorber layer and leading to a high SQ-VOC deficit, low open-circuit voltage, and overall poor performance of the finalized solar cell.
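Restating the figure of merit above in equation form (with QFLS expressed in eV and q the elementary charge; this is a hedged paraphrase of the definition given in the abstract, not a formula quoted from the thesis):

```latex
% SQ-VOC deficit: the gap between the radiative-limit (Shockley-Queisser)
% open-circuit voltage for the absorber bandgap E_g and the measured QFLS.
\[
  \Delta V_{\mathrm{SQ\text{-}deficit}}
    \;=\; V_{OC}^{\mathrm{SQ}}(E_g) \;-\; \frac{\mathrm{QFLS}}{q}
\]
% A smaller deficit indicates an absorber closer to its radiative limit.
```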
In this work, the phase segregation in Cu(In,Ga)S2 was circumvented by employing higher substrate temperatures and adapting the Ga flux during the first stage of deposition when growing the Cu(In,Ga)S2 absorbers. A more homogeneous Cu(In,Ga)S2 phase and an improved Ga/bandgap gradient are achieved by optimizing the Ga flux at higher substrate temperature to obtain a Cu(In,Ga)S2 absorber with high optoelectronic quality and low SQ-VOC deficit. Additionally, the variation of the Cu-rich phase when growing the Cu(In,Ga)S2 absorber layers was found not only to alter the notch profile and bandgap minimum of the absorbers, but also to influence their optoelectronic quality: a shorter Cu-rich phase led to a narrower notch profile and higher bandgap. Ultimately, several steps in the three-stage deposition method used for processing the Cu(In,Ga)S2 absorbers were revised to enhance the overall quality of the absorbers. Consequently, the SQ-VOC deficit in high-bandgap Cu(In,Ga)S2 absorbers is significantly reduced, leading to excellent device performance. This thesis also examines the temperature- and composition-related optoelectronic improvement in pure Cu-rich CuInS2 absorbers without Ga, where the improvement in QFLS was initially linked to a reduction of nonradiative recombination channels with higher deposition temperatures and increased Cu content. Findings from photoluminescence decay measurements show that the origin of the improved QFLS in CuInS2 is rather linked to changes in doping levels with variations of deposition temperature and Cu content. Finally, in order to understand and gain insight into the influence of Ga in Cu(In,Ga)S2, the electronic structure of CuGaS2 absorbers was investigated in dependence of excitation intensity and temperature by low-temperature photoluminescence measurements. A shallow donor level and three acceptor levels were detected. It was found that similar acceptor levels in CuInSe2 and CuGaSe2, which are otherwise shallow, become deeper in CuGaS2. These deep defects serve as nonradiative recombination channels, and their appearance in the Ga-containing compound is detrimental to the optoelectronic quality of Cu(In,Ga)S2 absorbers as the Ga content is increased, therefore limiting the optimum performance of Cu(In,Ga)S2 devices.

Chatterjee, Sreyoshi
Doctoral thesis (2022)

Citeroni, Nicole
Doctoral thesis (2022)

Kchouri, Bilal
Doctoral thesis (2022)

Grant, Erica Taylor
Doctoral thesis (2022)
A growing number of diseases have been linked to aberrations in the interaction between diet, gut microbiota, and host immune function. Understanding these complex dynamics will be critical for the development of personalized therapeutic regimens to improve health outcomes. In mice colonized with a defined, 14-member synthetic human microbial community, mucin-degrading bacteria proliferate and are suspected to contribute to thinning of the colonic mucus layer and enhanced pathogen susceptibility. This dissertation investigates three aspects of diet–microbiome–host interactions in healthy models.
In the first chapter, we investigate this question in early life by assessing the impact of the maternal microbiota and fiber deprivation on immune development in pups. Next, we leverage an adult mouse model to ascertain the effects of specific fiber types on bacterial metabolic output and host immunity. Finally, we translate this work to humans by examining the effects of high- and low-fiber diets on host mucolytic bacteria populations and early inflammatory shifts in healthy adults. Interim analyses indicate that the mouse findings are highly translatable to humans, with similar changes in composition and enzymatic activities according to fiber intake. By implementing a bench-to-bed-to-bench research approach, this work aims to expand the range of commensals that can be considered as potential biomarkers of early barrier disruption or targeted using customized diet-based approaches.

Usanova, Ksenia
Doctoral thesis (2022)
Recent research has determined that talent management is a highly context-sensitive phenomenon. Indeed, the way talent is defined and managed varies from one context to another. Although talent management has been studied for the last two decades, the majority of scientific works still focus on the context of large multinational corporations, with a prevalence of managerial views. Therefore, this thesis aims to contribute to the literature by challenging the dominant understandings of talent management through examining the phenomenon in contexts that are less explored. To that end, this thesis comprises four empirical studies. The first study explores how talent is defined and managed in the not-for-profit sector. Based on interviews with 34 leaders of 34 mission-driven organizations, it offers a unique definition of talent and an understanding of how talent management is implemented in this sector. The second study analytically contextualizes talent management in micro-, small- and medium-sized enterprises. Based on 31 interviews with talent management leaders of 27 aerospace companies, this research proposes three types of talent management in this context, namely "strategic", "entrepreneurial" and "ad hoc". The third study, set in the high-technology industry, explores the understanding of talent management not only from the perspective of managers but also from that of talent. It is based on discussions with 20 managers and 20 talents from the aerospace industry and identifies three views on talent management: the talents', the managers', and a shared view. Finally, the fourth study explores gender differences in the quitting intentions of talent in the knowledge-based field. Drawing on survey responses from 119 talented individuals, it shows that gender moderates the relationships between talent's intention to quit and its main antecedents. This thesis provides an important theoretical contribution to the talent management literature and offers useful practical implications for organizational leaders, managers, talented individuals and policy-makers.

Mangers, Jeff
Doctoral thesis (2022)
The concept of Circular Economy (CE) is gaining increasing attention as an indispensable renewal of the linear economy that does not neglect sustainable development goals. Closing resource loops and keeping resources in the system at the highest level of use for as long as possible are cited as the main goals of CE. However, due to missing information exchange, the lack of consistency between the existing End-of-Life (EOL) infrastructure and the respective product designs hinders successful circularity of resources. This research provides a modular method to collect, process, and apply EOL process data in order to provide the Beginning-of-Life (BOL) with important EOL knowledge through a CE-adapted product design assessment. EOL data is collected using a Circular Value Stream Mapping (CVSM), EOL information is processed using a digital state-flow representation, and EOL knowledge is applied by providing a graphical user interface for designers. The method is verified by a simulation model that serves as a decision-support tool for product designers in the context of a PET bottle case study in Luxembourg. The goal is to anticipate a circular flow of resources by reflectively aligning product design with the relevant EOL infrastructure. Within the linear economy, the focus has been on improving production processes while neglecting what happens to a product after its use. The developed method makes it possible to consider, when designing products, not only the requirements of users but also those of the actual end users: the EOL process chains.
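To make the "digital state-flow representation" idea concrete, here is a minimal sketch of how an EOL process chain might be encoded as states and transitions; the states, flow fractions, and PET-bottle framing are illustrative assumptions, not the thesis's actual data model.

```python
# Minimal sketch: an EOL process chain encoded as a state-flow graph.
# Each state maps to downstream states with assumed mass-flow fractions.
# All numbers are invented for illustration (not from the case study).
eol_flow = {
    "collected":    {"sorting": 1.0},
    "sorting":      {"recycling": 0.70, "incineration": 0.25, "landfill": 0.05},
    "recycling":    {"secondary_material": 0.85, "incineration": 0.15},
    "incineration": {},          # terminal states have no outgoing flows
    "landfill": {},
    "secondary_material": {},
}

def propagate(flow, start="collected"):
    """Propagate one unit of material through the state-flow graph."""
    shares = {start: 1.0}
    for state in flow:  # states are listed upstream-first in this toy graph
        for nxt, frac in flow[state].items():
            shares[nxt] = shares.get(nxt, 0.0) + shares.get(state, 0.0) * frac
    return {s: round(v, 3) for s, v in shares.items()}

# Shows which fraction of a collected PET bottle ends up as secondary
# material versus incineration or landfill under the assumed flows.
print(propagate(eol_flow))
```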
Ost, Alexander Dimitri
Doctoral thesis (2022)
The progressive trend to miniaturize samples presents a challenge to materials characterization techniques in terms of both lateral resolution and chemical sensitivity. The latest generation of focused ion beam (FIB) platforms has allowed advances in a variety of different fields, including nanotechnology, geology, soil, and life sciences. State-of-the-art ultra-high-resolution electron microscopy (EM) devices coupled with secondary ion mass spectrometry (SIMS) systems have enabled in-situ morphological and chemical imaging of micro- and even nanosized objects, allowing materials to be understood better by studying their properties correlatively. However, SIMS images are prone to artefacts induced by the sample topography, as the sputtering yield changes with respect to the primary ion beam incidence angle. Knowing the exact sample topography is therefore crucial to understanding SIMS images. Moreover, using non-reactive primary ions (Ne+) produced in a gas field ion source (GFIS) allows SIMS imaging with an excellent lateral resolution of < 20 nm, but it comes with a lower ionization probability compared to reactive sources (e.g., Cs+), and due to small probe sizes only a limited number of atoms are sputtered, resulting in low signal statistics. This thesis focused first on taking advantage of high-resolution in-situ EM-SIMS platforms for applications in specific research fields and on going beyond traditional correlative 2D imaging workflows by developing adapted methodologies for 3D surface reconstruction correlated with SIMS (3D + 1). Applying this method to soil microaggregates and sediments allowed not only enhanced visualization but also a deeper understanding of materials' intrinsic transformation processes, in particular organic carbon sequestration in soil biogeochemistry. To gain knowledge of the influence of topography on surface sputtering, the change of the sputtering yield under light-ion bombardment (He+, Ne+) was studied experimentally on model samples for different ranges of incidence angles of the primary ion beam. These data were compared to Monte Carlo simulation results and fitted with existing sputtering model functions. We thus showed that these models, developed and studied for heavier ions (Ar+, Cs+), are also applicable to light ions (He+, Ne+). Additionally, an algorithm used to correct SIMS images with respect to topographical artefacts resulting from local changes of the sputtering yield was presented. Finally, the contribution of oxygen to positive SI yields was studied for non-reactive primary ions (25 keV Ne+) under high primary ion current densities (up to 10^20 ions/(cm2 · s)). It was shown that, in order to maximize and maintain a high ionization probability, oxygen needs to be provided continuously to the surface. Secondary ion signal enhancements of up to three orders of magnitude were achieved for silicon, opening the doors for SIMS imaging at both highest spatial resolution and high sensitivity.
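For context on the "sputtering model functions" mentioned above, one widely used empirical form for the angular dependence of the sputtering yield is a Yamamura-type expression; the thesis fits existing model functions to its data, not necessarily this exact one, so treat the following as a representative example quoted from general sputtering literature:

```latex
% Yamamura-type angular dependence of the sputtering yield:
% Y(0) is the yield at normal incidence, theta the incidence angle,
% and f and Sigma are fit parameters of the model.
\[
  \frac{Y(\theta)}{Y(0)} \;=\; x^{\,f}\, \exp\!\bigl[-\Sigma\,(x - 1)\bigr],
  \qquad x = \frac{1}{\cos\theta}
\]
```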
Fabiani, Ginevra
Doctoral thesis (2022)
The interaction between topography and climate has a crucial role in shaping forest composition and structure. Understanding how ecohydrological processes across the landscape affect tree performance becomes especially important with the expected reduction in water availability and increase in water demand, which could enhance the thermal and hydrologic gradient along the slope. Incorporating soil moisture variation and the groundwater gradient across the landscape has been found to improve the capacity to predict forest vulnerability and water fluxes in complex terrains. However, most of the information that can be retrieved by remote sensing techniques cannot capture small-scale processes. Therefore, hillslope- and catchment-scale studies can shed light on ecosystem responses to spatially and temporally variable growing conditions. In the present work, I investigated how hillslope position affects trees' physiological response to environmental controls (i.e., soil moisture, vapor pressure deficit, groundwater proximity to the surface) and tree water use in two hillslope transects (Chapters 1 and 3). Sap velocity and isotopic measurements were applied along two hillslope transects characterized by contrasting slope angle, climate, and species composition. We found that the different hydrological processes occurring at the two sites lead to contrasting physiological responses and water uptake strategies. In the Weierbach catchment, the lack of shallow downslope water redistribution through interflow leads to no substantial differences in vadose zone water supply between hillslope positions and, ultimately, no spatial differences in the trees' physiological response to environmental drivers. Furthermore, beech and oak trees displayed different stomatal control resulting from their water uptake strategies and physiology. In the Lecciona catchment, the greater soil moisture content at the footslope, promoted by the steep slope, led to more suitable growing conditions and a longer growing season in the piedmont zone. These results emphasize the strong interconnection between vegetation, climate, and hydrological processes in complex terrains, and the need to consider them as a whole to better understand future ecosystem responses to a changing climate. Additionally, the present work sheds new light on the complex interaction between sapwood and heartwood. In Chapter 2, I provide experimental evidence of water isotopic exchange between the two compartments in four tree species (Fagus sylvatica, Quercus petraea, Pseudotsuga menziesii, and Picea abies) characterized by different xylem anatomy and timing of physiological activity. While the two functional parts display a consistent difference in isotopic composition in conifers, they are characterized by more similar values in broadleaved species, suggesting a higher degree of water exchange. These results highlight the value of accounting for radial isotopic variation, which might otherwise lead to uncertainties concerning the origin of the extracted water in water uptake studies.

Nezhelskii, Maksim
Doctoral thesis (2022)
This thesis consists of three main chapters, which study different topics in financial economics. The first two chapters are applied theory studies of heterogeneous agents in continuous time, where the primary focus is the endogenous portfolio choice of risky assets by agents in a general equilibrium framework. While chapter 1 studies risky-asset allocation in general, trying to match inequality data, chapter 2 models housing choice and studies the effects of different shocks on the real estate market. These first two chapters are working papers written jointly with Christos Koulovatianos. Chapter 3 is an empirical paper on post-earnings announcement drift and how to better capture this anomaly using a bigger set of publicly available information; this third working paper is written jointly with Anna Ignashkina. Chapter 1 is entitled "Income and wealth inequality in heterogeneous-agent models in continuous time." In this chapter we analyse wealth inequality and how it is affected by the heterogeneity of risk-taking patterns. Wealth inequality in the United States has reached unprecedented levels over the last thirty years, and the puzzle of the heavy tail of the wealth distribution remains unresolved. We build a heterogeneous-agent model in continuous time with endogenous portfolio choice to test whether the risk-taking of the wealthy can explain the thick upper tail of the wealth distribution in US data. We incorporate the recent evidence of Guvenen et al. (2014) on the non-normality of the income process in our model. We find that asset holdings play an important role in explaining increased inequality, especially when accompanied by a non-normal income process.
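For readers unfamiliar with this model family, a generic continuous-time wealth-evolution equation with endogenous portfolio choice (a Merton-style specification offered here only as a hedged illustration of the class of models used, not the chapter's exact setup) is:

```latex
% Generic wealth dynamics with endogenous portfolio choice:
% a_t is wealth, y_t labor income, c_t consumption, r the risk-free rate,
% mu and sigma the risky asset's drift and volatility, theta_t the risky
% share, and B_t a standard Brownian motion.
\[
  da_t \;=\; \bigl[\, r\,a_t + \theta_t(\mu - r)\,a_t + y_t - c_t \,\bigr]\,dt
           \;+\; \sigma\,\theta_t\,a_t\,dB_t
\]
```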
In both general equilibrium and partial equilibrium settings we show that the non-normality of the income process contributes significantly to the formation of a convex risk-taking pattern with respect to income. We also find that the rise in the volatility of capital markets observed in the last thirty years can explain trends in inequality and interest rates. Chapter 2 is entitled "A Heterogeneous-Agent Model of Household Mortgages in Luxembourg: Responses to the Covid-19 Shock." As is well known, the Covid-19 pandemic lockdowns did not have the same impact on every worker. More social professions were affected more adversely by the lockdowns, experiencing severe income losses, while many services professions could continue working remotely as before, experiencing no income losses. In order to study the impact of these asymmetric idiosyncratic income shocks on household balance sheets in Luxembourg and on house prices, we calibrate a continuous-time heterogeneous-agent model of homeownership to pre- and post-Covid income data. We compute the transition dynamics of the net-worth distribution of households and study alternative scenarios of shocks to mortgage rates that may stem from overall credit market conditions and central-bank policies. Our general result is that the mortgage market in Luxembourg is resilient. Yet our model raises an alert for some vulnerable households and provides a tool for future policy evaluation. Chapter 3 is entitled "Information aggregation and post-earnings announcement drift." In this chapter we propose a new measure of surprise information that aggregates different signals coming together with earnings reports, complementing the standard earnings-surprise measure for the analysis of post-earnings announcement drift (PEAD). We find that new factors, such as revenue surprises and aggregated non-financial information available in earnings reports, are important determinants of post-earnings returns. Surprisingly, these new factors amplify, rather than mitigate, the PEAD anomaly. In dynamic portfolios, weekly returns to PEAD increase by 72 basis points if more financial metrics are taken into account, compared to the standard approach. Similarly, with analyses of textual metrics, we demonstrate that changes in the text are associated with a longer drift.

Mashhood, Muhammad
Doctoral thesis (2022)
The Additive Manufacturing (AM) process is a scalable, flexible, and promising way of fabricating parts. It forms a product of the desired design by depositing layer upon layer of material to print the object in 3D. It has a vast field of applications, from forming prototypes to the manufacturing of sophisticated parts for the space and aeronautical industries. It has even found its way into the domain of biological research and the development of implants and artificial organs. Depending upon the form of the raw material and the mechanism of printing it layer upon layer, there are different AM techniques for metal part production. One of them is Selective Laser Melting (SLM). This process uses raw material in the form of metal powder. To manufacture the product, this powder first undergoes melting by a moving laser.
Afterwards, it solidifies and joins the already solidified structure in the layer below. The laser is moved along the 2D cross-section of the design which has to be consolidated at the corresponding height. This process involves repetitive heating and cooling of the material, which causes sharp thermal gradients in the object. Because of such gradients, the material consistently undergoes thermal loading during manufacturing. Such thermal loading induces residual stress and permanent distortion in the manufactured part. These residual stresses and thermally induced distortions affect the quality of the part and cause a mismatch in dimensions between the final product and the required design. To reduce the waste of raw material and energy, it is therefore important to predict such problems beforehand. This research work presents the modelling of a numerical simulation platform which simulates the part-scale AM SLM manufacturing process and its cooling down in a virtual environment. The objective of establishing this platform was to evaluate the residual stress and thermal distortion. It included the modelling of thermal and structural analyses and their coupling to establish a multi-physics simulation tool. A transient thermal analysis with elastoplastic non-linearity in the material model was implemented to capture the permanent deformation behaviour of the material under thermal loading. The modelling was done with Finite Element Method (FEM) based open-source numerical analysis tools to incorporate flexibility of numerical modelling into the project. The modelling strategy for solidified material deposition was incorporated via an element activation technique. To synchronize the activation of the elements in the multi-physics FEM solver with the laser movement, interfacing with AM G-code based data was performed. With this modelling strategy, simulation experiments were conducted to analyse the evolution of thermal gradients, residual stress, and deformation during part manufacturing. The study also highlights the challenges of the applied element activation technique and its limitations, and examines how the predicted simulation results vary with different material deposition methods. Moreover, the numerical results of the established simulation platform were compared with experimental and validated simulation data to ensure their reliability. In this comparative study, the current numerical strategy replicated the trends of stress and deformation from physical experimental data and represented the expected material behaviour in the manufactured part. Additionally, the skills gained in results handling and validation during this study were also applied in another field of numerical modelling, namely a numerical analysis conducted for a blast furnace with the Computational Fluid Dynamics - Discrete Element Method (CFD-DEM) coupled multi-physics platform of the eXtended Discrete Element Method (XDEM). The current simulation platform, via its AM G-code machine data interface with the numerical solver, can help manufacturing engineers predict in advance the thermally caused residual stress and deformation in their AM SLM produced products.
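To illustrate the element activation idea described above, here is a toy sketch in which elements of a meshed part are activated layer by layer as the simulated build height (taken from assumed G-code layer data) advances; all structures, names, and numbers are hypothetical stand-ins, not the thesis's solver code.

```python
# Toy sketch of layer-by-layer "element activation" for an AM simulation.
# Elements above the current build height stay inactive until the laser
# pass reaches their layer; each activation step is followed by a
# (placeholder) coupled thermal-mechanical solve.
from dataclasses import dataclass

@dataclass
class Element:
    eid: int
    z_layer: int      # index of the printed layer the element belongs to
    active: bool = False

# Hypothetical mesh: 4 elements per layer, 5 layers.
mesh = [Element(eid=i, z_layer=i // 4) for i in range(20)]

# Layer indices as they might be parsed from G-code Z moves (assumed).
gcode_layers = [0, 1, 2, 3, 4]

def solve_coupled_step(active_elements):
    """Placeholder for the thermal + mechanical FEM solve of one step."""
    return f"solved step with {len(active_elements)} active elements"

for layer in gcode_layers:
    for elem in mesh:
        if elem.z_layer == layer:
            elem.active = True   # element is "deposited" at this layer
    active = [e for e in mesh if e.active]
    print(f"layer {layer}: {solve_coupled_step(active)}")
```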
On the other hand, given the identified challenges in the virtual depiction of material deposition, simulation developers may anticipate such limitations and make informed decisions in the choice of material deposition technique for their AM SLM process modelling. Moreover, with this simulation tool as a basic building block, there is also the opportunity to build multi-scale numerical techniques upon it and to add them to the multidisciplinary research work on Artificial Intelligence based digital twins.

Ul Haq, Fitash
Doctoral thesis (2022)
With the recent advances of Deep Neural Networks (DNNs) in real-world applications, such as Automated Driving Systems (ADS) for self-driving cars, ensuring the reliability and safety of such DNN-Enabled Systems (DES) emerges as a fundamental topic in software testing. Automatically generating new and diverse test data that lead to safety violations of DES presents the following challenges: (1) there can be many safety requirements to be considered at the same time; (2) running a high-fidelity simulator is often very computationally intensive; (3) the space of all possible test data that may trigger safety violations is too large to be exhaustively explored; (4) depending upon the accuracy of the DES under test, it may be infeasible to find a scenario causing violations for some requirements; and (5) DNNs are often developed by a third party, who does not provide access to internal information of the DNNs. In this dissertation, in collaboration with IEE sensing, we address the aforementioned challenges by providing scalable and practical automated solutions for testing Deep Learning (DL) models and systems. Specifically, we present the following. 1. We conduct an empirical study to compare offline testing and online testing in the context of Automated Driving Systems (ADS). We also investigate whether simulator-generated data can be used in lieu of real-world data, and whether offline testing results can be used to help reduce the cost of online testing. 2. We propose an approach to generate test data using many-objective search algorithms tailored for test suite generation, targeting DNNs with many outputs. We also demonstrate a way to learn conditions that cause the DNN to mispredict its outputs. 3. In order to reduce the number of computationally expensive simulations, we propose an automated approach, SAMOTA, to generate data for DNN-enabled automated driving systems using many-objective search and surrogate-assisted optimisation. 4. Since the environmental conditions (e.g., weather, lighting) often stay the same during a simulation, which can limit the scope of testing, we present an automated approach, MORLAT, to dynamically interact with the environment during simulation. MORLAT relies on reinforcement learning and many-objective optimisation.
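As a generic illustration of many-objective search for test generation (not the dissertation's SAMOTA or MORLAT implementations), the sketch below uses the pymoo library to minimize three hypothetical safety-margin objectives over simulator input parameters; the objective function is an invented stand-in for an expensive simulator run.

```python
# Generic many-objective search sketch with pymoo (NSGA-II here).
# The three objectives are invented stand-ins for safety margins an ADS
# simulator might report (smaller = closer to a violation).
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class ToyTestGeneration(ElementwiseProblem):
    def __init__(self):
        # x = [ego speed, fog density, pedestrian offset] (hypothetical)
        super().__init__(n_var=3, n_obj=3,
                         xl=[0.0, 0.0, -5.0], xu=[30.0, 1.0, 5.0])

    def _evaluate(self, x, out, *args, **kwargs):
        speed, fog, offset = x
        # Invented surrogate of simulator outputs, NOT a real ADS model:
        f1 = 10.0 - 0.3 * speed + 2.0 * (1.0 - fog)   # time-to-collision
        f2 = abs(offset) + 0.1 * speed                # lateral distance
        f3 = 5.0 * (1.0 - fog)                        # detection margin
        out["F"] = [f1, f2, f3]

res = minimize(ToyTestGeneration(), NSGA2(pop_size=40),
               ("n_gen", 30), seed=1, verbose=False)
print(f"{len(res.F)} Pareto-front test scenarios found")
```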
We evaluate our approaches using state-of-the-art deep neural networks and systems. The results show that our approaches perform statistically better than the alternatives.

Zhou, Yang
Doctoral thesis (2022)
Neutrophils are important actors of the immune system, particularly through the release of cytokines in the inflammatory environment. This process must be highly orchestrated to avoid cell overactivation and unwanted tissue damage. However, this elegant regulation is still only partially understood and poorly characterized. Increasing evidence over the years shows that Ca2+ is actively involved in cytokine secretion, but gaps remain in our knowledge of the relationship between these two phenomena. To this end, in this study, we investigated the Ca2+-dependent mechanisms underlying cytokine secretion in neutrophils. The differentiated myeloid cell line HL-60 (dHL-60) was used as a cell model, since primary neutrophils cannot be genetically modified. Our results showed that the mobilization of several cytokines, notably IL-8, was up-regulated in a time-dependent manner by a pro-inflammatory stimulus (fMLF). Intracellular flow cytometry staining provided evidence of the presence of preformed IL-8 as well as de novo synthesis of IL-8. Changes in intracellular Ca2+ levels, resulting from extracellular Ca2+ entry, are shown to be indispensable for efficient secretion of CCL2, CCL3, CCL4 and IL-8, even if an additional signal appears to be required for the release of IL-8. Ca2+-dependent cytokine secretion was associated with the store-operated Ca2+ entry (SOCE) mechanism and probably relies on the Ca2+ sensor STIM1. Since the Ca2+-binding proteins S100A8/A9 have previously been reported to be key actors in the regulation of neutrophil NADPH oxidase activation, we hypothesized that Ca2+ signals could be converted into cytokine secretion through intracellular S100A8/A9. Knockdown studies performed in mouse (Hoxb8 cells) and human (dHL-60) neutrophil models confirmed the involvement of S100A8/A9 in the regulation of cytokine secretion. Moreover, our data support the fact that a part of cytokine secretion occurs through the degranulation process. Finally, we investigated the post-transcriptional mechanism involved in the regulation of S100A8/A9 expression and thus in the control of cytokine secretion. Based on prediction network analysis, miR-132-5p was identified as a potential regulator of S100A8/A9. Stable overexpression of miR-132-5p in dHL-60 cells caused a strong inhibition of S100A8/A9 expression and IL-8 secretion, underlining the preponderant role of miR-132-5p-regulated S100A8/A9 expression in the pro-inflammatory response. To summarize, for the first time, we show that Ca2+-dependent cytokine secretion is associated with SOCE and is regulated by intracellular S100A8/A9, which is negatively modulated by miR-132-5p to prevent excessive neutrophil activation and host damage.

Gomez Ramos, Borja
Doctoral thesis (2022)
Midbrain dopaminergic neurons (mDANs) control voluntary movement, cognition, and reward behavior, and are implicated in human diseases such as Parkinson's disease (PD). Many transcription factors (TFs) controlling human mDAN differentiation have been described, but much of the regulatory landscape remains undefined. The location and low number of these cells in the brain have limited the application of epigenomic assays, as these usually require a high number of cells. Thanks to the emergence of induced pluripotent stem cell (iPSC) technology, differentiation protocols for the derivation of mDANs have been developed, making access to this neuronal subtype easier and facilitating its study. However, current protocols for the differentiation of human iPSCs towards mDANs produce a mixture of developmentally immature and incompletely specified cells together with more physiological cells. Differentiation protocols are based on developmental knowledge generated from animal studies, and the translation of this knowledge to humans appears to be incomplete. Therefore, a better understanding of human development is needed, encouraging the use of human-based models. A proper understanding of the epigenetic landscape of human mDAN differentiation will have direct implications for uncovering gene regulatory mechanisms, disease-associated variants (as most of them are in non-coding regions of the genome), and cell identity. In this study, a human tyrosine hydroxylase (TH) reporter iPSC line was used for the generation of time-series transcriptomic and epigenomic profiles from differentiating mDANs. TH is the rate-limiting enzyme for dopamine production and therefore a specific marker for mDANs. In the reporter line, mCherry was expressed under the control of the TH promoter, which allowed mDANs to be isolated from the cultures by FACS. Integration of time-point-specific chromatin accessibility and associated TF binding motifs with paired transcriptome profiles across 50 days of differentiation was performed using an adapted version of the EPIC-DREM pipeline. Time-point-specific gene regulatory interactions were obtained and served to identify putative key TFs controlling mDAN differentiation. Low-input ChIP-seq for histone H3 lysine 27 acetylation (H3K27ac) was performed to identify and prioritize key TFs controlled by super-enhancer regions. LBX1, NHLH1, and NR2F1/2 were found to be necessary for mDAN differentiation. Overexpression of either LBX1 or NHLH1 was also able to increase mDAN numbers. LBX1 was found to regulate cholesterol biosynthesis and translation, possibly via mTOR signaling. NHLH1 was found to be necessary for the induction of miR-124, a potent neurogenic microRNA. Interestingly, miR-124 and NHLH1 appear to be part of a positive feedback loop. Thus, the results from this study provide novel insights into the regulatory landscape of human mDAN differentiation. In addition, as the candidates identified by EPIC-DREM did not show selective expression in mDANs, the data produced were further explored for the identification of novel TFs with selective expression in these cells. ZFHX4 was selected as a relevant TF for mDANs that is also downregulated in PD patients. It presented high and specific expression during development and in adult mDANs from human brains. Depletion of ZFHX4 during differentiation affected mDAN neurogenesis. However, CRISPR-mediated overexpression of ZFHX4 during differentiation did not affect mDAN numbers.
Transcriptomic analysis revealed a role of ZFHX4 in controlling the cell cycle and cell division in mDANs. ZFHX4 seems to regulate the cell cycle through interaction with E2F TFs and the NuRD complex, as these proteins have also been associated with this function and appeared in the analysis performed. Overall, the present study provides a novel profile of mDANs during differentiation that can be used for many applications beyond the ones presented here, such as the identification of disease-associated variants affecting these neurons. Incorporating epigenetic information into the current transcriptomic knowledge increases the understanding of this neuronal subtype and uncovers important pathways involved in the biology of these cells, most probably with implications for disease.

Spignoli, Lorenzo
Doctoral thesis (2022)

Kyriakis, Dimitrios
Doctoral thesis (2022)
Using RNA sequencing, we can examine distinctions between different cell types and capture a moment in time of the dynamic activities taking place inside a cell. Researchers in fields like developmental biology have quickly embraced this technology as it has improved over the past few years, and there are now many single-cell RNA sequencing datasets accessible. A surge in the development of computational analysis techniques has occurred along with the invention of technologies for generating single-cell RNA sequencing data. In my thesis, I examine computational methods and tools for single-cell RNA sequencing data analysis in three distinct projects. In the fetal brain project, I tried to decipher the complexity of the human brain and its development, and the link between development and neuropsychiatric diseases, during early fetal brain development. I provide a unique resource of fetal brain development across a number of functionally distinct brain regions, in a brain region-specific manner, at single-nucleus resolution. In total, I retrieved 50,937 single nuclei from four individual time points (early: gestational weeks 18 and 19; late: gestational weeks 23 and 24) and four distinct brain regions (cortical plate, hippocampus, thalamus, and striatum). In my dissertation, I also tried to investigate the underlying mechanisms of Parkinson's disease (PD), the second-most prevalent neurodegenerative disorder, characterized by the loss of dopaminergic neurons (mDA) in the midbrain. I examined the disease process using single cells of mDA neurons developed from human induced pluripotent stem cells (hiPSCs) expressing the ILE368ASN mutation in the PINK1 gene, at four different maturation time points. Differential expression analysis resulted in a potential core network of PD development which linked known genetic risk factors of PD to mitochondrial and ubiquitination processes. In the final part of my thesis, I perform an analysis of a dataset from brain biopsies from patients with intracerebral hemorrhage (ICH) stroke. In this project, I tried to investigate the dynamic spectrum of polarization of immune cells to pro-/anti-inflammatory states, and to identify markers that could potentially be used to predict the outcome of ICH patients.
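A typical single-cell RNA sequencing analysis of the kind surveyed here can be sketched with the scanpy library; the following minimal pipeline (normalization, dimensionality reduction, clustering) is a generic illustration with commonly used parameter values, not the thesis's actual workflow.

```python
# Generic scRNA-seq analysis sketch with scanpy: normalize, select
# variable genes, reduce dimensionality, build a neighbor graph, cluster.
# Parameter values are common defaults, assumed for illustration.
import scanpy as sc

adata = sc.datasets.pbmc3k()                 # public demo dataset

sc.pp.filter_cells(adata, min_genes=200)     # basic quality filtering
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4) # depth normalization
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]

sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata)                          # graph-based clustering
sc.tl.umap(adata)                            # 2D embedding for plots

print(adata.obs["leiden"].value_counts())    # cells per cluster
```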
Overall, my thesis discusses a wide range of single-cell RNA sequencing tools and methods, as well as how to make sense of real datasets using already-developed tools. These discoveries may eventually lead to a more thorough understanding of Parkinson's disease, ICH stroke, and psychiatric diseases, and may facilitate the creation of novel treatments.

Soriano Baguet, Leticia
Doctoral thesis (2022)
Th17 cells are a subset of effector CD4+ T cells essential for protection against extracellular bacteria and fungi. At the same time, Th17 cells have been implicated in the progression of autoimmune diseases, including multiple sclerosis, rheumatoid arthritis and psoriasis. Effector T cells require energy and building blocks for their proliferation and effector function. To that end, these cells switch from oxidative and mitochondrial metabolism to fast and short pathways such as glycolysis and glutaminolysis. Pyruvate dehydrogenase (PDH) is the central enzyme connecting cytoplasmic glycolysis to the mitochondrial tricarboxylic acid (TCA) cycle. The specific role of PDH in inflammatory Th17 cells is unknown. To unravel the role of this pivotal enzyme, a mutant mouse line in which T cells do not express the catalytic subunit of the PDH complex was generated using the Cre-Lox recombination system. In this study, PDH was shown to be essential for the generation of an exclusive glucose-derived citrate pool needed for the proliferation, survival, and effector functions of Th17 cells. In vivo, mice harboring a T cell-specific deletion of PDH were less susceptible to experimental autoimmune encephalomyelitis (EAE), a mouse model of multiple sclerosis, showing lower disease burden and increased survival. In vitro, the absence of PDH in Th17 cells increased glutamine and glucose uptake, as well as glycolysis and glutaminolysis. Similarly, lipid uptake was increased through CD36 in a glutamine-mTOR-axis-dependent manner. On the contrary, the TCA cycle was impaired, interfering with oxidative phosphorylation (OXPHOS) and causing levels of cellular citrate to remain critically low in mutant Th17 cells. Citrate is the substrate of ATP citrate synthase (ACLY), an enzyme responsible for the generation of acetyl-CoA, which is essential for lipid synthesis and for histone acetylation, crucial for the transcription process. In line with this, PDH-deficient Th17 cells showed reduced expression of Th17 signature genes. Notably, increasing cellular citrate in PDH-deficient Th17 cells by the addition of acetate restored their metabolism and function. PDH was thus identified as a pivotal enzyme for the maintenance of a metabolic feedback loop within central carbon metabolism that can be of relevance for therapeutically targeting Th17 cell-driven autoimmunity.

Manisekaran, Ahilan
Doctoral thesis (2022)
Damodaran, Aditya Shyam Shankar
Doctoral thesis (2022)
Privacy preserving protocols typically involve the use of Zero Knowledge (ZK) proofs, which allow a prover to prove to a verifier that a certain statement holds true, without revealing the witness (the secret information that allows one to verify whether said statement holds true). This mechanism allows users to participate in such protocols while preserving the privacy of sensitive personal information. In some protocols, the need arises for the reuse of the information (or witnesses) used in a proof; in other words, the witnesses used in a proof must be related to those used in previous proofs. We propose Stateful Zero Knowledge (SZK) data structures, primitives that allow a user to store state information related to witnesses used in proofs and then prove subsequent facts about this information. Our primitives also decouple state information from the proofs themselves, allowing for modular protocol design. We provide formal definitions for these primitives using a composable security framework, and go on to describe constructions that securely realize these definitions. These primitives can be used as modular building blocks to attenuate the security guarantees of existing protocols in the literature, to construct privacy preserving protocols that allow for the collection of statistics about secret information, and to build protocols for other schemes that may benefit from this technique, such as those that involve access control and oblivious transfer. We describe several such protocols in this thesis. We also provide computational cost measurements for our primitives and protocols by way of implementations, in order to show that they are practical for large data structure sizes. We finally provide a notation and a compiler that takes as input a ZK proof represented in said notation and outputs a secure SZK protocol, allowing for a layer of abstraction so that practitioners may specify the security properties and the data structures they wish to use and be presented with a ready-to-use implementation, without needing to deal with the theoretical aspects of these primitives; this essentially bridges the gap between theoretical cryptographic constructions and their implementation. This thesis conveys the results of the FNR CORE Junior project Stateful Zero Knowledge.
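For readers unfamiliar with ZK proofs of knowledge, the toy below is a classic Schnorr-style proof of knowledge of a discrete logarithm, made non-interactive with a Fiat-Shamir hash; it is a textbook illustration with insecurely small demo parameters, not one of the SZK primitives proposed in this thesis.

```python
# Toy Schnorr proof of knowledge of x such that y = g^x mod p,
# made non-interactive via a Fiat-Shamir challenge. The parameters
# below are tiny demo values (g = 2 has prime order q = 11 mod p = 23);
# real deployments use large, standardized groups.
import hashlib
import secrets

p, q, g = 23, 11, 2          # g generates a subgroup of prime order q

def prove(x):
    y = pow(g, x, p)                       # public value for witness x
    r = secrets.randbelow(q)               # prover's random nonce
    t = pow(g, r, p)                       # commitment
    c = int(hashlib.sha256(f"{t}|{y}".encode()).hexdigest(), 16) % q
    s = (r + c * x) % q                    # response
    return y, t, s

def verify(y, t, s):
    c = int(hashlib.sha256(f"{t}|{y}".encode()).hexdigest(), 16) % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # g^s == t * y^c

x = secrets.randbelow(q)                   # the witness stays secret
print(verify(*prove(x)))                   # True: statement proven
```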
Peracchi, Silvia
Doctoral thesis (2022)

Krieger, Bastian
Doctoral thesis (2022)
This dissertation investigates, in three essays, the innovation effects of three public policies with large economic relevance, high political priority, and increasing scientific coverage. The first essay examines the role of including environmental selection criteria in public procurement tenders for the introduction of more environmentally friendly products, services, and processes. The implementation of competitive large-scale university funding programs and their heterogeneous effects on regional firms' innovativeness are covered in the second essay. The third essay analyzes the liberalization of trade in foreign knowledge services and its relevance to the innovation activities of domestic firms.

Bai, Peiru
Doctoral thesis (2022)
As a result of rapid globalisation, transnational families move, settle, and integrate into new linguistic and cultural environments outside their countries of origin. This thesis studies language policies within three transnational families of Chinese origin in the Grand Duchy of Luxembourg, where multilingualism constitutes both a challenge and an opportunity for their integration into Luxembourgish society. Between heritage and integration, intercultural confrontation does not come without tension. The families must, on the one hand, maintain their language of origin (Chinese) and, on the other, learn the school languages (Luxembourgish, French, German) and English within the Luxembourgish education system. Our objective is to study the parents' language and educational ideologies, the development of plurilingualism in the children, and the identity questions that merit reflection in a multilingual and intercultural environment. How do parents conceptualise the development of their children's plurilingualism in Luxembourg? What are the parents' considerations and aspirations regarding their children's education in the broad sense in a multilingual and intercultural context? How, and to what extent, are the construction and negotiation of the language ideologies of the three families' members articulated across the individual, family, and social levels? We opted for a qualitative, ethnolinguistic approach. Semi-structured interviews, participant observation, and recordings of family interactions were the main tools for collecting data in the field. Thematic analysis, case study, and cross-case analysis constituted our analytical approach, with an emphasis on identifying and understanding language and educational ideologies. The data analysis allows us to conclude that the interplay between linguistic heritage, conformity to school requirements, and the valorisation of English as a lingua franca proves all the more complex because the Chinese families have to deal with more than one language in a multilingual environment such as Luxembourg's. First, the Chinese parents demonstrate a firm commitment to maintaining standard Chinese in their children, but with varied expectations as to the competences to be acquired. Second, faced with the multitude of languages at play, they show a clear ambivalence in their conception of the role and priority of the different languages, as well as in their aspiration to plurilingualism versus a monolingual mindset. And third, their appreciation of the languages' values and their construction of the languages' roles rest essentially on pragmatism and identity affirmation. The study also highlights that the parents' language and educational ideologies at the individual level are articulated with the social environment.
Ce travail de thèse contribue à faire avancer nos connaissances sur les politiques linguistiques familiales dans les familles transnationales, notamment celles d’origine non occidentale dans un pays occidental marqué par le multilinguisme. [less ▲] Detailed reference viewed: 68 (4 UL)![]() Shang, Lan ![]() Doctoral thesis (2022) Detailed reference viewed: 48 (28 UL)![]() Acharya, Kishor ![]() Doctoral thesis (2022) Atmospheric Pressure Plasma has been used to enhance and/or initiate the Chemical Vapour Deposition (AP-PECVD) to deposit thin films or functional layer coatings over a large surface area on a large range ... [more ▼] Atmospheric Pressure Plasma has been used to enhance and/or initiate the Chemical Vapour Deposition (AP-PECVD) to deposit thin films or functional layer coatings over a large surface area on a large range of substrates. Now an ability to localise the AP-PECVD coating on an area of interest and control the deposition’s dimension showed its potential application as a viable technique to perform Additive Manufacturing (AM). Additive Manufacturing (AM) is a bottom-up approach in which 2-D patterning or 3-D structures are built using a layer-by-layer deposition. AM allowed easy design optimization and quickly provided the customized parts on demands, thus making itself a very popular technique in the mainstream manufacturing process. As such, it has a wide application in automotive, optics, electronics, aeronautics, medical and biotechnology fields. However, the existing AM printing techniques have some limitations regarding high-resolution printing deposition in a wide variety of substrates and very often get restricted to the types of precursors that could be printed. Whereas, due to the high energetic/reactive species in non-thermal plasma, the AP-PECVD deposition has been obtained using a wide range of precursors on a versatile surface. Thus, there has been a growing interest in performing an area selective localised AP-PECVD coating, mainly by adapting the design of the PECVD reactor. Hence, this thesis aims to design, optimize and study a one-step mask-free AP-PECVD plasma process that could locally deposit the material of interest with high precision to perform AM. In the thesis, the technical approach undertaken by the home-built prototype “plasma torch” is to decouple the plasma generator annular tube and the precursor injector central capillary. This approach has allowed a way to tune the diameter of the deposited dot by changing the dimension of the precursor injector, which has been demonstrated by the deposition of the micro-dot as small as 400µm in diameter. Further, the flexibility to move the capillary tube without significant changes in the plasma torch's overall geometry has also allowed for selectively injecting the precursor (Methylmethacrylate, MMA) in the spatial plasma post-discharge region. Thanks to this setting, the deposited dot has high retention of monomer's chemistry (functional group) and unprecedented molecular weights (oligomeric chain up to 18 MMA units). Hence, initially, a novel area selective AP-PECVD plasma torch design has been demonstrated, and its performance has been defined to obtain the micro resolution coating. During the research work, gas flow rates have been identified as a crucial parameter in obtaining the localised coating; three kinetic regimes with different coating morphology have been discovered. 
By performing a thorough computational fluid dynamics (CFD) simulation of the torch phenomena, it was possible to establish a parallel between the fluid behaviour and the deposition size. The deposition was found to be confined in a zone created by the dynamical behaviour of the gas, i.e., re-circulating vortices between the torch and the substrate. The gas flow rate was therefore used to tune the diameter of the confinement zone, which in turn changed the diameter of the deposited dot. The gas flow dynamics affect how the involved species, i.e., reactive plasma species, precursor molecules and open air, interact and are distributed on the surface of the substrate. When organosilicon precursors with or without vinyl bonds and/or ethoxy groups are used, different deposition chemistries and deposition patterns result. The correlation between the deposition patterns and the mass fraction distribution of the involved species was obtained thanks to CFD simulations performed in parallel. Further, likely deposition mechanisms are suggested and discussed: "vinyl group opening by free radicals" for vinyl-containing precursors, resulting in silicon oxycarbide-like (SiOxCyH) deposits, and Reactive Oxygen Species (ROS)-induced "fragmentation and adsorption" for siloxane-containing precursors, resulting in silica-like (SiOx) deposits. The understanding gained from this systematic case study underlines the importance of reactive plasma species in the underlying deposition mechanisms; hence, tuning and tailoring their distribution can alter the chemical nature of the deposits and their patterns. Overall, this thesis provides insight into area-selective AP-PECVD coating (plasma printing) and demonstrates that plasma technology is a viable option for additive manufacturing. The findings are helpful both for designing AP-PECVD plasma torches and for selecting precursors for the desired organic/inorganic deposits. Thanks to the insight gained during the thesis work, the home-designed plasma torch prototype has been upgraded for implementation in a commercial 3-D printer.

Ramirez Sanchez, Omar. Doctoral thesis (2022).
With a record power conversion efficiency of 23.35% and a low carbon footprint, Cu(In,Ga)Se2 remains one of the most suitable solar energy materials to assist in the mitigation of the climate crisis we are currently facing. The progress seen in the last decade of Cu(In,Ga)Se2 advancement has been made possible by the development of postdeposition treatments (PDTs) with heavy alkali metals. PDTs are known to affect both surface and bulk properties of the absorber, resulting in an improvement of the solar cell parameters: open-circuit voltage, short-circuit current density and fill factor. Even though the beneficial effects of PDTs are not questioned, the underlying mechanisms responsible for the improvement, mainly the one related to the open-circuit voltage, are still under discussion.
Although such improvement has been suggested to arise from a suppression of bulk recombination, the complex interplay between alkali metals and grain boundaries has made it difficult to discern what exactly in the bulk material profits most from the PDTs. In this regard, this thesis investigates the effects of PDTs on the bulk properties of Cu(In,Ga)Se2 single crystals, i.e., it studies the effects of alkali metals in the absence of grain boundaries. Most of the presented analyses are based on photoluminescence, since this technique gives access to information relevant for solar cells, such as the quasi-Fermi level splitting and the density of tail states, directly from the absorber layer and without the need for complete devices. This work is a cumulative thesis of three scientific publications obtained from the results of the different studies carried out. Each publication aims at answering important questions related to the intrinsic properties of Cu(In,Ga)Se2 and the effects of PDTs. The first publication presents a thorough investigation of the effects of a single heavy alkali metal species on the optoelectronic properties of Cu(In,Ga)Se2. In polycrystalline absorbers, the effects of potassium PDTs in the absence of sodium had previously been attributed to the passivation of grain boundaries and donor-like defects. The results obtained here, however, suggest that potassium incorporated from a PDT can act as a dopant in the absence of grain boundaries and yield an improvement in quasi-Fermi level splitting of up to 30 meV in Cu-poor CuInSe2, where a type inversion from n to p is triggered upon potassium incorporation. This observation led to the second paper, which takes a closer look at how the carrier concentration and electrical conductivity of alkali-free, Cu-poor CuInSe2 are affected by the incorporation of gallium in the solid solution Cu(In,Ga)Se2. The results suggest that the n-type character of CuInSe2 can persist until the gallium content reaches a critical concentration of 15-19%, where the n-to-p transition occurs. A model based on the trends in the formation energies of donor- and acceptor-like defects is presented to explain the experimental results. The conclusions drawn in this paper shed light on why CuGaSe2 cannot be doped n-type like CuInSe2. Since a decreased density of tail states, resulting from reduced band bending at grain boundaries, had previously been suggested as the mechanism behind the improvement of the open-circuit voltage after postdeposition treatments, the third publication focuses on how compositional variations and alkali incorporation affect the density of tail states in Cu(In,Ga)Se2 single crystals. The results suggest that increasing the copper content and reducing the gallium content leads to a reduction of tail states. Furthermore, tail states in single crystals are affected by the addition of alkali metals in much the same way as in polycrystalline absorbers, which demonstrates that tail states arise from grain-interior properties and that the role of grain boundaries is not as relevant as previously thought. Finally, an analysis of the voltage losses in high-efficiency polycrystalline and single-crystalline solar cells suggested that the doping effect caused by the alkalis affects the density of tail states through the reduction of electrostatic potential fluctuations, which are reduced due to a decrease in the degree of compensation.
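For orientation, the standard relations used in this kind of photoluminescence analysis (textbook material, not specific to this thesis) connect the emitted photon flux to the quasi-Fermi level splitting, which in turn bounds the open-circuit voltage:

```latex
% Generalized Planck law (Wuerfel), Boltzmann limit E - \Delta\mu \gg k_B T:
% spectral photon flux emitted by an absorber with absorptivity a(E),
% temperature T and quasi-Fermi level splitting \Delta\mu.
\phi_{\mathrm{PL}}(E) \;=\; a(E)\,\frac{2\pi E^{2}}{h^{3}c^{2}}
  \exp\!\left(-\frac{E-\Delta\mu}{k_{B}T}\right)
% The splitting sets an upper bound on the achievable open-circuit voltage:
q\,V_{\mathrm{OC}} \;\le\; \Delta\mu
```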
By taking the effect of doping on tail states into account, the entirety of the VOC losses in Cu(In,Ga)Se2 is described. The findings presented in this thesis explain the link between tail states and open-circuit voltage losses and demonstrate that the effects of alkali metals in Cu(In,Ga)Se2 go beyond grain boundary passivation. The results shed light on the understanding of tail states, VOC losses and the intrinsic properties of Cu(In,Ga)Se2, a fundamental step for this technology towards the development of more efficient devices.

Kamlovskaya, Ekaterina. Doctoral thesis (2022).
The genre of Australian Aboriginal autobiography is a literature of significant socio-political importance, with authors sharing a history different from the one previously asserted by the European settlers, which ignored or misrepresented Australia's First People. While a number of studies have looked at works belonging to this genre from various perspectives, Australian Indigenous life writing has never been approached from a digital humanities point of view, which, given the constant development of computer technologies and the growing availability of digital sources, offers humanities researchers many opportunities to explore textual collections from various angles. With this research work I contribute to closing the above-mentioned research gap and discuss the results of an interdisciplinary research project within the scope of which I created a bibliography of published Australian Indigenous life writing works, designed and assembled a corpus, and created word embedding models of this corpus, which I then used to explore the discourses of identity, land, sport, and foodways, as well as gender biases present in the texts, in the context of postcolonial literary studies and Australian history. Studying these discourses is crucial for gaining a better understanding of contemporary Australian society as well as the nation's history. Word embedding modelling has recently been used in digital humanities as an exploratory technique to complement and guide traditional close reading approaches, justified by its potential to identify word use patterns in a collection of texts. In this dissertation, I provide a case study of how word embedding modelling can be used to investigate humanities research questions and reflect on the issues which researchers may face while working with such models, approaching various aspects of the research project from the perspectives of digital source and tool criticism. I demonstrate how the word embedding model of the analysed corpus represents discourses through relationships between word vectors that reflect the historical, political, and cultural environment of the authors and some unique experiences and perspectives related to their racial and gender identities. I show how the narrators reconstruct the analysed discourses to achieve the main goals of Australian Indigenous life writing as a genre: reclaiming identity and rewriting history.
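A minimal sketch of this kind of word-embedding exploration, using the gensim library (the thesis's exact tooling and parameters are not specified here; the corpus content and query term below are placeholders):

```python
from gensim.models import Word2Vec

# Placeholder corpus: one list of tokens per passage of the life-writing
# texts (real preprocessing would lowercase, tokenize, clean, etc.).
corpus = [
    ["country", "land", "family", "mission"],
    ["identity", "language", "country", "story"],
    # ... thousands more tokenized passages
]

# Train a skip-gram Word2Vec model on the corpus.
model = Word2Vec(sentences=corpus, vector_size=100, window=5,
                 min_count=1, sg=1, seed=42)

# Explore a discourse by asking which words appear in similar contexts.
print(model.wv.most_similar("land", topn=5))
```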
Fixemer, Sonja. Doctoral thesis (2022).
Worldwide, more than 55 million people suffer from incurable age-related neurodegenerative diseases and associated dementia, including Alzheimer's disease (AD) and dementia with Lewy bodies (DLB). AD and DLB patients share memory impairment symptoms but present specific deterioration patterns of the hippocampus, a brain region essential for memory processes. Notably, the CA1 subregion is more vulnerable to atrophy in AD patients than in DLB patients; however, it remains unclear which factors contribute to this differential subregional vulnerability. On the neuropathological level, both AD and DLB patients frequently present an overlap of misfolded protein pathologies, with AD-typical pathologies including extracellular amyloid-β (Aβ) plaques and neurofibrillary tangles (NFTs) of hyperphosphorylated tau protein (pTau), and DLB-typical pathological inclusions of phosphorylated α-synuclein (pSyn). Recent genome-wide association studies (GWAS) have revealed many genetic AD risk factors that are directly linked to microglia and suggest that they play an active role in pathology. However, how microglia alterations are linked to local pathological environments, and which role microglia subpopulations play in the specific vulnerability patterns of the hippocampus in AD and DLB, remain poorly understood. This PhD thesis addressed two main aspects of microglia alterations in the post-mortem hippocampus of AD and DLB patients. The first study provided a very detailed, 3D characterization of microglia alterations at the individual cell level across the CA1, CA3 and DG/CA4 subfields, and their local association with concomitant pTau, Aβ and pSyn loads across AD and DLB. We show that the co-occurrence of these three types of misfolded proteins is frequent and follows specific subregional patterns in both diseases, but is more severe in AD than in DLB cases. Our results suggest that high burdens of pTau and pSyn associated with increased microglial alterations could contribute to the CA1 vulnerability in AD. Our second study provided a morphological and molecular characterization of a type of microglia accumulation referred to as coffin-like microglia (CoM), using high- and super-resolution microscopy as well as digital spatial profiling. We showed that CoM were enriched in the pyramidal layer of CA1/CA2 and were not linked to Aβ plaques, but occasionally engulfed or contained NFTs or intraneuronal granular pSyn inclusions. Furthermore, CoM are not surrounded by hypertrophic reactive astrocytes like plaque-associated microglia (PAM), but rather by dystrophic astrocytic processes. We found that the proteomic and transcriptomic signatures of CoM point toward cellular senescence and immune cell infiltration, while PAM signatures indicate oxido-reductase activity and lipid degradation. Our studies provide new insights into the complex signatures of human microglia in the hippocampus in age-related neurodegenerative diseases.
El Orche, Fatima Ezzahra. Doctoral thesis (2022).

Proverbio, Daniele. Doctoral thesis (2022).
From population collapses to cell-fate decisions, critical phenomena are abundant in complex real-world systems. Among the modelling theories that address them, the critical transitions framework has gained traction for its aim of determining classes of critical mechanisms and identifying generic indicators to detect and signal them ("early warning signals"). This thesis contributes to this research field by elucidating its relevance within the systems biology landscape, by providing a systematic classification of the leading mechanisms for critical transitions, and by assessing the theoretical and empirical performance of early warning signals. The thesis thus bridges general results concerning the critical transitions field, potentially applicable in multidisciplinary contexts, with specific applications in biology and epidemiology, towards the development of sound risk monitoring systems.

Jobim Fischer, Vinicius. Doctoral thesis (2022).
Sexual disorders are characterized by difficulties in the ability to respond sexually or to obtain sexual pleasure. Epidemiological data indicate that about 40-45% of adult women and 20-30% of adult men have at least one manifest sexual disorder during their lifetime. The etiology of sexual disorders is multifactorial, encompassing physiological, affective, interpersonal, psychological and context-dependent factors, which may predispose to, precipitate, or maintain the sexual dysfunction. Psychological and emotional factors may either contribute to the development of sexual problems or be a consequence thereof. Emotional reactions and thoughts during sexual activity can also affect sexual function. With respect to emotions, different patterns have been reported for individuals with and without sexual dysfunction. Similarly, the difficulty or inability to face experiences or process emotions adequately, also termed emotion dysregulation, has been associated with coping strategies detrimental to health and with a variety of mental disorders. This context suggests that emotion regulation may be important for sexual health, and most likely also in the treatment of sexual problems. Deepening our knowledge of the role of emotion regulation in sexual health would make it possible to have an impact both on sexuality research and at the clinical level, by providing evidence on which to base future therapy programmes for people with sexual concerns. The overall objectives of the SHER project are: (a) to determine the associations between emotion regulation and sexual health, and (b) to develop and evaluate an internet-based intervention protocol designed to improve emotion regulation skills for people with sexual disorders. The project employed different methods, namely a literature scoping review, a cross-sectional online survey, and a randomized controlled trial intervention with a three-month follow-up.
The dissertation presents three studies (four manuscripts) from the SHER project. The first study aimed at reviewing the existing literature on the effects of emotion regulation on sexual function and satisfaction. After searching different databases and applying the inclusion and exclusion criteria, twenty-seven articles were analyzed. The review concluded that emotion regulation difficulties were associated with poorer sexual health outcomes, difficulties in the sexual response cycle, and overall lower sexual satisfaction. In addition, the few experimental studies (in laboratory settings or intervention trials) found positive effects of promoting emotion regulation change on sexual function and satisfaction. The aim of the second study was to determine whether distinct profiles in terms of preferred emotion regulation strategies are differentially associated with sexual and mental health. The sample consisted of 5436 participants aged between 18 and 77 years (M = 25.80, SD = 6.96). A gender-stratified cluster analysis was performed to classify individuals according to their scores on scales measuring emotion regulation strategies (cognitive reappraisal and emotional suppression), sexual health (assessed using gender-specific self-report questionnaires), and anxiety and depression symptoms. For both men and women, the results showed a four-cluster solution (see the sketch after this paragraph): low reappraisal and high suppression, n = 1243; high reappraisal and low suppression, n = 1695; high reappraisal and high suppression, n = 1425; low reappraisal and low suppression, n = 1073. Better sexual and mental health scores were found for participants with high cognitive reappraisal and low expressive suppression scores. High expressive suppression was associated with higher anxiety and depression scores and worse sexual health. We concluded by suggesting that the provision of care for sexological patients should include an assessment of their emotion regulation abilities, and that emotion regulation training interventions fostering reappraisal should be offered when appropriate.
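A minimal sketch of this kind of cluster analysis (the exact algorithm used in the study is not specified here; k-means on standardized reappraisal and suppression scores is one common choice, and the data below are simulated):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated scores: column 0 = cognitive reappraisal, column 1 = expressive
# suppression (stand-ins for the questionnaire scales used in the study).
scores = rng.normal(loc=[4.5, 3.0], scale=1.0, size=(5436, 2))

# Standardize, then partition into four profiles as in the reported solution.
X = StandardScaler().fit_transform(scores)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Inspect cluster centers to name the profiles (e.g., high reappraisal /
# low suppression), mirroring the four-cluster interpretation above.
for k in range(4):
    print(k, X[labels == k].mean(axis=0), (labels == k).sum())
```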
The third study aimed at developing and testing an internet-based emotion regulation training for sexual health (TREpS). First, an intervention protocol was established and published, indicating the objectives, procedures and expected outcomes of the intervention. A second manuscript then reported the findings of the intervention. The intervention was composed of eight modules, delivered weekly. The module contents encompassed psychoeducation on sexual health and emotion regulation, a set of different strategies to deal with emotions (relaxation techniques, cognitive flexibility, non-judgmental awareness, self-acceptance and compassion, emotion analysis), and sexual emotional exposures. Participants were assessed at three time points: baseline, end of the intervention, and three months after the end of the intervention. Initially, 60 participants met the inclusion criteria and were allocated to two groups. Nonetheless, contrary to expectations, the intervention yielded a very large dropout rate (83.4%). In this circumstance, some changes to the study protocol were made, e.g., the reduction of the follow-up interval and the complete assessment of the initial waitlist-control group at the three time points. Since the adherence rate was very low, the gathered data were insufficient to investigate treatment effects. Among the participants who completed the intervention, large and moderate effect sizes were observed between assessments for emotion regulation, depression, lubrication, orgasm, and thoughts of sexual failure and abuse during sexual activity.

Kremer, Paul. Doctoral thesis (2022).
Improvised Explosive Devices (IEDs) are an ever-growing worldwide threat. The disposal of IEDs is typically performed by experts of the police or the armed forces with the help of specialized ground Ordnance Disposal Robots (ODRs). Unlike aerial robots, those ODRs have poor mobility, and their deployment in complex environments can be challenging or even impossible. Endowed with manipulation capabilities, aerial robots can perform complex manipulation tasks akin to ground robots. This thesis leverages the manipulation skills and the high mobility of aerial robots to perform aerial disposal of IEDs. Being, in essence, an aerial manipulation task, this work presents numerous contributions to the broader field of aerial manipulation. The thesis presents the mechatronic concept of an aerial ODR and a high-level view of the fundamental building blocks developed throughout this work. Starting with the system dynamics, a new hybrid modeling approach for aerial manipulators (AMs) is proposed that provides the closed-form dynamics of any given open-chain AM. Next, a highly integrated, lightweight universal gripper (called TRIGGER), customized for aerial manipulation, is introduced to improve grasping performance in unstructured environments. The gripper (attached to a multicopter) is tested under laboratory conditions by performing a pick-and-release task. Finally, an autonomous grasping solution is presented alongside its control architecture featuring computer vision and trajectory optimization. To conclude, the grasping concept is validated in a simulated IED disposal scenario.

Liu, Bowen. Doctoral thesis (2022).
Advances in networking and hardware technology have made the design and deployment of Internet of Things (IoT) and decentralised applications a trend. For example, the fog computing concept and its associated edge computing technologies push computation to the edge, so that data aggregation can be avoided to some extent. This naturally brings benefits such as efficiency and privacy, but on the other hand it forces data analysis tasks to be carried out in a distributed manner. Hence, we focus on establishing a secure channel between an edge device and a server and on performing data analysis with privacy protection. In this thesis, we first studied the state-of-the-art Key Exchange (KE) and Authenticated Key Exchange (AKE) protocols in the literature, including security properties, security models for various security properties, and existing KE and AKE schemes of the pre-quantum and post-quantum eras with varied authentication factors. As a result of the above research, a novel IoT-oriented security model for AKE protocols is introduced.
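For readers unfamiliar with the KE primitive that AKE protocols build on, the following minimal sketch (illustrative only, not a protocol from the thesis) runs an unauthenticated Diffie-Hellman exchange over Curve25519 with the pyca/cryptography library; an AKE would additionally authenticate the two parties, e.g. via multiple authentication factors:

```python
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ephemeral key pair.
device_priv = x25519.X25519PrivateKey.generate()
server_priv = x25519.X25519PrivateKey.generate()

# They exchange public keys and derive the same shared secret.
secret_d = device_priv.exchange(server_priv.public_key())
secret_s = server_priv.exchange(device_priv.public_key())
assert secret_d == secret_s

# Derive a session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"iot-session").derive(secret_d)
```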
In addition to general security properties, we also define several detailed security games for the desired properties of perfect forward secrecy, key compromise impersonation resilience and server compromise impersonation resilience. Furthermore, studying the current multi-factor AKE protocols in the literature inspired us to use big data in the IoT setting for authentication and session key establishment purposes. With this in mind, we propose a big-data-facilitated two-party AKE protocol for IoT systems that uses big data as one of the authentication factors. We also propose a modular framework for constructing IoT-server AKE in the post-quantum setting; it is flexible in that it can integrate a public key encryption and a KE component. In addition, we notice that as IoT generates and collects more and more data, the need to perform data analysis increases at the same time. To avoid the performance limitations of IoT devices, ease the burden on the server, and guarantee the quality of service of IoT applications, we present a privacy-preserving decentralised Singular Value Decomposition (SVD) scheme for fog architectures, which can be considered a multi-IoT, multi-server setting, and which provides protection for big data sets. Next, to further integrate the SVD results from different subsets, we use a federated learning mechanism. Privacy protection is always a fundamental requirement we need to consider; with this in mind, we propose a privacy-preserving federated SVD scheme with secure aggregation. The results from the different edge devices are securely aggregated with the server and returned to the individual devices for further applications.

Michelsen, Andreas Nicolai Bock. Doctoral thesis (2022).
In the search for non-Abelian anyonic zero modes for inherently fault-tolerant quantum computing, the hybridized superconductor - quantum Hall edge system plays an important role. Inspired by recent experimental realizations of this system, we describe it through a microscopic theory based on a BCS superconductor with Rashba spin-orbit coupling and the Meissner effect at the surface, tunnel-coupled to a spin-polarized integer or fractional quantum Hall edge. By integrating out the superconductor, we arrive at an effective theory of the proximitized edge state and establish a qualitative description of the induced superconductivity. We predict analytical relations between experimentally available parameters and the key parameters of the induced superconductivity, as well as the experimentally relevant transport signatures. Extending the model to the fractional quantum Hall case, we find that both the spin-orbit coupling and the Meissner effect play central roles. The former allows for transport across the interface, while the latter controls the topological phase transition of the induced p-wave pairing in the edge state, allows for particle-hole conversion in transport for weak induced pairing amplitudes, and determines when pairing dominates over fractionalization in the proximitized fractional quantum Hall edge.
Further experimental indicators are predicted for a superconductor coupled through a quantum point contact to an integer or fractional quantum Hall edge, with a Pauli blockade that is robust to interactions and fractionalization as a key indicator of induced superconductivity. With these predictions we establish a more solid qualitative understanding of this important system, and advance the field towards the realization of anyonic zero modes.

Chitic, Ioana Raluca. Doctoral thesis (2022).

Colling, Joanne. Doctoral thesis (2022).

Schifano, Sonia. Doctoral thesis (2022).

Dalle, Marie-Alix. Doctoral thesis (2022).
Water and power-related resources (energy sources and required materials) are both critical and crucial resources that have become ever more strategic as a result of climate change and geopolitics. By making a large store of salty water available, desalination appears to be a viable solution to the water crisis already affecting 40% of the population today. However, because existing desalination procedures are power-intensive and rely on non-renewable energy resources, their use at large scale is unsustainable. Alternative techniques exist that are promising in terms of environmental impact, but they are not yet competitive in terms of fresh water outflow and energy efficiency. This work focuses on one of these alternatives, Air-Gap Membrane Distillation (AGMD), chosen because it relies on low-grade heat that is easy to collect from solar radiation or from industrial waste heat. This technique mimics the water cycle: a membrane brings the hot and cold water streams closer together, so that the temperature difference that drives evaporation is strengthened and the process accelerated. However, the development of a boundary layer at the membrane interface reduces this temperature difference and thus decreases the overall performance of the process. The technique therefore still requires improvements, in terms of fresh water outflow per kWh and energy use, to become industrially attractive. The goal of this thesis is to contribute to enhancing AGMD energy efficiency and outflow by leveraging both experimental and theoretical considerations. A test facility characterizing the boundary layer with a Schlieren method, together with an adapted AGMD module, was designed and built. By interacting with the boundary layer, the laser allows the observation of the continuous temperature profile in the hot water channel of a flat-sheet AGMD module. The measurement can be performed in close proximity to the membrane and under a variety of operating conditions (inlet hot and cold temperatures, inlet velocities). In parallel, the fresh water outflow corresponding to these experimental conditions can be measured. Moreover, the experimental layout opens the way for further observations of the AGMD process from different angles, such as concentration profiles or experimentation in the air gap, with very little additional equipment.
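For context, a standard lumped description of membrane-distillation flux (textbook material, not taken from the thesis): the permeate flux is driven by the water vapour pressure difference across the membrane, and the boundary layer degrades it through temperature polarization.

```latex
% Permeate flux J through the membrane, with membrane coefficient B_m and
% saturation vapour pressures at the membrane-surface temperatures:
J = B_m \left[ p_{\mathrm{sat}}(T_{f,m}) - p_{\mathrm{sat}}(T_{p,m}) \right]
% Antoine-type correlation for the saturation pressure of water:
\log_{10} p_{\mathrm{sat}}(T) = A - \frac{B}{C + T}
% Temperature polarization coefficient quantifying the boundary-layer loss
% (bulk temperatures T_{f,b}, T_{p,b}; TPC -> 1 means no loss):
\mathrm{TPC} = \frac{T_{f,m} - T_{p,m}}{T_{f,b} - T_{p,b}}
```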
The overall experimental set-up was eventually used to produce a first set of data over a range of temperatures (60-75 °C), which was then interpreted with a custom algorithm deriving the temperature profiles and boundary layer thicknesses. A three-dimensional heat and mass transfer model for AGMD (3DH&MT), previously developed in the research team, was used to numerically reproduce the experimental conditions and compare the results. The comparison showed promising results, as the temperature gradients at the membrane interface and the fresh water outflows present similar orders of magnitude and trends. The accuracy of the experiment can be further increased through several adaptations of the set-up. The 3DH&MT model could be used to simulate more complex AGMD module designs, such as spiral modules, in order to optimize the operating conditions and the overall shape of the AGMD module and enhance its performance. Finally, with the aim of improving the energy efficiency and fresh water outflow of the AGMD process, spacers are usually added in the individual channels to boost mixing and thus reduce the boundary layer thickness, which improves the evaporation flux. Two novel spacer geometries, inspired by current industrial mixing practice and by nature, are proposed and investigated, yielding interesting results for two distinct applications. One is particularly well suited to maximizing mixing regardless of the energy used, hence improving the energy efficiency of the process. The second is optimal for minimizing energy consumption while maintaining a decent mixing result, thus enhancing the fresh water outflow of the process. A couple of indicators are also proposed to assess the mixing performance of more complex 3D geometries. Overall, this work broadens current AGMD research by providing an experimental test bench enabling continuous temperature profile measurement, and by validating a 3D heat and mass transfer model. Moreover, interesting directions for improving the design of spacers are proposed in order to reduce the resistances that limit the AGMD process's energy efficiency. AGMD is an extremely promising water treatment technique, since it is applicable to a broader range of waters than just seawater. The test equipment described in this work is sufficiently adaptable to investigate this potential, as well as variants of AGMD processes that might boost its attractiveness. As it is based on readily available materials and technologies, it may be used anywhere, and its reliance on a naturally available energy flow (solar radiation) makes it attractive in isolated regions.

Abdu, Tedros Salih. Doctoral thesis (2022).
The application of Satellite Communications (SatCom) has recently evolved from providing simple Direct-To-Home television (DTHTV) to enabling a range of broadband internet services. Typically, it offers services to the broadcast industry, the aircraft industry, the maritime sector, government agencies, and end-users. Furthermore, SatCom plays a significant role in the era of 5G and beyond in terms of integrating satellite networks with terrestrial networks, offering backhaul services, and providing coverage for Internet of Things (IoT) applications.
Moreover, thanks to the satellite's wide coverage area, it can provide services to remote areas where terrestrial networks are inaccessible or expensive to deploy. Given the wide range of satellite applications outlined above, the demand for satellite service from user terminals is rapidly increasing. Conventionally, satellites use multi-beam technology with uniform resource allocation to serve users/beams: the satellite's resources, such as power and bandwidth, are evenly distributed among the beams. However, this resource allocation method is inefficient, since it does not consider the heterogeneous demands of each beam; a beam with low demand may receive too many resources while a beam with high demand receives too few. Consequently, some beam demands may not be satisfied. Additionally, satellite resources are limited by spectrum regulations and onboard battery constraints, which require proper utilization. The next generation of satellites must therefore address the above main challenges of conventional satellites. To this end, this thesis proposes novel advanced resource management techniques to manage satellite resources efficiently while accommodating heterogeneous beam demands. In this context, the second and third chapters of the thesis explore on-demand resource allocation methods without precoding. These methods aim to match the beam traffic demand closely while using the minimum transmit power and bandwidth and keeping interference among the beams tolerable (a toy allocation sketch follows below). However, an advanced interference mitigation technique is required in high-interference scenarios. Thus, in the fourth chapter of the thesis, we propose a combination of resource allocation and interference management strategies to mitigate interference and meet high-demand requirements with less power and bandwidth consumption. In this context, the performance of the resource management method is investigated and compared for systems with full precoding (all beams are precoded), without precoding (no beams are precoded), and with partial precoding (some beams are precoded). Thanks to emerging technologies, the next generation of satellite communication systems will deploy onboard digital payloads on which advanced resource management techniques can be implemented. The digital payload can be configured to change the bandwidth, carrier frequency, and transmit power of the system in response to heterogeneous traffic demands. Typically, onboard digital payloads consist of payload processors, each operating with specific power and bandwidth to process each beam signal. There are, however, only a limited number of processors, so they require proper management. Furthermore, the processors consume more energy when processing the signals, resulting in high power consumption. Payload management will therefore be crucial for the next satellite generation. In this context, the fifth chapter of the thesis proposes a demand-aware onboard payload processor management method, which switches processors on according to beam demand: for low demand, fewer processors are in use, while more processors become necessary as demand increases. Demand-aware resource allocation techniques may require optimization over a large number of variables, which may increase the computational time complexity of the system.
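The following toy sketch conveys the flavour of demand-aware power allocation (illustrative only; it is not an algorithm from the thesis, and all numbers are invented): with fixed per-beam bandwidth and interference neglected, the Shannon capacity formula can be inverted to find the minimum transmit power with which each beam just meets its demand.

```python
import numpy as np

# Hypothetical per-beam parameters.
B = 500e6                                 # bandwidth per beam [Hz]
N0 = 1e-20                                # noise spectral density [W/Hz]
g = np.array([2e-13, 5e-13, 1e-13])       # channel gains (toy values)
demand = np.array([0.4e9, 1.2e9, 0.8e9])  # demanded rates [bit/s]

# Invert C = B * log2(1 + g*P / (N0*B)) for the power that matches demand.
P = (2.0 ** (demand / B) - 1.0) * N0 * B / g

# Cap at the available power budget (simple proportional scaling).
P_total = 200.0                           # satellite power budget [W]
if P.sum() > P_total:
    P *= P_total / P.sum()

rate = B * np.log2(1.0 + g * P / (N0 * B))
print(np.round(P, 2), np.round(rate / 1e9, 3))  # W and Gbit/s per beam
```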
Thus, the sixth chapter of the thesis explores methods that combine demand-aware resource allocation and deep learning (DL) to reduce the computational complexity of the system: a demand-aware algorithm performs the bandwidth and power allocation, while DL speeds up the computation. Finally, the last chapter provides the main conclusions of the thesis, as well as future research directions.

Robinet, François. Doctoral thesis (2022).
The research presented in this dissertation focuses on reducing the need for supervision in two tasks related to autonomous driving: end-to-end steering and free space segmentation. For end-to-end steering, we devise a new regularization technique that relies on pixel-relevance heatmaps to force the steering model to focus on lane markings, which improves performance across a variety of offline metrics. In relation to this work, we publicly release the RoboBus dataset, which consists of extensive driving data recorded using a commercial bus on a cross-border public transport route on the Luxembourgish-French border. We also tackle pseudo-supervised free space segmentation from three different angles: (1) we propose a Stochastic Co-Teaching training scheme that explicitly attempts to filter out the noise in pseudo-labels, (2) we study the impact of self-training and of different data augmentation techniques, and (3) we devise a novel pseudo-label generation method based on road plane distance estimation from approximate depth maps. Finally, we investigate semi-supervised free space estimation and find that combining our techniques with a restricted subset of labeled samples results in substantial improvements in IoU, Precision and Recall.

Ferreira Silva, Marielle. Doctoral thesis (2022).
How we design, construct and live in our houses, as well as how we go to work, can mitigate carbon dioxide (CO2) emissions and global climate change. Furthermore, the complex world we live in is in an ongoing transformation process. The housing shortage is worsening as the world population and cities keep growing, and we must consider all the other issues that come along with population growth, such as increased demand for built space, mobility, expansion of cities into green areas, use of resources, and materials scarcity. Throughout history, various projects have used alternatives to solve the problem of social housing, such as increasing density in cities through housing complexes, fast and low-cost construction with prefabricated methods and materials, and modularisation systems. However, current architecture is not designed to meet users' future needs or to reduce environmental impact. A proposal to change this situation is to go back to the beginning of architecture's conception and to design it differently. In addition, there is nowadays an increasing focus on moving towards sustainable and circular living spaces based on shared, adaptable and modular built environments that improve residents' quality of life.
For this reason, the main objective of this thesis is to study the potential of architecture that can be reconfigured spatially and temporally, and to produce alternative generic models for reusing and recycling architectural elements and spaces for functional flexibility through time. To approach the discussion, a documentary research methodology was applied to study modular, prefabricated and ecological architectural typologies that address recyclability in buildings. The Atlas, with case studies and architectural design strategies, emerged from the analysis of projects from Durand to the 21st century. Furthermore, this thesis is part of the research project Eco-Construction for Sustainable Development (ECON4SD), co-funded by the EU in partnership with the University of Luxembourg, and it presents three new generic building typologies, named according to their defining characteristics: Prototype 1, the Slab typology, a building designed as a concrete shelf structure into which timber housing units can be plugged in and out; Prototype 2, the Tower typology, a tower building with a flexible floor plan combining working and residential facilities with adjacent multi-purpose facilities; and Prototype 3, the Block typology, a structure designed for complete disassembly. The three new typologies combine modularity, prefabrication, flexibility and disassembly strategies to address the increasing demand for multi-use, reusable and resource-efficient housing units. The prototypes continually adapt to the occupants' needs, as the infrastructure incorporates repetition, exposed structure, a central core, terraces, open floors, unfinished spaces, prefabrication and combined activities, with reduced and varied housing unit sizes whose parts can be disassembled. They also densify the region in which they are implemented. Moreover, the new circular typologies can offer more generous public and shared space for the occupants within the same building size as an ordinary building. The alternative design allows the reconversion of existing buildings or the reconstruction of the same buildings in other places, reducing waste and increasing their useful lifespan. Once the building has been adapted and reused as much as possible and its life cycle comes to an end, it can be disassembled, and the materials can be sorted into reusable or recyclable resources. The results demonstrate that circular architecture is feasible and realistic, adapts through time, increases material reuse, avoids unnecessary demolition, reduces construction waste and CO2 emissions, and extends the useful life of buildings.

Perez Becker, Nicole. Doctoral thesis (2022).
As global population and income levels have increased, so has the waste generated as a byproduct of our production and consumption processes. Approximately two billion tons of municipal solid waste are generated globally every year – that is, more than half a kilogram per person each day. This waste, which is generated at various stages of the supply chain, has negative environmental effects and often represents an inefficient use or allocation of limited resources. With the growing concern about waste, many governments are implementing regulations to reduce waste.
Waste is often a consequence of the inventory decisions of different players in a supply chain; as such, these regulations aim to reduce waste by influencing inventory decisions. However, determining the inventory decisions of players in a supply chain is not trivial. Modern supply chains often consist of numerous players, who may each differ in their objectives and in the factors they consider when making decisions such as how much product to buy and when. While each player makes unilateral inventory decisions, these decisions may also affect the decisions of other players. This complexity makes it difficult to predict how a policy will affect profit and waste outcomes for individual players and for the supply chain as a whole. This dissertation studies the inventory decisions of players in a supply chain faced with policy interventions to reduce waste. In particular, the focus is on food supply chains, where food waste and packaging waste are the largest waste components. Chapter 2 studies a two-period inventory game between a seller (e.g., a wholesaler) and a buyer (e.g., a retailer) in a supply chain for a perishable food product with uncertain demand from a downstream market. The buyer can differ in whether he considers factors affecting future periods or the seller's supply availability in his period purchase decisions, that is, in his degree of strategic behavior. The focus is on understanding how the buyer's degree of strategic behavior affects inventory outcomes. Chapter 3 builds on this understanding by investigating waste outcomes and how policies that penalize waste affect individual and supply chain profits and waste. Chapter 4 studies the setting of a restaurant that uses reusable containers instead of single-use ones to serve its delivery and take-away orders. With policy-makers discouraging the use of single-use containers through surcharges or bans, reusable containers have emerged as an alternative. Managing inventories of reusable containers is challenging for a restaurant, as both demand and returns of containers are uncertain and the restaurant faces various customer types. This chapter investigates how the proportion of each customer type affects the restaurant's inventory decisions and costs.

Mazier, Arnaud. Doctoral thesis (2022).
Background: Breast-conserving surgery is the most acceptable option for breast cancer removal from an invasive and psychological point of view. During the surgical procedure, image acquisition using Magnetic Resonance Imaging is performed in the prone configuration, while the surgery is performed in the supine stance. The considerable movement of the breast between the two poses causes the tumor to move, complicating the surgeon's task. Therefore, to keep track of the lesion, the surgeon employs ultrasound imaging to mark the tumor with a metallic harpoon or radioactive tags. This procedure, in addition to being invasive, is a supplemental source of uncertainty. Consequently, developing a numerical method to predict the tumor movement between the imaging and intra-operative configurations is of significant interest.
Methods: In this work, a simulation pipeline allowing the prediction of patient-specific breast tumor movement was put forward, including personalized preoperative surgical drawings. Through image segmentation, a subject-specific finite element biomechanical model is obtained. By first computing an undeformed state of the breast (equivalent to nullified gravity), the estimated intra-operative configuration is then evaluated using our registration methods. Finally, the model is calibrated using a surface acquisition in the intra-operative stance to minimize the prediction error. Findings: The capability of our breast biomechanical model to reproduce real breast deformations was evaluated. To this end, the estimated geometry of the supine breast configuration was computed using a corotational elastic material model formulation. The subject-specific mechanical properties of the breast and skin were assessed to obtain the best estimates of the prone configuration. The final result is a mean absolute error of 4.00 mm for the mechanical parameters E_breast = 0.32 kPa and E_skin = 22.72 kPa, congruent with the recent state of the art. The simulation (including finding the undeformed and prone configurations) takes less than 20 s. The Covariance Matrix Adaptation Evolution Strategy optimizer converges on average within 15 to 100 iterations, depending on the initial parameters, for a total time of between 5 and 30 min. To our knowledge, our model offers one of the best compromises between accuracy and speed. The model could be effortlessly enriched through our recent work to facilitate the use of complex material models by describing only the strain energy density function of the material. In a second study, we developed a second breast model aimed at mapping a generic model embedding breast-conserving surgical drawings to any patient. We demonstrated the clinical applications of such a model in a real-case scenario, offering a relevant educational tool for inexperienced surgeons.

Smajic, Semra. Doctoral thesis (2022).
For a very long time, the main focus in Parkinson's disease (PD) research was the loss of neuromelanin-containing dopaminergic neurons from the substantia nigra (SN) of the midbrain - the key pathological feature of the disease. However, the association between neuronal vulnerability and the presence of neuromelanin has not been a common study subject. Recently, cells other than neurons have also gained attention as mediators of PD pathogenesis. There are indications that glial cells undergo disease-related changes, but the exact mechanisms remain unknown. In this thesis, I aimed to explore the contribution of every cell type of the midbrain to PD using single-nuclei RNA sequencing, and to explore their association with PD risk gene variants. As we identified microgliosis as a major mechanism in PD, we further extended our research to microglia and sought to investigate the relation between microglia and neuromelanin. Thus, by means of immunohistochemical staining, imaging and laser-capture microdissection-based transcriptomics, we aimed to elucidate this association at the single-cell level.
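A minimal sketch of a typical single-nuclei RNA-seq clustering workflow with the scanpy library (a generic pipeline for orientation only; the thesis's actual processing steps, parameters and file names are not specified here, and the input file is hypothetical):

```python
import scanpy as sc

# Hypothetical count matrix of midbrain nuclei (cells x genes).
adata = sc.read_h5ad("midbrain_nuclei.h5ad")

# Standard preprocessing: normalize library sizes, log-transform,
# and keep only highly variable genes.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)

# Dimensionality reduction, neighbourhood graph, and Leiden clustering
# to partition nuclei into putative cell types.
sc.tl.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=1.0)   # requires the leidenalg package

# Marker genes per cluster, e.g. to spot a GPNMB-high microglial state.
sc.tl.rank_genes_groups(adata, groupby="leiden", method="wilcoxon")
```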
This work resulted in the first midbrain single-cell atlas from idiopathic PD subjects and age- and sex-matched controls. We revealed SN-specific microgliosis with GPNMB upregulation, which also seemed to be specific to the idiopathic form of the disease. We further observed an accumulation of (extraneuronal) neuromelanin particles in the Parkinson's midbrain parenchyma, indicative of incomplete degradation. Moreover, we showed that GPNMB can be elevated in microglia in contact with neuromelanin. Taken together, we provide evidence of a GPNMB-related microglial state as a disease mechanism specific to idiopathic PD, and highlight neuromelanin as an important player in microglial disease pathology. Further investigations are needed to understand whether the modulation of neuromelanin levels could be relevant in the context of PD therapy.

Riom, Timothée. Doctoral thesis (2022).
Programming has become central to the development of human activities, while not being immune to defects, or bugs. Developers have devised specific methods and sequences of tests that they implement to prevent these bugs from being deployed in releases. Nonetheless, not all cases can be thought through beforehand, and automation presents limits the community attempts to overcome. As a consequence, not all bugs can be caught. These defects cause particular concern when they can be exploited to breach the program's security policy. They are then called vulnerabilities, and they provide specific actors with undesired access to the resources a program manages. This damages the trust in the program and in its developers, and may eventually impact the adoption of the program. Hence, attributing specific attention to vulnerabilities appears a natural outcome. In this regard, this PhD work targets the following three challenges: (1) The research community references those vulnerabilities, categorises them, and reports and ranks their impact. As a result, analysts can learn from past vulnerabilities in specific programs and figure out new ideas to counter them. Nonetheless, the quality of the resulting lessons and the usefulness of the ensuing solutions depend on the quality and consistency of the information provided in the reports. (2) New methods to detect vulnerabilities can emerge from the teachings this monitoring provides. With responsible reporting, these detection methods can provide hardening of the programs we rely on. Additionally, in a context of computer performance gains, machine learning algorithms are increasingly adopted, offering engaging promises. (3) Even if some of these promises can be fulfilled, not all are reachable today. Therefore, a complementary strategy needs to be adopted while vulnerabilities evade detection up to public releases. Instead of preventing their introduction, programs can be hardened to scale down their exploitability: increasing the complexity of exploitation or lowering the impact below specific thresholds makes the presence of vulnerabilities an affordable risk for the feature provided. The history of programming development encloses the experimentation with, and the adoption of, so-called defence mechanisms.
Riom, Timothée. Doctoral thesis (2022)

Programming has become central in the development of human activities, while not being immune to defects, or bugs. Developers apply specific methods and sequences of tests to prevent these bugs from being deployed in releases. Nonetheless, not all cases can be thought through beforehand, and automation presents limits the community attempts to overcome. As a consequence, not all bugs can be caught. These defects cause particular concern when they can be exploited to breach the program's security policy. They are then called vulnerabilities and provide specific actors with undesired access to the resources a program manages. This damages the trust in the program and in its developers, and may eventually impact the adoption of the program. Attributing specific attention to vulnerabilities therefore appears as a natural outcome. In this regard, this PhD work targets the following three challenges: (1) The research community references those vulnerabilities, categorises them, and reports and ranks their impact. As a result, analysts can learn from past vulnerabilities in specific programs and devise new ideas to counter them. Nonetheless, the quality of the resulting lessons and the usefulness of the ensuing solutions depend on the quality and consistency of the information provided in the reports. (2) New methods to detect vulnerabilities can emerge from the teachings this monitoring provides. With responsible reporting, these detection methods can harden the programs we rely on. Additionally, in a context of computing performance gains, machine learning algorithms are increasingly adopted and offer engaging promises. (3) Even if some of these promises can be fulfilled, not all of them are reachable today. A complementary strategy therefore needs to be adopted while vulnerabilities evade detection up to public releases. Instead of preventing their introduction, programs can be hardened to scale down their exploitability: increasing the complexity of exploitation or lowering the impact below specific thresholds makes the presence of vulnerabilities an affordable risk for the feature provided. The history of software development encompasses the experimentation with, and the adoption of, so-called defence mechanisms. Their goals and performances can be diverse, but their implementation in widely adopted programs and systems (such as the Android Open Source Project) acknowledges their pivotal position.

To face these challenges, we provide the following contributions:
• We provide a manual categorisation of the vulnerabilities of the widely adopted Android Open Source Project (AOSP) up to June 2020. Clarifying the methodology adopted for vulnerability analysis provides consistency in the resulting data set, facilitates the explainability of the analyses, and prepares the resulting set of vulnerabilities for future updates. Based on this analysis, we study the evolution of AOSP's vulnerabilities in terms of severity and type, with a focus on memory corruption-related vulnerabilities.
• We undertake the replication of a machine learning-based detection algorithm that, besides being part of the state of the art and referenced by ensuing works, was not available. Named VCCFinder, this algorithm implements a Support Vector Machine and bases its training on Vulnerability-Contributing Commits and related patches for C and C++ code. Unable to achieve performance analogous to the original article, we explore parameters and algorithms, and attempt to overcome the challenge posed by the over-population of unlabelled entries in the data set. We provide the community with our code and results as a replicable baseline for further improvement; a toy sketch of such a classifier follows this list.
• We finally list the defence mechanisms that the Android Open Source Project incrementally implements, and discuss how they sometimes answer comments the community addressed to the project's developers. We further verify the extent to which specific memory corruption defence mechanisms were implemented in the binaries of different versions of Android (from API level 10 to 28), and confront the evolution of memory corruption-related vulnerabilities with the implementation timeline of related defence mechanisms.
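As the second contribution above notes, here is a toy sketch of a VCCFinder-style classifier: a linear SVM over hashed bag-of-words features of commit diffs, built with scikit-learn. The diff snippets and labels are invented for illustration and do not reproduce the replicated pipeline or its data set.

```python
# Toy VCCFinder-style classifier: linear SVM over hashed bag-of-words
# features of commit diffs. Illustrative only. Assumes scikit-learn.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.svm import LinearSVC

# Invented commit diffs labelled as vulnerability-contributing (1) or not (0).
# Real data sets are heavily imbalanced, with far more unlabelled entries.
diffs = [
    "char buf[8]; strcpy(buf, user_input);",             # overflow pattern
    "len = strnlen(src, MAX); memcpy(dst, src, len);",
    "free(ptr); ptr = NULL;",
    "free(ptr); use(ptr);",                              # use-after-free
]
labels = [1, 0, 0, 1]

# Hashing the tokens keeps memory bounded regardless of vocabulary size.
vectorizer = HashingVectorizer(n_features=2**12, alternate_sign=False)
X = vectorizer.transform(diffs)

# class_weight="balanced" is one common mitigation for label imbalance;
# the over-population of unlabelled entries calls for more than this.
clf = LinearSVC(class_weight="balanced", C=1.0)
clf.fit(X, labels)
print(clf.predict(vectorizer.transform(["memcpy(dst, src, attacker_len);"])))
```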
Tawakuli, Amal. Doctoral thesis (2022)

Substantial volumes of data are generated at the edge as a result of an exponential increase in the number of Internet of Things (IoT) applications. IoT data are generated at edge components and, in most cases, transmitted to central or cloud infrastructures via the network. Distributing data preprocessing to the edge, closer to the data sources, would address issues found in the data early in the pipeline. Distribution thus prevents error propagation, removes redundancies, minimizes privacy leakage, and optimally summarizes the information contained in the data prior to transmission. This, in turn, prevents wasting valuable yet limited resources at the edge, which would otherwise be used for transmitting data that may contain anomalies and redundancies. New legal requirements such as the GDPR, together with ethical responsibilities, make data preprocessing that addresses these emerging topics urgent, especially at the edge, before the data leave the premises of the data owners. This PhD dissertation is divided into two parts that focus on two main directions within data preprocessing.

The first part focuses on structuring and normalizing the data preprocessing design phase for AI applications. This involved an extensive and comprehensive survey of data preprocessing techniques coupled with an empirical analysis. From the survey, we introduced a holistic and normalized definition and scope of data preprocessing. We also identified the means of generalizing data preprocessing by abstracting preprocessing techniques into categories and sub-categories. Our survey and empirical analysis highlighted dependencies and relationships between the different categories and sub-categories, which determine the order of execution within preprocessing pipelines. The identified categories, sub-categories and their dependencies were assembled into a novel data preprocessing design tool that serves as a template from which application- and dataset-specific preprocessing plans and pipelines are derived. The design tool is agnostic to datasets and applications and is a crucial step towards normalizing, regulating and structuring the design of data preprocessing pipelines. The tool helps practitioners and researchers apply a modern take on data preprocessing that enhances the reproducibility of preprocessed datasets and addresses a broader spectrum of issues in the data.

The second part of the dissertation focuses on leveraging edge computing within an IoT context to distribute data preprocessing at the edge. We empirically evaluated the feasibility of distributing data preprocessing techniques from different categories and assessed the impact of the distribution on the consumption of different resources such as time, storage, bandwidth and energy. To perform the distribution, we proposed a collaborative edge-cloud framework dedicated to data preprocessing, with two main mechanisms that achieve synchronization and coordination. The synchronization mechanism is an Over-The-Air (OTA) updating mechanism that remotely pushes updated preprocessing plans to the different edge components in response to changes in user requirements or the evolution of data characteristics. The coordination mechanism is a resilient and progressive execution mechanism that leverages a Directed Acyclic Graph (DAG) to represent the data preprocessing plans. Distributed preprocessing plans are shared between different cloud and edge components and are progressively executed while adhering to the topological order dictated by the DAG representation. To empirically test our proposed solutions, we developed a prototype of our edge-cloud collaborative data preprocessing framework, named DeltaWing, that consists of three stages: one central stage and two edge stages. A use case was also designed based on a dataset obtained from Honda Research Institute US. Using DeltaWing and the use case, we simulated an automotive IoT application to evaluate our proposed solutions. Our empirical results highlight the effectiveness and positive impact of our framework in reducing the consumption of valuable resources at the edge (e.g., ≈ 57% reduction in bandwidth usage) while retaining information (prediction accuracy) and maintaining operational integrity. The two parts of the dissertation are interconnected yet can exist independently; their combined contributions constitute a generic toolset for the optimization of the data preprocessing phase.
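The coordination mechanism's core idea, executing a preprocessing plan in the topological order of its DAG, can be sketched with Python's standard library alone. Stage names and stage functions below are hypothetical examples, not DeltaWing's API.

```python
# Sketch: run a preprocessing plan in DAG topological order using only the
# standard library. Stages and their functions are hypothetical examples.
from graphlib import TopologicalSorter

# Each stage maps to the set of stages it depends on.
plan = {
    "deduplicate":     {"parse"},
    "impute_missing":  {"deduplicate"},
    "remove_outliers": {"deduplicate"},
    "normalise":       {"impute_missing", "remove_outliers"},
}

stages = {
    "parse":           lambda d: d,
    "deduplicate":     lambda d: list(dict.fromkeys(d)),
    "impute_missing":  lambda d: [x if x is not None else 0.0 for x in d],
    "remove_outliers": lambda d: [x for x in d if x is None or abs(x) < 1e3],
    "normalise":       lambda d: [x / max(map(abs, d)) for x in d],
}

data = [1.0, 1.0, None, 2.0, 5e4]
# static_order() yields every stage only after all of its dependencies.
for name in TopologicalSorter(plan).static_order():
    data = stages[name](data)
    print(f"after {name}: {data}")
```

A resilient, progressive executor in the spirit described above would plausibly also checkpoint intermediate results after each stage, so that execution of a shared plan can continue across edge and cloud components.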
Baniasadi, Mehri. Doctoral thesis (2022)

Deep brain stimulation (DBS) is a surgical therapy that alleviates symptoms of numerous movement and psychiatric disorders by electrical stimulation of specific neural tissues via implanted electrodes. Precise electrode implantation is important to target the right brain area. After the surgery, DBS parameters, including stimulation amplitude, frequency, pulse width, and the selection of the electrode's active contacts, are adjusted during programming sessions. Programming sessions are normally done by trial and error, so they can be long and tiring. The main goal of the thesis is to make the post-operative experience, particularly the programming session, easier and faster by using visual aids to create a virtual reconstruction of the patient's case. This enables in silico testing of different scenarios before applying them to the patient. A quick and easy-to-use deep learning-based tool for deep brain structure segmentation, DBSegment, was developed with 89 ± 3% accuracy. It is much easier to implement than widely used registration-based methods, as it requires fewer dependencies and no parameter tuning, and is therefore much more practical. Moreover, it segments 40 times faster than the registration-based method. This method is combined with an electrode localization method to reconstruct patients' cases. Additionally, we developed a tool, FastField, that simulates DBS-induced electric field distributions in less than a second. This is 1000 times faster than standard methods based on finite elements, with nearly the same performance (92%). The speed of the electric field simulation is particularly important for DBS parameter initialization, which we perform by solving an optimization problem (OptimDBS). A grid search method confirms that our novel approach converges to the global minimum. Finally, all the developed methods were tested on clinical data to ensure their applicability. In conclusion, this thesis develops various novel user-friendly tools enabling efficient and accurate DBS reconstruction and parameter initialization. The methods are by far the quickest among open-source tools. They are easy to use and publicly available: FastField within the LeadDBS toolbox, and DBSegment as a Python pip package and a Docker image. We hope they can improve the DBS post-operative experience, maximize the therapy's efficacy, and advance DBS research.
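Parameter initialization can be pictured as searching over active contacts and amplitudes for the setting whose simulated field best matches a target region, which is exactly where a sub-second field model pays off. The brute-force sketch below rests on invented assumptions: fast_field_overlap is a hypothetical 1-D stand-in for a FastField-style simulation, not the OptimDBS implementation.

```python
# Sketch: grid search over DBS contacts and amplitudes, scoring each
# setting against a target region. The field model is an invented toy.
import numpy as np

CONTACT_OFFSETS_MM = np.array([0.0, 2.0, 4.0, 6.0])  # contact positions on lead
TARGET_CENTRE_MM = 3.5                               # target centre on lead axis
TARGET_RADIUS_MM = 3.0                               # target half-extent

def fast_field_overlap(contact: int, amplitude_ma: float) -> float:
    """Toy 1-D surrogate: overlap of the stimulated interval with the
    target interval, minus a penalty for stimulating outside it."""
    radius = 1.5 * np.sqrt(amplitude_ma)             # toy field growth law
    c = CONTACT_OFFSETS_MM[contact]
    lo = max(c - radius, TARGET_CENTRE_MM - TARGET_RADIUS_MM)
    hi = min(c + radius, TARGET_CENTRE_MM + TARGET_RADIUS_MM)
    overlap = max(0.0, hi - lo)
    spill = 2.0 * radius - overlap                   # stimulated but off-target
    return overlap / (2.0 * TARGET_RADIUS_MM) - 0.2 * spill / (2.0 * radius)

candidates = [(c, a) for c in range(len(CONTACT_OFFSETS_MM))
              for a in np.arange(0.5, 5.01, 0.5)]    # amplitudes in mA
best = max(candidates, key=lambda s: fast_field_overlap(*s))
print("initial setting: contact %d at %.1f mA" % best)
```

In practice an optimizer would refine the setting continuously; the coarse grid here only illustrates why fast field evaluations matter for initialization.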
Sauvage, Delphine. Doctoral thesis (2022)