References of "Dissertations and theses"
Full Text
DEVELOPING INDIVIDUAL-BASED GUT MICROBIOME METABOLIC MODELS FOR THE INVESTIGATION OF PARKINSON’S DISEASE-ASSOCIATED INTESTINAL MICROBIAL COMMUNITIES
Baldini, Federico UL

Doctoral thesis (2019)


The human phenotype is a result of the interactions of environmental factors with genetic ones. Some environmental factors, such as the composition of the human gut microbiota and its related metabolic functions, are known to impact human health and have been correlated with the development of different diseases. Most importantly, disentangling the metabolic role played by these factors is crucial to understanding the pathogenesis of complex and multifactorial diseases, such as Parkinson’s disease. Microbial community sequencing has become the standard investigation technique for highlighting emerging microbial patterns associated with different health states. However, even if highly informative, such a technique alone provides only limited information on the functions possibly associated with specific microbial community compositions. The integration of a systems biology computational modeling approach termed constraint-based modeling with sequencing data (whole-genome sequencing and 16S rRNA gene sequencing), together with the deployment of advanced statistical techniques (machine learning), helps to elucidate the metabolic role played by these environmental factors and the underlying mechanisms. The first goal of this PhD thesis was the development and deployment of specific methods for the integration of microbial abundance data (coming from microbial community sequencing) into constraint-based modeling, and the analysis of the resulting data. The result was the implementation of a new automated pipeline, connecting all these different methods, through which the metabolism of different gut microbial communities could be studied. Second, I investigated possible microbial differences between a cohort of Parkinson’s disease patients and controls. I discovered microbial and metabolic changes in Parkinson’s disease patients and their relative dependence on several physiological covariates, thereby exposing possible mechanisms of pathogenesis of the disease. Overall, the work presented in this thesis represents method development for the investigation of previously unexplored functional metabolic consequences associated with microbial changes of the human gut microbiota, with a focus on specific complex diseases such as Parkinson’s disease. The hypotheses formulated here could be experimentally validated and could represent a starting point for envisioning possible clinical interventions.
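The pipeline described above builds on constraint-based modeling; as a point of reference only, the sketch below shows the core linear-programming formulation of flux balance analysis with a toy stoichiometric matrix. The matrix, bounds and objective are illustrative placeholders, not models or data from the thesis; in a community setting, abundance data would typically be used to scale reaction bounds or biomass contributions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis (FBA): maximize a "biomass" flux subject to
# steady-state mass balance S @ v = 0 and flux bounds.
# S: rows = metabolites, columns = reactions (uptake, conversion, biomass).
S = np.array([
    [ 1.0, -1.0,  0.0],   # metabolite A: produced by uptake, consumed by conversion
    [ 0.0,  1.0, -1.0],   # metabolite B: produced by conversion, consumed by biomass
])
bounds = [(0, 10.0),      # uptake flux limited by nutrient availability
          (0, None),      # internal conversion
          (0, None)]      # biomass reaction
c = np.array([0.0, 0.0, -1.0])   # linprog minimizes, so negate the biomass flux

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)   # e.g. [10., 10., 10.]
```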

Detailed reference viewed: 33 (4 UL)
Full Text
TOWARDS A MODELLING FRAMEWORK WITH TEMPORAL AND UNCERTAIN DATA FOR ADAPTIVE SYSTEMS
Mouline, Ludovic UL

Doctoral thesis (2019)


Self-Adaptive Systems (SAS) optimise their behaviours or configurations at runtime in response to a modification of their environments or their behaviours. These systems therefore need a deep understanding of the ongoing situation, which enables reasoning tasks for adaptation operations. Using the model-driven engineering (MDE) methodology, one can abstract this situation. However, information concerning the system is not always known with absolute confidence. Moreover, in such systems, the monitoring frequency may differ from the delay for reconfiguration actions to have measurable effects. These characteristics come with a global challenge for software engineers: how to represent uncertain knowledge in a way that can be efficiently queried, and how to represent ongoing actions, in order to improve adaptation processes? To tackle this challenge, this thesis defends the need for a unified modelling framework which includes, besides all traditional elements, time and uncertainty as first-class concepts. A developer will therefore be able to abstract information related to the adaptation process, the environment as well as the system itself. Towards this vision, we present two evaluated contributions: a temporal context model and a language for uncertain data. The temporal context model allows past, ongoing and future actions to be abstracted together with their impacts and context. The language, named Ain’tea, integrates data uncertainty as a first-class citizen.
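The abstract does not give Ain’tea’s syntax, so the following is only a hypothetical Python sketch of what treating data uncertainty as a first-class value can look like: a measurement carries its standard deviation, propagates it through arithmetic (assuming independent Gaussian errors), and exposes a probability that can drive an adaptation decision.

```python
from dataclasses import dataclass
import math

@dataclass
class Uncertain:
    """A measurement carried together with its uncertainty (standard deviation)."""
    value: float
    std: float

    def __add__(self, other: "Uncertain") -> "Uncertain":
        # Assuming independent Gaussian errors, variances add.
        return Uncertain(self.value + other.value,
                         math.sqrt(self.std**2 + other.std**2))

    def greater_than(self, threshold: float) -> float:
        """Probability that the true value exceeds a threshold."""
        z = (threshold - self.value) / self.std
        return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Two noisy inputs feeding a hypothetical adaptation rule:
room = Uncertain(21.0, 0.5) + Uncertain(0.8, 0.3)   # sensor reading + calibration offset
print(room)                                          # value 21.8, std ≈ 0.58
print(room.greater_than(22.0))                       # confidence used to decide adaptation
```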

Detailed reference viewed: 13 (1 UL)
Full Text
Deep Neural Networks for Personalized Sentiment Analysis with Information Decay
Guo, Siwen UL

Doctoral thesis (2019)


People have different lexical choices when expressing their opinions. Sentiment analysis, as a way to automatically detect and categorize people’s opinions in text, needs to reflect this diversity. In this research, I look beyond traditional population-level sentiment modeling and leverage socio-psychological theories to incorporate the concept of personalized modeling. In particular, a hierarchical neural network is constructed which takes related information from a person’s past expressions to provide a better understanding of the sentiment from the expresser’s perspective. Such personalized models can suffer from data sparsity and are therefore difficult to develop. In this work, this issue is addressed by introducing user information at the input, such that the individuality of each user can be captured without building a separate model for each user, and the network is trained in a single process. The evolution of a person’s sentiment over time is another aspect to investigate in personalization. It can be suggested that recent incidents or opinions may have more effect on the person’s current sentiment than older ones, and that the relatedness between the targets of those incidents or opinions plays a role in this effect. Moreover, psychological studies have argued that individual variation exists in how frequently people change their sentiments. In order to study these phenomena in sentiment analysis, an attention mechanism reshaped with the Hawkes process is applied on top of a recurrent network for a user-specific design. Furthermore, the modified attention mechanism delivers functionality beyond that of conventional neural networks, offering flexibility in modeling information decay for temporal sequences with various time intervals. The developed model targets data from social platforms, and Twitter is used as an example. Experiments with manually and automatically labeled datasets show that the input formulation for representing the concerned information and the network design are the two major factors affecting performance. With the proposed model, positive results have been observed which confirm the effectiveness of including user-specific information. The results reciprocally support the psychological theories through the real-world actions observed. The research carried out in this dissertation demonstrates a comprehensive study of the significance of considering individuality in sentiment analysis, which opens up new perspectives for future research in the area and brings opportunities for various applications.
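The exact formulation of the Hawkes-reshaped attention is not given in the abstract; the snippet below is a minimal numpy sketch of the general idea, assuming an exponential Hawkes-style kernel whose decayed intensity modulates attention scores over past messages with varying time gaps. Parameter names and values are illustrative only.

```python
import numpy as np

def decayed_attention(scores, time_gaps, mu=0.1, alpha=1.0, delta=0.5):
    """Attention weights modulated by a Hawkes-style exponential decay.

    scores:    relevance scores of past messages (e.g. from a recurrent network)
    time_gaps: elapsed time between each past message and the current one
    mu, alpha, delta: base intensity, excitation weight and decay rate
    (illustrative placeholders, not the thesis' parameters).
    """
    intensity = mu + alpha * np.exp(-delta * np.asarray(time_gaps))
    logits = np.asarray(scores) + np.log(intensity)
    weights = np.exp(logits - logits.max())      # numerically stable softmax
    return weights / weights.sum()

# Older opinions (larger gaps) contribute less, mimicking information decay:
print(decayed_attention(scores=[1.2, 0.9, 1.0], time_gaps=[30.0, 5.0, 0.5]))
```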

Detailed reference viewed: 132 (4 UL)
Full Text
Modeling Parkinson's disease using human midbrain organoids
Monzel, Anna Sophia UL

Doctoral thesis (2019)


With increasing prevalence, neurodegenerative disorders present a major challenge for medical research and public health. Despite years of investigation, significant knowledge gaps exist, which impede the development of disease-modifying therapies. The development of tools to model both physiological and pathological human brains has greatly enhanced our ability to study neurological disorders. Brain organoids, derived from human induced pluripotent stem cells (iPSCs), hold unprecedented promise for biomedical research to unravel novel pathological mechanisms of a multitude of brain disorders. As brain proxies, these models bridge the gap between traditional 2D cell cultures and animal models. Owing to their human origin, hiPSC-derived organoids can recapitulate features that cannot be modeled in animals by virtue of differences between species. Parkinson’s disease (PD) is a human-specific neurodegenerative disorder. The major manifestations are the consequence of degenerating dopaminergic neurons (DANs) in the midbrain. The disease has a multifactorial etiology and a multisystemic pathogenesis and pathophysiology. In this thesis, we used state-of-the-art technologies to develop a human midbrain organoid (hMO) model with great potential for studying PD. hMOs were generated from iPSC-derived neural precursor cells, which were pre-patterned to the midbrain/hindbrain region. hMOs contain multiple midbrain-specific cell types, such as midbrain DANs, as well as astrocytes and oligodendrocytes. We could demonstrate features of neuronal maturation such as myelination, synaptic connections, spontaneous electrophysiological activity and neural network synchronicity. We further developed a neurotoxin-induced PD organoid model and set up a high-content imaging platform coupled with machine learning classification to predict neurotoxicity. Patient-derived hMOs display PD-relevant pathomechanisms, indicative of neurodevelopmental deficits. hMOs as novel in vitro models open up new avenues to unravel PD pathophysiology and are powerful tools in biomedical research.
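As an illustration of the high-content imaging plus machine-learning step, here is a hedged sketch: synthetic per-organoid image features (the feature names and numbers are hypothetical, not thesis data) fed to a random-forest classifier that separates untreated from neurotoxin-treated organoids.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-organoid features extracted from high-content images:
# [dopaminergic neuron count, mean neurite length, fragmentation index]
rng = np.random.default_rng(0)
control = rng.normal([800, 120, 0.1], [60, 10, 0.02], size=(40, 3))
toxin   = rng.normal([450,  70, 0.3], [60, 10, 0.05], size=(40, 3))

X = np.vstack([control, toxin])
y = np.array([0] * 40 + [1] * 40)          # 0 = untreated, 1 = neurotoxin-treated

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```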

Detailed reference viewed: 89 (1 UL)
Six regards sur la master-classe de piano : phénoménologie et sémiotique de la rencontre musicale
Kim, Seong Jae UL

Doctoral thesis (2019)


In this thesis, I suggest new ways of grasping the affective dimension of musical experience, which traditional semiotics and musicology take little into account. Inspired by a dynamistic modelling approach – which developed from the 1960s onwards and has since been influential in the semiolinguistic disciplines – I sketch out the fluctuating phases of semiogenesis within the field of piano masterclasses. The term ‘semiogenesis’ is taken here in a broad sense, encompassing any deployment of sign-forms, whether vague or articulated, diffuse or well-defined. Such forms are conceived as being strained between expressiveness and normativity. They are also valorized in that they call the subject to participate in his or her own ‘lines of life’, which, in turn, may come into existence through those forms. A piano masterclass is given by a genuine master to highly accomplished students, both of whom truly testify to their own lives, to their own ways of ethical feeling, in the search for a unique musical praxis. In recent years, the masterclass has begun to attract the attention of the scientific community, especially in areas related to musical teaching and experience, such as psychology, aesthetics and epistemology, or even sociology. Yet most of the research questions adopted in these frameworks incorporate little or nothing of the metamorphosis of sensitivity and the play of musical feeling in the characterization of their research objects. Nevertheless, the field of the piano masterclass seems to be a particularly interesting and promising object of research in that the horizon of affect is preeminent in all the semiotic activities tied to it. Thus, I have attended several masterclasses in order to closely follow the praxis of the musicians (e.g., active and passive participation in masterclasses, audiovisual recordings, interviews, conversations and debates on music, etc.), in the spirit of giving the semiotic activity its full genetic depth, by approaching it from the perspective of an encounter and an orientation of musical sensibilities. One of the main tasks of my approach, in designing the descriptions of this particular musical praxis, consists in understanding the acoustic, gestural and linguistic phenomena as giving rise to the semiogenetic conditions for the constitution of a musical meaning. It is thus a fundamentally descriptive method, inspired by philosophical (Shaftesbury, Kierkegaard, Wittgenstein, Merleau-Ponty) and semiotic (Peirce, Saussure) minds, which takes up the semiotic preoccupation from the very first levels of a microgenesis and promotes it immediately into a hermeneutical and existential phenomenology. The perceptive and semiogenetic issues of musical sensitivity allow us to remodel the notion of a musical motive, understood both as a motive-of-praxis and as an existential-motive. I have tried to grasp the idea of a certain listening of the musical praxis by finding there a constant passage between an ethical perception and the search – through the playing of the music and its motives – for a musical personality engaged in the musical praxis. Such conceptions of motive and personality proved to be fruitful to the extent that they make it possible to suggest a certain ethic of musical feeling, without reducing it to a skill, a psychology or a ritual.
I have thus managed to redefine the notion of a musical ‘sign’ by drawing on the ‘motival’ horizon of this semiotic activity, understood – in the formal and sensitive nature of musical practices – as participation (i.e., the desire and commitment to participate) in a certain regime of human existence. In this way, I believe I am paving the way for a new conception of musical praxis, which interweaves aesthetics and ethics. The thesis thus addresses these problems by casting six successive contemplations on them: Piano Masterclass; Feeling, Knowing and Doing; Field and Form; Motive and Form; Lines of Life; Enchantment.

Detailed reference viewed: 37 (13 UL)
The impact of macro-substrate on micropollutant degradation in activated sludge systems
Christen, Anne UL

Doctoral thesis (2019)


Wastewater treatment plants are designed as a first barrier to reduce xenobiotic emissions into rivers. However, they are not sufficient to fully prevent environmental harm from emerging substances in the water body. Therefore, advanced treatment processes are currently being investigated, but their implementation is cost-intensive. The optimisation of the activated sludge treatment to enhance biological micropollutant removal could reduce operating costs and material. Although the impact of operational parameters such as sludge retention time and hydraulic retention time on xenobiotic removal has been investigated, the influence of macro-substrate composition and load on micropollutant elimination remains highly uncertain. This study focuses on the latter by analysing 15 municipal wastewater treatment plants, where variations in load and composition of the macro-substrate were expected. Assuming that the macro-substrate shapes the biomass and triggers its activity, the impact of macro-substrate composition and load on xenobiotic degradation by microorganisms was analysed. It was hypothesised that, on the one hand, a high dissolved organic carbon concentration might lead to enhanced degradation of certain xenobiotics due to high microbial activity, the latter assumed to be caused by a large labile dissolved organic carbon portion and the tendency towards a shorter sludge retention time. On the other hand, a low dissolved organic carbon concentration, probably containing a predominantly recalcitrant substrate portion, tends to be associated with a longer sludge retention time; consequently, slow-growing and specialised microorganisms may develop that are able to degrade certain xenobiotics. As a second question, the contribution of the autotrophic biomass to xenobiotic degradation was tested by inhibiting the autotrophic microorganisms during the degradation test. To further test the hypothesis, the impact of a readily biodegradable substrate (acetate) on xenobiotic degradation was tested, and the sensitivity of the tryptophan fluorescence signal was used to analyse the impact of tryptophan on xenobiotic degradation. Degradation tests focusing on the removal of macro-substrate and micropollutants within an 18-hour incubation in the OxiTop® system were performed. The OxiTop® system is known as a fast and easy method for organic matter analysis in wastewater. To assess the macro-substrate composition prior to and after the degradation test, three characterisation methods were applied. Firstly, to determine the labile and the rather recalcitrant portions of the dissolved organic carbon, absorbance was measured at 280 nm and further analysed; this was verified by characterising both portions based on the oxygen consumption measurements. Secondly, to analyse the fluorescent properties of the organic matter, excitation-emission scans were run and analysed using the parallel factor analysis approach. Lastly, the chromophoric and fluorescent organic matter was separated via size-exclusion chromatography to investigate the macro-substrate composition. Micropollutant elimination efficiency was followed by measuring initial and final concentrations of the targeted substances using liquid chromatography tandem mass spectrometry and calculating pseudo-first-order degradation rates. To distinguish between the contributions of the heterotrophic biomass and the total biomass to xenobiotic degradation, allylthiourea was added to inhibit the autotrophic biomass.
No significant changes in the composition of the chromophoric macro-substrate were observed. A higher initial dissolved organic carbon concentration led to stronger chromophoric and fluorescent properties; the same was found for the amount of dissolved organic carbon degraded and the loss of signal within the chromophoric and fluorescent portions. Variations in the macro-substrate load, or rather concentration, were tracked. Derived from the oxygen consumption measurements, a prominent labile and non-chromophoric portion was present at higher dissolved organic carbon levels, impacting the microbial activity. However, a characterisation of the non-chromophoric macro-substrate composition was not performed within this study. Regarding micropollutant removal, varying elimination rates were observed. For 4 out of 17 substances, distinct degradation dynamics were found, suggesting a possible impact of the macro-substrate load present. However, no overall impact of the macro-substrate on xenobiotic removal was observed. Atenolol, bezafibrate and propranolol showed a negative correlation with the initial dissolved organic carbon concentration, meaning higher degradation rates at a lower substrate load. This might indicate the presence of specialised microorganisms and a higher microbial diversity. Furthermore, inhibition studies using allylthiourea suggest a contribution of the autotrophic biomass to xenobiotic degradation. Sulfamethoxazole showed a positive trend with the initial dissolved organic carbon concentration, possibly indicating co-metabolic degradation of sulfamethoxazole by the autotrophic and heterotrophic biomass. Thus, it seemed that the removal efficiency of sulfamethoxazole benefited from higher substrate loads. With respect to the short-term experiments with acetate, higher degradation efficiencies were observed for several substances in the presence of acetate; ketoprofen and bezafibrate showed enhanced removal efficiencies in all tested wastewaters. The tryptophan test indicated the presence of tryptophan in wastewater, but no clear contribution to xenobiotic degradation was seen. The presented findings substantially contribute to the understanding of the parameters influencing xenobiotic degradation in activated sludge systems. By using the OxiTop® application for xenobiotic degradation tests, an easy and fast method was established. Absorbance and fluorescence measurements proved to be a sufficient method for the characterisation and biodegradability estimation of organic matter, and could further be applied as online measurements at wastewater treatment plants. Thus, the current study will serve as a basis for future work investigating the parameters influencing xenobiotic degradation pathways and focusing on the optimisation of the biological and advanced treatment processes to overcome current limitations.
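The pseudo-first-order degradation rates mentioned above follow from the initial and final concentrations; below is a minimal helper, with the optional biomass normalisation shown as a common convention rather than the study's exact procedure.

```python
import math

def pseudo_first_order_rate(c0, ct, hours, suspended_solids=None):
    """Pseudo-first-order degradation rate constant k = ln(c0/ct) / t.

    c0, ct: micropollutant concentration before and after incubation (same units)
    hours:  incubation time (18 h in the tests described above)
    suspended_solids: optional biomass concentration (g/L) for normalisation
    """
    k = math.log(c0 / ct) / hours                 # 1/h
    return k / suspended_solids if suspended_solids else k

# Illustrative numbers only: 1.0 -> 0.55 µg/L over 18 h, 3 g/L sludge
print(pseudo_first_order_rate(1.0, 0.55, 18.0, suspended_solids=3.0))
```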

Detailed reference viewed: 65 (10 UL)
Full Text
Towards an understanding of the language–integration nexus: a qualitative study of forced migrants’ experiences in multilingual Luxembourg
Kalocsanyiova, Erika UL

Doctoral thesis (2019)


This cumulative thesis offers insights into the under-researched area of linguistic integration in multilingual societies. It is a collection of four papers that seek to address key questions such as: How can people’s existing language resources be validated and used to aid language learning? What are the politics of language and integration in settings of complex linguistic diversity? What role do language ideologies play in their creation and/or perception? What types of individual trajectories emerge? The research reported here is grounded in the Luxembourgish context, which represents an important European focal point for exploring the dynamics of linguistic integration. Taking a qualitative approach informed by linguistic ethnography (Copland & Creese 2015; Pérez-Milans 2016; Rampton 2007a; Rampton et al. 2015; Tusting & Maybin 2007), this work focuses on the language learning and integration experiences of five men who, fleeing war and violence, sought international protection in the Grand Duchy of Luxembourg. Building on theories of multilingual communication (Canagarajah & Wurr 2011), translanguaging (Creese & Blackledge 2010; García & Li Wei 2014) and receptive multilingualism (ten Thije et al. 2012), the first paper of this thesis considers the affordances of multilingual learning situations in classroom-based language training for forced migrants. The second paper moves on to scrutinise the instrumental and integrative dimensions of language (Ager 2001), as articulated and perceived by the research participants. It exposes the vagueness and contradictory logics of linguistic integration as currently practiced, and throws light on how people with precarious immigration status interpret, experience and act upon ideologies surrounding language and integration (cf. Cederberg 2014; Gal 2006; Kroskrity 2004; Stevenson 2006). The third paper, likewise, directs attention to the controversies and potential unwarranted adverse effects of current linguistic integration policies. Through juxtaposing the trajectories of two forced migrants – who shared similar, multi-layered linguistic repertoires (Blommaert & Backus 2013; Busch 2012, 2017) – this part of the thesis elucidates the embodied efforts, emotions, and constraints inherent in constructing a new (linguistic) belonging in contemporary societies. Taken together, these papers illustrate and expand the discussion about the language–integration nexus. Additionally, by bringing into focus multilingual realities and mobile aspirations, they seek to provide a fresh impetus for research, and contribute to the creation of language policies that recognise a larger range of communicative possibilities and forms of language knowledge (cf. Ricento 2014; Flubacher & Yeung 2016). The thesis also makes a methodological contribution by demonstrating the value of cross-language qualitative research methods in migration and integration research. It includes a detailed discussion of the complexities of researching in a multilingual context (Holmes et al. 2013; Phipps 2013b), as well as a novel inquiry into the interactional dynamics of an interpreter-mediated research encounter (fourth paper).

Detailed reference viewed: 105 (7 UL)
Full Text
The Development and Utilization of Scenarios in European Union Energy Policy
Scheibe, Alexander UL

Doctoral thesis (2019)


Scenarios are a strategic planning tool that essentially enables decision-makers to identify future uncertainties and to devise or adjust organizational strategies. Increasingly, scenario building has been applied as a planning instrument by public policymakers. At the European Union (EU) level, scenarios are widely used in various policy areas and for different purposes. However, the development and utilization of scenarios in policymaking, as well as their concrete impact on the decision process, remain an under-explored research field. The academic literature focuses on scenarios in the business domain, where they are a well-established strategic planning component. In public policy, however, the development and use of scenarios conceivably differ from those in the private sector. In the case of the EU, the potential impact of its distinctive multi-stakeholder and multi-level policymaking environment on the development of scenarios is not sufficiently accounted for in the literature. Moreover, it is uncertain how scenarios are situated in the wider EU political context. This thesis seeks to explain how scenarios are developed and utilized in the EU’s policymaking process. To that end, an institutionalized scenario development exercise from the Union’s energy policy (the Ten-Year Network Development Plan, TYNDP) is investigated as a case study. Drawing from empirical evidence primarily based on elite interviews, the research applies a qualitative-interpretative research framework that combines the analytical concepts of policy networks, epistemic communities, and strategic constructivism. The combination facilitates the design of a theoretical model of inner and outer spheres in EU energy policymaking, accounting for both the role of scenarios in policymaking and the impact of political goals on their development. The research concludes that the wider EU political context of the outer sphere shapes the development of scenarios in the inner sphere and determines how they are utilized in the policymaking process. The expectations of political actors frame the technical expertise in the scenario development process. With regard to the application of scenarios in wider public policy, the research demonstrates that the closer the scenario building is to the decision-making process, the stronger the political impact on the scenarios is likely to be. This is because political actors and decision-makers seek to align the scenario outcomes with their respective preferences.

Detailed reference viewed: 70 (6 UL)
Full Text
From Persistent Homology to Reinforcement Learning with Applications for Retail Banking
Charlier, Jérémy Henri J. UL

Doctoral thesis (2019)


Retail banking services are one of the pillars of modern economic growth. However, the evolution of clients’ habits in modern societies and the recent European regulations promoting more competition mean that retail banks will encounter serious challenges over the next few years, endangering their activities. They now face an impossible compromise: maximizing the satisfaction of their hyper-connected clients while avoiding any risk of default and remaining regulatory compliant. Therefore, advanced and novel research concepts are a serious game-changer for gaining a competitive advantage. In this context, we investigate in this thesis different concepts bridging the gap between persistent homology, neural networks, recommender engines and reinforcement learning, with the aim of improving the quality of retail banking services. Our contribution is threefold. First, we highlight how to overcome insufficient financial data by generating artificial data using generative models and persistent homology. Then, we present how to perform accurate financial recommendations in multiple dimensions. Finally, we present a model-free reinforcement learning approach to determine the optimal policy of money management based on the aggregated financial transactions of the clients. Our experimental data sets, extracted from well-known institutions where the privacy and the confidentiality of the clients were not put at risk, support our contributions. In this work, we provide the motivations of our retail banking research project, describe the theory employed to improve the quality of the financial services, and evaluate quantitatively and qualitatively our methodologies for each of the proposed research scenarios.
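The money-management part relies on a model-free reinforcement learning approach; as background only, the toy sketch below shows a standard tabular Q-learning update. The states, actions and rewards are hypothetical stand-ins, not the thesis' environment.

```python
import random
from collections import defaultdict

# Toy tabular Q-learning: states could be coarse balance buckets, actions a
# money-management decision (all names and rewards here are hypothetical).
alpha, gamma, epsilon = 0.1, 0.95, 0.1
actions = ["save", "invest", "spend"]
Q = defaultdict(lambda: {a: 0.0 for a in actions})

def choose(state):
    if random.random() < epsilon:                     # explore
        return random.choice(actions)
    return max(Q[state], key=Q[state].get)            # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# One simulated transition of an aggregated-transactions environment:
update("low_balance", choose("low_balance"), reward=-1.0, next_state="mid_balance")
print(dict(Q["low_balance"]))
```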

Detailed reference viewed: 41 (7 UL)
Full Text
Liquid Metals and Liquid Crystals Subject to Flow: From Fundamental Fluid Physics to Functional Fibers
Honaker, Lawrence William UL

Doctoral thesis (2019)


Over the past few decades, technology has pushed strongly towards wearables, one such form being textiles which incorporate a functional component. There are several ways to produce polymer fibers on both laboratory and industrial scales, but the implementation of these techniques to spin fibers incorporating a functional heterocore has proven challenging for certain combinations of materials. In general, fiber spinning from polymer solutions, regardless of the method, is a multifaceted process with concerns in chemistry, materials science, and physics, both from fundamental and applied standpoints, requiring balancing of flow parameters (interfacial tension, viscosity, and inertial forces) against solvent extraction. This becomes considerably more complicated when multiple interfaces are present. This thesis explores the concerns involved in the spinning of fibers incorporating functional materials from several standpoints. Firstly, due to the importance of interfacial forces in jet stability, I present a microfluidic interfacial tensiometry technique for measuring the interfacial tension between two immiscible fluids, assembled using glass capillary microfluidics techniques. The advantage of this technique is that it can measure the interfacial tension without reliance on sometimes imprecise external parameters and data, obtaining interfacial tension measurements solely from experimental observations of the deformation of a droplet into a channel and the pressure needed to induce the same. Using the knowledge gained from both microfluidic device assembly and the interfacial tension, I then present the wet spinning of polymer fibers using a glass capillary spinneret. This technique uses a polymer dope flowed along with a coagulation bath tooled to extract solvent, leaving behind a continuous polymer fiber. We were able to spin both pure polymer fibers and elastomer microscale fibers containing a continuous heterocore of a liquid crystal, with the optical properties of the liquid crystal maintained within the fiber. While we were not able to spin fibers of a harder polymer containing a continuous core, either liquid crystalline or of a liquid metal, I present an analysis of why the spinning was unsuccessful and of what will lead us towards the eventual spinning of such fibers.
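The abstract does not spell out the tensiometry relation; below is a simplified sketch, assuming the Young–Laplace relation ΔP = γ(1/r1 + 1/r2) is inverted from the measured pressure jump and the curvature of the deformed droplet interface. The numbers are illustrative, not data from the thesis.

```python
def interfacial_tension(delta_p, r1, r2):
    """Invert the Young-Laplace relation delta_p = gamma * (1/r1 + 1/r2).

    delta_p: measured Laplace pressure jump across the interface (Pa)
    r1, r2:  principal radii of curvature of the deformed interface (m)
    Returns gamma in N/m.
    """
    return delta_p / (1.0 / r1 + 1.0 / r2)

# Illustrative numbers only: a 280 Pa pressure jump, 50 µm and 100 µm curvatures
print(interfacial_tension(280.0, 50e-6, 100e-6) * 1000, "mN/m")   # ≈ 9.3 mN/m
```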

Detailed reference viewed: 74 (4 UL)
Full Text
Le jugement par défaut dans l'espace judiciaire européen
Richard, Vincent Jérôme UL

Doctoral thesis (2019)


French judges regularly refuse to enforce foreign judgements rendered by default against a defendant who has not appeared. This finding is also true for other Member States, as many European regulations govern cross-border enforcement of decisions rendered in civil and commercial matters between Member States. The present study examines this problem in order to understand the obstacles to the circulation of default decisions and payment orders in Europe. When referring to the recognition of default judgments, it would be more accurate to refer to the recognition of decisions made as a result of default proceedings. It is indeed this (default) procedure, more than the judgment itself, which is examined by the exequatur judge to determine whether the foreign decision should be enforced. This study is therefore firstly devoted to default procedures and payment order procedures in French, English, Belgian and Luxembourgish laws. These procedures are analysed and compared in order to highlight their differences, be they conceptual or simply technical in nature. Once these discrepancies have been identified, this study turns to private international law in order to understand which elements of the default procedures are likely to hinder their circulation. The combination of these two perspectives makes it possible to envisage a gradual approximation of national default procedures in order to facilitate their potential circulation in the European area of freedom, security and justice.

Detailed reference viewed: 42 (8 UL)
Full Text
Parliamentary involvement in EU affairs during treaty negotiations in a historical comparative perspective: the cases of the Austrian, Finnish and Luxembourgish parliaments
Badie, Estelle Céline UL

Doctoral thesis (2019)


Until recently, studies on the Europeanisation of national parliaments mostly tended to focus on the evolution of their institutional capacities rather than on their actual behaviour in EU affairs. This thesis seeks to identify variations in behavioural patterns between the Austrian, Finnish and Luxembourgish legislatures. The historical comparative perspective is based mainly on political and societal similarities between the countries. Drawing on historical and sociological institutionalism, the thesis aims to analyse the evolution and motivations of parliamentary involvement in the field of European affairs over a period running from the negotiations on the Treaty establishing a Constitution for Europe until the Treaty on Stability, Coordination and Governance in the EMU. By including both institutional and motivational indicators, the objective consists of identifying the extent to which parliamentary involvement in EU matters has been challenged in the framework of EU treaties and intergovernmental treaties on the EMU. We address the following questions: What institutional and motivational factors influenced parliamentary involvement in EU affairs? What parliamentary initiatives have been taken to improve participation in EU affairs? In which direction did institutional change happen and who triggered it? The present thesis is based primarily on qualitative data, i.e. interviews with parliamentarians, civil servants from parliamentary administrations and parliamentary group collaborators. In this way, we aim to produce in-depth empirical knowledge of actual parliamentary behaviour in each studied country. Thus, the assessment of parliamentary involvement in EU affairs through the lens of parliamentarians’ motivations and their institutional context helps to investigate the parliamentary “black box”.

Detailed reference viewed: 37 (6 UL)
Full Text
Entwicklung und Modellierung eines Hybrid-Solarmodulkollektor-basierten Wärmepumpensystems auf der Basis von CO2 Direktverdampfung in Mikrokanälen
Rullof, Johannes UL

Doctoral thesis (2019)


As early as the end of the 1970s, heat pumps in combination with glycol-based large-area combined radiation environmental heat absorbers were developed as evaporators which, in comparison to forced convection-based air heat pumps, not only used ambient energy but also solar energy as an energy source. However, due to the falling oil prices after the oil crisis, this technology, which was for the most part not yet economical and moreover required large absorber surfaces, could not prevail. Due to the significant reduction in the heating requirement of new buildings, nowadays much smaller absorber surfaces are needed in combination with a heat pump, which leads to a renewed interest in the combination of heat pumps and absorbers. Above all, the combination of thermal absorber, based on free convection and radiation, and photovoltaic (PV) in one module (PVT module), may be an alternative to forced convection-based air heat pumps. The use of solar energy as the heat source of the heat pump leads one to expect higher coefficients of performance by achieving higher evaporation temperatures than conventional forced convection-based air heat pumps. Numerous publications describe the market potential of solar hybrid modules with direct evaporation (PVT-direct module), and several theoretical studies describe constructive approaches and related calculations. However, to date, there is still no practical implementation of a PVT hybrid module in combination with a module-integrated direct evaporation of the natural refrigerant CO2 in microchannels. So far, no experimental studies on CO2-PVT-based heat pump systems with direct evaporation have been carried out by research institutions. Thus, the proof of the constructive and functional feasibility of a CO2-based PVT-direct module as well as the energetic feasibility of the PVT-based CO2 heat pump system is still a desideratum. The three objectives of this work can be summarized as follows: 1. Development and production of the PVT-direct module for the analysis of the constructional feasibility of the PVT-direct module 2. Experimental investigation of the PVT-direct module for the analysis of both the thermal and electrical functional feasibility of the PVT-direct module 3. Analysis of the energetic feasibility of the PVT-based CO2 heat pump system

Detailed reference viewed: 43 (8 UL)
Full Text
Access Control Mechanisms Reconsidered with Blockchain Technologies
Steichen, Mathis UL

Doctoral thesis (2019)

Detailed reference viewed: 101 (4 UL)
Investigation of the immune functions of DJ-1
Zeng, Ni UL

Doctoral thesis (2019)

Detailed reference viewed: 50 (4 UL)
IMPROVED DESIGN METHODS FOR THE BEARING CAPACITY OF FOUNDATION PILES
Rica, Shilton UL

Doctoral thesis (2019)


Pile foundations are often used for civil structures, both offshore and onshore, which are placed on soft soils. Nowadays, there are many different methods used for the prediction of the pile bearing capacity. However, the resulting design values are often different from the values measured in pile load field tests. A reason for this is that there are many pile installation effects and (unknown) soil conditions which influence the pile bearing capacity. Another problem is that in many past pile load field tests, the residual stresses in the pile after installation have unfortunately been ignored. Ignoring them leads to a measured tip bearing resistance which is lower than the real tip bearing resistance (capacity), and a measured pile shaft friction which is higher than the real pile shaft friction. The main aim of this thesis is to come to a better understanding of the pile performance and especially the pile bearing capacity. In order to achieve this aim, many numerical loading simulations were computed for small displacements with the finite element software Plaxis, and many existing pile design methods have been studied. The pile installation process itself was modelled and simulated with the help of the material point method, MPM, which is able to handle large-displacement numerical simulations. The version of this MPM method used here was recently developed at the research institute Deltares in the Netherlands. The results from the MPM simulations showed that there is a big difference between the bearing capacity of a pre-installed pile (no installation effects taken into account) and the bearing capacity of a pile where the installation effects are taken into account. This demonstrates numerically the importance of the pile installation effects on the pile bearing capacity. However, the MPM numerical simulations were done only for jacked piles; impact piles, vibrated piles etc. were not simulated. For this reason, there is no detailed numerical study of the specific effect of each installation method on the pile bearing capacity. The fact that installation effects, in general, have an important influence on the pile bearing capacity had already been proven by field tests and centrifuge tests, and has been published before by several authors. The performed numerical simulations show that during the loading and failure of a pile, a balloon-shaped plastic zone develops around the pile tip, which is in fact the failure mechanism. A better understanding of this zone could lead to a better estimation of the pile tip bearing capacity, because the size and position of this plastic zone are directly related to the pile tip bearing capacity. Therefore, this plastic zone has been studied for different soil and pile parameters, and the influence of each parameter has been studied and discussed. A similar balloon-shaped plastic zone was found for both small- and large-displacement simulations. The tip bearing capacity of a pile is regarded as depending only on the soil in a certain zone around the pile tip, called the influence zone. The influence zone is found to be similar to the plastic zone at the pile tip. Therefore, the influence of a soft soil layer near the influence zone of the pile tip has also been studied. The numerical results have been validated with laboratory tests made by Deltares. The influence zone extends roughly from 2 times the pile diameter, D, above the pile tip to 5 or 6 times D below the pile tip.
Laboratory tests, using a direct shear test machine, have been performed in order to determine the difference between the soil-pile friction angle and the soil-cone friction angle. The tests were done for different surface roughnesses and for three different sand types, and the results were compared with the roughness of the sleeve of the Cone Penetration Test (CPT) apparatus. Based on the numerical simulations and the laboratory tests of Deltares, a new design method has been proposed for the estimation of the pile bearing capacity. This method has the CPT results as its main input, and it is therefore a CPT-based design method. The proposed method has been validated using pile field tests that were performed in Lelystad in the Netherlands. During this research, several axial and lateral pile field tests were also performed on the west coast of Mexico; their results are reported and discussed in the appendices.
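The proposed CPT-based method is not detailed in the abstract beyond the influence zone (roughly 2D above to 5-6D below the pile tip); the sketch below only illustrates the general idea of averaging cone resistance over that zone, with a placeholder reduction factor rather than the thesis' calibrated rule.

```python
import numpy as np

def tip_resistance_from_cpt(depths, qc, tip_depth, diameter,
                            above=2.0, below=6.0, alpha=0.6):
    """Illustrative CPT-based pile tip resistance estimate.

    Averages the cone resistance qc over the influence zone reported above
    (from `above`*D above the tip to `below`*D below it) and scales it by a
    reduction factor alpha. The averaging rule and alpha are placeholders,
    not the thesis' calibrated method.
    """
    depths, qc = np.asarray(depths), np.asarray(qc)
    zone = (depths >= tip_depth - above * diameter) & \
           (depths <= tip_depth + below * diameter)
    return alpha * qc[zone].mean()

# CPT profile every 0.2 m (illustrative values, MPa):
depths = np.arange(8.0, 14.0, 0.2)
qc = np.full_like(depths, 12.0); qc[depths > 11.0] = 6.0   # softer layer below 11 m
print(tip_resistance_from_cpt(depths, qc, tip_depth=10.0, diameter=0.4), "MPa")
```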

Detailed reference viewed: 148 (4 UL)
Full Text
Assessment and Improvement of the Practical Use of Mutation for Automated Software Testing
Titcheu Chekam, Thierry UL

Doctoral thesis (2019)


Software testing is the main quality assurance technique used in software engineering. In fact, companies that develop software and open-source communities alike actively integrate testing into their software development life cycle. In order to guide and give objectives to the software testing process, researchers have designed test adequacy criteria (TACs), which define the properties of a piece of software that must be covered in order to constitute a thorough test suite. Many TACs have been designed in the literature, among them the widely used statement and branch TACs, as well as the fault-based TAC named mutation. It has been shown in the literature that mutation is effective at revealing faults in software; nevertheless, the adoption of mutation in practice is still lagging due to its cost. Ideally, the TACs most likely to lead to high fault revelation are desired for testing, and the fault revelation of test suites is expected to increase as their coverage of TAC test objectives increases. However, the question of which TAC best guides software testing towards fault revelation remains controversial and open, and the relationship between the coverage of TAC test objectives and fault revelation remains unknown. In order to increase knowledge and provide answers about these issues, we conducted, in this dissertation, an empirical study that evaluates the relationship between test objective coverage and fault revelation for four TACs (statement coverage, branch coverage, and weak and strong mutation). The study showed that fault revelation increases with coverage only beyond a certain coverage threshold, and that the strong mutation TAC has the highest fault revelation. Despite the benefit of higher fault revelation that the strong mutation TAC provides for software testing, software practitioners are still reluctant to integrate strong mutation into their software testing activities. This happens mainly because of the high cost of mutation analysis, which is related to the large number of mutants and the limited automation of test generation for strong mutation. Several approaches have been proposed in the literature to tackle the cost of strong mutation analysis. Mutant selection (reduction) approaches aim to reduce the number of mutants used for testing by selecting a small subset of mutation operators to apply during mutant generation, thus reducing the number of analyzed mutants. Nevertheless, those approaches are not more effective, w.r.t. fault revelation, than random mutant sampling (which leads to a high loss in fault revelation). Moreover, there is not much work in the literature that addresses cost-effective automated test generation for strong mutation. This dissertation proposes two techniques, FaRM and SEMu, to reduce the cost of mutation testing. FaRM statically selects and prioritizes mutants that lead to faults (fault-revealing mutants) in order to reduce the number of mutants (fault-revealing mutants represent a very small proportion of the generated mutants). SEMu automatically generates tests that strongly kill mutants and thus increase the mutation score and improve the test suites. First, this dissertation presents an empirical study that evaluates the fault revelation (the ability to lead to tests that have high fault revelation) of four TACs, namely statement, branch, weak mutation and strong mutation. The outcome of the study shows evidence that, for all four studied TACs, fault revelation increases with the coverage of TAC test objectives only beyond a certain threshold of coverage.
This suggests the need to attain higher coverage during testing. Moreover, the study shows that strong mutation is the only studied TAC that leads to tests with significantly the highest fault revelation. Second, in line with mutant reduction, we study the different mutant quality indicators (used to qualify "useful" mutants) proposed in the literature, including fault-revealing mutants. Our study shows that there is a large disagreement between the indicators, suggesting that the fault-revealing mutant set is unique and differs from other mutant sets. Thus, given that testing aims to reveal faults, one should directly target fault-revealing mutants for mutant reduction; we also do so in this dissertation. Third, this dissertation proposes FaRM, a mutant reduction technique based on supervised machine learning. In order to automatically discriminate, before test execution, between useful (valuable) and useless mutants, FaRM builds a machine learning model for mutant classification. The features for the classification model are static program features of mutants, categorized as mutant types and mutant context (abstract syntax tree, control flow graph and data/control dependency information). FaRM’s classification model successfully predicted fault-revealing mutants and killable mutants. Then, in order to reduce the number of analyzed mutants, FaRM selects and prioritizes fault-revealing mutants based on the aforementioned mutant classification model. An empirical evaluation shows that FaRM outperforms (w.r.t. the accuracy of fault-revealing mutant selection) random mutant sampling and existing mutation-operator-based mutant selection techniques. Fourth, this dissertation proposes SEMu, an automated test input generation technique aiming to increase the strong mutation coverage score of test suites. SEMu is based on symbolic execution and leverages multiple cost reduction heuristics for the symbolic execution. An empirical evaluation shows that, for a limited time budget, SEMu generates tests that successfully increase the strong mutation coverage score and kill more mutants than tests generated by state-of-the-art techniques. Finally, this dissertation proposes Muteria, a framework that enables the integration of FaRM and SEMu into the automated software testing process. Overall, this dissertation provides insights into how to effectively use TACs to test software, shows that strong mutation is the most effective TAC for software testing, and provides techniques that effectively facilitate the practical use of strong mutation, together with extensive tooling to support the proposed techniques while enabling their extension for the practical adoption of strong mutation in software testing.
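To make the FaRM idea concrete, here is a hedged sketch of training a classifier on static mutant features and ranking mutants by predicted probability of being fault-revealing. The encoded features, the synthetic labels and the choice of gradient boosting are assumptions for illustration, not the dissertation's actual feature set or model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical static features per mutant: encoded mutation operator type,
# AST-node kind of the mutated expression, control-dependency depth,
# number of data dependencies. Labels: 1 = fault-revealing, 0 = not.
rng = np.random.default_rng(1)
X = rng.integers(0, 20, size=(500, 4)).astype(float)
y = (X[:, 0] + X[:, 2] > 22).astype(int)        # synthetic stand-in for real labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)

# Prioritize mutants by predicted probability of being fault-revealing,
# and analyze only the top of the ranking to cut mutation cost.
ranking = np.argsort(-clf.predict_proba(X_te)[:, 1])
print("top mutants to analyze first:", ranking[:10])
```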

Detailed reference viewed: 102 (19 UL)
Full Text
DISSECTING GENETIC EPISTASIS IN FAMILIAL PARKINSON’S DISEASE USING A DIGENIC PATIENT-DERIVED STEM CELL MODEL
Hanss, Zoé UL

Doctoral thesis (2019)


Parkinson’s disease (PD) is the second most common neurodegenerative disorder worldwide. 10% of PD patients present a familial form of the disease implicating genetic mutations. Variability in terms of disease expressivity, severity and penetrance can be observed among familial cases. The idea that the classical one-gene one-trait model may not capture the full picture of the genetic contribution to PD pathophysiology is increasingly recognized. Therefore, a polygenic model, in which multiple genes influence the disease risk and the phenotypic traits in PD, should be investigated. Mutations in PRKN, encoding the E3 ubiquitin-protein ligase Parkin, cause young-onset autosomal recessive forms of PD. Variability in terms of clinical presentation and neuropathology has been observed in PD patients carrying mutations in Parkin. On the other hand, mutations in GBA were recently recognized as the most common genetic risk factor for developing PD. The incomplete penetrance of the disease in patients with GBA mutations may implicate other genetic factors. Therefore, it can be hypothesized that interactions between common PD genes like PRKN and GBA contribute to the phenotypic heterogeneity observed in PD cases. To explore this hypothesis, we generated patient-derived cellular models from several PD patients carrying pathogenic mutations in either both PRKN and GBA (triallelic models) or in only one of them (bi- or monoallelic models). We developed a novel strategy to gene-edit the N370S mutation in GBA via CRISPR-Cas9, without interference from its respective pseudogene, which allows for the dissection of the role of GBA in the context of a PRKN mutation on an isogenic background. We identified a specific α-synuclein homeostasis in the triallelic model. The genetic and pharmacological rescue of GBA in the triallelic model modified the observed α-synuclein phenotype, demonstrating the contribution of GBA to the observed phenotype. We then investigated whether Parkin was contributing to the phenotype. The modulation of Parkin function in the context of a GBA mutation induced a modification of the α-synuclein homeostasis. We therefore concluded that both PRKN and GBA influence α-synuclein homeostasis in the triallelic model. Nevertheless, the phenotypic outcome of the co-occurrence of these mutations was neither additive nor synergistic. We therefore suggest the existence of an epistatic interaction between mutant GCase and Parkin that would underlie the clinical heterogeneity observed in PD patients carrying these mutations.

Detailed reference viewed: 36 (5 UL)
Full Text
Dynamical Modeling Techniques for Biological Time Series Data
Mombaerts, Laurent UL

Doctoral thesis (2019)


The present thesis is articulated around two main topics which have in common the modeling of the dynamical properties of complex biological systems from large-scale time-series data. On the one hand, this thesis analyzes the inverse problem of reconstructing Gene Regulatory Networks (GRNs) from gene expression data. This first topic seeks to reverse-engineer the transcriptional regulatory mechanisms involved in a few biological systems of interest, which is vital to understand the specificities of their different responses. In the light of recent mathematical developments, a novel, flexible and interpretable modeling strategy is proposed to reconstruct the dynamical dependencies between genes from short time-series data. In addition, experimental trade-offs and optimal modeling strategies are investigated for a given data availability. Consistent literature on these topics was previously surprisingly lacking. The proposed methodology is applied to the study of circadian rhythms, which consist of complex GRNs driving most daily biological activity across many species. On the other hand, this manuscript covers the characterization of dynamically differentiable brain states in zebrafish in the context of epilepsy and epileptogenesis. Zebrafish larvae represent a valuable animal model for the study of epilepsy due to both their genetic and dynamical resemblance to humans. The fundamental premise of this research is the early appearance of subtle functional changes preceding the clinical symptoms of seizures. More generally, this idea, based on bifurcation theory, can be described as a progressive loss of resilience of the brain and, ultimately, its transition from a healthy state to another state characterizing the disease. First, the morphological signatures of seizures generated by distinct pathological mechanisms are investigated. For this purpose, a range of mathematical biomarkers characterizing relevant dynamical aspects of the neurophysiological signals is considered. Such mathematical markers are later used to address the subtle manifestations of early epileptogenic activity. Finally, the feasibility of a probabilistic prediction model that indicates the susceptibility to seizure emergence over time is investigated. The existence of alternative stable system states and their sudden and dramatic changes have notably been observed in a wide range of complex systems, such as ecosystems, climate or financial markets.
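The thesis' modeling strategy is not specified in the abstract; as a minimal baseline sketch of GRN reconstruction from short time series, the code below fits linear dynamics x(t+1) ≈ A x(t) by least squares on toy data and reads putative regulatory links off large entries of the fitted matrix. This is a generic illustration, not the thesis' method.

```python
import numpy as np

# Toy expression time series for 3 genes (rows = time points, columns = genes).
rng = np.random.default_rng(2)
A_true = np.array([[0.9, 0.0, 0.0],
                   [0.5, 0.8, 0.0],     # gene 0 activates gene 1
                   [0.0, -0.4, 0.9]])   # gene 1 represses gene 2
x = np.zeros((50, 3)); x[0] = [1.0, 0.2, 0.5]
for t in range(49):
    x[t + 1] = A_true @ x[t] + 0.01 * rng.standard_normal(3)

# Least-squares fit of x(t+1) = A x(t): lstsq solves x[:-1] @ M = x[1:], so M = A^T.
M, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
print(np.round(M.T, 2))   # large |entries| suggest putative regulatory edges
```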

Integrity and Confidentiality Problems of Outsourcing
Pejo, Balazs UL

Doctoral thesis (2019)

Cloud services enable companies to outsource data storage and computation. Resource-limited entities could use this pay-per-use model to outsource large-scale computational tasks to a cloud-service provider. Nonetheless, this on-demand network access raises issues of security and privacy, which have become primary concerns in recent decades. In this dissertation, we tackle these problems from two perspectives: data confidentiality and result integrity. Concerning data confidentiality, we systematically classify the relaxations of the most widely used privacy-preserving technique, Differential Privacy. We also establish a partial ordering of strength between these relaxations and indicate whether they satisfy additional desirable properties, such as composition and privacy axioms. Further addressing the problem of confidentiality, we design a Collaborative Learning game, which helps the data holders to determine how to set the privacy parameter based on economic aspects. We also define the Price of Privacy to measure the overall degradation of accuracy resulting from the applied privacy protection. Moreover, we develop a procedure called Self-Division, which bridges the gap between the game and real-world scenarios. Concerning result integrity, we formulate a Stackelberg game between an outsourcer and an outsourcee in which no absolute correctness is required. We provide the optimal strategies for the players and perform a sensitivity analysis. Furthermore, we extend the game by allowing the outsourcer not to verify, and we derive its Nash equilibria. Regarding integrity verification, we analyze and compare two verification methods for Collaborative Filtering algorithms: the splitting and the auxiliary-data approach. We observe that neither method provides a full solution to the problem at hand. Hence, we propose a solution which, besides outperforming both, is also applicable to both stages of the algorithms.
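
For readers unfamiliar with the role of the privacy parameter mentioned above, here is a minimal, textbook-style sketch of the standard Laplace mechanism for epsilon-differential privacy (a generic illustration only; it is not the collaborative-learning game or the Self-Division procedure of the dissertation, and the query data are hypothetical):

    # Standard Laplace mechanism: add noise scaled to sensitivity/epsilon to a
    # query answer before release; smaller epsilon means stronger privacy and more noise.
    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
        rng = rng or np.random.default_rng()
        return true_value + rng.laplace(0.0, sensitivity / epsilon)

    # Hypothetical usage: privately release a counting query (sensitivity 1).
    ages = [23, 35, 41, 29, 52]
    true_count = sum(a > 30 for a in ages)
    for eps in (0.1, 1.0, 10.0):
        print(eps, round(laplace_mechanism(true_count, sensitivity=1.0, epsilon=eps), 2))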

Numerical parametric study on minimum degree of shear connection in steel-concrete composite beams
Romero Guzman, Alfredo UL

Bachelor/master dissertation (2019)

In steel-concrete composite beams, the mechanical shear connectors are used to provide the shear transfer at the steel-concrete interface by connecting the concrete slab and the steel beam. Eurocode 4 (EN 1994-1-1), provides the design rules to achieve an adequate degree of shear connection. In recent years, revised rules have been proposed for the new generation of Eurocodes and research has been done to assess their suitability. The aim of this study is to complement previous studies and to evaluate the revised rules for minimum degree of shear connection of propped composite beams with ductile shear connectors and symmetric cross sections. First, a simple and comprehensive numerical model of a simply supported composite beam was developed in the finite element software ABAQUS and validated against previous numerical studies. Then, a parametric study was performed to evaluate the proposed rules on solid composite beams and composite beams with profiled steel sheeting for four span lengths (i.e. Lₑ= 6, 9, 12, and 15 m). For each span length, ten distinct configurations were analysed and each configuration was evaluated for five different degrees of shear connection (i.e. η= 0.20, 0.40, 0.60, 0.80, and 1.0). Thus, more than 200 simulations were completed in this parametric study. The results were post-processed and the suitability of the proposed rules was assessed in terms of end slip and midspan deflection at ULS and SLS respectively. The design rules led to adequate results in terms of midspan deflection. However, in some cases the end slip at ULS exceeded the 6 mm upper bound given in EN 1994-1-1 for ductile shear connectors. In order to properly identify and limit the end slip of the unsafe cases, the data from previous studies focusing on long span beams (i.e. 15m<Lₑ≤25m) was gathered with the results of the present study. Based on the assessment of the results, additional limitations to the revised rules were provided for the unsafe configurations. In these cases, the load level shall be limited by a reduction factor βₓ varying between 0.85 and 1, as a function of the plastic neutral axis depth to the overall height ratio (i.e. xₚₗ/h) and the composite to steel beam plastic bending resistance ratio (i.e. Mpl,Rd/Mpl,a,Rd). The effectiveness of this condition was re-evaluated and the results showed that more than 90% of the cases exhibited a maximum allowable end slip lower than 6 mm. Finally, a design proposal that accounts for the reduction factor βₓ was developed. [less ▲]

Integrative Network-Based Approaches For Modeling Human Disease
Ali, Muhammad UL

Doctoral thesis (2019)

The large-scale development of high-throughput sequencing technologies has allowed the generation of reliable omics data related to various regulatory levels. Moreover, integrative computational modeling has enabled the disentangling of a complex interplay between these interconnected levels of regulation by interpreting concomitant large quantities of biomedical information (‘big data’) in a systematic way. In the context of human disorders, network modeling of complex gene-gene interactions has been successfully used for understanding disease-related dysregulation and for predicting novel drug targets to revert the diseased phenotype. Recent evidence suggests that changes at multiple levels of genomic regulation are responsible for the development and course of multifactorial diseases. Although existing computational approaches have been able to explain cell-type-specific and disease-associated transcriptional regulation, they so far have been unable to utilize available epigenetic data for systematically dissecting underlying disease mechanisms. In this thesis, we first provided an overview of recent advances in the field of computational modeling of cellular systems, its major strengths and limitations. Next, we highlighted various computational approaches that integrate information from different regulatory levels to understand mechanisms behind the onset and progression of multifactorial disorders. For example, we presented INTREGNET, a computational method for systematically identifying minimal sets of transcription factors (TFs) that can induce desired cellular transitions with increased efficiency. As such, INTREGNET can guide experimental attempts for achieving effective in vivo cellular transitions by overcoming epigenetic barriers restricting the cellular differentiation potential. Furthermore, we introduced an integrative network-based approach for ranking Alzheimer’s disease (AD)-associated functional genetic and epigenetic variation. The proposed approach explains how genetic and epigenetic variation can induce expression changes via gene-gene interactions, thus allowing for a systematic dissection of mechanisms underlying the onset and progression of multifactorial diseases like AD at a multi-omics level. We also showed that particular pathways, such as sphingolipids (SL) function, are significantly dysregulated in AD. In-depth integrative analysis of these SL-related genes reveals their potential as biomarkers and for SL-targeted drug development for AD. Similarly, in order to understand the functional consequences of CLN3 gene mutation in Batten disease (BD), we conducted a differential gene regulatory network (GRN)-based analysis of transcriptomic data obtained from an in vitro BD model and revealed key regulators maintaining the disease phenotype. We believe that the work conducted in this thesis provides the scientific community with a valuable resource to understand the underlying mechanism of multifactorial diseases from an integrative point of view, helping in their early diagnosis as well as in designing potential therapeutic treatments. [less ▲]

First-Principles High-Throughput Study of Linear and Nonlinear Optical Materials
Naccarato, Francesco UL

Doctoral thesis (2019)

Nonlinear optical (NLO) processes, such as second harmonic generation (SHG), play an important role in modern optics, especially in laser-related science and technology. They are at the core of a wide variety of applications ranging from optoelectronics to medicine. Among the various NLO materials, insulators are particularly important for second-order NLO properties. In particular, only non-centrosymmetric crystals can display a non-zero second-order NLO susceptibility. However, given the large number of requirements that a material needs to meet in order to be a good nonlinear optical material, the choice of compounds is drastically limited. Indeed, despite recent progress, a systematic approach to designing NLO materials is still lacking. In this work, we conduct a first-principles high-throughput study on a large set of semiconductors for which we compute the linear and nonlinear susceptibilities using Density Functional Perturbation Theory. For the linear optical properties, our calculations confirm the general trend that the refractive index is roughly inversely proportional to the band gap. To explain the large spread in the data distribution, we found two descriptors that successfully describe materials with a relatively high refractive index: (i) a narrow distribution in energy of the optical transitions, which brings the average optical gap close to the direct band gap, and (ii) a large number of transitions around the band edge and/or high dipole matrix elements. For non-centrosymmetric crystals, we calculate the SHG efficiency. We observe some materials with a particularly high SHG response, much stronger than the general relation with the linear refractive index through Miller’s rule would predict. We relate the value of Miller’s coefficient to geometric factors, i.e., how strongly the crystal deviates from a centrosymmetric one. We also identify interesting materials that show high optical responses for which it would be worth performing further analysis.
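
For reference, Miller's empirical rule mentioned above relates the second-order susceptibility to the linear susceptibilities through a nearly material-independent coefficient Δ (Miller's delta); the textbook form, quoted here only as background to the comparison made in the abstract, reads in LaTeX notation:

    % Miller's rule: the SHG susceptibility scales with the product of the linear
    % susceptibilities at the doubled and fundamental frequencies.
    \chi^{(2)}(2\omega;\,\omega,\omega) \;=\; \Delta\,
      \chi^{(1)}(2\omega)\,\chi^{(1)}(\omega)\,\chi^{(1)}(\omega)

Compounds whose computed SHG response far exceeds this estimate are the outliers the abstract refers to.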

THE INFLUENCE OF FAMILY OWNERSHIP ON M&AS AND INNOVATION
Issah, Abdul-Basit UL

Doctoral thesis (2019)

I draw on the concept of mixed gambles to investigate the socioemotional wealth trade-offs associated with high-risk strategic decisions, such as the acquisition decisions of family firms. We contrast the predictions from mixed gambles with those of the commonly used behavioural agency model (BAM). Our empirical results for a panel data set of large U.S. firms support the mixed-gamble predictions and reject those derived from BAM. They reveal that family firms are more likely to engage in horizontal acquisitions than non-family firms and that the engagement of family firms in horizontal acquisitions is even higher when they are in a gain frame.

Die Entwicklung der Entfremdung vom Lernen in der unteren Sekundarstufe in Luxemburg: Der Beitrag differenzieller schulischer Lern- und Entwicklungsmilieus
Grecu, Alyssa Laureen UL

Doctoral thesis (2019)

This dissertation provides research on alienation from learning in differential learning and developmental milieus (cf. Baumert/Stanat/Watermann 2006) within the strongly stratified educational system of Luxembourg. Alienation from learning is defined as a student's generalised negative orientation towards learning (Hascher/Hadjar 2018, p. 179) and is associated with severe consequences, including school dropout (see Hascher/Hagenauer 2010, p. 220). The study focuses on the question of how the development of alienation from learning differs between the differential learning and developmental milieus within Luxembourgish secondary school tracks. The conceptual framework is based on the perspective of differential learning and developmental milieus (Baumert/Stanat/Watermann 2006), the theory of school culture (Helsper 2008; Kramer/Thiersch/Ziems 2015) and further cultural-sociological and resonance-pedagogical approaches (Willis 1978; Beljan 2017). Combining research on school culture and research on school alienation, this work examines the interplay between school culture and individual students and its impact on alienation from learning. Conceptualized as a mixed-methods study, this research aims for a holistic picture of alienation from learning by combining quantitative and qualitative approaches. The quantitative part is based on a three-year panel study with students from all secondary school tracks (grades 7-9). Employing random-effects models and growth-curve models, it investigated the degree and development of alienation from learning over time. Qualitative group discussions and interviews with classes (grade 7) and their teachers from the high-achieving and low-achieving school tracks (corresponding to grammar school and the lowest secondary school track) were conducted to research how schools' specific demands contribute to alienation from learning. The method employed in the qualitative analysis was the sequence-analytical habitus reconstruction approach. The quantitative analysis reveals a moderate but increasing degree of alienation from learning in all secondary school tracks from grade 7 to 9. Alienation from learning thus develops similarly in all secondary school tracks. Students from the high-achieving ES-Track show the strongest degree of alienation from learning, whereas students from the low-achieving Modulaire-Track show the lowest degree of alienation from learning. The qualitative analysis identifies the high-achieving and low-achieving secondary school tracks as differential learning and developmental milieus characterized by diverging educational demands and standards at the class level. Consequently, when students' orientations conflict with the school's demands and offers, there is a high risk that they develop alienation. Track-specific possibilities and risks were identified with regard to bonding and alienation. A strong achievement orientation, theoretical educational content and highly standardised educational settings may foster alienation from learning, depending on the students' individual competencies and orientations.

Study of nanopulsed discharges for plasma-polymerization: experimental characterization and theoretical understanding of growth mechanisms in the deposition of functional polymer thin films
Loyer, François UL

Doctoral thesis (2019)

Plasma processes are highly versatile methods able to promote the formation of thin films from a vast variety of compounds thanks to the numerous energetic species they generate. Notably, plasma-enhanced chemical vapor deposition (PECVD) processes have already led to the simultaneous synthesis and deposition of a wide variety of organic functional materials. Conversely, the many reactive species composing the plasma induce a non-negligible number of side reactions, yielding altered chemistry compared to conventional polymerization processes. Therefore, there is a strong need for controlled methods promoting conventional polymerization pathways in plasma-based processes to increase their range of application. To this end, the atmospheric-pressure plasma-initiated chemical vapor deposition (AP-PiCVD) process was developed, differing from classical PECVD by its initiation source. AP-PiCVD relies on square-wave nanopulsed discharges with single ultra-short plasma on-times (t_on ≈ 100 ns) and long plasma off-times (t_off = 0.1 – 100 ms) rather than alternating-current sources yielding long and repeated discharges (t_on = 10 µs). Methacrylate monomers initiated by nanopulsed discharges highlighted a shift of growth mechanisms from classical PECVD pathways (plasma-polymerization) to conventional free-radical polymerization. Notably, the growth of polymer layers with an extremely high retention of the monomer’s chemistry and unprecedented molecular weights for an atmospheric-pressure plasma process is demonstrated at long plasma off-times. Moreover, a transition from gas-phase to surface growth mechanisms is observed, allowing the deposition of conformal coatings similarly to low-pressure alternative chemical vapor deposition processes. A thorough investigation of the thin films’ chemistry confirms the conventional nature of the layers grown by AP-PiCVD and sheds light on the growth mechanisms in low-frequency nanosecond pulsed plasmas. While the on-time induces the formation of free radicals from the monomers’ fragmentation, which are able to initiate and terminate the chain-addition process, conventional polymerization mechanisms are strongly promoted during the off-time, yielding a linear polymer core. Interestingly, new insights on selective initiation mechanisms based on sacrificial functions for an enhanced control of the molecular fragmentation are put forward and discussed. A model describing the kinetics of pulsed plasmas was developed and correlated with experimental observations, providing a deeper understanding of the interaction between the gas phase and the surface. The extraction of important physical parameters for the description of the growth kinetics in AP-PiCVD is demonstrated and their significance discussed. Using the fundamental knowledge developed on nanosecond pulsed plasma-initiated polymerization mechanisms, thermoresponsive copolymer layers grown by AP-PiCVD are reported for the first time. The properties of the layers are evaluated and related to their chemistry, allowing the determination of an optimal co-monomer ratio that integrates both of their individual properties in a single functional thin film.

MULTILINE HOLDING CONTROL AND INTEGRATION OF COOPERATIVE ITS
Laskaris, Georgios UL

Doctoral thesis (2019)

Transportation is an important sector of the global economy. Rapid urbanization and urban sprawl come with a continuous demand for additional transportation infrastructure in order to satisfy the increasing and variable demand. Public transportation is a major contributor to alleviating traffic congestion in modern megacities and provides a sustainable alternative to the car for accessibility. Public transport operation is inherently stochastic due to the high variability in travel times and passenger demand. This leads to disruptions and undesired phenomena such as vehicles arriving at stops in platoons. Due to the correlation between the headway between vehicles and passenger demand, bunching leads to long waiting times at stops, overcrowded vehicles, discomfort for the passengers and, from the operator's side, poor management of available resources and, overall, a low level of service of the system. The introduction of intelligent transport systems provided innovative applications to monitor the operation, collect data and react dynamically to any disruption of the transit system. Advanced Public Transport Systems extended the range of control strategies and their objectives beyond schedule adherence and reliance on historical data alone. Among these strategies, holding is a thoroughly investigated and widely applicable one. With holding, a vehicle is instructed to remain at a designated stop for an additional amount of time after the completion of dwell time, until a criterion is fulfilled. Depending on the characteristics of the line, the criterion aims for schedule adherence, regularity, or the minimization of passenger costs and their components. So far, holding has been used for regulating single-line operation. Beyond the single line, it has been used for transfer synchronization at transfer hubs and has recently been extended to regulate the operation at consecutive stops that are served by multiple lines. The first part of this dissertation is dedicated to real-time holding control of multiple lines. A rule-based holding criterion is formulated based on passenger travel time, accounting for the passengers experiencing the control action. The total holding time is estimated based on the size of all passenger groups that interact. The formulated criterion can be applied to all parts of a trunk-and-branch network. Additionally, the criterion is coupled with a rule-based criterion for synchronization, and the decision between the two is taken based on passenger cost. The criterion has been tested for different trunk-and-branch networks and compared with different control schemes, and its performance has been assessed using regularity indices as well as passenger cost indicators for the network as a whole and per passenger group. Finally, an analysis has been conducted to define under which network and demand configurations multiline control is preferable to single-line control. Results showed that, under specific demand distributions, multiline control can outperform single-line control at the network level. New technologies are continuously introduced into transit operation. Recently, Cooperative Intelligent Transport Systems, deployed in the form of Driver Advisory Systems (DAS), have been shown to provide the same level of priority as transit signal priority without changing the timing and phases of a traffic light. However, until now, the available DASs have focused exclusively on public transport priority, completely neglecting the sequence of the vehicles and the effects on the operation. In the second part of the dissertation, two widely used DASs are combined with holding in order to both reduce the number of stops at traffic signals and maintain regularity. Two hybrid controllers are introduced: a combination of two holding criteria, and a combination of holding and speed advisory. Both controllers are tested in simulation and compared with the independent application of the individual controllers and with different levels of transit signal priority. The hybrid controllers can drastically reduce transit signal priority requests while achieving both objectives.
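
To make the notion of holding concrete, here is a deliberately simplified, generic headway-based holding rule (an illustrative assumption on my part; the dissertation's criterion is instead based on the travel time of the interacting passenger groups across multiple lines):

    # Simplified single-line holding rule: hold a vehicle until its headway to the
    # preceding vehicle reaches a target value, capped by a maximum holding time.
    # Generic illustration only; not the multiline, passenger-cost-based criterion
    # developed in the dissertation.
    def holding_time(headway_to_leader, target_headway, max_hold=120.0):
        deficit = target_headway - headway_to_leader     # seconds missing to the target spacing
        return min(max(deficit, 0.0), max_hold)

    # Hypothetical usage: a bunched vehicle running 90 s behind its leader,
    # with a 300 s target headway.
    print(holding_time(headway_to_leader=90.0, target_headway=300.0))   # 120.0 (capped)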

Attentional bias to body- and sexually-relevant stimuli
Czeluscinska-Peczkowska, Agnieszka UL

Doctoral thesis (2019)

Sexual dysfunctions and body image dissatisfaction in women have reached significant levels, with prevalence rates currently estimated at 50% and 38%, respectively. The potential societal and health costs are considerable, as a negative body image is considered a high risk factor for the development and maintenance of eating disorders, and sexual dysfunctions can negatively impact overall well-being. Previous research has separately examined body image dissatisfaction and sexual functioning, but research linking these two areas is missing. Study 1 demonstrated the significance of contextual body image in evaluating visual sexual images. Valence ratings of sexually explicit stimuli were found to be associated with the level of sexual functioning, mediated by contextual body image: women who rated sexually explicit pictures less positively scored lower on sexual functioning if they reported a more self-conscious focus on and avoidance of the body in the context of sexual experiences. In Study 2, we demonstrated the relevance of new sexually explicit images in evoking sexual arousal, which was reflected in evaluative judgements and psychophysiological indicators of arousal. Study 3 aimed to compare responses to sexual stimuli and stimuli related to body image dissatisfaction (images of one's own body) in participants with sexual dysfunctions (SD) and a healthy control (HC) group. Contrary to our expectations, women in the SD group looked significantly longer and more frequently at their self-defined most satisfying rather than most dissatisfying body parts when compared to HC participants. There were no significant group differences in gaze duration and frequency for sexually explicit images, but the women with SD rated these stimuli as less positive and less arousing and expressed less motivation to keep looking at them. Furthermore, by inducing a positive or negative attentional bias (AB) to one's own body parts, we aimed at changing state body image satisfaction and state sexual arousal in response to a sexually explicit video clip. The proposed AB induction was not sufficient and did not affect body image or sexual experiences. Altogether, the findings from the current study suggest that, in women with SD, it is not visual attention and general arousal in response to sexual stimuli that are disturbed, but rather the process of evaluation.

Coopération, Sécurité, Paix et Développement durable : Quelle nouvelle stratégie des Systèmes d'Alerte précoce (SAP) de la troisième génération pour plus d'efficacité dans la prévention et gestion des conflits violents?
Nadialine, Biagui Alexandre UL

Doctoral thesis (2019)

The atrocities associated with the wars in the Balkans in the 1990s and the Rwandan genocide represent a critical period leading to the questioning of the relevance of Conflict Early Warning Systems (CEWS), whose purpose is to prevent violent conflicts. On the one hand, the supporters of these mechanisms are enthusiastic about the invaluable contributions of CEWS and the notion that these mechanisms are crucial in understanding the underlying causes and dynamics of violent conflicts in order to enable their prevention, resolution and peaceful transformation. The critics, on the other hand, believe that these mechanisms are incapable of fulfilling their assigned mission. Thus, in order to fully understand these mechanisms and to verify the validity of the argument supporting their modest contribution to the prevention of violent conflicts, a large-scale literature review was undertaken. With regard to all generations of CEWS, from 1990 to 2016, the main objective was to identify difficulties reported in the literature and recommendations for the improvement of these mechanisms. Considering that CEWS and response mechanisms interact constantly, identifying difficulties and recommendations related to response mechanisms for the same time period in the relevant literature was also a focus. In order to strengthen our investigation, a focus group with CEWS practitioners was held in March 2018 in Senegal. With regard to the online literature review, a search strategy was set up. It allowed the exploration of numerous documents from a variety of sources. As for the focus group, inclusion criteria for participant selection were also defined. On the one hand, 14,504 documents were explored, of which 259 were thoroughly reviewed and 153 selected. In addition, 20 other so-called mixed documents, focusing on the main concepts of the thesis (5 documents per concept), were selected. On the other hand, five participants took part in the focus group. As for the results of the online literature review, several types of difficulties and recommendations related to CEWS and response mechanisms were identified and divided into categories and sub-categories. Similarly, with regard to the focus group's results, several types of difficulties and recommendations related to CEWS and response mechanisms were identified and assigned to the corresponding categories and sub-categories. Consequently, a compendium of difficulties and recommendations related to CEWS and response mechanisms from 1990 to 2016 was established. By combining the Maxqda software with document and content analysis methods, the two types of data (review and empirical) were exploited through the principle of inference. A cross-analysis of these two types of data made it possible to refute or confirm the existence of certain failures, but also to confirm the persistence of several difficulties that CEWS and response mechanisms face. A critical analysis of these mechanisms has demonstrated that, despite their contributions and progress, much work is still to be done, especially when it comes to the collection and management of information for early warning purposes. Indeed, the failure to take into account a number of technical issues could have negative effects on the whole Collection-Verification-Transmission-Analysis-Referral (CVTAR) process and on the management of intra- or inter-state conflicts. Thus, with regard to the latter, the geopolitical and geostrategic game dictating the outcome of international mediation degrades its image by promoting a distorted and selfish culture of violent conflict prevention. Such flaws have prompted the formulation of alternative and relevant strategies with the aim of optimizing the effectiveness of violent conflict prevention and management. In addition to theoretical and practical contributions, the provision of a product, currently under conceptualization, to a federal structure of CEWS and response mechanisms at the national level promises an invaluable contribution to the prevention of social unrest and/or violent conflicts, as well as to the transparent management of public affairs and the promotion of good governance.

Robotic Trajectory Tracking: Position- and Force-Control
Klecker, Sophie UL

Doctoral thesis (2019)

This thesis employs a bottom-up approach to develop robust and adaptive learning algorithms for trajectory tracking: position and torque control. In a first phase, the focus is on following a freeform surface in a discontinuous manner. In addition to the resulting switching constraints, disturbances and uncertainties, the case of unknown robot models is addressed. In a second phase, once contact has been established between surface and end effector and the freeform path is followed, a desired force is applied. In order to react to changing circumstances, the manipulator needs to show the features of an intelligent agent, i.e. it needs to learn and adapt its behaviour based on a combination of constant interaction with its environment and preprogrammed goals or preferences. The robotic manipulator mimics human behaviour based on bio-inspired algorithms. In this way, advantage is taken of the know-how and experience of human operators, as their knowledge is translated into robot skills. A selection of promising concepts is explored, developed and combined to extend the application areas of robotic manipulators from monotonous, basic tasks in stiff environments to complex constrained processes. Conventional concepts (Sliding Mode Control, PID) are combined with bio-inspired learning (BELBIC, reinforcement-based learning) for robust and adaptive control. Independence from robot parameters is guaranteed through approximated robot functions using a neural network with online update laws and model-free algorithms. The performance of the concepts is evaluated through simulations and experiments. In complex freeform trajectory tracking applications, excellent absolute mean position errors (<0.3 rad) are achieved. Position and torque control are combined in a parallel concept with minimized absolute mean torque errors (<0.1 Nm).
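
As background for the control concepts named above, the following is a minimal boundary-layer sliding mode position controller for a single joint (a generic textbook sketch; the gains, the one-degree-of-freedom setting and the omission of any model-based equivalent-control or BELBIC term are my own simplifying assumptions, not the thesis's controllers):

    # Generic 1-DOF boundary-layer sliding mode controller for position tracking.
    import numpy as np

    def smc_torque(q, qd, q_ref, qd_ref, lam=5.0, K=2.0, phi=0.05):
        """Return a switching torque driving the tracking error onto s = de + lam*e = 0."""
        e, de = q - q_ref, qd - qd_ref
        s = de + lam * e                          # sliding surface
        sat = np.clip(s / phi, -1.0, 1.0)         # boundary layer to limit chattering
        return -K * sat                           # a model-based equivalent-control term would be added in practice

    # Hypothetical usage: joint at 0.30 rad, at rest, tracking a 0.25 rad reference.
    print(smc_torque(q=0.30, qd=0.0, q_ref=0.25, qd_ref=0.0))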

Creating better ground truth to further understand Android malware: A large scale mining approach based on antivirus labels and malicious artifacts
Hurier, Médéric UL

Doctoral thesis (2019)

Mobile applications are essential for interacting with technology and other people. With more than 2 billion devices deployed all over the world, Android offers a thriving ecosystem by making accessible the work of thousands of developers on digital marketplaces such as Google Play. Nevertheless, the success of Android also exposes millions of users to malware authors who seek to siphon private information and hijack mobile devices for their benefits. To fight against the proliferation of Android malware, the security community embraced machine learning, a branch of artificial intelligence that powers a new generation of detection systems. Machine learning algorithms, however, require a substantial number of qualified samples to learn the classification rules enforced by security experts. Unfortunately, malware ground truths are notoriously hard to construct due to the inherent complexity of Android applications and the global lack of public information about malware. In a context where both information and human resources are limited, the security community is in demand for new approaches to aid practitioners to accurately define Android malware, automate classification decisions, and improve the comprehension of Android malware. This dissertation proposes three solutions to assist with the creation of malware ground truths. The first contribution is STASE, an analytical framework that qualifies the composition of malware ground truths. STASE reviews the information shared by antivirus products with nine metrics in order to support the reproducibility of research experiments and detect potential biases. This dissertation reports the results of STASE against three typical settings and suggests additional recommendations for designing experiments based on Android malware. The second contribution is EUPHONY, a heuristic system built to unify family clusters belonging to malware ground truths. EUPHONY exploits the co-occurrence of malware labels obtained from antivirus reports to study the relationship between Android applications and proposes a single family name per sample for the sake of facilitating malware experiments. This dissertation evaluates EUPHONY on well-known malware ground truths to assess the precision of our approach and produce a large dataset of malware tags for the research community. The third contribution is AP-GRAPH, a knowledge database for dissecting the characteristics of malware ground truths. AP-GRAPH leverages the results of EUPHONY and static analysis to index artifacts that are highly correlated with malware activities and recommend the inspection of the most suspicious components. This dissertation explores the set of artifacts retrieved by AP-GRAPH from popular malware families to track down their correlation and their evolution compared to other malware populations. [less ▲]
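
As a toy illustration of the kind of signal that label-based unification can exploit (hypothetical antivirus labels and file names; this is not the EUPHONY heuristic itself, only the sort of co-occurrence count such a heuristic can build on):

    # Count how often pairs of antivirus family labels co-occur on the same sample;
    # frequently co-occurring labels are candidate aliases of the same family.
    from collections import Counter
    from itertools import combinations

    av_labels = {                      # hypothetical scan results per APK
        "app1.apk": ["plankton", "plangton", "smsreg"],
        "app2.apk": ["plankton", "plangton"],
        "app3.apk": ["smsreg", "fakeinst"],
    }

    cooc = Counter()
    for labels in av_labels.values():
        for a, b in combinations(sorted(set(labels)), 2):
            cooc[(a, b)] += 1          # each pair counted once per sample

    for pair, n in cooc.most_common(3):
        print(pair, n)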

Numerische Untersuchung zur Strömungsentwicklung im passiven Flutsystem des Reaktordesigns Kerena
Kaczmarkiewicz, Nadine UL

Doctoral thesis (2019)

In case of a loss-of-coolant accident in a nuclear power plant, the passive gravity-driven core flooding system recovers the reactor’s water inventory so that the core is covered at all times. It is essential to estimate, when passive flooding valves open, whether reverse flow occurs and to what extent this may delay flooding or even lead to an initial mass displacement from the reactor to the flooding pools. Experimental investigations including passive safety systems of the KERENA design are performed in a scaled-down facility. Such so-called integral tests are carried out with initial and boundary conditions of different design basis accidents. The behavior of the passive core flooding system is analyzed, demonstrating the influence of other passive safety systems. Evidence is shown that a stable two-phase reverse flow occurs, due to incomplete condensation at the outlet of the emergency condenser and to adiabatic evaporation in the flooding line. The hydrostatic column is reduced during reverse flow but regained in all considered cases. As a result, the onset of flooding is delayed only moderately. The influence of the containment pressure, of the geometry of the flooding line, and of other specific boundary conditions of the considered experiments on the flow development is discussed. Results from numerical simulations are presented and confirm the experimental findings. The considered numerical model of the flooding system can predict the fundamental phenomena governing the flow development in the flooding line with satisfactory accuracy.

Critical perspectives on plurilingual students' identity performance in sustainability education
Gorges, Anna UL

Doctoral thesis (2019)

The purpose of this critical video-ethnographic study is to gain new perspectives on how the open, dialogic structures in an alternative high school setting afford plurilingual adult students the opportunity to take agency and to develop a positive student identity. With this goal, the study seeks to contribute to the research literature on plurilingual adult students and on nontraditional students learning about sustainability. The overarching question that guides this study is how the figured world of multilingual sustainability education based on socioscientific issues mediates plurilingual adult students' identities as competent science students. This study is multi-theoretical, multi-method, and multi-level, and as such, the manuscript-style dissertation presents several case studies of students as they participate in an interdisciplinary unit on sustainability. Situated in a superdiverse context, the data collection took place at an alternative high school, where students who dropped out of the traditional school system have the chance to obtain a leaving certificate. Data sources include video data, audio data, student artefacts, fieldnotes and photographs. Multimodal discourse analysis based on Gee (1996) was used to reveal the structures at the micro, meso and macro levels that mediate learning for students in the alternative educational setting and afford them opportunities to make connections to their everyday lives. Three individual manuscripts examine three separate case studies regarding students' use of transmodalling, students' engagement in socio-scientific issues, and students' perspectives on the holistic teaching and learning approaches in their school. Drawing on dialectic understandings of learning, the three case studies each illuminate holistic approaches to learning, focusing on students' multimodal means of expression, emotions and identity development that afford them opportunities to take agency and create solidarity in the class and school community. This study concludes that dialogic pedagogy as a space of possibilities mediates students' development of identities as competent science learners, which has implications for their identities as agents positioned for transformative actions.

Extracting the spatio-temporal variations in the gravity field recovered from GRACE spatial mission: methods and geophysical applications
Prevost, Paoline Fleur UL

Doctoral thesis (2019)

Measurements of the spatio-temporal variations of Earth’s gravity field recovered from the Gravity Recovery and Climate Experiment (GRACE) mission have led to unprecedented insights into large-scale spatial mass redistribution at secular, seasonal, and sub-seasonal time scales. GRACE solutions from various processing centers, while adopting different processing strategies, result in rather coherent estimates. However, these solutions also exhibit random as well as systematic errors, with specific spatial and temporal patterns in the latter. In order to dampen the noise and enhance the geophysical signals in the GRACE data, several methods have been proposed. Among these, methods based on filtering techniques require a priori assumptions regarding the spatio-temporal structure of the errors. Despite the large effort to improve the quality of GRACE data for ever finer geophysical applications, removing noise remains a problematic question, as discussed in Chapter 1. In this thesis, we explore an alternative approach, using a spatio-temporal filter, namely Multichannel Singular Spectrum Analysis (M-SSA), described in Chapter 2. M-SSA is a data-adaptive, multivariate, and non-parametric method that simultaneously exploits the spatial and temporal correlations of geophysical fields to extract common modes of variability. We perform M-SSA simultaneously on 13 years of GRACE spherical harmonic solutions from five different processing centers. We show that the method allows for the extraction of common modes of variability between solutions and the removal of the solution-specific spatio-temporal errors arising from each processing strategy. In particular, the method efficiently filters out the spurious North-South stripes, most likely caused by aliasing from the imperfect geophysical correction models of known phenomena. In Chapter 3, we compare our GRACE solution to other spherical harmonic solutions and to mass concentration (mascon) solutions, which use a priori information on the spatio-temporal pattern of geophysical signals. We also compare the performance of our M-SSA GRACE solution with that of others by predicting surface displacements induced by GRACE-derived mass loading and comparing the results with independent displacement data from stations of the Global Navigation Satellite System (GNSS). Finally, in Chapter 4 we discuss the possible application of a refined GRACE solution to debated post-glacial rebound questions. More precisely, we focus on separating the post-glacial rebound signal related to past ice melting from the signal of present-day ice melting in the region of South Georgia.
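
To give a flavour of the core M-SSA computation described above (a bare-bones sketch under my own simplifying assumptions: a short synthetic dataset, a single embedding window, and no diagonal-averaging reconstruction step, which an actual GRACE analysis would require):

    # Core of M-SSA: lag-embed each channel, stack the embeddings into one
    # trajectory matrix, and use the SVD to find space-time modes shared by the
    # channels. Window length M and the synthetic data below are hypothetical.
    import numpy as np

    def mssa_singular_values(X, M):
        """X: (N, D) array with D channels of length N; returns singular values."""
        N, D = X.shape
        K = N - M + 1
        traj = np.hstack([np.column_stack([X[i:i + K, d] for i in range(M)])
                          for d in range(D)])              # K x (D*M) trajectory matrix
        traj = traj - traj.mean(axis=0)
        return np.linalg.svd(traj, compute_uv=False)

    # Hypothetical usage: five noisy "solutions" sharing a common annual cycle.
    t = np.arange(240)
    common = np.sin(2 * np.pi * t / 12.0)
    rng = np.random.default_rng(1)
    X = np.column_stack([common + 0.3 * rng.normal(size=t.size) for _ in range(5)])
    s = mssa_singular_values(X, M=24)
    print(np.round(s[:4] ** 2 / np.sum(s ** 2), 3))         # variance share of leading modes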

Fichte und der Skeptizismus
Motz, Oliver Tobias UL

Doctoral thesis (2019)

Automated optimisation of stem cell-derived neuronal cell culture in three dimensional microfluidic device
Kane, Khalid Ibnou Walid UL

Doctoral thesis (2019)

This dissertation is a compilation of publications and manuscripts that aim 1) to integrate an automated platform optimised for long-term in vitro cell culture maintenance for Parkinson’s disease, long-term live cell imaging and the handling of many cell lines, 2) to combine physics principles with imaging techniques to optimise the seeding of Matrigel-embedded human neuroepithelial stem cells into a three-dimensional microfluidic device, and 3) to combine engineering principles with cell biology to optimise the design of a three-dimensional microfluidic system based on phaseguide technology. In the first publication manuscript, we investigated Matrigel as a surrogate extracellular matrix in three-dimensional cell culture systems, including microfluidic cell culture. The study aimed at understanding and characterising the properties of Matrigel. Using classical rheological measurements of Matrigel (viscosity versus shear rate) in combination with fluorescence microscopy and fluorescent beads for particle image velocimetry measurements (velocity profiles), the shear rates experienced by cells in a microfluidic device for three-dimensional cell culture were characterised. We discussed how these results helped to mechanically optimise the use of Matrigel in microfluidic systems, minimising the shear stress experienced by cells during seeding in a microchannel. The second manuscript proposes a methodology to passively control the flow of media in a three-dimensional microfluidic channel. We used the fluid dynamic concept of similitude to dynamically replicate cerebral blood flow in a microchannel of rectangular cross-section. This similarity model of a target cell type, together with a simple mathematical fluid flow prediction model, was used to iterate towards optimal dimensions within manufacturing constraints and to adapt the design of the OrganoPlate, a cell culture plate fully compatible with laboratory automation; the resulting redimensioning achieved over 24 h of flow for the differentiation of human neuroepithelial stem cells into midbrain-specific dopaminergic neurons. In the third publication manuscript, we propose an automated cell culture platform optimised for the long-term maintenance and monitoring of different cells in three-dimensional microfluidic cell culture devices. The system uses the Standard in Laboratory Automation (SiLA), an open-source standard that allows rapid software integration of laboratory automation hardware. The automation platform can be flexibly adapted to various experimental protocols and features time-lapse imaging microscopy for quality control and electrophysiology monitoring to assess cellular activity. It was biologically validated by differentiating Parkinson’s disease patient-derived human neuroepithelial stem cells into midbrain-specific dopaminergic neurons. This system is the first example of an automated Organ-on-a-Chip culture and has the potential to enable a versatile array of in vitro experiments for patient-specific disease modelling. Finally, the fourth manuscript initiates the assessment of the neuronal activity of induced pluripotent stem cell-derived neurons from Parkinson’s disease patients with LRRK2-G2019S mutations and isogenic controls. A novel image analysis pipeline that combined semi-automated neuronal segmentation and quantification of calcium transient properties was developed and used to analyse neuronal firing activity. It was found that LRRK2-G2019S mutants have shortened inter-spike intervals and an increased rate of spontaneous calcium transient induction compared with control cell lines.
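
To illustrate the similitude argument used in the second manuscript (with illustrative numbers of my own; the actual channel dimensions, fluid properties and target Reynolds number of the thesis are not reproduced here), matching the Reynolds number fixes the mean velocity required in a rectangular microchannel:

    # Reynolds-number similitude for a rectangular microchannel: pick the mean
    # velocity so that Re = rho*v*Dh/mu matches a target value. All numbers below
    # are illustrative assumptions.
    def hydraulic_diameter(width, height):
        return 2.0 * width * height / (width + height)       # rectangular duct

    def matched_velocity(Re_target, width, height, rho=1000.0, mu=0.7e-3):
        """Mean velocity [m/s] in a width x height [m] channel (rho [kg/m^3], mu [Pa s])."""
        return Re_target * mu / (rho * hydraulic_diameter(width, height))

    # Hypothetical usage: 400 um x 200 um channel, creeping-flow target Re = 0.01.
    print(matched_velocity(Re_target=0.01, width=400e-6, height=200e-6))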

Shear Transfer in Heavy Steel-Concrete Composite Columns with Multiple Encased Steel Profiles
Chrzanowski, Maciej UL

Doctoral thesis (2019)

This PhD thesis deals with the shear transfer in heavy steel-concrete composite columns with multiple encased steel profiles. In the conducted research, the focus was placed on two aspects of shear transfer: (1) local shear transfer at the steel-concrete interface and (2) global shear transfer between the embedded steel profiles. With reference to current practice, gaps in the knowledge and in the available solutions have been identified. The development of a novel, easily applicable and efficient type of shear connector for application in composite columns with one or multiple encased steel profiles addresses the need to ensure local shear transfer between the steel and concrete materials. The novel type of flat shear connector takes the form of reinforcement bars welded to the flanges of the steel profiles. Three main orientations, with respect to the longitudinal axis of the steel beam, were investigated: 1) transversal, 2) longitudinal and 3) angled at 45° (V-shaped). The force transfer mechanism of the proposed shear connectors relies on external stirrups to anchor the compression struts and bear the tensile forces. In parallel, the steel-concrete bond phenomenon was examined and analysed. Concerning global shear transfer, and with respect to the mechanical engineering model and behaviour of large composite columns with more than one embedded steel profile, the common practice so far has been to assume a homogeneous system with a single stiffness and to analyse it based on the Bernoulli beam theory. However, the results of recent tests raised doubts as to whether this approach is correct. The deviation from Bernoulli beam behaviour leads to a changed stiffness of the column and hence a changed critical buckling load. An innovative hybrid concept has been proposed, in which a transition towards a Vierendeel truss model embedded into the Timoshenko beam model is considered in order to capture the identified significant effects of shear deformation. Within this objective, large-scale beam/column members with two embedded steel profiles were examined in order to investigate the internal force distribution and the bending and shear stiffness. The complex interaction between the materials and the lack of knowledge regarding the composite behaviour of columns with multiple encased steel profiles open opportunities to develop novel systems and design methods. The described objectives are investigated experimentally and by FE numerical simulations. As an outcome, an analytical model for the resistance of the developed novel shear connectors and an innovative mechanical engineering model for the description of the structural behaviour, as well as the effective stiffness, of a composite member with more than one embedded steel profile are given.
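
As background on why the Bernoulli-to-Timoshenko transition mentioned above matters, the textbook midspan deflection of a simply supported member with a central point load P shows the additional shear term (standard formulas quoted for context only, not the hybrid Vierendeel/Timoshenko model developed in the thesis):

    % Midspan deflection, simply supported span L, central point load P:
    % Bernoulli (bending only) versus Timoshenko (bending plus shear deformation).
    \delta_{\mathrm{Bernoulli}} = \frac{P L^{3}}{48\,E I},
    \qquad
    \delta_{\mathrm{Timoshenko}} = \frac{P L^{3}}{48\,E I} + \frac{P L}{4\,\kappa G A}

Here κGA is the shear rigidity with shear correction factor κ; the second term, negligible for slender beams, becomes significant for stocky members such as heavy composite columns, which is what motivates moving beyond the pure Bernoulli assumption.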

Empirical Essays on Institutional Determinants of Firm Entry and Exit
Murmann Geb. Wagner, Simona Christine UL

Doctoral thesis (2019)

This thesis is a collection of three essays investigating unintended and non-obvious effects of economic policy changes and established institutional systems on firm entry and exit. The first essay investigates the effect of the personal preferences of insolvency trustees and judges on insolvency outcomes in Germany. The second essay reveals that governmental wage setting through minimum wages affects not only dependent employees but also self-employment. In the third essay, the effects of the introduction of high-speed internet on firm entries and exits are analyzed.

Multiscale modeling of mitochondria
Garcia, Guadalupe Clara UL

Doctoral thesis (2019)

Life is based on energy conversion by which cells and organisms can adapt to the environment. The involved biological processes are intrinsically multiscale phenomena since they are based on molecular interactions on a small scale leading to the emerging behavior of cells, organs and organisms. To understand the underlying regulation and to dissect the mechanisms that control system behavior, appropriate mathematical multiscale models are needed. Such models do not only offer the opportunity to test different hypothesized mechanisms but can also address current experimental technology gaps by zooming in and out of the dynamics, changing scales, coarse-graining the dynamics and giving us distinct views of the phenomena. In this dissertation, substantial effort was made to combine different computational modeling strategies, based on different assumptions and implications, to model an essential system of eukaryotic life -- the energy-providing mitochondria -- where the spatiotemporal domain is suspected to have a substantial influence on function. Mitochondria are highly dynamic organelles that fuse, divide, and are transported along the cytoskeleton to ensure cellular energy homeostasis. These processes cover different scales in space and time: on the more global scale, mitochondria exhibit changes in their molecular content in response to their physiological context, including circadian modulation. On the smaller scales, mitochondria also show faster adaptation by changing their morphology within minutes. For both processes, the relation between the underlying structure, either the regulating network or the spatial morphology, and the functional consequences is essential to understand principles of energy homeostasis and their link to health and disease conditions. This thesis focuses on different scales of mitochondrial adaptation. On the small scales, fission and fusion of mitochondria are rather well established, but substantial evidence indicates that the internal structure is also highly variable depending on the metabolic condition. However, a quantitative mechanistic understanding of how mitochondrial morphology affects energetic states is still elusive. In the first part of this dissertation, I address this question by developing an agent-based dynamic model based on three-dimensional morphologies from electron microscopy tomography, which considers the molecular dynamics of the main ATP production components. This multiscale approach allows for investigating the emergent behavior of the energy-generating mechanism in dependence on spatial properties and molecular orchestration. Interestingly, comparing spatiotemporal simulations with a corresponding space-independent approach, I found only minor space dependence under equilibrium conditions but qualitative differences in fluctuating environments; in particular, the results indicate that the morphology provides a mechanism to buffer energy at synapses. On the more global scale of the regulation of mitochondrial protein composition, I applied a data-driven approach to explore how mitochondrial activity changes during the day and how food intake restrictions can affect the structure of the underlying adaptation process. To address the question of whether the mitochondrial composition adapts at different times of the day, with potential implications for function, I analyzed temporal patterns of hepatic transcripts of mice that either had unlimited access to food or were held under temporal food restriction.
My analysis showed that mitochondrial activity exhibits a temporal modulation in which different subgroups of elements are active at different time points, and that food restriction increases temporal regulation. Overall, this thesis provides new insights into mitochondrial biology at different scales by providing an innovative computational modeling framework to investigate the relation between morphology and energy production, as well as by characterizing the temporal modulation of the regulatory network structure under different conditions. [less ▲]
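
As a purely illustrative companion to the space-independent baseline mentioned above (this is not the agent-based model developed in the thesis, and all rate constants are invented), a minimal well-mixed ODE for ATP turnover under a fluctuating demand could look as follows:

```python
# Minimal well-mixed (space-independent) sketch of ATP turnover under
# fluctuating demand; all rate constants are illustrative, not fitted.
import numpy as np
from scipy.integrate import odeint

def atp_balance(atp, t, v_syn, k_m, demand_base, demand_amp, period):
    # Saturating synthesis minus a sinusoidally fluctuating first-order demand
    demand = demand_base + demand_amp * np.sin(2 * np.pi * t / period)
    synthesis = v_syn * (1.0 - atp / (atp + k_m))   # slows as ATP accumulates
    return synthesis - demand * atp

t = np.linspace(0.0, 60.0, 601)                     # seconds
atp = odeint(atp_balance, y0=1.0, t=t,
             args=(5.0, 2.0, 0.5, 0.4, 10.0))       # hypothetical parameters
print(f"mean ATP level: {atp.mean():.2f} (a.u.)")
```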

Detailed reference viewed: 83 (12 UL)
Full Text
Nonlinear Observation and Control of a Lightweight Robotic Manipulator Actuated by Shape Memory Alloy (SMA) Wires
Quintanar Guzman, Serket UL

Doctoral thesis (2019)

In the last decade, the industry of Unmanned Aerial Vehicles (UAV) has gone through immense growth and diversification. Nowadays, we find drone-based applications in a wide range of industries, such as infrastructure, agriculture, and transport, among others. This phenomenon has generated an increasing interest in the field of aerial manipulation. The implementation of aerial manipulators in the UAV industry could generate a significant increase in possible applications. However, the restriction on the available payload is one of the main setbacks of this approach. The impossibility of equipping UAVs with heavy, dexterous industrial robotic arms has driven the interest in the development of lightweight manipulators suitable for these applications. In the pursuit of providing an alternative lightweight solution for aerial manipulators, this thesis proposes a lightweight robotic arm actuated by Shape Memory Alloy (SMA) wires. Although SMA wires represent a great alternative to conventional actuators for lightweight applications, they also imply highly nonlinear dynamics, which makes them difficult to control. Seeking to present a solution for the challenging task of controlling SMA wires, this work investigates the implications and advantages of the implementation of state feedback control techniques. The final aim of this study is the experimental implementation of a state feedback control for position regulation of the proposed lightweight robotic arm. Firstly, a mathematical model based on a constitutive model of the SMA wire is developed and experimentally validated. This model describes the dynamics of the proposed lightweight robotic arm from a mechatronics perspective. The proposed robotic arm is tested with three output feedback controllers for angular position control, namely a PID, a Sliding Mode and an Adaptive Controller. The controllers are tested in a MATLAB simulation and finally implemented and experimentally tested in various scenarios. Subsequently, in order to enable the experimental implementation of a state feedback control technique, a state and unknown input observer is developed. First, a non-switching observable model with unknown input of the proposed robotic arm is derived from the model previously presented. This model takes the martensite fraction rate of the original model as an unknown input, making it possible to eliminate the switching terms in the model. Then, a state and unknown input observer is proposed. This observer is based on the Extended Kalman Filter (EKF) for state estimation and a sliding-mode approach for unknown input estimation. Sufficient conditions for stability and convergence are established. The observer is tested in a MATLAB simulation and experimentally validated in various scenarios. Finally, a state feedback control technique is tested in simulation and experimentally implemented for angular position control of the proposed lightweight robotic arm. Specifically, continuous and discrete-time State-Dependent Riccati Equation (SDRE) control laws are derived and implemented. To conclude, a quantitative and qualitative comparative analysis between an output feedback control approach and the implemented state feedback control is carried out under multiple scenarios, including position regulation, position tracking and tracking with changing payloads. [less ▲]
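
For orientation only, the sketch below shows the generic structure of an SDRE control loop (factor the dynamics as x' = A(x)x + Bu, solve a Riccati equation at the current state, apply u = -Kx). The two-state "joint" model, weights and rates are invented stand-ins, not the SMA-arm model or gains derived in the thesis.

```python
# Generic sketch of a discrete-step SDRE loop: at each state, factor the
# dynamics as x' = A(x) x + B u, solve the Riccati equation, apply u = -K x.
# The toy 2-state "joint" model below is a stand-in, not the SMA-arm model.
import numpy as np
from scipy.linalg import solve_continuous_are

def A_of_x(x):
    theta, omega = x
    k_theta = 1.0 + 0.5 * theta**2       # hypothetical state-dependent stiffness
    return np.array([[0.0, 1.0],
                     [-k_theta, -0.8]])   # damping coefficient is illustrative

B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                  # state weights
R = np.array([[0.1]])                     # input weight

x = np.array([0.5, 0.0])                  # initial angle (rad) and velocity
dt = 0.01
for _ in range(500):
    A = A_of_x(x)
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)       # K = R^-1 B^T P
    u = -K @ x
    x = x + dt * (A @ x + (B @ u).ravel())   # explicit Euler step
print(f"final angle: {x[0]:.4f} rad")
```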

Detailed reference viewed: 72 (7 UL)
Zeit- und Zukunftskonzepte in Konrads von Würzburg "Trojanerkrieg"
Krämer, Charlotte UL

Doctoral thesis (2019)

The starting point for this study of Konrad von Würzburg's "Trojanerkrieg" (a romance of more than 40,000 verses, left unfinished by the author's death in 1287 and considered the most widely transmitted German-language romance of antiquity of the Middle Ages) is formed by the concepts of time and future held by the human actors physically present in the narrated world: What do they know about their future? How far into the future does the span of time they anticipate from their respective present moment extend, and what plans for their own actions do they forge? How do they deal with competing models of the future, especially those that open up the possibility of an alternative course of events? In short: how does Konrad portray, in the "Trojanerkrieg", the relationship of acting human beings to their own future? Striking is the field of tension between causal and final motivation of action in which the characters move in different narrative contexts, as parents, lovers or avengers: on the one hand, they are repeatedly confronted with information about the future that actually lies outside their "natural" horizon of knowledge and that widens their scope of action. On the other hand, this scope is restricted, or even levelled entirely, by various metaphysical powers (such as God, fortune and chance). For in the "Trojanerkrieg" Konrad plays with the momentum of the fatal processes that prepare the downfall of the Trojans: unlike, for instance, the protagonists of vernacular heroic epic, his characters do not succeed in settling conflicts through the use of violence; rather, this creates new problems. As an alternative to violent reaction, more future-oriented strategies of action are discussed, with Konrad taking a particular interest in the tipping point at which subjective plans of action turn into a story of failing social groups, because the characters' concepts of the future mostly have no, or only a minimal, social extension. [less ▲]

Detailed reference viewed: 33 (5 UL)
Full Text
Numerical Modeling of air-gap membrane distillation
Cramer, Kerstin Julia UL

Doctoral thesis (2019)

Fresh water supply is a problem in large parts of the world and is present on every continent. Many countries facing physical water scarcity, however, have access to the sea and lie in arid zones of the earth where solar energy is plentifully available. Membrane distillation (MD) is an emerging desalination technology with advantages when driven by solar energy or waste heat. In MD, seawater is thermally desalinated by generating a temperature gradient between hot salt water and the produced fresh water, which are separated by a membrane. In air-gap membrane distillation (AGMD), an insulating air gap is introduced between the membrane and the distillate in order to minimize conductive losses. Despite its advantages, the permeate stream needs to be increased for large-scale application. To improve performance and energy efficiency, a detailed understanding of the highly coupled heat and mass transfer is crucial. However, few models exist for AGMD, and the existing models simplify the heat and mass transfer processes. The goal of this thesis is therefore to increase the understanding of the AGMD process and the predictive power of numerical models. A three-dimensional (3D) macro-scale model is developed with emphasis on the heat and mass transfer. It integrates aspects from multiphase flow modeling, namely energy conservation over phase-change interfaces and the thermodynamic concept of moist air in the air gap. Thereby, it computes the condensation mass flow independently from the evaporation mass flow, allowing the influence of convection on the heat and mass transfer in the air gap to be studied. The model is accelerated for computation on graphics processing units (GPUs). Employing the macro-scale model, a comparative analysis of the effects of module orientation on module performance and efficiency is performed. Vortices in the air gap are observed when using a module configuration in which the hot feed flows below the air gap and the membrane and the temperature gradient opposes gravity. These vortices lead to significantly increased energy utilization, even at low feed velocities. As the main advantage of AGMD is the reduction of heat losses, this configuration could bring further improvement. Furthermore, membrane transport properties are determined from high-resolution 3D membrane imaging combined with Lattice-Boltzmann simulation. Thereby, the 3D structure of membrane samples is obtained and porosity, tortuosity and permeability values are computed for the investigated membranes. Following the findings in the papers, further studies are suggested employing the modeling approaches developed in this thesis. [less ▲]
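
As a rough orientation on the orders of magnitude involved (not the 3D multiphase model of the thesis), a back-of-the-envelope Fickian estimate of the vapour flux across an air gap, using the Antoine correlation for water, might look like this; the geometry, temperatures and the neglect of membrane and boundary-layer resistances are simplifying assumptions.

```python
# Back-of-the-envelope 1D estimate of water-vapour flux across an air gap,
# using Fick's law and the Antoine correlation for water (1-100 degC range).
# Geometry, temperatures and the simple Fickian (non-Stefan) form are
# illustrative assumptions, not the thesis' 3D multiphase model.
import math

def p_sat_water(T_celsius):
    """Saturation pressure of water in Pa (Antoine correlation, 1-100 degC)."""
    p_mmHg = 10 ** (8.07131 - 1730.63 / (233.426 + T_celsius))
    return p_mmHg * 133.322

D_wa = 2.6e-5        # diffusivity of water vapour in air, m^2/s (approximate)
R = 8.314            # J/(mol K)
M_w = 0.018          # kg/mol
gap = 1.0e-3         # air-gap width, m
T_hot, T_cold = 70.0, 30.0               # hot/cold side temperatures, degC
T_mean_K = 273.15 + 0.5 * (T_hot + T_cold)

dp = p_sat_water(T_hot) - p_sat_water(T_cold)
flux = D_wa * M_w * dp / (R * T_mean_K * gap)      # kg/(m^2 s)
print(f"estimated permeate flux: {flux * 3600:.1f} kg/(m^2 h)")
```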

Detailed reference viewed: 79 (20 UL)
Full Text
Generation of midbrain organoids as a model to study Parkinson's disease
Smits, Lisa UL

Doctoral thesis (2019)

The study of 3D cell culture models not only bridges the gap between traditional 2D in vitro experiments and in vivo animal models, but also addresses processes that cannot be recapitulated by these traditional models. Therefore, it offers an opportunity to better understand complex biology, for instance brain development, where conventional models have not proven successful. The so-called brain organoid technology provides a physiologically relevant context, which holds great potential for its application in modelling neurological diseases. To obtain these highly specialised structures, specifically resembling key features of the human midbrain, we derived a human midbrain-specific organoid (hMO) system from regionally patterned neural stem cells (NSCs). The resulting neural tissue exhibited abundant neurons with midbrain dopaminergic neuron (mDAN) identity, as well as astroglia and oligodendrocyte differentiation. Within the hMOs, we could observe neurite myelination and the formation of synaptic connections. Regular firing patterns and neural network synchronicity were determined by multielectrode array (MEA) recordings. In addition to electrophysiologically functional mDANs producing and secreting dopamine (DA), we also detected responsive neuronal subtypes, such as GABAergic and glutamatergic neurons. To investigate Parkinson's disease (PD)-relevant pathomechanisms, we derived hMOs from PD patients carrying the LRRK2-G2019S mutation and compared them to healthy control hMOs. In addition to a reduced number and complexity of mDANs, we determined a significant increase of the stem cell marker FOXA2 in the patient-derived hMOs. This suggests a neurodevelopmental defect induced by a PD-specific mutation and emphasises the importance of advanced three-dimensional (3D) stem cell-based in vitro models. The hMOs described in this thesis are suitable for revealing PD-relevant phenotypes and thus constitute a powerful tool for human-specific in vitro disease modelling of neurological disorders, with great potential to be utilised in advanced therapy development. [less ▲]

Detailed reference viewed: 130 (11 UL)
Full Text
Il "Tieste" di Ugo Foscolo e l'estetica teatrale di Melchiorre Cesarotti. Per la storia e le implicazioni di un'inconciliabilità ideologica e filosofica
Scagnetti, Matteo Martino UL

Doctoral thesis (2019)

This work analyzes the tragedy written by Ugo Foscolo (1778-1827) at the end of his adolescence: Tieste. The drama has not been sufficiently studied yet, but presents various and important elements of interest. The idea of literature emerging from it is definitely new, and Tieste tries untrodden ways, incompatible with the dominant idea of tragedy at its epoch. Most of all, Tieste marks a rebellion against the aesthetic canons of Melchiorre Cesarotti (1730-1808), a well-known philosopher who had a deep influence in the theatrical field and who had established the standards of a good tragedy. Cesarotti's parameters were still those of the Enlightenment, and imposed a moral message on every tragedy, whose characters should be rewarded or punished on the basis of their goodness or their wickedness. For Cesarotti, a character would have encountered an unfavourable fate only as a consequence of a moral crime. His virtue, instead, would have averted any danger. In Foscolo, on the contrary, there is no providence, and the destiny of human beings does not depend on their behaviour. Virtuous characters are powerless and succumb without even understanding why, while the evil tyrant triumphs, moved only by his sadism. Evil is ineffable and inexplicable, and Reason, which solves every problem in Cesarotti's Weltanschauung, is now helpless and meaningless. Foscolo's first tragedy therefore represents the transition from an Ancien Régime world view to the phantoms and the nightmares of the contemporary age, when no certitude is possible anymore. [less ▲]

Detailed reference viewed: 48 (7 UL)
Full Text
Computational and symbolic analysis of distance-bounding protocols
Toro Pozo, Jorge Luis UL

Doctoral thesis (2019)

Contactless technologies are gaining more popularity every day. Credit cards enabled with contactless payment, smart cards for transport ticketing, NFC-enabled mobile phones, and e-passports are just a few examples of contactless devices we are familiar with nowadays. Most secure systems meant for these devices presume physical proximity between the device and the reader terminal, due to their short communication range. In theory, a credit card should not be charged for an on-site purchase unless the card is within a few centimeters of the payment terminal. In practice, this is not always true. Indeed, some contactless payment protocols, such as Visa's payWave, have been shown to be vulnerable to relay attacks. In a relay attack, a man-in-the-middle uses one or more relay devices in order to make two distant devices believe they are close. Relay attacks have also been implemented to bypass keyless entry and start systems in various modern cars. Relay attacks can be defended against with distance-bounding protocols, which are security protocols that measure the round-trip times of a series of challenge/response rounds in order to guarantee physical proximity. A large number of these protocols have been proposed, and more sophisticated attacks against them have been discovered. Thus, frameworks for the systematic security analysis of these protocols have become of high interest. Like traditional security models, distance-bounding security models sit within the two classical approaches: the computational and the symbolic models. In this thesis we propose frameworks for the security analysis of distance-bounding protocols within the two aforementioned models. First, we develop an automata-based computational framework that allows us to generically analyze a large class of distance-bounding protocols. Not only does the proposed framework allow us to straightforwardly deliver computational (in)security proofs, but it also permits us to study problems such as optimal trade-offs between security and space complexity. Indeed, we solve this problem for a prominent class of protocols, and propose a protocol solution that is optimally secure amongst space-constrained protocols within the considered class. Second, by building on an existing symbolic framework, we develop a causality-based characterization of distance-bounding security. This constitutes the first symbolic property that guarantees physical proximity without modeling continuous time or physical location. We further extend our formalism in order to capture a non-standard attack known as terrorist fraud. By using our definitions and the verification tool Tamarin, we conduct a security survey of over 25 protocols, which include industrial protocols based on the ISO/IEC 14443 standard such as NXP's MIFARE Plus with proximity check and Mastercard's PayPass payment protocol. For the industrial protocols we find attacks, propose fixes and deliver security proofs of the repaired versions. [less ▲]
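
To make the core timing argument concrete, here is a toy simulation of the rapid-phase check a distance-bounding verifier performs. Cryptographic content is omitted and the delays are invented, so this is not a model of any specific protocol analysed in the thesis.

```python
# Toy illustration of the timing check at the heart of distance bounding: the
# verifier accepts only if every challenge/response round trip is fast enough
# to be consistent with the prover being within D_MAX. Cryptographic details
# are omitted; this is not a faithful model of any protocol from the thesis.
import random

C = 299_792_458.0        # speed of light, m/s
D_MAX = 0.10             # proximity bound to enforce, m
PROC_DELAY = 1.0e-9      # assumed prover processing delay, s

def run_rounds(true_distance_m, n_rounds=32, relay_delay=0.0):
    """Return True iff all rapid rounds satisfy the round-trip-time bound."""
    t_limit = 2 * D_MAX / C + PROC_DELAY
    for _ in range(n_rounds):
        jitter = random.uniform(0.0, 2.0e-10)       # small measurement noise
        rtt = 2 * true_distance_m / C + PROC_DELAY + relay_delay + jitter
        if rtt > t_limit:
            return False
    return True

print("honest nearby prover accepted:", run_rounds(0.05))
print("distant prover behind a relay accepted:", run_rounds(50.0, relay_delay=2e-7))
```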

Detailed reference viewed: 117 (12 UL)
Full Text
PROVABLE SECURITY ANALYSIS FOR THE PASSWORD AUTHENTICATED KEY EXCHANGE PROBLEM
Lopez Becerra, José Miguel UL

Doctoral thesis (2019)

Password-based Authenticated Key-Exchange (PAKE) protocols allow the establishment of secure communications despite a human-memorable password being the only secret that is previously shared between the participants. More than 25 years after the initial proposal, the PAKE problem remains an active area of research, probably due to the vast number of passwords deployed on the internet, as password-based authentication still constitutes the most extensively used method for user authentication. In this thesis, we consider the computational complexity approach to improve the current understanding of the security provided by previously proposed PAKE protocols and their corresponding security models. We expect that this work contributes to the standardization, adoption and more efficient implementation of the considered protocols. Our first contribution concerns forward secrecy for the SPAKE2 protocol of Abdalla and Pointcheval (CT-RSA 2005). We prove that the SPAKE2 protocol satisfies the so-called notion of weak forward secrecy. Furthermore, we demonstrate that the incorporation of key-confirmation codes in the original SPAKE2 results in a protocol that provably satisfies the stronger notion of perfect forward secrecy. As forward secrecy is an explicit requirement for cipher suites supported in the TLS handshake, we believe our results fill a gap in the literature and facilitate the adoption of SPAKE2 in the recently approved TLS 1.3. Our second contribution regards tight security reductions for EKE-based protocols. We present a security reduction for the PAK protocol instantiated over Gap Diffie-Hellman groups that is tighter than previously known reductions. We discuss the implications of our results for concrete security. Our proof is the first to show that the PAK protocol can provide meaningful security guarantees for values of the parameters typical in today's world. Finally, we study the relation between two well-known security models for PAKE protocols. Security models for PAKEs aim to capture the desired security properties that such protocols must satisfy when executed in the presence of an adversary. They are usually classified as either i) indistinguishability-based (IND-based) or ii) simulation-based (SIM-based); however, controversy remains within the research community regarding which security model best reflects the capabilities that an adversary is supposed to have in real-world scenarios. Furthermore, the relation between these two security notions is unclear and mentioned as a gap in the literature. We prove that SIM-BMP security from Boyko et al. (EUROCRYPT 2000) implies IND-RoR security from Abdalla et al. (PKC 2005) and that IND-RoR security is equivalent to a slightly modified version of SIM-BMP security. We also investigate whether IND-RoR security implies (unmodified) SIM-BMP security. [less ▲]
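
For readers unfamiliar with SPAKE2, the following toy walk-through illustrates its password-masked Diffie-Hellman idea only; the parameters are tiny and insecure, the hash is a placeholder, and this is not the protocol specification or any proof artefact from the thesis.

```python
# Toy walk-through of the algebraic idea behind SPAKE2 (password-masked
# Diffie-Hellman). Parameters are tiny and insecure, hashing/key confirmation
# are reduced to a placeholder, and this does not faithfully implement the
# protocol analysed in the thesis.
import hashlib
import secrets

p = 2**61 - 1          # small Mersenne prime -- demo only, far too small
g, M, N = 5, 3, 7      # generator and public "mask" elements (arbitrary here)
pw = 123456            # shared low-entropy password, mapped to an integer

# Party A
x = secrets.randbelow(p - 2) + 1
X = (pow(g, x, p) * pow(M, pw, p)) % p           # X = g^x * M^pw

# Party B
y = secrets.randbelow(p - 2) + 1
Y = (pow(g, y, p) * pow(N, pw, p)) % p           # Y = g^y * N^pw

# Each side strips the other's password mask and exponentiates
K_A = pow((Y * pow(pow(N, pw, p), -1, p)) % p, x, p)
K_B = pow((X * pow(pow(M, pw, p), -1, p)) % p, y, p)
assert K_A == K_B                                 # both equal g^(x*y)

session_key = hashlib.sha256(f"{X}|{Y}|{K_A}|{pw}".encode()).hexdigest()
print("shared session key:", session_key[:16], "...")
```

Both sides only end up with the same g^(xy) if they used the same password; with a mismatched password the masks do not cancel and the derived keys differ, which is the behaviour the security models formalise.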

Detailed reference viewed: 86 (20 UL)
Full Text
Effective Testing Of Advanced Driver Assistance Systems Using Evolutionary Algorithms And Machine Learning
Ben Abdessalem (helali), Raja UL

Doctoral thesis (2019)

Improving road safety is a major concern for most car manufacturers. In recent years, the development of Advanced Driver Assistance Systems (ADAS) has subsequently seen a tremendous boost. The development of such systems requires complex testing to ensure vehicle safety and reliability. Performing road tests tends to be dangerous, time-consuming, and costly. Hence, a large part of testing for ADAS has to be carried out using physics-based simulation platforms, which are able to emulate a wide range of virtual traffic scenarios and road environments. The main difficulties with simulation-based testing of ADAS are: (1) the test input space is large and multidimensional, (2) simulation platforms provide no guidance to engineers as to which scenarios should be selected for testing, and hence, simulation is limited to a small number of scenarios hand-picked by engineers, and (3) test executions are computationally expensive because they often involve executing high-fidelity mathematical models capturing continuous dynamic behaviors of vehicles and their environment. The complexity of testing ADAS is further exacerbated when many ADAS are employed together in a self-driving system. In particular, when self-driving systems include many ADAS (i.e., features), they tend to interact and impact one another's behavior in unknown ways and may lead to conflicting situations. The main challenge here is to detect and manage feature interactions, in particular those that violate system safety requirements and hence lead to critical failures. In practice, once feature interaction failures are detected, engineers need to devise resolution strategies to resolve potential conflicts between features. Developing resolution strategies is a complex task, and despite extensive domain expertise, these resolution strategies can be erroneous and too complex to be manually repaired. In this dissertation, in addition to testing individual ADAS, we focus on testing self-driving systems that include several ADAS, and we propose a set of approaches based on meta-heuristic search and machine learning techniques to automate ADAS testing and to repair feature interaction failures in self-driving systems. The work presented in this dissertation is motivated by ADAS testing needs at IEE, a world-leading parts supplier to the automotive industry. We focus on the problem of design-time testing of ADAS in a simulated environment, relying on Simulink models. The main research contributions of this dissertation are:
- A testing approach for ADAS that combines multi-objective search with surrogate models to guide testing towards the most critical behaviors of ADAS, and to explore a larger part of the input search space with less computational resources.
- An automated testing algorithm that builds on learnable evolution models and uses classification decision trees to guide the generation of new test scenarios within complex and multidimensional input spaces and to help engineers interpret test results.
- An automated technique that detects feature interaction failures in the context of self-driving systems, based on analyzing executable function models typically developed to specify system behaviors at early development stages.
- An automated technique that uses a new many-objective search algorithm to localize and repair errors in the feature interaction resolution rules for self-driving systems. [less ▲]
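
As a hedged illustration of search-based test generation (not the Simulink setup, surrogate models or multi-objective algorithms of the thesis), a bare-bones genetic search over invented scenario parameters with a stubbed fitness function could look like this:

```python
# Bare-bones genetic search over ADAS test-scenario parameters. The "simulator"
# is a stand-in fitness stub (lower time-to-collision = more critical scenario);
# the real work used high-fidelity Simulink models, multi-objective search and
# surrogate models, none of which is reproduced here.
import random

random.seed(1)
BOUNDS = {"ego_speed": (10, 40), "ped_speed": (0.5, 3.0), "fog": (0.0, 1.0)}

def simulate_ttc(s):
    # Stand-in for a simulation run returning a time-to-collision (seconds).
    visibility = 1.0 - 0.7 * s["fog"]
    reaction_margin = 30.0 * visibility / s["ego_speed"]
    return max(0.1, reaction_margin - 0.3 * s["ped_speed"])

def random_scenario():
    return {k: random.uniform(*b) for k, b in BOUNDS.items()}

def mutate(s):
    child = dict(s)
    k = random.choice(list(BOUNDS))
    lo, hi = BOUNDS[k]
    child[k] = min(hi, max(lo, child[k] + random.gauss(0, 0.1 * (hi - lo))))
    return child

population = [random_scenario() for _ in range(20)]
for generation in range(30):
    population.sort(key=simulate_ttc)               # most critical first
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = min(population, key=simulate_ttc)
print("most critical scenario found:",
      {k: round(v, 2) for k, v in best.items()},
      "TTC =", round(simulate_ttc(best), 2), "s")
```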

Detailed reference viewed: 124 (21 UL)
Full Text
ALIGNED MULTI-WALL CARBON NANOTUBE SHEETS FOR LIQUID CRYSTAL DISPLAYS
Rahman, Md Asiqur UL

Doctoral thesis (2019)

The great interest in carbon nanotubes (CNTs) was triggered by their discovery by Iijima and has led to significant research efforts revealing exceptional electrical and mechanical properties. The extraordinarily high anisotropy is not just limited to the shape of CNTs, but is also reflected in their properties, which show strong orientational dependence. However, a crucial step involves the incorporation of CNTs into macro-size devices while keeping the nanotubes perfectly aligned in a single direction with a high degree of nanotube straightness. Producing CNT assemblies that meet all these requirements remained an additional challenge until Kaili Jiang et al. introduced a solid-state method to produce highly aligned multi-wall CNTs pulled from a forest. They reported that it is possible to pull continuous strings of nanotubes from vertically-aligned CNT forests, forming parallel arrays aligned along the pulling direction. Due to their high alignment, transparency, flexibility, conductivity and optical anisotropy, sheets formed by aligned CNTs are promising as optical polarizers, heaters, sensors, energy devices, and aligning and electrode layers in displays. The alignment of a liquid crystal (LC) on a CNT surface was first realized by Giusy Scalia et al. Later, Russel et al. showed that surfaces with aligned CNTs align LCs unidirectionally, followed by Fu et al. demonstrating that coated CNT sheets can also act as transparent electrodes for switching LCs. Thus, aligned CNT sheets show promise as attractive multifunctional systems for LC displays, being able to simultaneously serve diverse functions by replacing both polyimide (PI) and indium tin oxide (ITO) layers, thus minimizing costs and simplifying the fabrication process. The mechanical properties of CNTs also offer better performance than ITO when used on flexible substrates. However, the optical anisotropy of MWCNT sheets in the range of visible wavelengths remains almost unexplored. There is thus an urgent need to investigate and fundamentally understand the interaction of light with CNT sheets in order to accurately realize CNT-based liquid crystal optical devices. In LC displays, the modulation of light is based on the use of polarized light, and the introduction of an optically anisotropic layer can affect the modulation; thus, it is important to acquire fundamental knowledge on the interaction between aligned MWCNT sheets and light. We followed the technique reported by Kaili Jiang and Ray Baughman to produce highly aligned CNT sheets by pulling CNTs from a spinnable CNT forest. We further deposited the aligned CNT sheets on a glass substrate and characterized them in the visible wavelength range, finding that the aligned CNT sheets anisotropically absorb light. Furthermore, linearly polarized light travelling through the CNT sheets is rotated, and the polarization of the light is affected by the presence of even a single layer of CNTs. Moreover, the magnitude of the rotation of polarization increases as the layer thickness increases. We performed theoretical investigations which closely fit the experimental data, suggesting that the origin of the rotation is mainly the anisotropic absorption. However, other contributions, such as from birefringence, cannot be ruled out. By optical investigations, the dependence of the optical behavior on the thickness of CNTs was also established. Moreover, the average orientational order parameter of the CNT sheets was evaluated from the anisotropic absorption of the aligned CNT sheets.
A high value of the orientational order parameter in CNT sheets is needed, since the alignment of the CNT sheets translates to LC alignment. The order parameter of free-standing CNT sheets was found to be ~0.6; however, it decreases once the sheets are deposited on a substrate. The adhesion between the CNT sheets and the substrate is an additional problem and was studied using different strategies correlating the adhesion to the final alignment of the CNTs on a substrate. Parts of this research effort were devoted to investigating CNT sheets on various polymer surfaces, leaving the surface of the CNTs almost free from polymer, for a direct investigation of the LC alignment on the CNT graphitic surface. The general goal was to improve the adhesion while keeping the alignment of the CNTs intact as pulled from the forest. We found a tradeoff between the adhesion of the CNTs and their alignment on a substrate; achieving both highly ordered CNTs and perfect adhesion on the surface remains an issue. A second approach was based on complete coverage of the CNTs by coating the nanotube films with inorganic dielectric layers (SiO2 or Al2O3). We found that the SiO2 coating preserved the freely-suspended CNT alignment while improving the film flatness. These inorganic coatings help to obtain good electrical performance of the LC in cells made with the CNT-based substrates. The alignment of the liquid crystal 4-cyano-4'-pentylbiphenyl (5CB) in the cells was generally planar and unidirectional, with differences in quality depending on the type of coating layer and on the value of the order parameter of the CNT sheets. We investigated both the uniformity of the LC alignment and the switching voltages and times, and compared these to the performance of the LC in commercial cells. Integrating aligned CNTs with LCs requires an understanding of the interactions of CNT layers with light in order to realize CNT devices. Aligned CNTs from forests can be obtained easily; however, sequentially depositing CNT layers while maintaining control of the degree of alignment when integrating them into devices remains an open issue. This work shows the occurrence of unexpected interactions with polarized light due to the intrinsic properties of CNTs and to their alignment. By exploring and optimizing the optical performance of CNT sheets through their orientational order, it becomes possible to use them as optical films for producing, among other optical devices, variable polarization rotators, polarizers and transparent electrodes that can also align LCs integrated into LCDs. [less ▲]
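
One common way to turn polarized absorbance into an orientational order parameter is the dichroic-ratio expression sketched below, which assumes the transition dipole lies along the nanotube axis; whether this exact estimator matches the one used in the thesis is an assumption, and the absorbance values are placeholders.

```python
# Estimate of a uniaxial orientational order parameter from polarized
# absorbance, using the standard dichroic-ratio expression S = (D - 1) / (D + 2)
# with D = A_parallel / A_perpendicular. The absorbance values below are
# made-up placeholders, and the thesis may have used a different estimator.
def order_parameter(a_parallel, a_perpendicular):
    d = a_parallel / a_perpendicular
    return (d - 1.0) / (d + 2.0)

print(f"S = {order_parameter(1.00, 0.25):.2f}")   # ~0.5 for these dummy values
```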

Detailed reference viewed: 52 (8 UL)
Full Text
Optical investigation of voltage losses in high-efficiency Cu(In,Ga)Se2 thin-film solar cells
Wolter, Max UL

Doctoral thesis (2019)

The increases in power conversion efficiencies up to 23.35 % in thin-film Cu(In,Ga)Se2 (CIGS) solar cells in recent years can mainly be ascribed to the alkali post-deposition treatment (PDT). The latter consists of an additional treatment step after absorber growth where alkali elements, such as sodium (Na) or rubidium (Rb), are injected into the absorber. While the beneficial effects of the alkali PDT, attributed partly to a reduction of voltage losses, are undeniable, it is not yet entirely clear what underlying mechanisms are responsible. To clarify the specific influence of the alkali PDT on the voltage of the CIGS solar cells, photoluminescence (PL) spectroscopy experiments were conducted on state-of-the-art CIGS absorbers having undergone different alkali PDTs. Photoluminescence allows the investigation of possible voltage losses on the absorbers through the analysis of optoelectronic quantities such as the absorption coefficient, the quasi-Fermi level splitting (QFLS), electronic defects, and potential fluctuations. Mainly due to a smooth surface and a band gap minimum inside the bulk, the PL spectra of state-of-the-art CIGS absorbers are distorted by interference fringes. To remove the interference fringes at room temperature, an experimental method, which revolves around the measurement of PL under varying angles, is developed in this thesis. In addition, to enable PL experiments even at low temperatures, an auxiliary polystyrene-based scattering layer is conceptualized and deposited on the surface of the absorbers. With the influence of the interference fringes under control, the quasi-Fermi level splitting can be measured on bare and CdS-covered absorbers. The results reveal an improvement of the QFLS in absorbers that contain Na with an additional increase being recorded in absorbers that also contain Rb. The improvement of the QFLS is present in both bare and CdS-covered absorbers, indicating that the beneficial effect of the alkali PDT is not only occurring on the surface but also inside the bulk. To identify possible origins of the QFLS increase, various PL-based experiments were performed. At room temperature, spatially-resolved PL measurements on the microscopic scale do not reveal any optoelectronic inhomogeneities in state-of-the-art CIGS absorbers. Defect spectroscopy at low temperatures also does not reveal the presence of deep-level trap states. Through temperature- and excitation-dependent PL experiments, a reduction of electrostatic potential fluctuations is observed in absorbers that contain Na with a stronger reduction witnessed in absorbers that contain Rb as well. The extraction of the absorption coefficient through PL measurements at room temperature reveals a reduction of band tails with alkali PDT that empirically correlates to the measured increase in the QFLS. This correlation might indicate that the band tails, through non-radiative recombination, may be the origin of the performance-limiting voltage losses. In combination with reports from literature, it is suggested that the beneficial effect of the light alkali PDT (Na) is mainly a doping effect i.e. an increase in the QFLS through an increase in the hole carrier concentration. The beneficial effect of the heavier alkali PDT (Rb) is attributed partly to a surface effect but mainly to a grain boundary effect, either through a reduction in band bending or a reduction of non-radiative recombination through tail states. 
Finally, the various voltage losses in state-of-the-art CIGS solar cells are compared to the best crystalline silicon device, revealing almost identical losses. This shows that the alkali PDT enables the fabrication of high-efficiency CIGS solar cells that show, in terms of voltage, identical performance. To bridge the gap between CIGS and the even better performing GaAs, the results of this thesis suggest that grain boundaries are crucial in this endeavour. [less ▲]
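
For orientation, the quasi-Fermi level splitting is commonly extracted from calibrated, absolute-intensity PL spectra via the generalized Planck law; the textbook form is shown below, where the logarithmic inversion holds in the Boltzmann limit (E − Δµ ≫ kBT). Whether the thesis used exactly this fitting procedure is an assumption.

```latex
% Generalized Planck law linking absolute PL emission to the quasi-Fermi level
% splitting \Delta\mu (textbook form; the exact fitting procedure may differ).
\[
  I_{\mathrm{PL}}(E) \;=\; \frac{2\pi E^{2}}{h^{3}c^{2}}\,
  \frac{a(E)}{\exp\!\left(\frac{E-\Delta\mu}{k_{\mathrm{B}}T}\right)-1}
  \quad\Longrightarrow\quad
  \Delta\mu \;\approx\; E - k_{\mathrm{B}}T\,
  \ln\!\left(\frac{2\pi E^{2}\,a(E)}{h^{3}c^{2}\,I_{\mathrm{PL}}(E)}\right)
\]
```

With a(E) close to one above the band gap, the right-hand expression should be roughly constant over photon energy, and that plateau is read as the QFLS.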

Detailed reference viewed: 163 (19 UL)
Full Text
Optimization of logical networks for the modelling of cancer signalling pathways
De Landtsheer, Sébastien UL

Doctoral thesis (2019)

Cancer is one of the main causes of death throughout the world. The survival of patients diagnosed with various cancer types remains low despite the numerous advances of the last decades. Some of the reasons for this unmet clinical need are the high heterogeneity between patients, the differentiation of cancer cells within a single tumor, the persistence of cancer stem cells, and the high number of possible clinical phenotypes arising from the combination of the genetic and epigenetic insults that confer to cells the functional characteristics enabling them to proliferate, evade the immune system and programmed cell death, and give rise to neoplasms. To identify new therapeutic options, a better understanding of the mechanisms that generate and maintain these functional characteristics is needed. As many of the alterations that characterize cancerous lesions relate to the signaling pathways that ensure the adequacy of cellular behavior in a specific micro-environment and in response to molecular cues, it is likely that increased knowledge about these signaling pathways will result in the identification of new pharmacological targets towards which new drugs can be designed. As such, the modeling of the cellular regulatory networks can play a prominent role in this understanding, as computational modeling allows the integration of large quantities of data and the simulation of large systems. Logical modeling is well adapted to the large-scale modeling of regulatory networks. Different types of logical network modeling have been used successfully to study cancer signaling pathways and investigate specific hypotheses. In this work we propose a Dynamic Bayesian Network framework to contextualize network models of signaling pathways. We implemented FALCON, a Matlab toolbox to formulate the parametrization of a prior-knowledge interaction network given a set of biological measurements under different experimental conditions. The FALCON toolbox allows a systems-level analysis of the model with the aim of identifying the most sensitive nodes and interactions of the inferred regulatory network and of pointing to possible ways to modify its functional properties. The resulting hypotheses can be tested in the form of virtual knock-out experiments. We also propose a series of regularization schemes, materializing biological assumptions, to incorporate relevant research questions in the optimization procedure. These questions include the detection of the active signaling pathways in a specific context, the identification of the most important differences within a group of cell lines, or the time-frame of network rewiring. We used the toolbox and its extensions on a series of toy models and biological examples. We showed that our pipeline is able to identify cell type-specific parameters that are predictive of drug sensitivity, using a regularization scheme based on local parameter densities in the parameter space. We applied FALCON to the analysis of the resistance mechanism in A375 melanoma cells adapted to low doses of a TNFR agonist, and we accurately predict the re-sensitization and successful induction of apoptosis in the adapted cells via the silencing of XIAP and the down-regulation of NFkB. We further point to specific drug combinations that could be applied in the clinic. Overall, we demonstrate that our approach is able to identify the most relevant changes between sensitive and resistant cancer clones. [less ▲]
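
To give a flavour of what contextualising a logical network against data means in practice, here is a minimal continuous-logic fit in Python; the OR-gate algebra, the data and the optimiser are illustrative stand-ins and do not reproduce FALCON's Dynamic Bayesian Network formalism or its MATLAB implementation.

```python
# Minimal sketch of contextualising a continuous-logic gate against data:
# a node C activated by A OR B with edge weights (wa, wb), fitted by least
# squares. This mimics the flavour of the optimisation, but not FALCON's
# Dynamic Bayesian Network formalism, regularisation schemes or MATLAB code.
import numpy as np
from scipy.optimize import least_squares

# (A, B) inputs and measured steady-state activity of C in four conditions
inputs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
c_measured = np.array([0.05, 0.60, 0.30, 0.75])     # hypothetical measurements

def c_predicted(w, x):
    a, b = w[0] * x[:, 0], w[1] * x[:, 1]
    return a + b - a * b                             # probabilistic OR gate

def residuals(w):
    return c_predicted(w, inputs) - c_measured

fit = least_squares(residuals, x0=[0.5, 0.5], bounds=(0.0, 1.0))
print("fitted edge weights (A->C, B->C):", np.round(fit.x, 2))
```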

Detailed reference viewed: 91 (8 UL)
Full Text
Self-Organizing Cellulose Nanorods: From the fundamental physical chemistry of self-assembly to the preparation of functional films
Honorato Rios, Camila UL

Doctoral thesis (2019)

Cellulose nanocrystals (CNCs), nanorods isolated by acid hydrolysis from cellulose sources, belong to a select class of functional biomaterials. The intriguing ability of these nanoparticles to self-organize and develop a chiral nematic liquid crystal phase when suspended in aqueous suspension is attracting increasing interest across a diverse range of research fields. Unfortunately (or fortunately, for this thesis), pristine CNCs are always polydisperse, with great variations in rod length within a single sample. Of particular interest is the fractionation of CNC rods by separation of the coexisting phases: the isotropic phase from the liquid crystalline (LC) part. Since the aspect ratio is considered to be the critical parameter that dictates the particle fraction at which cholesteric-isotropic phase separation starts, it is expected that high-aspect-ratio rods will separate from low-aspect-ratio rods, and this is indeed what I found in this thesis. By systematically repeating the phase separation, I could reach a remarkable quality of separation of long from short rods. The fractionation procedure was then improved by varying the equilibrium phase volume fraction at which the phases were separated, reducing the required number of separations from five cycles to only one. The onset of liquid crystallinity was drastically reduced in the long-rod fraction, and the decrease in the threshold for complete liquid crystallinity was even stronger. The mass fraction threshold at which gelation of the CNC suspension is triggered is not at all affected by the fractionation. Since gelation is a percolation phenomenon, the expectation was that the onset of gelation would also move to lower mass fractions, but it remained at about the same value. Together with the shift of cholesteric liquid crystal phase formation to lower mass fractions, this opens access to a whole new range of the equilibrium phase diagram, where the full sample is cholesteric yet not gelled. I demonstrate that the critical parameter for inducing gelation is in fact not the fraction of CNC, but the concentration of counterions in the solution. This suggests that the gelation is more complex than direct percolation between individual CNC rods, and instead is related to loss of colloidal stability due to reduced electrostatic screening. I also show that the behavior of key parameters, such as the period of the helical modulation (the so-called pitch) that is characteristic of the cholesteric phase, is very different in the range of phase coexistence compared to the range of complete liquid crystallinity. In addition, I found that the dependence of the pitch on the CNC mass fraction has less to do with the size of the nanorods than with the variation of the effective volume fraction as a result of more rods in the suspension or a higher counterion concentration. I corroborate this hypothesis by adding different amounts of salt to CNC suspensions of varying mass fraction such that the ion concentration is held constant, thereby tuning the pitch to the same value throughout the suspensions. In films prepared by drying CNC suspensions, the pitch can go down to a few hundred nanometers, resulting in circularly polarized, colorful Bragg reflection of visible light. By working with the long-rod fraction, we can obtain a highly ordered monodomain structure that results in uniformly colored films, with only one circular polarization reflected, as should be the case.
While the study is carried out on CNCs, the implications go far beyond this particular nanomaterial, revealing new challenges and opportunities in general liquid crystal and colloid physics, as well as in strategic research where the fractionation and drying of initially polydisperse populations of nanorods is desirable. [less ▲]

Detailed reference viewed: 49 (9 UL)
Full Text
Mathematical Histopathology and Systems Pharmacology of Melanoma
Albrecht, Marco UL

Doctoral thesis (2019)

Treated metastatic melanoma often becomes resistant and relapses, whereby resistance mechanisms can be found at the level of biochemical, histological, and pharmacological data. By using these data in a mathematical form, an integrative understanding of tumour progression can be gained that reveals the functionality of more complex and hidden recurrence mechanisms. The aims of this thesis were:
- to investigate how a new engineering concept of tumour growth, based on porous media theory, can be leveraged to support medicine and cancer biology research,
- to identify suitable tests for cancer growth model validation,
- to study how elements of biochemical cancer pathways are linked to the elements of physical growth, and
- to establish a pharmacokinetics module for the melanoma drug dabrafenib.
The studied engineering concept is qualitatively suitable to represent late-stage metastatic melanoma in irregular fibrous tissue types, whereby all equations are tested for biological relevance and parametrisation. The framework allows modelling of tissue-specific growth, and the thesis shows that the simulated tumour can shift between compact growth with ECM displacement and invasive growth with ECM circumvention as a consequence of a change in cell plasticity/viscosity. This is unique among continuous models of tumour growth. However, the investigation also shows that the pressure-saturation relationships are not biologically motivated and can be replaced by a swelling polymer model which captures the water-absorbing effect of glycans. The thesis addresses a biologically and computationally reasonable strategy to validate the tumour growth model as completely as possible. A suitable way to validate part of the tumour growth model is to use time-course data of spheroid growth in hydrogels of different stiffness values. Spheroids generated from the LU451 melanoma cell line mainly grow due to ECM degradation, have a time-variant growth rate increasing with gel rigidity, and the confined environment renders the melanoma cell line drug-resistant upon dabrafenib dose escalation. This setting reveals the interplay between mechanical and biochemical development over time. The dependency between the biological elements of cancer pathways and the mechanical elements of the engineering concept of tumour growth was clarified. To this end, the literature on mechanoregulation has been reviewed and serves as a computational link between systems biology and physical oncology. Finally, the thesis provides preliminary steps and a concept toward a serious interdisciplinary methodology to understand tumour growth, although this cannot be considered a final model for any of the known melanoma growth settings. Additionally, the thesis provides a novel quantitative systems pharmacology approach to consider liver enzyme induction and drug-drug interaction. The finding is that the potent dabrafenib metabolite desmethyl-dabrafenib accumulates, with consequential efficacy loss in a confined tumour environment. [less ▲]
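
As a hedged sketch of the kind of quantitative systems pharmacology module described above (not the dabrafenib model of the thesis; all parameters are invented), a toy ODE with an inducible metabolising enzyme and an accumulating metabolite could be written as follows:

```python
# Toy pharmacokinetic sketch: a parent drug is converted by an inducible enzyme
# into a metabolite that is cleared more slowly, so repeated dosing lets the
# metabolite accumulate. All parameters are invented placeholders; this is not
# the dabrafenib/desmethyl-dabrafenib model developed in the thesis.
import numpy as np
from scipy.integrate import odeint

def pk(y, t, k_met, k_elim_m, k_ind, k_deg, ec50):
    drug, metab, enzyme = y
    d_drug = -k_met * enzyme * drug
    d_metab = k_met * enzyme * drug - k_elim_m * metab
    d_enzyme = k_deg * (1.0 + k_ind * drug / (drug + ec50)) - k_deg * enzyme
    return [d_drug, d_metab, d_enzyme]

t = np.linspace(0, 24, 241)                          # hours between doses
y = np.array([1.0, 0.0, 1.0])                        # dose, metabolite, enzyme
trajectory = []
for day in range(5):                                 # once-daily dosing
    sol = odeint(pk, y, t, args=(0.4, 0.05, 3.0, 0.1, 0.2))
    trajectory.append(sol)
    y = sol[-1] + np.array([1.0, 0.0, 0.0])          # administer next dose
metabolite = np.concatenate(trajectory)[:, 1]
print(f"metabolite level after 5 days: {metabolite[-1]:.2f} (a.u.)")
```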

Detailed reference viewed: 109 (3 UL)
Full Text
Second-principles methods for large-scale simulations of realistic functional oxides
Escorihuela Sayalero, Carlos UL

Doctoral thesis (2019)

The application of Condensed Matter theory via simulation has been, over the last decades, a solid approach to research in Materials Science. In particular, for the case of perovskite materials the research has been extensive, and customarily (but not only) performed using Density Functional Theory. The collective effort to develop lighter simulation techniques and to explore different theoretical approaches for computationally studying materials has provided the scientific community with the possibility to strengthen the interaction between experimental and theoretical research. However, access to large-scale simulations is still limited nowadays due to the high computational cost of such simulations. In 2013, J. C. Wojdel et al. presented a theory for the modelling of crystals known as second-principles models, which are the central point of the development of my work. In this thesis, I develop in depth a novel methodology to produce second-principles models efficiently and in a quasi-automatic way from Density Functional Theory data. The scheme presented here identifies, given a set of reliable data to be fit, the most relevant atomic couplings of a system. The fitting process that I present is also analytical, which translates into fast and accurate model production. I also explore the modelling of chemically inhomogeneous or nanostructured systems using second-principles models. Moreover, I present an efficient and sound heuristic procedure to produce models of inhomogeneous materials. Finally, I also show examples of complex problems that can be tackled thanks to second-principles models, such as the character of 180º anti-phase domain walls in SrTiO3, thermodynamic studies of heat transport across 180º domain walls in PbTiO3, and the reproduction of experimentally-observed polarization vortices in (PbTiO3)n/(SrTiO3)n superlattices. [less ▲]
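
To illustrate just the fitting idea behind building such models from first-principles data (the actual second-principles scheme selects couplings and includes anharmonic and lattice terms well beyond this), a linear least-squares fit of harmonic coupling constants to synthetic energy/displacement samples might look like this:

```python
# Minimal illustration of fitting harmonic coupling constants to reference
# energies, E(u) = 1/2 u^T K u, by linear least squares. The "DFT" data are
# synthetic; the actual second-principles scheme selects couplings and fits
# anharmonic and lattice terms, and is far richer than this sketch.
import numpy as np

rng = np.random.default_rng(0)
n_dof = 3                                             # toy: three degrees of freedom
K_true = np.array([[4.0, -1.0, 0.0],
                   [-1.0, 3.0, -0.5],
                   [0.0, -0.5, 2.0]])

U = rng.normal(scale=0.05, size=(200, n_dof))                 # displacement samples
E = 0.5 * np.einsum("ni,ij,nj->n", U, K_true, U)              # reference energies
E += rng.normal(scale=1e-5, size=E.shape)                     # numerical noise

# Design matrix over the independent entries K_ij with i <= j
pairs = [(i, j) for i in range(n_dof) for j in range(i, n_dof)]
A = np.stack([(1.0 if i == j else 2.0) * 0.5 * U[:, i] * U[:, j]
              for i, j in pairs], axis=1)
coeffs, *_ = np.linalg.lstsq(A, E, rcond=None)

K_fit = np.zeros_like(K_true)
for (i, j), k in zip(pairs, coeffs):
    K_fit[i, j] = K_fit[j, i] = k
print(f"max |K_fit - K_true| = {np.abs(K_fit - K_true).max():.4f}")
```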

Detailed reference viewed: 42 (3 UL)
Trading Zones of Digital History
Kemman, Max Jonathan UL

Doctoral thesis (2019)

As long as there have been computers, there have been scholars pulling at historians, challenging them to use these computers for historical research. Yet what role computers can have in historical research is a matter of continuous debate. Under the signifier of “digital history”, historians have experimented with tools, concepts, and methods from other disciplines, mostly computer science and computational linguistics, to benefit the historical discipline. The collaborations that emerge through these experiments can be characterised as a two-sided uncertainty: historians uncertain how they as historians should use digital methods, and computational experts uncertain how digital methods should work with historical data sets. The opportunity that arises from these uncertainties is that historians and computational experts need to negotiate the methods and concepts under development. In this thesis, I investigate these negotiations as trading zones, as local spaces of negotiation. These negotiations are characterised as a duality of boundary practices. First, boundary crossing, the crossing of boundaries of disciplines, discourses, and institutions to collaborate. Second, boundary construction, the establishment of boundaries of groups and communities to preserve disciplinary values and remain recognisable as part of a community of practice. How boundary crossing and construction are balanced, whether disciplinary boundaries are shifted, and to what extent historians' practices are transformed by continued interaction with computational experts, are open questions demanding closer scrutiny. These considerations lead to the research question underlying this thesis: how are historians affected by interactions with computational experts in the context of digital history collaborations? I investigate this question through a mixed-methods, multi-sited ethnographic approach, consisting of an open online survey which received 173 responses, 4.5 years of observations at the University of Luxembourg, 37 interviews, and an LDA topic modelling analysis of 10,918 blog posts from 73 historians between 2008 and 2017. Through these approaches, I examine trading zones as configured by three different dimensions. First, connectedness, the extent to which collaborators connect with one another through physical proximity, communication, and the sharing of practices. Second, power asymmetry, the extent to which participants shape their own field of action as well as the fields of action of their collaborators. Third, cultural maintenance, the extent to which collaborators become more alike or stay apart by adopting new practices or displacing previous practices. On a macro level, referring to the global historical discipline, I conclude that methodological approaches developed in local trading zones have hardly diffused to macro solutions. Insofar as digital infrastructures were appropriated in the macro community, they were aligned with traditional practices. Rather than transforming historical scholarship, the challenge was to provide infrastructures congruent with existing values and practices. On a meso level, referring to the historians engaged in digital history trading zones, I conclude that the effect of interactions was dependent on individual decisions and incentives. Some historians experimented with or adopted computational practices and concepts. Yet other historians detached their work from the shared objective of a collaboration in order to reduce risks, as well as to maintain disciplinary practices.
The majority of participants in trading zones were scholars from the humanities, physically distant from collaborators, communicating more often with disciplinary peers than with cross-disciplinary collaborators. As such, even when participating in trading zones of digital history, a significant number of historians remained aligned with traditional practices. When practices changed, it was regularly not in the direction of computational practices, but in response to incentives from politics or funding. While historians who participated in digital history trading zones therefore did learn new practices, this did not entail a computational transformation of their scholarship. Finally, on a micro level, some historians chose to engage intensively with computational experts. I call these individuals digital history brokers, who exemplified significant shifts in practices. Brokers conducted project management; coordinated practices from archival and library domains such as data collection, transformation, and description; learned about the potential and limitations of computational technologies and where to apply these; employed inter-languages to translate between the different collaborating domains; and finally transformed historical questions into infrastructural problems. Digital history brokers thereby not only developed interactional expertise to collaborate with computational experts. They furthermore developed political proficiency to negotiate the socio-economic potential of digital history strategies with politics, university administrators, and funding agencies. I therefore describe the practices of brokers as infrastructuring, covering a duality of negotiations. First, cross-disciplinary socio-technical negotiations with computational experts about how to support scholarly practices with digital technology. Second, intra-disciplinary socio-political negotiations about how to diffuse those practices within the community of practice. Digital history brokers therefore transform their own practices, so that other historians do not have to on meso or macro levels, but can employ digitised sources and digital methodology through infrastructures in a fashion that naturally fits into their practices as historians. I thereby provide a critical view on digital history grounded in how it is conducted and negotiated. This thesis is therefore aimed mainly at scholars interested in digital history and its relation to the historical discipline and to digital humanities, as well as scholars interested in studying digital history as a specific case of cross-disciplinarity. [less ▲]
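
For readers unfamiliar with the method, a tiny scikit-learn example of LDA topic modelling is sketched below; the four placeholder "documents" only show the mechanics, not the corpus of 10,918 blog posts or the model settings used in the thesis.

```python
# Tiny illustration of LDA topic modelling with scikit-learn. The real analysis
# covered 10,918 blog posts; the four "documents" below are placeholders that
# only demonstrate the mechanics of the method.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "digital history archives metadata infrastructure",
    "topic modelling corpus text mining historians",
    "funding project management collaboration university",
    "archives digitisation metadata standards libraries",
]
vectorizer = CountVectorizer().fit(docs)
X = vectorizer.transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```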

Detailed reference viewed: 38 (12 UL)
Full Text
The Effect of ATP INDUCED CALCIUM DYNAMICS ON EPITHELIAL TO MESENCHYMAL TRANSITIONS
Grzyb, Kamil UL

Doctoral thesis (2019)

Cells respond to a multitude of external triggers through a limited number of signaling pathways activated by receptors on the plasma membrane, such as receptor tyrosine kinases (RTKs) or G protein-coupled receptors (GPCRs). These pathways do not simply relay the signal downstream; instead, the signal is very often encoded during processing and integrated with the current state of the cell. Traditional transcriptional analysis provides an output averaged over a population, which often masks the behavior of individual cells. With recent developments in single-cell techniques, however, it is possible to investigate transcription in individual living cells, which has contributed tremendously to the understanding of the development and progression of many diseases including cancer. The more we understand about this high complexity of signaling mechanisms and the multitude of cellular safety countermeasures, the more we see cancer as a microevolutionary state of “rebellious cells” (cells entering a fate opposite to the one intended) following a path through a discrete system. This thesis focused specifically on the temporal aspect of signaling in the context of the epithelial-to-mesenchymal transition (EMT) by combining single-cell experiments and bioinformatics analysis. We investigated cellular signaling changes in response to different dynamical profiles of the stimuli. In particular, we used the HMLER cell line, a metastatic breast cancer model for the epithelial-to-mesenchymal transition. By applying stochastic or oscillatory pulses of extracellular ATP-induced Ca2+ signals with different interspike intervals, we investigated transcription states distinct from those evoked by constant ATP-induced Ca2+ dose responses. In order to apply those stimulation profiles precisely, we developed and established a perfusion system. This device allows a population of cells to be treated simultaneously with exactly the same dynamical profiles. Cells treated with these well-controlled signals were subsequently processed with the single-cell RNA-seq technique Drop-seq for transcriptional analysis. The resulting high-dimensional digital gene expression matrices were analyzed with a purpose-built high-throughput computational analysis pipeline. This analysis includes the identification of differentially expressed genes and of cellular clusters (states) by dimensionality reduction methods (PCA, t-SNE) and pathway analysis. We evaluated changes and trends of genes from different dynamical profiles by investigating their involvement in stress, stemness and the regulation of motility. First, we confirmed that oscillatory stimulation with extracellular ATP (eATP) tends to lower the burden of cellular stress and apoptosis-related pathways while maintaining its other effector functions, compared to constant eATP stimulation. Interestingly, stochastic spiking of extracellular ATP in our setup led to a massive (~80%) increase in overall differential gene expression compared to deterministic oscillatory stimulation with the same period. Consequently, stochastic signaling seems to activate a much wider range of biological pathways, which points to a much higher complexity in information processing, capable of producing rebellious cells during cancer progression and metastasis. On the other hand, our findings suggest that oscillatory eATP stimulation could contribute to EMT by lowering ID3 expression, compared to stochastic stimulation where we observed a stronger upregulation of IRS2. Finally, we integrated the DEGs into the biological processes involved in each condition and put these new insights into the context of the eATP-induced Ca2+-driven epithelial-to-mesenchymal transition. Overall, this thesis has applied recent single-cell technologies to characterize underlying principles of cellular heterogeneity induced by cell signaling and specifically investigated the complex mechanisms of cell fate in the context of EMT. [less ▲]
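To make the analysis step above concrete, here is a minimal, purely illustrative Python sketch of such a pipeline (normalisation, PCA, t-SNE, clustering and a naive differential-expression test). The file name, cluster count and Welch t-test are placeholder assumptions and not the pipeline developed in the thesis.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from scipy import stats

# Hypothetical digital gene expression matrix: rows = cells, columns = genes.
dge = pd.read_csv("dge_matrix.csv", index_col=0)  # placeholder file name

# Library-size normalisation and log transform (assumes every cell has nonzero counts).
counts = dge.to_numpy(dtype=float)
cpm = counts / counts.sum(axis=1, keepdims=True) * 1e4
logged = np.log1p(cpm)

# Dimensionality reduction: PCA first, then t-SNE on the top components.
pcs = PCA(n_components=20, random_state=0).fit_transform(logged)
embedding = TSNE(n_components=2, random_state=0).fit_transform(pcs)

# Identify cellular states (clusters); the number of clusters is an assumption.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)

# Naive differential expression: Welch t-test of cluster 0 against all other cells.
in_c0 = labels == 0
pvals = np.array([
    stats.ttest_ind(logged[in_c0, g], logged[~in_c0, g], equal_var=False).pvalue
    for g in range(logged.shape[1])
])
print(pd.Series(pvals, index=dge.columns).nsmallest(20))
```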

Detailed reference viewed: 69 (9 UL)
Full Text
See detailModeling CLN3 and ATP13A2 deficiency in yeast and zebrafish and use of the ATP13A2 models for drug repurposing
Heins Marroquin, Ursula UL

Doctoral thesis (2019)

Neuronal ceroid lipofuscinoses (NCL) are a heterogeneous group of inherited recessive neurodegenerative disorders that appear during childhood and result in premature death. To date, mutations in 14 genes are known to cause NCL, and this project focused on CLN3 and ATP13A2 (CLN12), two genes linked to a juvenile form of NCL (JNCL). Mutations in CLN12 are known to cause two additional rare neurodegenerative disorders called Kufor-Rakeb syndrome and spastic paraplegia 78. Since the number of people affected by a rare disease is relatively small and the cost of the drug development process is high, the chance for a patient to receive therapeutic treatment is very low. Therefore, the aim of this PhD project was to develop a new drug screening pipeline for the identification of drug candidates that could be used for the treatment of some of these rare diseases. In this work, we successfully developed a phenotypic high-throughput assay based on a decreased zinc resistance phenotype in an ATP13A2-deficient yeast model, and we screened more than 2500 compounds, resulting in the identification of 11 hits. Subsequently, we created a stable ATP13A2 knockout line in zebrafish and developed a validation platform based on decreased manganese resistance in this line. Using this approach, N-acetylcysteine and furaltadone emerged as promising compounds for follow-up studies. A similar strategy could not be implemented for CLN3 because, despite extensive efforts, no suitable yeast phenotype for a drug screen could be found. Nevertheless, we successfully created two stable cln3 mutant lines in zebrafish. No overt phenotype was initially observed, but behavioral tests suggested that cln3 mutants display subtle neurological dysfunction, making them more susceptible to treatment with picrotoxin, a pro-convulsive drug. Further investigation is needed, but our preliminary data indicate that cln3 mutant larvae may recapitulate certain aspects of JNCL pathology. On the whole, this work provides a time- and cost-efficient pipeline for the discovery of drugs against ATP13A2 deficiencies, which can be applied to the screening of larger compound libraries in the future. In addition, we generated a new CLN3 disease model in zebrafish that will be instrumental for the development of drug screens and may also help to elucidate the molecular disease mechanism of JNCL. [less ▲]
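The hit-calling logic of such a growth-rescue screen can be illustrated with a short sketch. The column names, control labels and the three-standard-deviation threshold below are invented for illustration and do not reflect the actual assay protocol.

```python
import pandas as pd

# Hypothetical plate-reader output: one growth read-out per compound under zinc stress,
# plus untreated (no zinc) and vehicle (zinc, no compound) controls.
df = pd.read_csv("screen_plate_data.csv")  # placeholder file name

vehicle = df.loc[df["compound"] == "DMSO", "growth"].mean()
untreated = df.loc[df["compound"] == "no_zinc", "growth"].mean()

# Express each compound as percent rescue between the vehicle and untreated controls.
df["rescue_pct"] = 100.0 * (df["growth"] - vehicle) / (untreated - vehicle)

# Call hits as compounds whose rescue exceeds three standard deviations of the vehicle wells.
sd = df.loc[df["compound"] == "DMSO", "growth"].std()
threshold = 100.0 * 3.0 * sd / (untreated - vehicle)
hits = df[(df["rescue_pct"] > threshold) & (~df["compound"].isin(["DMSO", "no_zinc"]))]
print(hits.sort_values("rescue_pct", ascending=False).head(15))
```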

Detailed reference viewed: 102 (7 UL)
See detailNumerical analysis of gait load distribution in the human pelvis and design of a biomechanical testing device: experimental assessment of two implants for anterior fragility fractures
Ricci, Pierre-Louis UL

Doctoral thesis (2019)

This research project was conducted at the University of Luxembourg in cooperation with orthopaedic surgeons from the Centre Hospitalier de Luxembourg and the Universitätsklinikum des Saarlandes. The main objective was to investigate the gait load distribution in the human pelvis and the influence of the stiffness of the pubic symphysis (PS) and the sacroiliac joints on this force transmission, in order to numerically and experimentally assess the stability provided by two reconstruction systems for anterior fragility fractures. The global approach consisted of combining inverse dynamics and finite element methods to investigate physiological loadings applied to the pelvis during the gait cycle. An experimental test bench was then designed to reproduce those gait conditions on artificial pelvises for the biomechanical assessment of different systems used for fragility fractures of the pelvis. First, muscle forces and joint contact forces from gait applied to the pelvis were calculated by inverse dynamics with an experimentally validated musculoskeletal model. Implementation in a finite element model including the bones and joints of the pelvis highlighted that the superior rami experience the highest stresses. Fracture of a superior ramus changed the initial load distribution by increasing the stresses at the inferior ramus and on the posterior structures. Combination of superior and inferior rami fractures on the same side redirected the forces backwards and showed high stresses on the sacral alae, where compression fractures are commonly seen clinically. Reconstruction devices showed differences in stability at an early stage of healing, with benefits provided by the iliopubic subcutaneous plate. No noticeable differences compared to the Supra-Acetabular External Fixator were seen during later healing. Regarding the influence of joint stiffness on the load distribution in a healthy pelvis, an increase of PS stiffness redirected loads to the anterior pelvis, whereas an increase of PS laxity redirected loads to the posterior structures. A fusion of the sacroiliac joints did not show noticeable changes in the normal load distribution. Following the computational investigation, an experimental test bench was designed with numerical engineering tools. The biomechanical setup aimed at reproducing the loadings observed during the previously studied moments of the gait on artificial pelvises with fused joints. Static and cyclic loadings were performed on artificial pelvises with and without reconstruction devices: first with a superior ramus fracture only, and then with superior and inferior rami fractures. The Supra-Acetabular External Fixator and the iliopubic subcutaneous plate did not show any significant stability difference when a superior ramus fracture is considered. When including the inferior ramus fracture on the same side, the iliopubic subcutaneous plate significantly improved the stability of the reconstructed pelvis by reducing the displacement of the superior fracture, contrary to the Supra-Acetabular External Fixator, which did not show any improvement. For both configurations, no fatigue phenomenon was observed during cyclic loadings simulating four days of walking for a patient (5 000 cycles). There is no conflict of interest related to this work. [less ▲]
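The cyclic-loading observation could, in principle, be checked by monitoring whether the fracture-site displacement drifts over the 5 000 cycles. The sketch below fits a linear trend to hypothetical per-cycle peak displacements; the signal, units and tolerance are illustrative assumptions rather than the protocol used in this work.

```python
import numpy as np

def drift_per_1000_cycles(peak_displacement_mm: np.ndarray) -> float:
    """Slope of a linear fit to the per-cycle peak displacement, in mm per 1000 cycles."""
    cycles = np.arange(len(peak_displacement_mm))
    slope, _ = np.polyfit(cycles, peak_displacement_mm, 1)
    return slope * 1000.0

# Synthetic example: 5 000 cycles with measurement noise and no systematic drift.
rng = np.random.default_rng(0)
peaks = 0.8 + rng.normal(0.0, 0.02, size=5000)  # mm, hypothetical values

drift = drift_per_1000_cycles(peaks)
print(f"drift: {drift:+.4f} mm / 1000 cycles")
print("fatigue-like drift detected" if abs(drift) > 0.05 else "no systematic drift")
```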

Detailed reference viewed: 43 (7 UL)
Full Text
See detailDesign and Cryptanalysis of Symmetric-Key Algorithms in Black and White-box Models
Udovenko, Aleksei Nikolaevich UL

Doctoral thesis (2019)

Cryptography studies secure communications. In symmetric-key cryptography, the communicating parties have a shared secret key which allows them both to encrypt and decrypt messages. The encryption schemes used are very efficient but have no rigorous security proof. In order to design a symmetric-key primitive, one has to ensure that the primitive is secure at least against known attacks. During 4 years of my doctoral studies at the University of Luxembourg under the supervision of Prof. Alex Biryukov, I studied symmetric-key cryptography and contributed to several of its topics. Part I is about structural and decomposition cryptanalysis. This type of cryptanalysis aims to exploit properties of the algorithmic structure of a cryptographic function. The first goal is to distinguish a function with a particular structure from random, structure-less functions. The second goal is to recover components of the structure in order to obtain a decomposition of the function. Decomposition attacks are also used to uncover secret structures of S-Boxes, cryptographic functions over small domains. In this part, I describe structural and decomposition cryptanalysis of the Feistel Network structure, decompositions of the S-Box used in the recent Russian cryptographic standard, and a decomposition of the only known APN permutation in even dimension. Part II is about invariant-based cryptanalysis. This method has recently become an active research topic, mainly due to recent extreme cryptographic designs, which turned out to be vulnerable to this cryptanalysis method. In this part, I describe an invariant-based analysis of NORX, an authenticated cipher. Further, I show a theoretical study of linear layers that preserve low-degree invariants of a particular form used in the recent attacks on block ciphers. Part III is about white-box cryptography. In the white-box model, an adversary has full access to the cryptographic implementation, which in particular may contain a secret key. The possibility of creating implementations of symmetric-key primitives secure in this model is a long-standing open question. Such implementations have many applications in industry, in particular in mobile payment systems. In this part, I study the possibility of applying masking, a side-channel countermeasure, to protect white-box implementations. I describe several attacks on the direct application of masking and provide a provably secure countermeasure against a strong class of the attacks. Part IV is about the design of symmetric-key primitives. I contributed to the design of the block cipher family SPARX and to the design of a suite of cryptographic algorithms, which includes the cryptographic permutation family SPARKLE, the cryptographic hash function family ESCH, and the authenticated encryption family SCHWAEMM. In this part, I describe the security analysis that I carried out for these designs. [less ▲]
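To make the notion of an algorithmic "structure" concrete, the following toy sketch implements a small Feistel Network of the kind that structural and decomposition cryptanalysis reasons about (can the permutation be distinguished from random, and can its round functions be recovered?). The round function, word size and number of rounds are arbitrary toy choices, not any cipher analysed in the thesis.

```python
# Toy 4-round Feistel Network over 8-bit halves.
MASK = 0xFF

def round_function(x: int, k: int) -> int:
    # Arbitrary nonlinear round function; real ciphers use carefully designed S-Boxes.
    return ((x * 197 + k) ^ (x >> 3)) & MASK

def feistel_encrypt(left: int, right: int, keys: list[int]) -> tuple[int, int]:
    for k in keys:
        left, right = right, left ^ round_function(right, k)
    return left, right

def feistel_decrypt(left: int, right: int, keys: list[int]) -> tuple[int, int]:
    for k in reversed(keys):
        left, right = right ^ round_function(left, k), left
    return left, right

keys = [0x3A, 0x7C, 0x15, 0xE2]
ct = feistel_encrypt(0xAB, 0xCD, keys)
assert feistel_decrypt(*ct, keys) == (0xAB, 0xCD)
print("ciphertext halves:", [hex(h) for h in ct])
```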

Detailed reference viewed: 255 (36 UL)
Full Text
See detailLe Droit de la Coopération Décentralisée au Mali: Une Approche Juridique du Droit Administratif International
Diallo, Mory UL

Doctoral thesis (2019)

The law governing territorial authorities allows a large number of types of cooperation, ranging from twinning arrangements to contractual or convention-based forms. Among these, decentralised cooperation occupies a decisive place. It covers all external actions carried out by territorial authorities. However, this decentralised cooperation of Malian territorial authorities remains elusive. The concept still lacks precision, which causes confusion in its understanding by elected officials and staff of territorial authorities seeking answers to the many questions raised about the legal regime of these agreements. While the legislator has attempted to regulate the question of competence, it remains silent on the legal regime. These decentralised cooperation agreements seem to belong to no precise legal category. With a view to determining the applicable law, we have attempted to take into consideration the requirements of both domestic and international law. First, the study addresses questions relating to the elaboration of the regime of decentralised cooperation, bringing out the national and international legal foundations as well as the legal qualification of these agreements. Second, the study addresses the problems of implementing decentralised cooperation agreements, with the objective of demonstrating the enforceable character of cooperation agreements as well as the different alternatives for resolving disputes. Finally, this thesis seeks to clarify the legal regime of the agreements concluded between Malian territorial authorities and their foreign partners, with a view to giving precedence to local administrative law of an international character. [less ▲]

Detailed reference viewed: 157 (12 UL)
Full Text
See detailEntwicklung eines intelligenten, robotergestützten Assistenzsystems für die Demontage industrieller Produkte
Jungbluth, Jan UL

Doctoral thesis (2019)

Technical assistance systems are used in many aspects of our private environment to simplify our daily lives. Such assistance would also be desirable in our working environment, especially in physically demanding activities such as dismantling products for maintenance, corrective maintenance, or remanufacturing. However, the use of robot-supported assistance systems is prevented by the imponderabilities in the dismantling process of different products, owing to the lack of autonomy of the technical systems. In the course of this dissertation, the development of intelligent, robot-supported assistance systems that can support people in such complex processes in a target-oriented manner is considered, and the following research question is posed: What are the technical requirements for such assistance systems and how can they be implemented? In the reviewed scientific literature, no holistic approach has been identified for the development of these systems, but many approaches to partial aspects of such a system have been collected across several research disciplines. To address the research question, this dissertation discusses the theoretical fields of technical assistance systems and human-robot systems, as well as the field of application, in order to define technical requirements. A demonstrator for experimental validation is implemented in the form of a multi-agent system in which various technical systems are integrated and interconnected by software. The function of the developed robot-based assistance system could be verified in concrete dismantling processes in conjunction with suitable man-machine communication interfaces. Finally, this dissertation identifies further research questions that must be addressed before such systems can be introduced in the industrial environment. [less ▲]
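Purely as an illustration of the multi-agent idea (software agents wrapping individual technical systems and coordinating via messages), the sketch below connects a hypothetical perception agent, a robot agent and a worker-interface agent through a shared queue. The agent names and message protocol are invented and do not reflect the system developed in the dissertation.

```python
from queue import Queue
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    topic: str
    payload: dict

bus = Queue()  # simplistic message bus shared by all agents

def perception_agent():
    # Hypothetical detection result: a screw to be removed at a given pose.
    bus.put(Message("perception", "component_detected",
                    {"type": "screw", "pose": (0.42, 0.10, 0.05)}))

def robot_agent(msg: Message):
    if msg.topic == "component_detected" and msg.payload["type"] == "screw":
        bus.put(Message("robot", "unscrew_started", {"pose": msg.payload["pose"]}))

def worker_interface_agent(msg: Message):
    if msg.topic == "unscrew_started":
        print(f"Assistant: robot is removing the screw at {msg.payload['pose']}, "
              "please hold the housing.")

perception_agent()
while not bus.empty():
    m = bus.get()
    robot_agent(m)
    worker_interface_agent(m)
```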

Detailed reference viewed: 101 (11 UL)
Full Text
See detailLegal Design for the General Data Protection Regulation. A Methodology for the Visualization and Communication of Legal Concepts
Rossi, Arianna UL

Doctoral thesis (2019)

Privacy policies are known to be impenetrable, lengthy, tedious texts that are hardly read and poorly understood. Therefore, the General Data Protection Regulation (GDPR) introduces provisions to enhance the transparency of such documents and suggests icons as visual elements to provide “in an easily visible, intelligible and clearly legible manner a meaningful overview of the intended processing.” The present dissertation discusses how design, and in particular legal design, can support the concrete implementation of the GDPR’s transparency obligation. Notwithstanding the many benefits that visual communication demonstrably provides, graphical elements do not improve comprehension per se. Research on graphical symbols for legal concepts is still scarce, while both the creation and the consequent evaluation of icons depicting abstract or unfamiliar concepts represent a challenge. Moreover, precision of representation can support individuals’ sense-making of the meaning of graphical symbols, but at the expense of simplicity and usability. Hence, this research proposed a methodology that combines semantic web technologies with principles of semiotics and ergonomics, and empirical methods drawn from the emerging discipline of legal design, which was used to create and evaluate DaPIS, the Data Protection Icon Set meant to support the data subjects’ navigation of privacy policies. The icon set is modeled on PrOnto, an ontological representation of the GDPR, and is organized around its core modules: personal data, roles and agents, processing operations, processing purposes, legal bases, and data subjects’ rights. In combination with the description of a privacy policy in the legal XML standard Akoma Ntoso, such an approach makes the icons machine-readable and semi-automatically retrievable. Icons can thus serve as information markers in lengthy privacy statements and support the navigation of the text by the data subject. [less ▲]
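A minimal illustration of the machine-readability idea: if policy clauses are annotated with concept identifiers, icons can be looked up and attached automatically. The XML fragment below is a simplified stand-in rather than actual Akoma Ntoso markup, and the concept-to-icon table is invented rather than taken from DaPIS or PrOnto.

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping from data-protection concepts to icon identifiers.
ICONS = {
    "processing:profiling": "icon-profiling.svg",
    "right:erasure": "icon-right-erasure.svg",
    "legalBasis:consent": "icon-consent.svg",
}

# Simplified, illustrative policy fragment with concept annotations on each clause.
policy_xml = """
<policy>
  <clause concept="legalBasis:consent">We process your e-mail address based on your consent.</clause>
  <clause concept="processing:profiling">We build interest profiles from your browsing history.</clause>
  <clause concept="right:erasure">You may ask us to delete your personal data at any time.</clause>
</policy>
"""

root = ET.fromstring(policy_xml)
for clause in root.findall("clause"):
    icon = ICONS.get(clause.get("concept"), "icon-generic.svg")
    print(f"[{icon}] {clause.text.strip()}")
```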

Detailed reference viewed: 40 (7 UL)
Full Text
See detailThe Phenomenon of Online Live-Streaming of Child Sexual Abuse: Challenges and Legal Responses
Dushi, Desara UL

Doctoral thesis (2019)

In recent years, the importance of the Internet in the education of children all over the world has grown enormously. But as with every other phenomenon, easy access to the Internet creates a great number of concerns that should not be neglected. Over the past two decades, the internet has become a new medium through which child exploitation and sexual abuse happen. Technology is being used not only as a means of committing old forms of sexual abuse and exploitation of children, but also for creating new ones. This variety of crime types ranges from child pornography, sexting and sextortion to online grooming and live-streaming of child abuse. This dissertation focuses on a very current, fast-developing and little explored topic, the phenomenon of live-streaming of child abuse. The research includes a perspective of (public) international law, the situation in Europe in light of the activities of the Council of Europe and the EU, and also a “reality” test with two legal system approaches, Italy and England & Wales, on how to handle online child sexual abuse material and, more specifically, the live-streaming of such abuses. On the basis of this observation, the main objective is to critically analyze the status quo of the existing framework in the area of online child sexual abuse and exploitation in order to find out how flexibly it can be applied to this specific crime, whether it can be applied at all, and how it can be improved in order to better respond to this new global reality. Based on all of this, I draw conclusions about the insufficiency of the existing framework to cover the crime of live-streaming of child abuse and plead for filling the legal lacunae by extending specific criminal provisions, ideally harmonized at an international level, specially made to tackle this crime. [less ▲]

Detailed reference viewed: 72 (6 UL)
Full Text
See detailStress - modulated bulk photovoltaic effect in polar oxide crystals
Nadupalli, Shankari UL

Doctoral thesis (2019)

Light-induced phenomena in ferroelectric materials have been exploited for decades for optoelectronic applications. Homogeneous illumination of a non-centrosymmetric ferroelectric material creates anomalously high voltages, exceeding the value usually limited by its band gap. This phenomenon is called the bulk photovoltaic effect (BPVE). Lithium niobate is a prototypical material for the BPVE. The only limiting factor in lithium niobate is its low photo-current values, which can be improved by doping the crystal with donor metals. This study focuses primarily on light-induced processes in mono-domain lithium niobate single crystals doped with transition metal ions, particularly the influence of stress on the BPVE. The effect of stress on the BPVE is termed the piezo-photovoltaic effect (PPVE). This thesis is framed to systematically introduce the topics which cause, influence and aid in understanding the PPVE. Topics such as symmetry in crystals, their physical properties and the intrinsic bulk photovoltaic effect (BPVE) are introduced, and the structure, defects and light-induced charge transport in donor-doped lithium niobate, as well as the reason behind the appearance of the BPVE, are discussed. The techniques and experimental arrangements used in this work are detailed in this thesis. Direct evidence of the BPVE and the influence of stress is shown in the results. Transition-metal-doped lithium niobate crystals are oriented via X-ray diffraction (XRD) and a basic chemical characterization is undertaken using secondary ion mass spectrometry (SIMS) to identify dopant elements. Absorption spectroscopy in the UV/VIS/NIR range revealed windows in the spectra indicating photo-excitation of the donor-doped ions. The absorption lines show that a shift in the fundamental band edge occurs in lithium niobate for different dopant elements. Electron paramagnetic resonance (EPR) spectrometry is performed on the samples to confirm the location of the dopant ion in the crystal matrix by indicating its symmetry. The difference in the dopant concentration and the change in the oxidation state of the dopant ion under light illumination are obtained from the EPR study. Direct measurements of the bulk photovoltaic current density in iron-doped lithium niobate single crystals are performed at increasing intensities at different wavelengths to determine the BPV coefficients. This study provides a quantitative analysis of the different components of the BPV tensor. The highest BPV component, measured along the polar axis with extraordinary light polarisation, is observed when iron-doped lithium niobate is illuminated with light at a wavelength of 450 nm. The obtained BPV tensor components are corroborated by the influence of the structural environment and the dipole interactions on the charge transport mechanism of the BPVE. The charge transport mechanism and the obtained values of the BPV tensor components are justified and discussed on the basis of the polaronic charge transport phenomena described in the literature. The influence of stress on the BPVE is measured using a custom-designed set-up. The PPV components in lithium niobate are experimentally investigated for stress levels in the 1 MPa - 10 MPa range. A detailed discussion of the experimental observations is given in this report. The prime discovery of this thesis is the intrinsic character of the piezo-photovoltaic effect (PPVE), where an increase in the light-induced current is observed when the crystal is subjected to uniaxial compressive stress. The Young's modulus of lithium niobate is 202 GPa, so applying 10 MPa of compressive stress translates to strain levels of just 50 ppm. 10 MPa of compressive stress along the polar axis of the crystal increased the short-circuit photo-current by 73%. When stress is applied perpendicular to the polar axis, an increase of about 370% in short-circuit photocurrent was observed with just 50 ppm of strain, which is drastic for such moderate stress levels. This study demonstrates the viability of strain tuning to increase the PV properties of crystalline solar cells. Extrapolating the observed effect, the PPVE is envisioned as a phenomenon which could be exploited in other polar oxide ceramics and thin films, where large photovoltaic energy generation could be made possible, surpassing the existing limits. [less ▲]
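The quoted strain level follows directly from Hooke's law with the stated Young's modulus; as a short check using only the figures given above:

```latex
\varepsilon = \frac{\sigma}{E} = \frac{10\,\mathrm{MPa}}{202\,\mathrm{GPa}}
            \approx 4.95\times 10^{-5} \approx 50\,\mathrm{ppm},
\qquad
\frac{\Delta j_{\mathrm{sc}}}{j_{\mathrm{sc}}} \approx +73\,\% \;\; (\sigma \parallel \text{polar axis}),
\qquad
\frac{\Delta j_{\mathrm{sc}}}{j_{\mathrm{sc}}} \approx +370\,\% \;\; (\sigma \perp \text{polar axis}).
```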

Detailed reference viewed: 71 (6 UL)
Full Text
See detailTiming-aware Model Based Design with Application to Automotive Embedded Systems
Sundharam, Sakthivel Manikandan UL

Doctoral thesis (2019)

Cyber-Physical Systems (CPS) are systems piloting physical processes which have become an integral part of our daily life. We use them for many purposes: transportation (cars, planes, trains), space (satellites, spacecraft), medical applications, robotics, energy management, home appliances, manufacturing, and many other applications. Model-Driven Engineering (MDE) is widely applied in industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS), for instance the control software for automotive engines, which is deployed on modern multi-core hardware architectures. Such an engine control system consists of different sub-systems, ranging from an air system to the exhaust system. Each of these sub-systems, again, consists of software functions which are necessary to read from the sensors and write to the actuators. In this setting, model-based design (MBD) provides indispensable means to model and implement the desired functionality, and to validate the functional, the non-functional, and in particular the real-time behavior against the requirements. Current industrial practice in model-based development relies entirely on generative MBD, i.e., on code generation to bridge the gap between model and implementation. An alternative approach, although not yet used in the automotive domain, is model interpretation. In this thesis, in place of code generation, we investigate the applicability of model interpretation to automotive software development with the help of a control function design, and we present the benefits compared to the existing code-generation practice. The control laws of these software functions typically assume deterministic sampling rates and constant delays from input to output. However, on the target processors, the execution times of the software will depend on many factors, such as the amount of interference from other tasks, resulting in varying delays from sensing to actuating. Approaches in the literature support the simulation of control algorithms, but not their actual implementation. Further in the thesis, we present the CPAL model interpretation engine running in a co-simulation environment to study control performance while taking the run-time delays into account. The main advantage is that the model developed for simulation can be re-used on the target processors. Additionally, the simulations performed at the design phase can be made realistic in the timing dimension through the use of timing annotations inserted in the models to capture the delays on the actual hardware. Natively available introspection features facilitate the implementation of self-adaptive and fault-tolerance strategies to mitigate and compensate for the run-time latencies. Experiments on controller tasks with injected delays show that our approach is on a par with the existing techniques with respect to simulation. We then discuss the main benefits of our development approach, which are the support for rapid prototyping and the re-use of the simulation model at run-time, resulting in productivity and quality gains. As processing power is increasingly available with today's hardware, concerns other than execution performance, such as simplicity and predictability, become important factors towards the functional safety objective. Motivated by predictable execution behavior, we revisited FIFO scheduling with offsets and strictly periodic task activations. The execution order in this case is uniquely and statically determined. This means that whatever the execution platform and the task execution times, be it in simulation mode in a design environment or at run-time on the actual target, the task execution order will remain identical. Beyond the task execution order, the reading and writing events that can be observed outside the tasks occur in the same order. This property, leveraged by the design flow of our MBD environment CPAL, provides a form of timing-equivalent behavior between the development phase and the run-time phase, which eases the implementation of the application and the verification of its timing correctness. Thus, the proposed development environment makes it possible for non-experts to quickly model and deploy complex embedded systems without having to master real-time scheduling and resource-sharing protocols. In practice, the design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. While a designer from one discipline focuses on the core aspects of the field, he or she tends to neglect, or consider less important, the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this thesis, we present a model-driven co-design framework based on a timing tolerance contract to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation; software design, verified by a novel schedulability analysis; and run-time verification by monitoring the execution of the models on the target. This framework builds on the CPAL design environment mentioned earlier, which enforces timing-realistic behavior in simulation through timing and scheduling annotations. Through various case studies, we show that our tool enables not only automating the analysis process at design time but also enhancing the design process by systematically combining models and analyses. [less ▲]
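The determinism argument can be illustrated in a few lines: for strictly periodic tasks with fixed offsets, the activation sequence over a hyperperiod, and hence the FIFO execution order, is fixed before any execution time is known. The task names, periods and offsets below are invented for illustration.

```python
from math import lcm

# Hypothetical task set: (name, offset_ms, period_ms).
tasks = [("sense", 0, 10), ("control", 2, 10), ("log", 5, 20)]

hyperperiod = lcm(*(p for _, _, p in tasks))

# Enumerate all activations in one hyperperiod; FIFO executes in activation order,
# with a fixed tie-break (declaration order), regardless of execution times.
activations = sorted(
    (offset + k * period, idx, name)
    for idx, (name, offset, period) in enumerate(tasks)
    for k in range(hyperperiod // period)
)

print("FIFO execution order over one hyperperiod of", hyperperiod, "ms:")
for t, _, name in activations:
    print(f"  t={t:>3} ms  ->  {name}")
```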

Detailed reference viewed: 32 (12 UL)
Full Text
See detailThe cause of interface recombination in Cu-rich CIS thin film solar cells
Elanzeery, Hossam UL

Doctoral thesis (2019)

Cu(In,Ga)Se2 (CIGS) thin film solar cells are considered one of the most promising thin film technologies, reaching efficiencies beyond 22 %. The record efficiencies for CIGS thin film solar cells are based on CIGS absorbers grown under Cu-deficient conditions. CIGS absorbers grown under Cu excess (Cu-rich) show larger grains and better transport properties than CIGS absorbers grown under Cu-deficient (Cu-poor) conditions. However, solar cells based on Cu-rich CIGS absorbers suffer from significantly lower efficiencies than Cu-poor CIGS solar cells. The lower efficiency of Cu-rich CIGS solar cells compared to Cu-poor CIGS cells is attributed to a lower open circuit voltage (VOC) in Cu-rich CIGS cells compared to Cu-poor CIGS cells. The reason behind the lower VOC values was investigated and was attributed to recombination losses at the absorber/buffer interface and to a higher doping of Cu-rich CIGS cells compared to Cu-poor CIGS cells, but the complete picture behind the origin of these interface recombination losses and the high doping in Cu-rich CIGS cells was not fully understood. The work of this thesis explains why Cu-rich CIGS cells suffer from interface recombination losses, higher doping and lower efficiencies. This explanation is divided into three parts: The first part characterizes Cu-rich and Cu-poor solar cells of the ternary CIS and the quaternary CIGS. This part confirms that Cu-rich CI(G)S solar cells suffer from lower efficiencies, lower VOC, interface recombination losses and higher doping compared to Cu-poor CI(G)S solar cells. Moreover, a 200±20 meV defect was observed for Cu-rich CIS cells. The second part introduces different post-deposition treatments (PDTs) for Cu-rich CI(G)S cells. An ex-situ KF, an in-situ KF and a Se-only PDT were applied to Cu-rich CIS cells. All three treatments succeeded in improving the VOC, reducing the interface recombination losses, decreasing the doping and passivating the 200±20 meV defect that has been identified as a Se-related defect in Cu-rich CIS solar cells. A Ga-Se PDT was introduced for Cu-rich CIGS solar cells and successfully improved the VOC, reduced the interface recombination losses and decreased the doping of Cu-rich CIGS solar cells. The third part analyses the changes observed on Cu-rich CI(G)S cells before and after the PDTs. Based on these observations, it was concluded that the origin behind both the interface recombination losses and the high doping of Cu-rich CI(G)S cells is a Se-related acceptor defect (detected by admittance measurements for Cu-rich CIS and speculated for Cu-rich CIGS). The passivation of this defect reduces the recombination losses at the absorber/buffer interface, decreases the doping, improves the VOC and consequently leads to an increase in the efficiency of Cu-rich CI(G)S solar cells. Moreover, this part shows that the Se-related defect is formed as a result of the strong etching step that is mandatory for Cu-rich CI(G)S absorbers in order to remove conductive copper selenide secondary phases. Applying the same strong etching conditions to Cu-poor CIS absorbers leads to the formation of the Se-related defect. After understanding that the Se-related defect is formed as a result of the strong etching conditions and that it can be passivated with PDTs that are rich in Se, an alternative means of passivating this defect without PDTs was proposed. The Se-related defect was shown to be passivated using buffer layers with a high enough thiourea content (the source of sulphur) and without any PDTs, leading to a reduction of the interface recombination losses, a decrease of the doping, an increase of the VOC and an increase of the efficiency of Cu-rich CIS cells. To conclude, the reason behind the interface recombination losses and the high doping in Cu-rich CI(G)S solar cells is a Se-related acceptor defect originating from etching the absorbers under strong etching conditions. This defect can be passivated with a sufficient supply of chalcogen, either through PDTs (enough selenium) or through buffer layers (enough sulphur). [less ▲]

Detailed reference viewed: 122 (8 UL)
Full Text
See detailPOST-PROCESS AND IN-PROCESS ANALYSIS METHODS FOR LASER WELDING OF ALUMINUM-COPPER
Schmalen, Pascal Guy UL

Doctoral thesis (2019)

Copper is well known for excellent conductivity and corrosion resistance, whereas aluminium is known for low density and great formability. The laser joining of Al and Cu combines those properties, e.g. in the manufacturing of solar absorbers and wiring harnesses. Furthermore, the joining of Al and Cu is an enabler of new products, e.g. the manufacturing of battery modules, where a reliable joining process for numerous Al-Cu connections, the tabs of the Li-ion batteries, is needed. The joining of Al and Cu is considered complex due to the formation of intermetallic compounds inside the joint, which cause a brittle joint with increased resistance. The focus of this work is the improvement and extension of the laser welding process for Al and Cu. It was found that the determination of suitable process parameters is one of the major restrictions in the application of the process. Hence, this work deals with the analysis of methods to determine process parameters, which was performed in three main parts:
o Comprehensive process understanding is one essential part of improving the joint quality. The formation of the intermetallics inside the joint is a process which takes place within a short time frame of a few μs and on a scale of a few hundred μm. Metallographic studies were performed to gain insights into the local formation of specific intermetallic compounds. Furthermore, intensive research on etchants was combined with the information gained from a micro-XRD analysis in order to identify and localize the most critical phases.
o The post-process quality measurements are the essential part for quantifying and evaluating the properties of the joint and thus for adapting process parameters. Yet, the quality measurements vary across the recent literature. In this work, mechanical, electrical and optical methods were combined with hardness measurements and metallographic studies. The methods were enhanced, and it was found that especially the electrical methods have great potential to assist the determination of process parameters.
o In-process analysis methods were studied to identify the potential of process monitoring of Al-Cu weld seams. It was found that the optical analysis of the vapour plume, which is formed during keyhole welding, can be used to estimate the joint quality. Chromatic filters were used to analyse specific process radiation of Al and Cu, which contains information about the current intermixture of the joint. The investigations were carried out with a spectroscope.
In conclusion, it will be shown that the present work assists the choice of suitable process parameters, and thus supports the future implementation of laser technology for joining Al and Cu. [less ▲]

Detailed reference viewed: 45 (13 UL)
Full Text
See detailMOBILITY ANALYSIS AND PROFILING FOR SMART MOBILITY SERVICES: A BIG DATA DRIVEN APPROACH. An Integration of Data Science and Travel Behaviour Analytics
Toader, Bogdan UL

Doctoral thesis (2019)

Smart mobility has proved to be an important but challenging component of the smart cities paradigm. Increased urbanization and the advent of the sharing economy require a complete digitalisation of the way travellers interact with mobility services. New sharing mobility services and smart transportation models are emerging as partial solutions for solving some traffic problems, improving resource efficiency and reducing the environmental impact. The high connectivity between travellers and the sharing services generates enormous quantities of data which can reveal valuable knowledge and help in understanding complex travel behaviour. Advances in data science, embedded computing, sensing systems, and artificial intelligence technologies make the development of a new generation of intelligent recommendation systems possible. These systems have the potential to act as intelligent transportation advisors that can offer recommendations for an efficient usage of the sharing services and influence travel behaviour towards a more sustainable mobility. However, their methodological and technological requirements will far exceed the capabilities of today's smart mobility systems. This dissertation presents a new data-driven approach for mobility analysis and travel behaviour profiling for smart mobility services. The main objective of this thesis is to investigate how the latest technologies from data science can contribute to the development of the next generation of mobility recommendation systems. Therefore, the main contribution of this thesis is the development of new methodologies and tools for mobility analysis that aim at combining the domain of transportation engineering with the domain of data science. The addressed challenges are derived from specific open issues and problems in the current state of the art of the smart mobility domain. First, an intelligent recommendation system for sharing services needs a general metric which can assess whether a group of users is compatible for specific sharing solutions. For this problem, this thesis presents a data-driven indicator for collaborative mobility that can give an indication of whether it is economically beneficial for a group of users to share a ride, a vehicle or a parking space. Secondly, the complex sharing mobility scenarios involve a high number of users and big data that must be handled by capable modelling frameworks and data analytic platforms. To tackle this problem, a suitable meta-model for the transportation domain is created, using state-of-the-art multi-dimensional graph data models, technologies and analytic frameworks. Thirdly, the sharing mobility paradigm needs a user-centric approach for the dynamic extraction of travel habits and mobility patterns. To address this challenge, this dissertation proposes a method capable of dynamically profiling users and the visited locations in order to extract knowledge (mobility patterns and habits) from raw data that can be used for the implementation of shared mobility solutions. Fourthly, the entire process of data collection and knowledge extraction should be done with nearly no interaction from the user side. To tackle this issue, this thesis presents practical applications such as the classification of visited locations and the learning of users' travel habits and mobility patterns using historical and external contextual data. [less ▲]
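As one concrete illustration of the location-profiling step, the sketch below clusters raw GPS fixes with DBSCAN to recover frequently visited places. The coordinates, the eps value (roughly 100 m expressed in degrees) and the choice of DBSCAN itself are assumptions made for illustration, not the method developed in the thesis.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical GPS fixes (latitude, longitude) collected from one user.
points = np.array([
    [49.6116, 6.1319], [49.6118, 6.1321], [49.6117, 6.1320],   # around "home"
    [49.5042, 5.9481], [49.5044, 5.9483], [49.5041, 5.9480],   # around "work"
    [49.7500, 6.6000],                                          # a one-off trip
])

# eps of ~0.001 degrees corresponds very roughly to ~100 m at these latitudes.
labels = DBSCAN(eps=0.001, min_samples=3).fit_predict(points)

for label in sorted(set(labels)):
    members = points[labels == label]
    if label == -1:
        print(f"noise / rarely visited: {len(members)} fix(es)")
    else:
        centroid = members.mean(axis=0)
        print(f"visited location {label}: {len(members)} fixes, centroid {centroid.round(4)}")
```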

Detailed reference viewed: 14 (3 UL)
Full Text
See detailMixed Frequency Single Receiver Architectures and Calibration Procedures for Linear and Non-Linear Vector Network Analysis
Harzheim, Thomas UL

Doctoral thesis (2019)

In this thesis several new advancements in the field of linear and non-linear vector network analysis are presented. Three distinct but interconnected topics are addressed in this work: First, the concept and feasibility of the single receiver vector network analyzer (VNA) architecture and the implications for existing error models are analyzed, starting with the one-port reflectometer, through two-port unidirectional 5-term, bidirectional 10-term and finally 7-term error models. New VNA error models, which are able to capture the effects of the leaky RF receiver input wave selector switch, are derived, along with new calibration and correction procedures for this architecture. Modifications to the existing test-set architectures are introduced to reduce the effects of the leaky RF receiver input wave selector switch and shorten the required measurement time in this VNA architecture. A purpose-built 275 MHz to 6000 MHz single receiver VNA system based upon commercial off-the-shelf components is presented and analyzed. Measurements carried out with this VNA system are used in conjunction with numerical test-set and VNA simulations to verify the efficacy of the new calibration and correction methods, as well as of different VNA test-set architectures, according to EURAMET standards and procedures. The second main topic of this thesis is the introduction of phase-repeatable synthesizers as a new calibration and correction phase reference standard for non-linear VNA measurements. Due to the high output power capability of this new phase reference standard, new non-linear test-set and measurement scenarios, such as the full non-linear two-port characterization of high-power solid-state amplifiers, become possible, which were out of reach before due to the low system signal-to-noise ratios provided by comb-generator based sources in this setup. The third and final topic of this thesis integrates the contents and achievements of the two previous topics to prove and verify the feasibility of VNA-based harmonic, i.e. non-linear, transponder-based stepped-FMCW radar systems operating directly in the frequency domain. A new stepped-FMCW theory based on mixed-frequency S-parameters is presented in conjunction with a phase-slope based ranging procedure which avoids time-domain transformation. A complete system analysis and modeling of the harmonic radar system, including the passive transponder tag, is provided. Numerous high-resolution measurements are presented and analyzed to verify the validity and accuracy of the non-linear harmonic radar equation, to evaluate illumination and harmonic return signal polarization based propagation effects in a multi-path indoor measurement scenario, and to demonstrate the performance of the harmonic radar system in severe clutter situations. [less ▲]
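To illustrate the phase-slope ranging idea in its simplest conventional (non-harmonic) form, the sketch below simulates the wrapped phase of a point target over a stepped frequency sweep and recovers the range from the slope of the unwrapped phase, without any time-domain transformation. The sweep parameters and the linear two-way phase model are illustrative assumptions; the mixed-frequency harmonic formulation developed in the thesis is more involved.

```python
import numpy as np

c = 3e8            # m/s
true_range = 12.0  # m, hypothetical target distance

# Stepped frequency sweep: step chosen small enough that the phase advances < pi per step.
freqs = 2.4e9 + np.arange(101) * 1e6   # 2.4 GHz start, 1 MHz steps

# Two-way phase of the return, wrapped to (-pi, pi] as a receiver would measure it.
phase = np.angle(np.exp(-1j * 2 * np.pi * freqs * 2 * true_range / c))

# Range from the slope of the unwrapped phase: phi(f) = -4*pi*R*f/c.
slope = np.polyfit(freqs, np.unwrap(phase), 1)[0]
estimated_range = -slope * c / (4 * np.pi)
print(f"estimated range: {estimated_range:.3f} m")
```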

Detailed reference viewed: 57 (5 UL)
Full Text
See detailOPTICAL ANALYSIS OF EFFICIENCY LIMITATIONS OF CU(IN,GA)SE2 GROWN UNDER COPPER EXCESS
Babbe, Finn UL

Doctoral thesis (2019)

Solar cells made from the compound semiconductor Cu(In,Ga)Se2 reach efficiencies of 22.9 % and are thus even better than multicrystalline silicon solar cells. All world records have been achieved using absorber layers with an overall copper-deficient composition, but Cu-rich grown samples have multiple favourable properties. However, losses in the open circuit voltage in particular limit the device performance. Within this work these efficiency limitations of chalcopyrites grown with copper excess are investigated. The work has been divided into four chapters addressing different scientific questions. (i) Do alkali treatments improve Cu-rich absorber layers? The alkali treatment, which led to the recent improvements of the efficiency world record, is adapted to CuInSe2 samples with Cu-rich composition. The treatment leads to an improvement of the VOC which originates roughly equally from an improvement of the bulk and the removal of a defect close to the interface. The treatment also improves the VOC of Cu-poor samples. In both cases, the treatment increases the fill factor (FF) and leads to a reduction of the copper content at the surface. (ii) Is the VOC limited by deep defects in Cu-rich Cu(In,Ga)Se2? A deep defect, which likely limits the VOC, is observed in photoluminescence (PL) measurements independent of a surface treatment. The defect level is proposed to originate from the second charge transition of the CuIn antisite defect (CuIn(-1/-2)). During the investigation a peak at 0.9 eV is also detected and attributed to a DA transition involving a third acceptor situated (135 ± 10) meV above the valence band. This A3 acceptor is proposed to originate from the indium vacancy (VIn). Furthermore, the defect was detected in admittance measurements and in Cu(In,Ga)Se2 samples with low gallium content. (iii) Is the diode factor intrinsically higher in Cu-rich chalcopyrites? Cu-rich solar cells exhibit larger diode ideality factors, which reduce the FF. A direct link between the power law exponent from intensity-dependent PL measurements of absorbers and the diode factor of devices is derived and verified using Cu-poor Cu(In,Ga)Se2 samples. This optical diode factor is the same in Cu-rich and Cu-poor samples. (iv) Is the quasi Fermi level splitting (qFLs) of Cu-rich Cu(In,Ga)Se2 absorber layers comparable to Cu-poor samples? Measuring the qFLs of passivated Cu-rich and Cu-poor Cu(In,Ga)Se2 samples, on average a 120 meV lower splitting is determined for Cu-rich samples. This difference increases with gallium content and is likely linked to a defect moving deeper into the bandgap, possibly related to the second charge transition of the CuIn antisite defect. Overall, samples with Cu-rich composition are not limited by the diode factor. However, a deep defect band causes recombination, lowering the qFLs and thus the VOC. This defect is not removed by alkali treatments. A key component in improving Cu-rich solar cells in the future, especially Cu(In,Ga)Se2, will be to remove or passivate this defect level. [less ▲]
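The power-law exponent mentioned in (iii) is, in practice, the slope of a log-log fit of PL yield versus excitation intensity. The sketch below shows only that fitting step; the synthetic data and the assumed exponent are placeholders.

```python
import numpy as np

# Hypothetical intensity-dependent PL measurement: excitation in suns, PL yield in a.u.
excitation = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
k_true = 1.3                                   # assumed power-law exponent
noise = 1 + 0.02 * np.random.default_rng(1).normal(size=excitation.size)
pl_yield = 5.0 * excitation ** k_true * noise

# I_PL ~ Phi^k  =>  log(I_PL) = k*log(Phi) + const; k is the exponent referred to
# above as the optical diode factor.
k_fit, _ = np.polyfit(np.log(excitation), np.log(pl_yield), 1)
print(f"fitted power-law exponent: {k_fit:.2f}")
```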

Detailed reference viewed: 130 (22 UL)
Full Text
See detailConfined in a Fiber: Realizing Flexible Gas Sensors by Electrospinning Liquid Crystals
Reyes, Catherine UL

Doctoral thesis (2019)

Liquid crystalline phases (LCs) readily exhibit optical responsivity to small fluctuations in their immediate environment. By encapsulating LC phase-forming compounds within polymer fibers through the electrospinning process (a fiber spinning method known for being a fast way of forming chemically diverse non-woven mats), it is possible to create functionalized LC-polymer fiber mats that are responsive as well. As these fiber mats can be handled macroscopically, a user can observe the responses of the mats macroscopically without the need for bulky electronics. This thesis presents several non-woven fiber mats that were coaxially electrospun to contain LC within the cores of their individual polymer fibers, for use as novel volatile organic compound (VOC) sensors. The mats are flexible, lightweight, and shown to respond both macroscopically and microscopically to toluene gas. Such gas-responsive mats may, for example, be incorporated into garments for visually alerting the wearer when they are exposed to harmful levels of VOCs. Additionally, the interaction and re-prioritization of several electrospinning variables (from the chemistry-based to the processing-based) for forming the LC mats are also discussed. The balance of these variables determines which of a wide range of phenomena occur during fiber formation. For instance, unexpected phase separation between the polymer sheath solution and the LC core can mean the difference between forming fully dried fibrous mats and wet/meshed films. A chapter is devoted to discussing the impact that solvent miscibility with an LC can have on fiber production, including also the effect that water can have when condensed into the coaxial electrospinning jet. The shapes that the polymer fiber sheaths adopt (beaded versus non-beaded), as well as the continuity of the LC core, influence the visual appearance of the mats. These optical properties, in turn, influence the mats' responsivity to gases and whether the responses can be macroscopically observed with or without additional polarizers. In two types of gas sensing experiments (mats exposed to gas while contained in a cell, and mats exposed to gas diffused in ambient air without containment), we see that not all fibers within a mat respond at the same time. Moreover, different segments of the fibers within the same non-woven mat also show slightly different rates of response due to variations in fiber thickness, LC content, and whether the fiber cores had variations in LC filling (i.e. LC director twists and gaps). [less ▲]

Detailed reference viewed: 137 (25 UL)
Full Text
See detailDEUTSCHE SOLDATENGRÄBER DES ZWEITEN WELTKRIEGES ZWISCHEN HELDENVERHERRLICHUNG UND ZEICHEN DER VERSÖHNUNG – KULTURWISSENSCHAFTLICH-HISTORISCHE FALLSTUDIEN ZUR ENTWICKLUNG DES UMGANGES MIT DEM KRIEGSTOD
Janz, Nina UL

Doctoral thesis (2019)

Ten cultural-historical case studies investigate how deaths in war were dealt with, based on soldiers' graves of the Second World War. In this dissertation, the resting places of the fallen German soldiers offer a unique perspective on the evaluation of death during a violent conflict and in the post-war period. The examination frame extends from 1939 to the present and follows the reception and importance of the graves and the fallen in the military, politics and society. Some individual chapters of this thesis have already been published or are intended for publication. Methodologically, the study consists of empirical work, such as the analysis of unpublished archival sources, as well as hermeneutical tools in the form of interviews, surveys, local documentation, and field studies of burial sites and exhumations. Two terms – hero glorification and signs of reconciliation – illustrate the differences in how the meaning of the graves and their dead soldiers was perceived. This difference highlights the change in values and meaning that the graves had to face. In the Second World War, the Wehrmacht responded to the nearly five million German casualties with mythical hero stories, propaganda and parades, but also with an elaborate administration system and rules concerning the dead and their graves. The instructions concerning a soldier's death ranged from details about the material and inscription of the gravestone to the identification of unknown dead. The graves were given a structure and organization in accordance with a modern military grave system. The entitlement to an individual grave and the registration and notification of the relatives were part of the Wehrmacht's procedures. The designation of the dead as heroes and of their resting places as heroes' graves (Heldengräber) and heroes' groves (Heldenhaine) shows the attempt to integrate them into the ideology and propaganda of the National Socialist regime. However, the management of the graves, as well as the cult of heroes, was bound to fail due to the reality of war – i.e., the number of casualties, the chaotic conditions at the front and the defeat of the Germans. The hero glorification could not be maintained after the end of the war. In post-war society, an attempt was made to defuse the symbolism of military death and put it into a neutral and harmless context removed from National Socialism. The continuation of the graves' management, the search for unknown resting places and the construction of cemeteries could no longer be operated by the military. Under the slogan of reconciliation and as an expression of peace and understanding, access to the Wehrmacht graves was obtained by the Volksbund Deutsche Kriegsgräberfürsorge e.V., first in Western Europe and, after 1989, in Eastern Europe. The Volksbund builds and cares for cemeteries and exhumes the remains to the present day. This effort is still being made by the Germans today and illustrates the importance of war graves care in a modern international context. The studies show how mutable and dependent the meaning and symbolism of the death of a soldier are within different political and social constructs and epochs. In these studies, the scope of soldiers' graves as a research topic is clarified and further perspectives for questions and investigation contexts are shown. The investigation of German soldiers' graves of the Second World War in terms of their relevance is of particular importance. The fact that great efforts are still being made to find and maintain the resting places of dead soldiers more than 70 years after the war demonstrates the political dimension of the war dead and their graves. Above all, the distinctiveness of these objects as resting places for German soldiers makes them an interesting and even controversial topic for science, politics and society, not only in Germany but also in other European countries. [less ▲]

Detailed reference viewed: 50 (1 UL)
Full Text
See detailInvestigating trust in a multilingual theatre project: Potentialities for a humanising pedagogy
Weyer, Dany UL

Doctoral thesis (2019)

Education plays a vital role in shaping social realities by promoting dialogue, solidarity, mutual understanding, and positive social interactions. However, some pedagogical approaches are believed not to shoulder the responsibility to counter current social, economic, and political forces in Europe and beyond that present challenges in terms of social cohesion and ways of living together. This study contributes to recent debates concerning a change of dominant school practices by recognising learning and teaching as collaborative processes between teachers and students, and trust as a central element in education. Despite the interest in and positive appraisal of trust in education, little attention has been paid to concrete teaching practices and strategies on how to implement trust in learning and teaching. A case study of a multilingual theatre project of a primary school class and a video-ethnographic approach allowed exploring the details of classroom practices, (inter-)actions, and activities. This research set out to explore four questions: (a) What are “signs of trust” in an educational context? (b) How and in what ways can a teacher build, maintain, or strengthen trust? (c) How and in what ways can “signs of trust” shape interactions in the classroom? (d) How can “signs of trust” be analysed? Drawing on more than 80 hours of video-recorded participant observations and interview data, the results of this investigation show that the classroom teacher continuously and consistently maintained a work environment based on six attributes of trust identified in the literature: vulnerability, benevolence, reliability, competence, honesty, and openness. Most importantly, she valued and promoted responsibility, autonomy, collaboration, and peer support. The teacher’s verbal and non-verbal trustworthy and trusting behaviour is then interpreted as the driving force behind the pupils’ engagement as active, competent, and reliable partners in all aspects of the theatre project. In fact, the pupils signalled ownership of their learning, proactively and independently engaged with the curriculum, and oriented positively towards each other’s relationships and competences. Despite the exploratory nature and small sample of participants, the findings of this study highlight that education imbued with trust offers opportunities for growth for both teachers and students. Moreover, the data suggest that the achievement and maintenance of trust can be seen as a collaborative effort involving all members of the classroom community, facilitated by a myriad of meaning-making resources (verbal, non-verbal, with objects, even a simple look in the eye or a smile). If the debate about the value of trust for all learners is to be moved forward, a better understanding of the wider impacts on personal and social lives needs to be gained. [less ▲]

Novel Insight into the Role of the S100A8/A9 Protein Complex in the Regulation of Neutrophil Functions
Jung, Nicolas UL

Doctoral thesis (2019)

S100A8 and S100A9 are members of the S100 family of cytoplasmic EF-hand calcium-binding proteins and are abundantly expressed in the cytosol of neutrophils. Mostly found in heterodimeric form, S100A8/A9 have various intracellular and extracellular functions; they act as alarmins, amplifying the host inflammatory response. Our previous study showed that the intracellular activity of S100A8/A9 is mediated by the phosphorylation of S100A9. Based on these results, we further investigated the importance of this post-translational modification for the extracellular activity of the protein complex and its impact on the inflammatory functions of neutrophils. First, we analyzed the phosphorylation state of secreted S100A8/A9 and the mechanism by which the protein complex is released into the extracellular space. Our results show that S100A9 is secreted in a phosphorylated form within the S100A8/A9 protein complex and that this release is highly correlated with the process of NETosis. Next, we investigated the inflammatory response of neutrophil-like dHL-60 cells when stimulated with the phosphorylated and non-phosphorylated forms of S100A8/A9. Our results indicate that only the phosphorylated form of S100A8/A9 increases the expression and secretion of various cytokines (e.g., TNFα, CCL4, CXCL8). Using receptor-neutralizing antibodies, we then determined the receptor and signaling pathways associated with S100A8/A9-P-induced cytokine secretion. The reduction in expression levels of the previously mentioned cytokines after TLR4 blocking indicates that S100A8/A9-P-induced signaling is mediated in part by TLR4. Finally, we investigated the post-transcriptional response induced by S100A8/A9-P stimulation. Using miRNA-sequencing of S100A8/A9-P-stimulated dHL-60 cells, we identified an upregulation of miR-146a-5p, miR-146b-5p and miR-155-5p expression. Since these three microRNAs have previously been described to regulate TLR4 signaling at various levels, we investigated their influence on the inflammatory response mediated by S100A8/A9-P. Stable overexpression of miR-146a-5p and miR-155-5p in dHL-60 cells resulted in reduced S100A8/A9-P-mediated secretion of cytokines through the inhibition of key players in the TLR4 signaling pathways. To summarize, our results give new insight into the pro-inflammatory functions induced by S100A8/A9-P in neutrophils and reveal the potential of the phosphorylated protein complex as a major regulator of inflammation in chronic inflammatory diseases.

ADAPTIVE WAVEFORM DESIGN FRAMEWORK FOR MIMO RADAR UNDER PRACTICAL CONSTRAINTS
Hammes, Christian UL

Doctoral thesis (2019)

The recent developments in radar technology - powerful signal processors, increased modulation bandwidth and access to higher carrier frequencies - offer enhanced flexibility in waveform design and receiver processing. This provides additional degrees of freedom in the signal design and processing, thereby offering additional avenues to implement interference mitigation. The radar environment is dynamic in general, with the inhomogeneous interference sources changing rapidly both in space and time. In this context, an adaptive waveform and adaptive receiver design for a Multiple-Input-Multiple-Output (MIMO) radar system is a promising way forward towards dynamic interference mitigation. Even though the technology offers flexibility, the need to commercialize radar elements imposes certain constraints on the platform to ensure commercial viability. In this context, the transmitted waveform has to satisfy practical design constraints imposed by the hardware, including discrete phase modulation and a limited number of processing chains. These, coupled with the dynamic scenarios, warrant rapid signal adaptation with enhanced performance while satisfying the design constraints. Motivated by the aforementioned requirements, the thesis proposes a general framework for MIMO radar signal adaptation under practical design constraints. The transmit antennas are restricted to operate in a multiplex mode, where a smaller number of processing chains is multiplexed across an arbitrary number of transmit antennas. Each of these chains, also referred to as channels, has the capability to modulate the phase of a traditional radar pulse in discrete steps. Further, the modulation is assumed to be in the slow-time domain (inter-pulse); such a phase modulation results in benign requirements on the platform. Furthermore, the antennas are assumed to be mounted uniformly in a way that the virtual MIMO paradigm for maximum angular resolution is satisfied. The slow-time modulation naturally results in an angle-Doppler coupling; this issue is addressed by phase center motion (PCM) techniques, where nonlinear and random PCM techniques for mitigating angle-Doppler coupling are proposed. While the PCM techniques provide orthogonal signals, a transmit beamforming approach is also considered to exploit the salient features of MIMO and phased array radars. Towards this, an approach based on block circulant decomposition for the slow-time modulation is proposed to generate a particular beam shape while minimizing the cross-correlation between transmitted signals, such that the virtual MIMO paradigm is satisfied. The thesis formulates the radiation pattern design as a dictionary-based convex optimization and proposes closed-form signal design solutions for particular configurations of channels, discrete phase stages and transmit antenna elements. The beampattern design is then elegantly combined with the PCM approach to reduce Doppler ambiguity while suppressing angle-Doppler coupling. The proposed waveform design methodology is shown to be amenable to fast adaptation. Further, the adaptive waveform design is fused with state-of-the-art adaptive receiver techniques to conceive a novel adaptive MIMO radar system under practical constraints in this thesis.
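
To make the angle-Doppler coupling mentioned above concrete, the following schematic slow-time signal model may help; the notation and the simplifications are ours, not the thesis's, and serve only as an illustration of why slow-time codes and Doppler interact.

```latex
% Schematic slow-time model: K transmit channels apply discrete phase codes \phi_k[m]
% over pulses m = 0,...,M-1; d_k is the position of transmit element k, T the pulse
% repetition interval, f_D the target Doppler and \theta its angle.
\[
  y[m] \;\propto\; \sum_{k=1}^{K}
      e^{j\phi_k[m]}\,
      e^{j\frac{2\pi}{\lambda} d_k \sin\theta}\,
      e^{j 2\pi f_D m T},
  \qquad m = 0,\dots,M-1 .
\]
% Both the code phase \phi_k[m] and the Doppler phase evolve over slow time m, which is the
% source of the angle-Doppler coupling; PCM-style code design aims to break this coupling.
% The transmit beampattern is governed by the covariance of the code vectors s[m]:
\[
  P(\theta) \;=\; \mathbf{a}^{H}(\theta)\,\mathbf{R}\,\mathbf{a}(\theta),
  \qquad
  \mathbf{R} \;=\; \frac{1}{M}\sum_{m=0}^{M-1} \mathbf{s}[m]\,\mathbf{s}^{H}[m].
\]
```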

FUNCTIONAL CHARACTERISATION OF THE A30P MUTATION IN ALPHA-SYNUCLEIN GENE IN A PATIENT-DERIVED CELLULAR MODEL OF PARKINSON’S DISEASE
Rodrigues Dos Santos, Bruno Filipe UL

Doctoral thesis (2019)

Our study aims to perform detailed phenotyping of the A30P alpha-synuclein familial case of PD, allowing us to identify underlying mechanisms of the disease that may translate into novel therapies. Parkinson’s disease (PD) is the second most common neurodegenerative disease. Approximately 20% of PD cases are known to have a genetic cause. Of these, mutations in SNCA, the gene encoding alpha-synuclein, are linked to an autosomal dominant inheritance of the disease. In 1998, our group discovered the second known point mutation within the SNCA gene, causing an A30P exchange in the peptide sequence. We generated the first patient-derived cellular model of the A30P alpha-synuclein mutation carrier by obtaining fibroblasts from an affected sibling of the index patient, an unaffected sibling of the patient, and an age- and gender-matched non-PD control. We reprogrammed these fibroblasts into induced pluripotent stem cells (iPSCs) and differentiated them into midbrain dopaminergic neurons. We obtained enriched cultures of 80% midbrain neurons (FoxA2+/Tuj1+), with approximately 12% dopaminergic (TH+), for which we observed electrophysiological activity and dopamine release. We detected a significant reduction of the protein levels of mitochondrial complexes II, IV, and V in the patient lines compared with the controls; additionally, we found a significant impairment of mitochondrial respiration and an increased susceptibility of the cells to oxidative stress. Gene-edited isogenic controls were generated to dissect mutation-specific effects. Furthermore, we investigated mitochondrial morphology and dynamics, and how these processes contribute to dopaminergic neurodegeneration. Additionally, we are implementing previously established readouts on our high-throughput automated screening platform that will allow us to identify FDA-approved compounds with the potential to be repurposed and used as PD treatments. We believe that detailed phenotyping of this A30P alpha-synuclein monogenic case may help to identify underlying mechanisms of the disease that may translate into novel therapies, which would also apply to the more common sporadic forms of PD.

MODELING HUMAN METABOLISM: A DYNAMIC MULTI-TISSUE APPROACH
Martins Conde, Patricia UL

Doctoral thesis (2019)

Despite significant advances in constraint-based modelling, a methodology for modelling dynamic multi-tissue models of human metabolism is still missing. Additionally, prior to analysing diseased models, it is important to develop a sound methodology, as it would not only enable us to capture the effects of metabolism-associated diseases, but would also allow us to recapitulate known physiological properties of healthy human metabolism. Therefore, a dynamic multi-tissue model using a new methodology was developed. The objective function comprises a set of complex functions that the multi-tissue model needs to perform. To demonstrate the capabilities of this new approach, different healthy and unhealthy conditions were simulated. In a first step, the effect of different healthy conditions was analysed (i.e., fasting, the ingestion of different meals, and exercising at various intensities and conditions), demonstrating the model’s capability to correctly predict metabolic changes occurring in energy-associated pathways. In a second step, biomarkers for a range of inborn errors of metabolism were predicted, and the predictions were shown to be in good agreement with previous data. Finally, after verifying the capability of the dynamic multi-tissue model to recapitulate known physiological aspects of human metabolism, this model was further integrated with a physiologically-based pharmacokinetic model of glucose metabolism previously developed by Schaller et al. (2013). Contrasting conditions, such as healthy and diabetic, were simulated using the multi-scale model during fasting and after an oral glucose tolerance test, and candidate drugs to treat type 2 diabetes mellitus were predicted. Five out of the 80 simulated drug targets were predicted as candidate anti-diabetic targets, and the majority of drugs known to inhibit the predicted drug targets have already been shown to have anti-diabetic effects. The developed approach can be applied to any metabolic disease and to any system where homeostasis plays an important role, or where a simple biomass optimization function is not applicable. Furthermore, the large amount of data collected for the multi-tissue model generation is of significant value for tissue constraint-based metabolic modellers who need data to constrain their models.
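
For orientation, constraint-based models of the kind described here build on the standard flux-balance linear programme below; this is a textbook formulation added for context, and the thesis's multi-tissue objective is explicitly richer than a single biomass term.

```latex
% Generic flux-balance analysis (FBA) problem: S is the stoichiometric matrix,
% v the vector of reaction fluxes, c the objective weights, and the bounds encode
% reaction reversibility and capacity constraints (steady-state assumption: S v = 0).
\[
  \max_{v}\; c^{\top} v
  \quad \text{subject to} \quad
  S\,v = 0, \qquad v_{\min} \le v \le v_{\max}.
\]
```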

From Information Theory Puzzles in Deletion Channels to Deniability in Quantum Cryptography
Atashpendar, Arash UL

Doctoral thesis (2019)

Research questions, originally rooted in quantum key exchange (QKE), have branched off into independent lines of inquiry ranging from information theory to fundamental physics. In a similar vein, the first part of this thesis is dedicated to information theory problems in deletion channels that arose in the context of QKE. From the output produced by a memoryless deletion channel with a uniformly random input of known length n, one obtains a posterior distribution on the channel input. The difference between the Shannon entropy of this distribution and that of the uniform prior measures the amount of information about the channel input which is conveyed by the output of length m. We first conjecture on the basis of experimental data that the entropy of the posterior is minimized by the constant strings 000..., 111... and maximized by the alternating strings 0101..., 1010.... Among other things, we derive analytic expressions for minimal entropy and propose alternative approaches for tackling the entropy extremization problem. We address a series of closely related combinatorial problems involving binary (sub/super)-sequences and prove the original minimal entropy conjecture for the special cases of single and double deletions using clustering techniques and a run-length encoding of strings. The entropy analysis culminates in a fundamental characterization of the extremal entropic cases in terms of the distribution of embeddings. We confirm the minimization conjecture in the asymptotic limit using results from hidden word statistics by showing how the analytic-combinatorial methods of Flajolet, Szpankowski and Vallée, relying on generating functions, can be applied to resolve the case of fixed output length and n → ∞. In the second part, we revisit the notion of deniability in QKE, a topic that remains largely unexplored. In a work by Donald Beaver it is argued that QKE protocols are not necessarily deniable due to an eavesdropping attack that limits key equivocation. We provide more insight into the nature of this attack and discuss how it extends to other prepare-and-measure QKE schemes such as QKE obtained from uncloneable encryption. We adopt the framework for quantum authenticated key exchange developed by Mosca et al. and extend it to introduce the notion of coercer-deniable QKE, formalized in terms of the indistinguishability of real and fake coercer views. We also elaborate on the differences between our model and the standard simulation-based definition of deniable key exchange in the classical setting. We establish a connection between the concept of covert communication and deniability by applying results from a work by Arrazola and Scarani on obtaining covert quantum communication and covert QKE to propose a simple construction for coercer-deniable QKE. We prove the deniability of this scheme via a reduction to the security of covert QKE. We relate deniability to fundamental concepts in quantum information theory and suggest a generic approach based on entanglement distillation for achieving information-theoretic deniability, followed by an analysis of other closely related results such as the relation between the impossibility of unconditionally secure quantum bit commitment and deniability. Finally, we present an efficient coercion-resistant and quantum-secure voting scheme, based on fully homomorphic encryption (FHE) and recent advances in various FHE primitives such as hashing, zero-knowledge proofs of correct decryption, verifiable shuffles and threshold FHE.
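
As a toy illustration of the entropy extremization question above, the following sketch (our own, not code from the thesis) uses the standard fact that, for a uniform prior and fixed input/output lengths, the posterior weight of a candidate input x is proportional to the number of embeddings of the received string in x as a subsequence; the parameters n and the example outputs are illustrative.

```python
from itertools import product
from math import log2

def count_embeddings(x: str, y: str) -> int:
    # Number of ways y occurs as a subsequence of x (standard dynamic programme).
    dp = [1] + [0] * len(y)              # dp[j] = ways to embed y[:j] in the prefix of x seen so far
    for c in x:
        for j in range(len(y), 0, -1):
            if y[j - 1] == c:
                dp[j] += dp[j - 1]
    return dp[len(y)]

def posterior_entropy(y: str, n: int) -> float:
    # With a uniform prior on x in {0,1}^n and i.i.d. deletions, P(x | y) is
    # proportional to the number of embeddings of y in x, so the normalizing
    # constants cancel and the entropy depends only on these counts.
    weights = [count_embeddings("".join(bits), y) for bits in product("01", repeat=n)]
    total = sum(weights)
    return -sum(w / total * log2(w / total) for w in weights if w > 0)

n = 8  # illustrative input length
print(posterior_entropy("0000", n))  # constant output: conjectured to minimize the entropy
print(posterior_entropy("0101", n))  # alternating output: conjectured to maximize the entropy
```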

Dynamiques de (dés)appartenance au cours de la vie : le cas des Portugais de "seconde génération" au Grand-Duché de Luxembourg
Martins, Heidi Rodrigues UL

Doctoral thesis (2019)

This study examines feelings of (un)belonging and their (re)construction over the life course of "second-generation" Portuguese in Luxembourg. Theoretically, our research draws primarily on three approaches: transnationalism, the theory of relational dynamics, and life-course theory. Children of (e/im)migrants, it has been suggested, are raised in a transnational social field that involves cross-border contacts and visits (in a real and/or symbolic sense) to the country of origin. We therefore begin by investigating, in a comprehensive manner, their (trans)national practices (notably how they vary in scope, intensity and frequency over the life course); we then ask: how do "second-generation" Portuguese in Luxembourg (re)construct, negotiate and accomplish their (un)belonging over the life course, and what role does the country of origin play? To address these questions, we adopted the comprehensive interview as our main methodology. The empirical data for this qualitative study come from 25 comprehensive interviews (including the completion of a map of (un)belonging) conducted with members of the "second generation" stemming from Portuguese (e/im)migration to Luxembourg. Participants were between 19 and 55 years old at the time of the interview. The corpus was analysed using grounded theory. Beyond showing the weight of the "residential situation" and the "technological situation", which have strongly shaped, and continue to shape, the trajectories of children of Portuguese (e/im)migrants in Luxembourg (depending on the cohort), we highlight the two major sources of tension in their feelings of (un)belonging and emphasise the agency of our participants, who mobilise two (un)belonging strategies (de-Portugalisation and/or Luxembourgisation) as keys to resolving these tensions. We also underline the role played by feelings of pride and recognition (closely tied to the question of social mobility). We conclude by integrating our model of the (re)construction of (un)belonging into a broader process that we call the process of identity (re)configuration.

Artificial Intelligence for the Detection of Electricity Theft and Irregular Power Usage in Emerging Markets
Glauner, Patrick Oliver UL

Doctoral thesis (2019)

Power grids are critical infrastructure assets that face non-technical losses (NTL), which include, but are not limited to, electricity theft, broken or malfunctioning meters and arranged false meter readings. In emerging markets, NTL are a prime concern and often range up to 40% of the total electricity distributed. The annual world-wide costs for utilities due to NTL are estimated to be around USD 100 billion. Reducing NTL in order to increase revenue, profit and reliability of the grid is therefore of vital interest to utilities and authorities. In the beginning of this thesis, we provide an in-depth discussion of the causes of NTL and the economic effects thereof. Industrial NTL detection systems are still largely based on expert knowledge when deciding whether to carry out costly on-site inspections of customers. Electric utilities are reluctant to move to large-scale deployments of automated systems that learn NTL profiles from data. This is due to the latter's propensity to suggest a large number of unnecessary inspections. In this thesis, we compare expert knowledge-based decision making systems to automated statistical decision making. We then branch out our research into different directions: First, in order to allow human experts to feed their knowledge into the decision process, we propose a method for visualizing prediction results at various granularity levels in a spatial hologram. Our approach allows domain experts to put the classification results into the context of the data and to incorporate their knowledge for making the final decisions of which customers to inspect. Second, we propose a machine learning framework that classifies customers into NTL or non-NTL using a variety of features derived from the customers' consumption data as well as a selection of master data. The methodology used is specifically tailored to the level of noise in the data. Last, we discuss the issue of biases in data sets. A bias occurs whenever training sets are not representative of the test data, which results in unreliable models. We show how quantifying and reducing these biases leads to an increased accuracy of the trained NTL detectors. This thesis has produced appreciable results on real-world big data sets of millions of customers. Our systems are being deployed in a commercial NTL detection software product. We also provide suggestions on how to further reduce NTL by not only carrying out inspections, but also by implementing market reforms, increasing efficiency in the organization of utilities and improving communication between utilities, authorities and customers.
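
A rough sketch of what a consumption-feature-based NTL classifier can look like is given below. It is entirely illustrative: the synthetic data, the hand-crafted feature names and the choice of a random forest are our assumptions, not the thesis's actual features or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical monthly consumption matrix (customers x months) and inspection labels;
# in practice these would come from the utility's billing and inspection records.
consumption = rng.gamma(shape=2.0, scale=150.0, size=(1000, 24))
labels = rng.integers(0, 2, size=1000)                     # 1 = NTL found at inspection

# Simple illustrative features derived from each customer's consumption series.
features = np.column_stack([
    consumption.mean(axis=1),                              # average consumption
    consumption.std(axis=1),                               # volatility
    consumption[:, -6:].mean(axis=1)
        / (consumption[:, :6].mean(axis=1) + 1e-9),        # recent-vs-early consumption ratio
    (consumption == 0).sum(axis=1),                        # number of zero-consumption months
])

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
print(cross_val_score(clf, features, labels, cv=5, scoring="roc_auc").mean())
```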

Co-evolutionary Hybrid Bi-level Optimization
Kieffer, Emmanuel UL

Doctoral thesis (2019)

Multi-level optimization stems from the need to tackle complex problems involving multiple decision makers. Two-level optimization, referred to as "bi-level optimization", occurs when two decision makers each control only part of the decision variables but impact each other (e.g., objective value, feasibility). Bi-level problems are sequential by nature and can be represented as nested optimization problems in which one problem (the "upper level") is constrained by another one (the "lower level"). The nested structure is a real obstacle that can be highly time consuming when the lower level is NP-hard. Consequently, classical nested optimization should be avoided. Some surrogate-based approaches have been proposed to approximate the lower-level objective value function (or variables) to reduce the number of times the lower level is globally optimized. Unfortunately, such a methodology is not applicable for large-scale and combinatorial bi-level problems. After a deep study of theoretical properties and a survey of existing applications that are bi-level by nature, problems which can benefit from a bi-level reformulation are investigated. A first contribution of this work has been to propose a novel bi-level clustering approach. Extending the well-known "uncapacitated k-median problem", it has been shown that clustering can be easily modeled as a two-level optimization problem using decomposition techniques. The resulting two-level problem is then turned into a bi-level problem offering the possibility to combine distance metrics in a hierarchical manner. The novel bi-level clustering problem has a very interesting property that enables us to tackle it with classical nested approaches. Indeed, its lower-level problem can be solved in polynomial time. In cooperation with the Luxembourg Centre for Systems Biomedicine (LCSB), this new clustering model has been applied to real datasets such as disease maps (e.g., Parkinson, Alzheimer). Using a novel hybrid and parallel genetic algorithm as the optimization approach, the results obtained after a campaign of experiments have the potential to produce new knowledge compared to classical clustering techniques that combine distance metrics in a classical manner. The previous bi-level clustering model has the advantage that the lower level can be solved in polynomial time although the global problem is by definition NP-hard. Therefore, further investigations have been undertaken to tackle more general bi-level problems in which the lower-level problem does not present any specific advantageous properties. Since the lower-level problem can be very expensive to solve, the focus has been turned to surrogate-based approaches and hyper-parameter optimization techniques with the aim of approximating the lower-level problem and reducing the number of global lower-level optimizations. Adapting the well-known Bayesian optimization algorithm to solve general bi-level problems, the expensive lower-level optimizations have been dramatically reduced while obtaining very accurate solutions. The resulting solutions and the number of spared lower-level optimizations have been compared to the results of the bi-level evolutionary algorithm based on quadratic approximations (BLEAQ) after a campaign of experiments on official bi-level benchmarks. Although both approaches are very accurate, the bi-level Bayesian version required fewer lower-level objective function calls.
Surrogate-based approaches are restricted to small-scale and continuous bi-level problems, although many real applications are combinatorial by nature. As for continuous problems, a study has been performed to apply some machine learning strategies. Instead of approximating the lower-level solution value, new approximation algorithms for the discrete/combinatorial case have been designed. Using the principle employed in GP hyper-heuristics, heuristics are trained in order to efficiently tackle the NP-hard lower level of bi-level problems. This automatic generation of heuristics makes it possible to break the nested structure into two separated phases: training lower-level heuristics and solving the upper-level problem with the new heuristics. On this occasion, a second modeling contribution has been introduced through a novel large-scale and mixed-integer bi-level problem dealing with pricing in the cloud, i.e., the Bi-level Cloud Pricing Optimization Problem (BCPOP). After a series of experiments that consisted in training heuristics on various lower-level instances of the BCPOP and using them to tackle the bi-level problem itself, the obtained results are compared to the "cooperative coevolutionary algorithm for bi-level optimization" (COBRA). Although training heuristics enables breaking the nested structure, a two-phase optimization is still required. Therefore, the emphasis has been put on training heuristics while optimizing the upper-level problem using competitive co-evolution. Instead of adopting the classical decomposition scheme as done by COBRA, which suffers from the strong epistatic links between lower-level and upper-level variables, co-evolving the solution and the means to reach it can cope with these epistatic link issues. The "CARBON" algorithm developed in this thesis is a competitive and hybrid co-evolutionary algorithm designed for this purpose. In order to validate the potential of CARBON, numerical experiments have been designed and the results have been compared to state-of-the-art algorithms. These results demonstrate that "CARBON" makes it possible to address nested optimization efficiently.
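
For readers unfamiliar with the nested structure discussed above, the generic bi-level problem can be written as follows; this is a standard textbook formulation added for context, not notation taken from the thesis.

```latex
% Upper-level (leader) problem constrained by the optimal response of the
% lower-level (follower) problem; F, G belong to the leader and f, g to the follower.
\[
  \min_{x \in X} \; F\bigl(x,\, y^{*}(x)\bigr)
  \quad \text{s.t.} \quad
  G\bigl(x,\, y^{*}(x)\bigr) \le 0,
  \qquad
  y^{*}(x) \in \arg\min_{y \in Y} \bigl\{\, f(x, y) \;:\; g(x, y) \le 0 \,\bigr\}.
\]
```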

Sociographie des associations islamiques au Luxembourg, à l'aune de l'institutionnalisation
Pirenne, Elsa

Doctoral thesis (2019)

A Transaction’s Journey: Transactional Enhancements for Public Blockchain-based Distributed Ledgers
Fiz Pontiveros, Beltran UL

Doctoral thesis (2019)

Interest in the decentralised nature of blockchain-based distributed ledgers has rapidly grown over the past few years. While a portion of this interest is fuelled by the price surge in Bitcoin towards the end of 2017, numerous companies across industries such as healthcare and finance have shown a keen interest in this technology and begun investing in diverse research projects. The work presented in this dissertation proposes a series of enhancements to blockchain-based distributed ledger technologies by focusing on a key element in the system: the transaction. By investigating the life cycle of a transaction in popular blockchain systems such as Bitcoin and Ethereum, several enhancements were identified to tackle some of the challenges under active research today by the blockchain community.

Supporting Change in Product Lines Within the Context of Use Case-driven Development and Testing
Hajri, Ines UL

Doctoral thesis (2019)

Product Line Engineering (PLE) is a crucial practice in many software development environments where systems are complex and developed for multiple customers with varying needs. At the same time, many business contexts are use case-driven, where use cases are the main artifacts driving requirements elicitation and many other development activities. In these contexts, variability information is often not explicitly represented, which leads to ad-hoc change management for use cases, domain models and test cases in product families. In this thesis, we address the problems of modeling variability in requirements with additional traceability to feature models and the manual and error-prone requirements configuration and regression testing in product families. We provide the following contributions: - A modeling method for capturing variability information in product line use case and domain models by relying exclusively on commonly used artifacts in use case-driven development, thus avoiding unnecessary modeling overhead. - An approach for automated configuration of product specific use case and domain models that guides customers in making configuration decisions and automatically generates use case diagrams, use case specifications, and domain models for configured products. - A change impact analysis approach for evolving configuration decisions in product line use case models that automatically identifies the impact of decision changes on other decisions, and incrementally reconfigures product specific use case diagrams and specifications for evolving decisions. - An approach for automated classification and prioritization of system test cases in a family of products that automatically classifies and prioritizes, for each new product, system test cases of previous product(s) in a product line, and provides guidance in modifying existing system test cases to cover new use case scenarios that have not been tested in the product line before. All our approaches have been developed and evaluated in close collaboration with our industry partner IEE.

Mental health and wellbeing in adolescence: The role of child attachment and parents' representations of their children
Decarli, Alessandro UL

Doctoral thesis (2019)

The aim of the current research was to explore the effects of attachment on emotion regulation, autonomy and relatedness, and behavioral problems in adolescence, and how attachment is in turn influenced by parental reflective functioning (PRF), parenting behaviors (operationalized in terms of behaviors promoting and undermining autonomy and relatedness) and parenting stress (in terms of cortisol reactivity). Participants were 49 adolescents (11 to 17 years old) and their mothers (N = 40) and fathers (N = 28). We assessed adolescents’ attachment representations with the Friends and Family Interview (FFI), PRF with the Parent Development Interview (PDI), adolescents’ autonomy and relatedness and parenting behaviors with the Family Interaction Task (FIT), and behavioral problems with the Child Behavior Checklist (CBCL) and the Youth Self-Report (YSR). The first study showed that mothers had significantly lower PRF and displayed more psychologically controlling behaviors in the interactions with their children than fathers. Rather than gender per se, high levels of PRF were the best predictors of autonomy support, whereas lower levels of PRF predicted more psychological control. Stress in the context of parenting was neither related to autonomy support nor to psychological control, which were best predicted by divorced family status. Finally, PRF mediated the relation between cortisol reactivity and both autonomy support and psychological control. The results of the second study suggest that higher levels of both maternal and paternal reflective functioning (RF) predict attachment security, whereas lower maternal RF and higher levels of maternal hostile parenting behaviors are the best predictors of disorganized attachment. Internalizing problem behavior is best predicted by disorganized attachment and externalizing symptoms are best predicted by dismissing attachment. These findings indicate that maternal behaviors play a mediating role and might be the primary route through which mothers’ RF is translated and communicated in the relationship with their adolescent children. Moreover, lower maternal RF and hostile and threatening behaviors may have long-term negative effects in adolescence, contributing to attachment disorganization and poorer mental health. In the third study the results showed that disorganized adolescents displayed higher heart rate variability (HRV) than organized ones, both during the FFI and during the FITs. Dismissing adolescents showed a more pronounced increase in HRV during the FFI than those classified as secure and preoccupied; however, there were no differences between these groups in HRV during the FITs. The results suggest that disorganized adolescents had more difficulties in regulating their emotions both during the FFI and during the FIT, whereas dismissing individuals seemed effectively challenged only during the interview. The findings point to the potential utility of interventions aimed at enhancing attachment security, thus allowing a better psychological adjustment, and at improving PRF, especially in divorced families, given its protective effect on parenting stress and parenting behaviors. Clinical implications are discussed.

Automated Identification of National Implementations of European Union Directives With Multilingual Information Retrieval Based On Semantic Textual Similarity
Nanda, Rohan UL

Doctoral thesis (2019)

The effective transposition of European Union (EU) directives into Member States is important to achieve the policy goals defined in the Treaties and secondary legislation. National Implementing Measures (NIMs) are the legal texts officially adopted by the Member States to transpose the provisions of an EU directive. The measures undertaken by the Commission to monitor NIMs are time-consuming and expensive, as they resort to manual conformity checking studies and legal analysis. In this thesis, we developed a legal information retrieval system using semantic textual similarity techniques to automatically identify the transposition of EU directives into national law at a fine-grained provision level. We modeled and developed various text similarity approaches such as lexical, semantic, knowledge-based, embeddings-based and concept-based methods. The text similarity systems utilized both textual features (tokens, N-grams, topic models, word and paragraph embeddings) and semantic knowledge from external knowledge bases (EuroVoc, IATE and Babelfy) to identify transpositions. This thesis work also involved the development of a multilingual corpus of 43 directives and their corresponding NIMs from Ireland (English legislation), Italy (Italian legislation) and Luxembourg (French legislation) to validate the text similarity based information retrieval system. A gold standard mapping between directive articles and NIM provisions, prepared by two legal researchers, was used to evaluate the various text similarity models. The results show that the lexical and semantic text similarity techniques were more effective in identifying transpositions as compared to the embeddings-based techniques. We also observed that the unsupervised text similarity techniques had the best performance in the case of the Luxembourg Directive-NIM corpus. We also developed a concept recognition system based on conditional random fields (CRFs) to identify concepts in European directives and national legislation. The results indicate that the concept recognition system improved over the dictionary lookup program by tagging the concepts which were missed by the dictionary lookup. The concept recognition system was extended to develop a concept-based text similarity system using word-sense disambiguation and dictionary concepts. The performance of the concept-based text similarity measure was competitive with the best-performing text similarity measure. The labeled corpus of 43 directives and their corresponding NIMs was utilized to develop supervised text similarity systems by using machine learning classifiers. We modeled three machine learning classifiers with different textual features to identify transpositions. The results show that support vector machines (SVMs) with term frequency-inverse document frequency (TF-IDF) features had the best overall performance over the multilingual corpus. Among the unsupervised models, the best performance was achieved by the TF-IDF cosine similarity model with macro-average F-scores of 0.8817, 0.7771 and 0.6997 for the Luxembourg, Italian and Irish corpora respectively. These results demonstrate that the system was able to identify transpositions in different national jurisdictions with a good performance. Thus, it has the potential to be useful as a support tool for legal practitioners and Commission officials involved in the transposition monitoring process.
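
A minimal sketch of the unsupervised TF-IDF cosine-similarity retrieval step described above is given below; the example texts and the printed ranking are illustrative placeholders, not data from the thesis corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative inputs: one directive article and a handful of candidate NIM provisions.
directive_articles = ["Member States shall ensure that consumers receive clear information on prices."]
nim_provisions = [
    "Suppliers must provide consumers with transparent information about applicable prices.",
    "The minister may adopt regulations on road safety equipment.",
    "Consumers shall be informed in a clear manner of the prices charged.",
]

# Fit a single TF-IDF vocabulary over both sides, then rank provisions by cosine similarity.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
matrix = vectorizer.fit_transform(directive_articles + nim_provisions)
article_vecs = matrix[: len(directive_articles)]
provision_vecs = matrix[len(directive_articles):]

scores = cosine_similarity(article_vecs, provision_vecs)[0]
for score, text in sorted(zip(scores, nim_provisions), reverse=True):
    print(f"{score:.3f}  {text}")   # highest-scoring provisions are transposition candidates
```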
