References of "Doctoral thesis"
EFFICIENT AND SCALABLE OPTIMIZATION ALGORITHMS FOR MULTIANTENNA SIGNAL PROCESSING
Arora, Aakash UL

Doctoral thesis (2021)

Multiantenna signal processing (MASP) is indispensable in many applications such as wireless communications, radar, and seismology. Large-scale antenna arrays (LSAAs) are envisioned for future wireless communication systems to improve the range, power, and spectral efficiency (SE) of existing systems. For a practical multiantenna wireless communication system, efficient and scalable signal processing (SP) algorithms are therefore essential to optimize system operations. In this thesis, we address several facets of such system optimization, including beampattern matching and SE maximization, among others. These are formulated as nonconvex optimization problems, and the thesis proposes novel, efficient, and scalable optimization algorithms with theoretical convergence guarantees. We first consider the problem of transmit analog beamforming (or phase-only beamforming) design by solving a beampattern matching problem. We formulate variants of the unit-modulus/constant-modulus least-squares problem. To tackle these NP-hard problems, we propose efficient and scalable algorithms based on different optimization frameworks, including alternating minimization, majorization-minimization (MM), and cyclic coordinate descent (CCD). The proposed algorithms are theoretically shown to converge to a Karush–Kuhn–Tucker (KKT) point of the corresponding optimization problem while offering superior performance. We also provide a use case in satellite communications where a desired two-dimensional beampattern is approximated with a planar array by designing the analog beamforming system. Building on the previous problem, we take a joint array design and beampattern matching perspective and formulate variants of sparse unit-modulus or sparse constant-modulus least-squares problems. These optimization problems are solved using combinations of different optimization frameworks such as variable projection/elimination, MM, and block/alternating MM.
Next, we consider the problem of hybrid transceiver design for a single-user point-to-point multiple-input multiple-output (MIMO) system employing LSAAs. We solve this problem based on the variable projection/elimination and MM frameworks. The proposed algorithms are shown to converge to a stationary point. We also study applications of the proposed algorithms to hybrid precoding design for satellite communications. We then generalize the convergence proofs from the earlier sections by providing a unified convergence proof for solving a generic block-structured optimization problem over nonconvex constraints. Finally, we consider the problem of localizing sources in the far field of a spatio-temporal array formed by a single sensor moving along a known trajectory. We provide a novel signal model capturing the incoherency in the measurements sampled by the moving sensor. We establish different Cramér-Rao bounds for the considered system model by exploiting varying degrees of information, and propose and study various direction-of-arrival (DOA) estimators. The thesis concludes by summarizing the main contributions and some open research problems.
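The CCD framework mentioned above takes a particularly clean form on the unit-modulus least-squares problem min_w ||Aw - b||^2 subject to |w_k| = 1: with all other coordinates fixed, the optimal w_k is a closed-form phase rotation, so the objective never increases. Below is a minimal sketch of this idea; the function names and problem data are illustrative, not the thesis implementation.

```python
def ccd_unit_modulus(A, b, iters=200):
    """Cyclic coordinate descent for min ||A w - b||^2 s.t. |w_k| = 1.

    A: list of rows (each a list of complex), b: list of complex.
    Each coordinate update is an exact closed-form minimizer, so the
    objective is monotonically non-increasing.
    """
    m, n = len(A), len(A[0])
    w = [1 + 0j] * n                          # feasible start on the unit circle
    for _ in range(iters):
        for k in range(n):
            # c = a_k^H r, with r the residual excluding coordinate k
            c = 0j
            for i in range(m):
                r_i = b[i] - sum(A[i][j] * w[j] for j in range(n) if j != k)
                c += A[i][k].conjugate() * r_i
            if abs(c) > 0:
                w[k] = c / abs(c)             # optimal unit-modulus coordinate
    return w

def objective(A, b, w):
    """Least-squares beampattern matching objective ||A w - b||^2."""
    return sum(abs(sum(A[i][j] * w[j] for j in range(len(w))) - b[i]) ** 2
               for i in range(len(b)))
```

Because every coordinate update is an exact minimization over one phase, the objective is monotonically non-increasing, which is the property underlying convergence analyses for this family of algorithms.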

Trans Narrative: Deutschsprachige Autobiografien von trans Personen
Artuso, Sandy Kathy UL

Doctoral thesis (2021)

At the center of this doctoral thesis is a corpus of 67 autobiographies published in the German-speaking world between 1984 and 2016. The thesis combines literary studies with trans/gender studies, uses narratological and queer-theoretical tools, and places the texts of trans people at its center. While the fundamental narratological question of who is speaking was always central, this seemingly innocuous question was consistently paired with a critical view of the positions of power and the systems of normativity that act on the autobiographies both internally and externally. A further aim of this work was thus to explore the significance and value of autobiographies for trans people and their communities, while at the same time offering insight into the individual life stories.

Identifying and targeting metabolic vulnerabilities of IDH mutant gliomas
Cano Galiano, Andrés UL

Doctoral thesis (2021)

Diffuse gliomas are a group of central nervous system (CNS) tumors with a poor patient prognosis. Within these diffuse gliomas, isocitrate dehydrogenase (IDH) mutation defines the different tumor subtypes and is considered an initiating event in gliomagenesis. IDH is a metabolic enzyme that, under normal conditions, mediates the conversion of isocitrate into α-ketoglutarate (α-KG), producing the reducing equivalent NADPH. IDH mutation (IDHm) leads to a neomorphic reaction in which α-KG is consumed to generate the oncometabolite D-2-hydroxyglutarate (D-2HG), using NADPH as the reducing agent. It has been reported that IDHm-dependent D-2HG synthesis has a direct impact on DNA and histone methylation; however, the metabolic repercussions are not yet well defined. Because the IDHm reaction consumes NADPH, several groups, including ours, have hypothesized that IDHm cells may bear an imbalance of reducing equivalents that could trigger a defective antioxidant defense. In the present study we made use of patient-derived cell lines and xenografts thereof, as well as clinical samples, to study the metabolic vulnerabilities of IDHm gliomas. In the first part of the experimental work, we generated an integrative liquid chromatography-mass spectrometry (LC-MS)-based proteomic-metabolomic characterization of IDHm metabolism. We made use of patients, cell lines, and xenografts to address the direct effect of the mutation. We observed that IDHm gliomas show altered regulation of key processes in central carbon metabolism, including glucose and glutamate processing as well as glutathione (GSH) metabolism and fatty acid production. In the second part, we investigated the redox vulnerabilities of IDHm gliomas. Here we discovered that IDHm astrocytomas specifically upregulate cystathionine-γ-lyase (CSE), enabling them to synthesize GSH independently of NADPH. CSE is the only known enzyme capable of synthesizing cysteine.
We found that genetic and chemical inhibition of CSE led to a decrease in cell viability upon cysteine restriction. Finally, inhibition of CSE in vivo led to a delay in tumor growth. In conclusion, this PhD dissertation presents a comprehensive study of the metabolic behavior of IDHm human gliomas, and we propose a novel therapeutic strategy that might improve patient prognosis by inflicting oxidative damage on the tumor.

Specification and Model-driven Trace Checking of Complex Temporal Properties
Boufaied, Chaima UL

Doctoral thesis (2021)

Offline trace checking is a procedure used to evaluate requirement properties over a trace of recorded events. System properties verified in the context of trace checking can be specified using different specification languages and formalisms; in this thesis, we consider two classes of complex temporal properties: 1) properties defined using aggregation operators; 2) signal-based temporal properties from the Cyber-Physical System (CPS) domain. The overall goal of this dissertation is to develop methods and tools for the specification and trace checking of the aforementioned classes of temporal properties, focusing on the development of scalable trace checking procedures for such properties. The main contributions of this thesis are: i) the TEMPSY-CHECK-AG model-driven approach for trace checking of temporal properties with aggregation operators, defined in the TemPsy-AG language; ii) a taxonomy covering the most common types of Signal-based Temporal Properties (SBTPs) in the CPS domain; iii) SB-TemPsy, a trace-checking approach for SBTPs that strikes a good balance in industrial contexts in terms of efficiency of the trace checking procedure and coverage of the most important types of properties in CPS domains. SB-TemPsy includes: 1) SB-TemPsy-DSL, a DSL that allows the specification of the types of SBTPs identified in the aforementioned taxonomy, and 2) an efficient trace-checking procedure, implemented in a prototype tool called SB-TemPsy-Check; iv) TD-SB-TemPsy-Report, a model-driven trace diagnostics approach for SBTPs expressed in SB-TemPsy-DSL. TD-SB-TemPsy-Report relies on a set of diagnostics patterns, i.e., undesired signal behaviors that might lead to property violations. To provide relevant and detailed information about the cause of a property violation, TD-SB-TemPsy-Report determines the diagnostics information specific to each type of diagnostics pattern.
Our technological contributions rely on model-driven approaches for trace checking and trace diagnostics. Such approaches consist in reducing the problem of checking a property over an execution trace (respectively, of determining the diagnostics information for a property violation) to the problem of evaluating a semantically equivalent OCL (Object Constraint Language) constraint on an instance of a meta-model of the trace. The results presented in this thesis, in terms of the efficiency of our model-driven tools, are in line with those of previous work, and confirm that model-driven technologies can lead to tools that exhibit good performance from a practical standpoint, also when applied in industrial contexts.
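The kind of property evaluated by an offline trace checker can be illustrated with a minimal sketch: a bounded-response requirement ("every trigger is followed by a reaction within a time bound") checked over a recorded trace of timestamped events, returning the first violation as basic diagnostics information. The function and property shape below are illustrative and unrelated to the actual TemPsy syntax or tooling.

```python
def check_response(trace, trigger, reaction, bound):
    """Offline check of 'every trigger is followed by reaction within bound'.

    trace: list of (timestamp, event) pairs, sorted by timestamp.
    Returns (True, None) if the property holds over the whole trace, or
    (False, t) with the timestamp of the first violating trigger, which
    serves as elementary diagnostics information.
    """
    for i, (t, e) in enumerate(trace):
        if e == trigger:
            # look for a matching reaction strictly later, within the bound
            if not any(e2 == reaction and t < t2 <= t + bound
                       for t2, e2 in trace[i + 1:]):
                return (False, t)
    return (True, None)
```

Real trace-checking languages add aggregation operators, signal predicates, and scopes on top of this basic evaluation loop, but the principle of replaying a recorded trace against a property stays the same.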

Higher education trajectories and social origin in Germany and the United States: A comparative sequence-analytical approach
Haas, Christina UL

Doctoral thesis (2021)

Students' higher education trajectories as holistic educational processes are an under-researched aspect, particularly in the German context. This cumulative thesis fills this gap by investigating students' trajectories through bachelor's degree courses in German and US higher education. In terms of methodology, it is based on a sequence-analytical approach using two student panel data sets (the German National Educational Panel Study (NEPS) and the US Beginning Postsecondary Students Longitudinal Study (BPS)) and comprises a literature review and three empirical research articles, each providing a different theoretical and conceptual angle. Higher education is a non-compulsory educational phase, implying that students are granted more autonomy and choice but also bear more personal responsibility for planning a path through higher education. It is therefore assumed that parents' cultural resources (defined here as higher education-specific knowledge) and economic resources shape students' trajectories, enabling them to proceed through their studies in a more continuous or linear way and preventing them from experiencing complex trajectories such as delays, interruptions, or detours. To begin with, the literature review, constructed as a narrative review with systematic elements, captured the state of research on higher education trajectories by reviewing peer-reviewed journal articles from a wide range of mainly higher education research journals. It revealed that this research area is rather heterogeneous and dominated by studies focusing on the United States. Research articles one and two employ similar research strategies: sequence analyses followed by cluster analyses. Stressing the relationship between parents' resources and students' trajectories, the first article concentrates exclusively on students in German research universities, whereas the second also considers students at universities of applied sciences.
Overall, these studies reveal that the trajectories of students at universities of applied sciences are more often linear, while the opposite applies to students at research universities and students of low social origin, pointing towards the hypothesised effect of parental resources. Furthermore, students of low social origin are more likely to follow a linear standard trajectory when studying at a university of applied sciences than at a research university. The third paper, based on the premise that trajectories are systematically shaped by the institutional context of the higher education system, compares students' trajectories in German and US higher education, allowing a simultaneous view of system-level characteristics and national idiosyncrasies. US higher education provides almost universal access, is highly marketised and highly differentiated, thereby accommodating diverse demands and heterogeneous student groups. By contrast, German higher education, characterised by public funding and regulation, early ability tracking, and low permeability, restricts access and provides an overall much less diversified range of study programmes. Consequently, research article three revealed that students' trajectories are overall less standardised in US higher education, though this differs greatly by sector: the trajectories of students in the (selective) research universities are overall more standardised. Furthermore, social origin differences were quite pronounced in the United States, whereas the social origin effect was almost nonexistent for students in German higher education in this study (based on a different sequence-analytical approach). Remarkably, though, students' trajectories are less linear at German research universities than at universities of applied sciences, even more so among students of low social origin, while US research universities facilitate linear trajectories.
Overall, this dissertation makes an important contribution to the state of research on the link between social origin and students' trajectories, and on how this link is mediated by the institutional context of the respective higher education system.
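The sequence-analytical approach used here rests on computing pairwise dissimilarities between students' state sequences before clustering them into trajectory types. A minimal sketch of the core ingredient, a plain edit distance over state sequences (optimal matching in sequence analysis generalizes this with calibrated substitution and indel costs; the state codes are illustrative, not those of NEPS or BPS):

```python
def edit_distance(s1, s2, indel=1.0, sub=2.0):
    """Dissimilarity between two state sequences via dynamic programming:
    minimal total cost of insertions, deletions, and substitutions needed
    to turn s1 into s2 (the classic Levenshtein recursion with costs)."""
    n, m = len(s1), len(s2)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * indel                  # delete everything from s1
    for j in range(1, m + 1):
        d[0][j] = j * indel                  # insert everything from s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if s1[i - 1] == s2[j - 1] else sub
            d[i][j] = min(d[i - 1][j] + indel,       # deletion
                          d[i][j - 1] + indel,       # insertion
                          d[i - 1][j - 1] + cost)    # match / substitution
    return d[n][m]
```

For example, with illustrative per-semester states E = enrolled, I = interruption, G = graduated, the sequences "EEEG" and "EEIG" differ by a single substitution; a matrix of such pairwise distances is what the cluster analysis then operates on.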

DRY-STACKED INSULATION MASONRY BLOCKS BASED ON MISCANTHUS CONCRETE
Pereira Dias, Patrick UL

Doctoral thesis (2021)

The present dissertation, entitled "Dry-stacked insulation masonry blocks based on Miscanthus concrete", was carried out at the University of Luxembourg and financed by CONTERN Lëtzebuerger Beton. The principal aim of this project is to promote sustainability in the construction sector and improve the circular economy by using Luxembourgish Miscanthus to produce a masonry block that combines load-bearing and thermal insulation properties, together with a dry-stacked assembly system. These aims are achieved by developing a masonry block composed of two materials connected by a dovetail connection, with the dry-stacked system realised through a horizontal and vertical tongue-groove system. The present research demonstrates the approach followed to achieve these goals, divided into five major steps. The first step consists of analysing the mixture proportions with the aim of achieving the highest possible load-bearing capacity of concrete based on Miscanthus aggregates. It can be concluded that varying the proportions of the components affects the density, which has an increasing parabolic relation with the load-bearing capacity of the specimens. Furthermore, the long-term deformations of Miscanthus concrete due to shrinkage reach on average 2350 μm/m, double that of a lightweight concrete. However, compared with hemp concrete, the long-term deformations of Miscanthus concrete are at least 50 % lower. Secondly, a machine-learning tool is applied to predict the compressive strength from the mixture components, avoiding the need for time-consuming and costly experimental tests. It also makes it possible to analyse the impact of each individual component on the load-bearing capacity and to optimise the mixture according to the required compressive strength.
Next, a Miscanthus concrete mixture is used to manufacture rectangular masonry blocks, and the effect of their geometrical height and roughness imperfections on the load-bearing capacity of walls and single masonry blocks is analysed experimentally and numerically. The roughness was investigated by measuring the contact surface, and an exponential relation is identified between the applied compressive strength and the contact surface. The height imperfections show a low impact on the load-bearing capacity of the wall, a finding also validated by the numerical calculations. Finally, an increase in the height-to-length ratio of a wall linearly reduces the maximum achieved compressive strength. The next step consists of investigating the use of a Mycelium-Miscanthus composite for insulation purposes and analysing its properties. A scanning electron microscopy analysis allows the bond between Mycelium and Miscanthus to be investigated; it can be concluded that the Mycelium webs penetrate the Miscanthus fibres and thereby hold the specimen together. Furthermore, a density of 122 kg/m3 and a thermal conductivity of 0.09 W/mK are measured for this biocomposite, which is higher than that of a conventional insulation material. In addition, a fire resistance of category EI15 according to EN 13501-2:2003 is measured. These results show the promising capacity of this composite as a building insulation material. The last phase of this project consists of bringing all the parts together by applying the investigated material properties to one masonry block with a geometry suitable for a dry-stacked masonry wall, realised by introducing a horizontal and vertical tongue-groove system in the masonry block. This block is divided into two parts, a bearing part and an insulation part, which are connected by a dovetail connection.
A sensitivity analysis of the wall is performed by varying different properties of the masonry block, such as the thickness of the bearing and insulation parts, the angle of the dovetail connection, and the position of the tongue-groove system. An increase in the width of the bearing part has a linearly increasing impact on the load-bearing capacity, whereas an increase in the thickness of the insulation part does not affect the maximum achieved compressive strength. Furthermore, the impact of geometrical imperfections such as height and roughness is analysed. Subsequently, the required thickness of the masonry block is calculated from the imposed thermal transmittance value, yielding a total block thickness of 77 cm. It can therefore be concluded that the thermal conductivity of the insulation part has to be improved to reduce the required thickness of the masonry block. Finally, this thesis assesses the use of Miscanthus fibres in a masonry block with both bearing and insulation capacity. Furthermore, the tongue-groove system of the masonry block and the low Young's modulus of the Miscanthus mixture allow its application in a dry-stacked wall in the construction sector.
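The thickness calculation from an imposed thermal transmittance can be sketched as a rearrangement of the single-layer U-value formula U = 1/(R_si + t/λ + R_se). The target U-value and the surface resistances below are assumptions for illustration only; the actual block is a two-part composite, so the thesis calculation is more involved.

```python
def required_thickness(lmbda, u_target, r_si=0.13, r_se=0.04):
    """Solve U = 1 / (R_si + t/lambda + R_se) for the layer thickness t (m).

    lmbda: thermal conductivity in W/mK; u_target: target U-value in W/m2K;
    r_si, r_se: assumed internal/external surface resistances in m2K/W.
    """
    return lmbda * (1.0 / u_target - r_si - r_se)
```

With the measured conductivity of 0.09 W/mK and an assumed target U-value of 0.12 W/m²K, this single-layer estimate gives roughly 0.73 m, the same order of magnitude as the 77 cm reported above, which illustrates why a lower insulation conductivity is needed to reach practical block thicknesses.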

Understanding the role of colorectal cancer-associated bacteria in colorectal cancer
Ternes, Dominik UL

Doctoral thesis (2021)

Mounting evidence from 16S rRNA-based and metagenomic analyses suggests that dysbiosis, a state of pathological microbial imbalance, is prevalent in the gut of patients with CRC. Numerous microbial taxa have been identified whose representative isolate cultures can interact with cancer cells and trigger distinct disease pathways in animal models. Nevertheless, how these complex interrelationships of a dysbiotic microbiota may be involved in the pathogenesis of CRC remains a fundamental question and requires multifaceted mechanistic studies. This thesis moves beyond observational studies; it integrates novel experimental approaches for the study of the gut microbiome in colorectal cancer, incorporating current knowledge in the field as well as interdisciplinary approaches. My work aims to contribute to an ecosystem-level mechanistic understanding of the CRC-associated microbiome in the initiation and progression of the disease. In detail, the objective of my work comprised an integrative approach to microbiome-CRC interaction studies. We revised current knowledge on, and studied, CRC-associated bacteria, in particular Fusobacterium nucleatum (Fn) and Gemella morbillorum (Gm). We assessed their direct and indirect effects on CRC cells, their interactions with immune cells, and their tumor-modulating potential in vitro, in silico, and in vivo. The results presented in this thesis comprise new findings on the cross-talk of Fn with CRC. We identified formate as a potential fusobacterial oncometabolite, which enhanced cancer incidence and progression via increased cancer stemness signaling. Furthermore, we discovered immune-suppressive functions of Gm in the context of CRC.
Through collaborative projects, I contributed to the development of two novel approaches in anti-cancer therapy: first, the establishment of a personalized in vitro model (iHuMiX) for the study of microbe-host-immune interactions in anti-cancer therapy, and second, the validation of an in silico workflow that uses metabolic rewiring strategies for network-based drug target predictions for CRC therapy. Taken together, this thesis broadens the mechanistic understanding of CRC-associated microbes and contributes potential strategies for the development of improved CRC therapy.

Magnetoelectric thin-film composites for energy harvesting applications
Nguyen, Tai UL

Doctoral thesis (2021)

Populismes et fabrique des droits économiques et sociaux dans le cadre des droits de l'Homme. Le Front national et l'Union démocratique du centre (1992-2013)
Albert, Frédéric UL

Doctoral thesis (2021)

Since the 1990s, with the development of Europeanisation, globalisation and the installation of the neo-liberal paradigm, we have observed in Europe the non-application of economic and social rights, despite the aspirations towards them of the States of the human rights continent after the Second World War. At the same time, "national-populist" parties are gaining more and more support and are establishing themselves in the political landscape as a "right-wing third way" that claims to provide the answers the governing parties seem unable to find, in the context of a genuine crisis of the welfare state. The aim of our research is to provide a comparative analysis of the discourses of the "Front National" in France, now the Rassemblement National ("RN"), and the "Union Démocratique du Centre" in Switzerland ("UDC/SVP") on economic and social rights, in order to confront them with the changes and public policies observed in our societies as a result of the neo-liberal paradigm. Using a cross-cutting analytical grid that identifies the characteristics of socio-economic discourse of a "national-populist" nature, this work draws on numerous sources from both parties, studied over a period of some twenty years bounded by 1992, when the Maastricht Treaty developing the "European market" was signed, and 2013, when negotiations began on the transatlantic treaty opening the "European market" to the "US market". The research is also based on original interviews with key figures from both parties and on an online questionnaire aimed at elected representatives with responsibilities at a smaller, regional scale. The cross-referencing of our qualitative and quantitative data has enabled us to produce original results and to construct a new category of populist parties: "national-populist parties opposed to human rights".
Among other things, the latter develop in their socio-economic approach a will to defend economic and social rights, but only for nationals, rejecting the universality of human rights. At the same time, they propose a hybrid form of capitalism, with nuances between the "FN/RN" and the "UDC", combining a dose of protectionism with a more or less deliberate integration into the "market". Furthermore, it is instructive to compare the discourses of a party outside government (the "RN"), which has not yet participated in executive authority at the national level, with those of a party associated with federal authority, the "UDC", which is both "inside and outside". Ultimately, in both cases, it is sovereignism and an anchoring in right-wing policies that seem to dominate the socio-economic DNA of the two populist parties studied. With the help of our research, we can thus ask in what way the "national-populist" discourse points to the failure of states on the human rights continent to implement economic and social rights as they intend.

INTERFACIAL COVALENT CHEMICAL BONDING: TOWARDS THERMOREVERSIBLE ADHESION
Hassouna, Lilia UL

Doctoral thesis (2021)

Many industrial sectors, such as the automotive and aeronautic industries, are moving toward the use of multi-material devices and composite materials. Reversible adhesion therefore becomes increasingly important, as it makes it possible to structurally join dissimilar materials and keep them together during the material's useful life, while allowing easy repair of damaged parts or recycling of raw materials. One way to achieve this property is through thermoreversible covalent bonding based on the Diels-Alder reaction and its retro Diels-Alder counterpart. This "click" reaction occurs between a diene and a dienophile to form an adduct at a certain temperature; the adduct can then be dissociated on command by simple heating. Investigating the adduct dissociation via the retro Diels-Alder reaction is key to understanding the adhesion reversibility of these systems. In this work, an already formed Diels-Alder adduct is synthesized and then grafted onto plasma polymer coatings or self-assembled monolayers, two types of surfaces that offer different environments for the molecules. A protocol for reaction monitoring based on TOF-SIMS (time-of-flight secondary ion mass spectrometry) was then developed, which allowed the determination of the kinetic and thermodynamic parameters of the interfacial reaction on both surfaces. The effect of the adducts' environment on the reaction was then elucidated by comparing the obtained values. Investigation of the same reaction in solution using 1H NMR spectroscopy confirmed the observations made on the effect of molecule immobilisation on the reaction: essentially, the more the molecules are immobilised, the lower the energy barrier and the higher the entropy contribution. Finally, the feasibility of interfacial adhesion based on this system was explored.
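Extracting kinetic and thermodynamic activation parameters from temperature-dependent rate constants is commonly done with an Eyring analysis, fitting ln(k/T) against 1/T. The sketch below uses synthetic rate data; the numbers and function name are illustrative, not the thesis measurements or workflow.

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def eyring_fit(temps, ks):
    """Least-squares Eyring fit: ln(k/T) = ln(kB/h) + dS/R - dH/(R*T).

    temps: temperatures in K; ks: first-order rate constants in 1/s.
    Returns (dH, dS): activation enthalpy in J/mol and activation
    entropy in J/(mol*K), from the slope and intercept of the line.
    """
    xs = [1.0 / t for t in temps]
    ys = [math.log(k / t) for k, t in zip(ks, temps)]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return -slope * R, (intercept - math.log(KB / H)) * R
```

Given rate constants for the retro reaction at a handful of temperatures, the slope yields the activation enthalpy and the intercept the activation entropy; comparing these across surface environments is what reveals the immobilisation effect described above.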

Taming Android App Crashes
Kong, Pingfan UL

Doctoral thesis (2021)

App crashes constitute an important deterrent to app adoption in the Android ecosystem. Yet, Android app developers are challenged by the limitations of test automation tools in ensuring that released apps are free from crashes. In recent years, researchers have proposed various automation approaches in the literature. Unfortunately, the practical value of these approaches has not yet been confirmed by practitioner adoption. Furthermore, existing approaches target a variety of test needs which are relevant to different sets of problems, without being specific to app crashes. Resolving app crashes implies a chain of actions starting with their reproduction, followed by the associated fault localization, before any repair can be attempted. Each action, however, is challenged by the specificity of Android. In particular, some specific mechanisms (e.g., callback methods, multiple entry points, etc.) of Android apps require Android-tailored crash-inducing bug locators. Therefore, to tame Android app crashes, practitioners need automation tools that are adapted to the challenges these crashes pose. In this respect, a number of building blocks must be designed to deliver a comprehensive toolbox. First, the community lacks well-defined, large-scale datasets of real-world, reproducible app crashes to enable the inference of valuable insights and facilitate experimental validation of literature approaches. Second, although bug localization from crash information is relatively mature in the realm of Java, state-of-the-art techniques are generally ineffective for Android apps due to the specificity of the Android system. Third, given the recurrence of crashes and the substantial burden they place on practitioners, there is a need for methods and techniques to accelerate fixing, for example towards implementing Automated Program Repair (APR). Finally, the above chain of actions is for curative purposes.
Indeed, this "reproduction, localization, and repair" chain aims at correcting bugs in released apps. Preventive approaches, i.e., approaches that help developers reduce the likelihood of releasing crashing apps, are still absent. In the Android ecosystem, developers are challenged by the lack of detailed documentation about the complex Android framework API they use to develop their apps. For example, developers need support for precisely identifying which exceptions may be triggered by APIs. Such support can further alleviate the challenge that the conditions under which exceptions are triggered are often not documented. In this context, the present dissertation aims to tame Android crashes by contributing the following four building blocks. Systematic literature review on automated app testing approaches: We aim to provide a clear overview of the state-of-the-art works on the topic of Android app testing, in an attempt to highlight the main trends, pinpoint the main methodologies applied, and enumerate the challenges faced by Android testing approaches as well as the directions where community effort is still needed. To this end, we conducted a Systematic Literature Review (SLR) during which we identified 103 relevant research papers published in leading conferences and journals until 2016. Our thorough examination of the relevant literature has led to several findings and highlighted the challenges that Android testing researchers should strive to address in the future. We further propose a few concrete research directions where testing approaches are needed to solve recurrent issues in app updates, the continuous increase of app sizes, and Android ecosystem fragmentation. Locating Android app crash-inducing bugs: We perform an empirical study on 500 framework-specific crashes from an open benchmark.
This study reveals that 37 percent of the crash types are related to bugs that lie outside the crash stack traces. Moreover, Android programs are a mixture of code and extra-code artifacts such as the Manifest file. Since any artifact can lead to failures in app execution, the localization target must be positioned beyond the code realm. We propose ANCHOR, a two-phase suspicious bug location suggestion tool. ANCHOR specializes in finding crash-inducing bugs outside the stack trace. ANCHOR is lightweight and source-code independent, since it only requires the crash message and the apk file to locate the fault. Experimental results, collected via cross-validation and an in-the-wild dataset evaluation, show that ANCHOR is effective in locating Android framework-specific crashing faults. Mining Android app crash fix templates: We propose a scalable approach, CraftDroid, to mine crash fixes by leveraging a set of 28,000 carefully reconstructed app lineages from app markets, without needing the app source code or issue reports. We develop a replicative testing approach that locates fixes among app versions that output different runtime logs under the exact same test inputs. Overall, we mined 104 relevant crash fixes and further abstracted 17 fine-grained fix templates that are demonstrated to be effective for patching crashed apks. Finally, we release ReCBench, a benchmark consisting of 200 crashed apks and the crash replication scripts, which the community can use to evaluate generated patches for crash-inducing bugs. Documenting framework APIs' unchecked exceptions: We propose Afuera, an automated tool that profiles Android framework APIs and provides information on when they can potentially trigger unchecked exceptions. Afuera relies on a static-analysis approach and a dedicated algorithm to examine the entire Android framework.
With Afuera, we confirmed that 26,739 unique unchecked exception instances may be triggered by invoking 5,467 (24%) Android framework APIs. Afuera further analyzes the Android framework to determine which parameter(s) of an API method can potentially cause an unchecked exception to be triggered. To that end, Afuera relies on fully automated instrumentation and taint analysis techniques. Afuera was run on 50 randomly sampled APIs to demonstrate its effectiveness. Evaluation results suggest that Afuera has a perfect true positive rate. However, Afuera is affected by false negatives due to the limitations of state-of-the-art taint analysis techniques.
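Part of why Android needs tailored bug locators is that crash stack traces often consist almost entirely of framework frames. The sketch below is a hypothetical illustration of that first step, not the ANCHOR algorithm itself; the class names, crash log, and "first developer frame" heuristic are all invented for the example.

```python
import re

# Match stack-trace frames of the form "at package.Class.method(Location)".
FRAME_RE = re.compile(r"^\s*at\s+([\w.$]+)\.([\w$<>]+)\(([^)]*)\)")

def parse_frames(crash_log: str):
    """Extract (class, method, location) tuples from a raw crash log."""
    frames = []
    for line in crash_log.splitlines():
        m = FRAME_RE.match(line)
        if m:
            frames.append(m.groups())
    return frames

def first_developer_frame(frames, app_package: str):
    """Return the first frame inside the app's own code, or None if the
    whole trace lies in the framework (the hard case ANCHOR targets)."""
    for cls, method, loc in frames:
        if cls.startswith(app_package):
            return (cls, method, loc)
    return None

log = """java.lang.NullPointerException: Attempt to invoke a method on a null object
    at android.view.View.performClick(View.java:7448)
    at com.example.app.MainActivity.onCreate(MainActivity.java:42)
    at android.app.Activity.performCreate(Activity.java:8000)"""

print(first_developer_frame(parse_frames(log), "com.example.app"))
# → ('com.example.app.MainActivity', 'onCreate', 'MainActivity.java:42')
```

When every frame is a framework frame, this heuristic returns None, which is exactly the situation (37 percent of crash types, per the study) where localization beyond the stack trace is needed.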

Understanding multilingual pupils’ translanguaging practices: a qualitative study in two primary schools in Luxembourg
Degano, Sarah UL

Doctoral thesis (2021)


Over the last decades, research in bilingual and trilingual schools has shown that translanguaging – the systematic alternation between two or more named languages – can promote children’s learning and increase their opportunities to participate in class. Translanguaging can therefore provide more equitable access to the curriculum and address educational inequalities. To date, however, few translanguaging studies have researched older primary school children in multilingual schools. Similarly, few have explored the use of children’s non-linguistic resources in meaning-making processes. More recently, there has been an explicit call for a multimodal perspective on translanguaging, according to which translanguaging is conceptualised as the deployment of the semiotic repertoire. While multimodality studies have pointed out the potential of multimodal resources in education, there remains a paucity of empirical research that – with regard to more equitable educational practices – explores the pupils’ use of both multilingual and multimodal resources in class. With the aim of understanding what translanguaging means in two different multilingual primary schools, the present thesis addresses this issue. It presents a qualitative study of the ways in which multilingual fourth-graders in Luxembourg deploy and combine the resources of their semiotic repertoires to communicate and make meaning in interaction with their peers and teachers. The study was informed by the following questions: How is translanguaging practised in the classroom? How do pupils translanguage, drawing on their semiotic repertoire? How is translanguaging used to construct curricular meaning in interaction? I argue that the pupils’ translanguaging practices are interactively constructed because the ways in which they combine their multilingual and multimodal resources for communication and meaning-making in class reflect those of their teachers.
Data was collected in two public primary schools in Luxembourg – one in the North, one in the East – from January to November 2018. This data took the form of observations, fieldnotes, video-recordings of lessons, semi-structured interviews with the pupils and the teachers, as well as stimulated recall interviews with the pupils. Over a duration of 130 hours, I observed and recorded four focal pupils of different language and education backgrounds as they deployed their semiotic repertoires in interaction with their teachers and peers in both Year 4 and at the beginning of Year 5. In order to explore how translanguaging was practised in the classrooms, I conducted a thematic analysis of the 42 documents of field notes (analytic memos), totalling about 152,000 words. In addition, to investigate how the pupils combined the resources of their repertoires and drew on translanguaging to make meaning, I applied a multimodal interaction analysis and a sociocultural discourse analysis to 11 hours’ worth of video footage from 177 videos, ranging in length from 7 seconds to 24 minutes. The 13 hours’ worth of interview material were summarised by hand and served triangulation purposes. The study has shown that the pupils’ translanguaging practices were interactively constructed. Firstly, the translanguaging practices of the teachers and the pupils varied between school contexts. In both schools, the pupils learned French, German and/or Luxembourgish, and regularly used several languages. However, while their translanguaging practices mainly involved shifts between German and Luxembourgish in the school in the East, they involved shifts between the three school languages as well as Portuguese in the school in the North. Secondly, all four focal pupils combined linguistic, extralinguistic and paralinguistic resources, but their semiotic combinations were influenced by the translanguaging practices of their teachers and, therefore, varied between schools and across year groups.
In the school in the East, the focal pupils tended to deploy their semiotic repertoires more extensively in Year 5 than in Year 4; it was the other way around for the focal pupils in the school in the North. Thirdly, depending on their teachers’ translanguaging practices, the pupils orchestrated their multilingual and multimodal resources in complex ways to co-construct curricular meaning. They flexibly shifted between languages (e.g. school languages, home languages), modes (i.e. verbal mode of communication, non-verbal mode of communication) and modalities (e.g. speaking, singing), thereby combining different linguistic resources (e.g. English, French, German, Luxembourgish, Portuguese), extralinguistic resources (e.g. gaze, gesture, touch) and paralinguistic resources (e.g. intonation, sound effects, volume). The present study draws attention to the resourcefulness with which multilingual primary school pupils of different backgrounds can co-construct curricular meaning. It also emphasises the role that teachers can play in constraining or supporting the pupils’ use of resources in class. This study has implications for teachers in that they can learn how important it is to allow pupils to use their own resources in class. It calls for professional development that will help teachers develop a translanguaging stance, that is, the belief that pupils are able to leverage their semiotic repertoires to participate in class, make meaning and thereby access the curriculum. This is of particular importance for children of lower socio-economic status and with a migration background, who continue to underperform.

Internationalisation and Multilingualism in Doctoral Education: Language Ideologies, Discourse and Positioning
Hofmann, Stephanie UL

Doctoral thesis (2021)


In light of the growing linguistic and cultural diversity among students and researchers, studies on multilingualism in higher education have been increasingly devoting attention to how students and academics use their plurilingual repertoire for writing academic texts. Although the internationalisation of higher education is framed as contributing to a knowledge-based society and economy in Europe, little is known about how students and researchers conceptualise the role of the national language(s) and the linguistic repertoire(s) vis-à-vis English as the lingua franca. In particular, how academic actors negotiate voice when choosing a language for academic writing and publishing has not been closely examined. To rectify this lacuna, this study focuses on the linguistic processes behind doctoral publications and outputs in the context of a multilingual university—the University of Luxembourg (UL), where German and French are official academic languages alongside English. In view of the increased usage of English for writing and publishing doctoral theses, questions arise about the mechanisms and preferences underlying doctoral researchers’ linguistic choices, and how such choices pertain to shifting academic norms. Thus, the overall aim of this exploratory study is to show how doctoral researchers in a multilingual research context—here, the University of Luxembourg—position themselves in relation to macro-level discourses about language and academic success within their complex lingua-cultural and socio-economic setting. The data analysis is based on in-depth problem-centred interviews with five plurilingual doctoral researchers from China, Germany, Luxembourg and Russia. By applying discourse analysis to the interview transcripts, this thesis makes three substantial contributions to the research field.
First, it reveals that despite the dominance of English, doctoral researchers continue to draw on their plurilingual repertoire as a resource for their research and writing processes, albeit for different, ideologically motivated reasons. Second, the study shows that the choice to publish in English is mostly based on shifts in academic norms that centre on economic imperatives, such as competition and hyper-performativity. The prevalence of English and the pressure to publish in international journals therefore seem to lead doctoral researchers to limit the use of the totality of their plurilingual repertoire for writing and publishing theses. Third, this research allows for a detailed understanding of the language ideologies underlying doctoral researchers’ choices in higher education. In particular, it gives insights into the value of the theoretical concepts of positioning and language ideology in discourse analysis for investigating the negotiation of voice.

LA KORA A LA CONFLUENCE DES CULTURES : UNE ETUDE DE LA TRANSMISSION D’UN OBJET CULTUREL DANS UN CONTEXTE TRANSCULTUREL CONTEMPORAIN
Ayegnon, Armel Benvide Osée UL

Doctoral thesis (2021)


The kora is a musical instrument of West African origin. Now played by many individuals who do not necessarily come from its native background, it has been transmitted from its endogenous cultural circle to other, exogenous circles. We are ourselves kora players and belong to this exogenous circle. In our artistic journey and in our personal history, we have felt the need to produce a hybrid style of playing. Thus, as an artist and researcher, our first objective for a doctoral thesis was to investigate processes of hybridization between a so-called "traditional" repertoire and another called "modern". Our academic career led us to question these categorical qualifiers: "traditional", "modern". This questioning took place through the observation of the process of transmission of the kora, starting from the very history of the instrument, which cannot be dissociated from the history of the individuals and communities who adopted it. However, given the great diversity of these histories, it was necessary to design tools for collecting knowledge related to the contexts and the material and immaterial objects surrounding the kora. In order to achieve the primary goal we were pursuing, we ultimately directed this research towards proposing a method for the study of transmission. There is therefore a dialectic between the object studied and the approach used in this research work, and it is this dialectic that corresponds to the main objective of the thesis. Thus, this doctoral thesis aims to answer the following question: how to think about the transmission of the kora, an "object-witness" of the meeting of cultures? It is a twofold research effort, focusing both on the transmission of the kora and on the means to study and realize this transmission. This research subsequently led to an applied anthropology, and even cultural engineering, project. The present text is the summary of an academic adventure between practice and theory.

ENTRAINMENT OF DROPLETS FROM WATER POOLS
Ouallal, Mohammed UL

Doctoral thesis (2021)


The aim of this work is to study the phenomenon of droplet entrainment from water pools. This phenomenon can be a consequence of either boiling or depressurization. In a bubble column, droplets are released from the surface of the pool by bubble burst (in the bubbly flow regime) or by detachment from liquid sheets (in the churn-turbulent flow regime), depending on the hydrodynamics inside the pool. These droplets are then either entrained by the streaming gas (characterized by the superficial gas velocity) or fall back due to gravity. Many experimental studies have been conducted, and several numerical simulations performed, for a better understanding of the entrainment phenomenon. Numerical simulation is a good tool to complement experiments, given the limitations of available data. CFD has shown itself to be a good candidate for such simulations, yet it demands high computational performance and is time-consuming. Lumped Parameter (LP) codes, in contrast, are widely used due to their simplicity and fast execution. The correlations quantifying entrainment that have previously been developed, based on empirical, semi-empirical and theoretical approaches, are limited to a specific flow regime in the water pool, to specific thermal-hydraulic conditions, or even to a specific geometry. For this reason, after an extensive study, an empirical correlation is proposed that covers the flow regimes from bubbly to churn-turbulent and can be applied to a wide range of geometries.
The proposed correlation shows an increase up to a maximum entrainment of about 2×10⁻⁴ at a superficial gas velocity of 0.05 m/s in the bubbly flow regime, a slight decrease to 2×10⁻⁵ in the transition regime for superficial gas velocities up to 0.1 m/s, and a sharp increase as the superficial gas velocity rises to 5 m/s. The experimental database used to develop the present empirical correlation covers a broad range of boundary conditions, namely pressure [1 bar – 15 bar], water pool thermal condition [subcooled – boiling], vessel diameter [0.19 m – 3.2 m], pool diameter [0.1 m – 1.4 m], superficial gas velocity up to 5.0 m/s, and both soluble and insoluble aerosols. The proposed empirical correlation therefore aims to constitute an important tool for transferring experimental results to reactor applications.
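The abstract describes only the shape of the correlation, not its closed form. Purely as an illustration of that shape, the sketch below interpolates log-log between the anchor values reported above; the log-log form and the two endpoints (1×10⁻⁵ at 0.01 m/s and 1×10⁻² at 5 m/s) are assumptions for the example, not values from the thesis.

```python
import math

# Anchor points (superficial gas velocity j_g in m/s, entrainment E):
# peak ~2e-4 at 0.05 m/s (bubbly), dip to ~2e-5 near 0.1 m/s (transition),
# then a sharp rise toward 5 m/s. First and last points are placeholders.
ANCHORS = [(0.01, 1e-5), (0.05, 2e-4), (0.1, 2e-5), (5.0, 1e-2)]

def entrainment(j_g: float) -> float:
    """Piecewise log-log interpolation between the anchor points."""
    if j_g <= ANCHORS[0][0]:
        return ANCHORS[0][1]
    if j_g >= ANCHORS[-1][0]:
        return ANCHORS[-1][1]
    for (x0, y0), (x1, y1) in zip(ANCHORS, ANCHORS[1:]):
        if x0 <= j_g <= x1:
            t = (math.log(j_g) - math.log(x0)) / (math.log(x1) - math.log(x0))
            return math.exp(math.log(y0) + t * (math.log(y1) - math.log(y0)))

for j in (0.05, 0.1, 1.0, 5.0):
    print(f"j_g = {j:4.2f} m/s -> E = {entrainment(j):.2e}")
```

A real correlation would of course also carry the pressure, pool-geometry, and thermal-condition dependencies listed in the database above; this sketch captures only the velocity trend.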

A Study of Hiring Discrimination Using Factorial Survey Experiments: Theoretical and Methodological Insights
Gutfleisch, Tamara Rebecca UL

Doctoral thesis (2021)


This dissertation provides new insights into the study of hiring discrimination related to three dimensions of inequality: gender, ethnicity, and unemployment. Scholars within sociology of work and labor economics widely agree that hiring discrimination based on these dimensions exists. However, the question of who is most affected beyond the classical look at single dimensions is less clear, as are the conditions under which hiring discrimination occurs. Focusing on young labor market entrants, I address these questions with two empirical studies using factorial survey experiments. First, I study how applicants’ gender and unemployment interactively shape recruiters’ hiring intentions in sex-segregated occupations. Second, I study the role of recruiter nationality in hiring discrimination against foreigners in Luxembourg. Moreover, while factorial surveys are increasingly applied in the study of hiring discrimination, they have been criticized for exhibiting low external validity. This dissertation empirically addresses how to overcome this criticism by improving the design of factorial surveys currently applied in employer studies. Specifically, I study whether designs based on real vacancies trigger more valid judgements compared to designs based on hypothetical vacancies. Overall, the findings of this dissertation support the relevance of hiring discrimination for labor market inequalities related to gender, ethnicity, and unemployment, but suggest that the mechanisms underlying hiring discrimination related to these dimensions are more nuanced. First, this dissertation suggests that occupational sex segregation might matter for how unemployment shapes recruiters’ hiring intentions towards men and women. Second, while foreign applicants might generally have better hiring chances if the recruiter is foreign, foreign applicants having the same nationality as the recruiter might benefit less from this situation. 
Finally, this dissertation provides the first evidence that using hypothetical vacancies constitutes a valid approach to studying recruiter decision-making within the limits of factorial surveys.
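The vignette logic of a factorial survey can be sketched briefly: applicant dimensions are fully crossed into a vignette universe, and each recruiter rates a sampled deck. The dimensions and levels below are invented for illustration and are not the instrument used in the dissertation.

```python
import itertools
import random

# Hypothetical vignette dimensions (illustrative only).
DIMENSIONS = {
    "gender": ["male", "female"],
    "nationality": ["Luxembourgish", "Portuguese", "French"],
    "employment_status": ["employed", "unemployed 6 months", "unemployed 2 years"],
}

# Full factorial crossing: every combination of levels is one vignette.
universe = [dict(zip(DIMENSIONS, combo))
            for combo in itertools.product(*DIMENSIONS.values())]
print(len(universe))  # 2 * 3 * 3 = 18 unique vignettes

# Each recruiter rates a random deck sampled without replacement.
random.seed(1)
deck = random.sample(universe, k=6)
```

Effects of each dimension (and their interactions, e.g. gender × unemployment) are then estimated from the recruiters' ratings of their decks; in practice, decks are often drawn with D-efficient designs rather than simple random sampling.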

ETHICAL PERSPECTIVES ON BIG DATA IN AGRI-FOOD: OWNERSHIP AND GOVERNANCE FOR SAFETY
Sapienza, Salvatore UL

Doctoral thesis (2021)


Big data are reshaping the way we interact with technology, thus fostering new applications to improve the safety assessment of foods, a critical goal in the protection of individuals’ right to health and in the flourishing of the food and feed market. An extraordinary amount of information, including real-time data available from multiple sources, is analysed using machine learning approaches aimed at detecting the existence, or predicting the likelihood, of future risks, thus reducing the inaccuracy of risk assessment. Food business operators have to share the results of these analyses when applying to place certain products on the market, whereas agri-food safety agencies (including the European Food Safety Authority) are exploring new avenues to increase the accuracy of their evaluations by processing Big data. Such an informational endowment brings with it opportunities and risks correlated to the extraction of meaningful inferences from data. However, conflicting interests and tensions among the involved entities - the industry, food safety agencies, and consumers - hinder the finding of shared methods to steer the processing of Big data in a sound, transparent and trustworthy way. Taken together, a recent reform of the EU sectoral legislation, the lack of trust in the EU food safety system evidenced by the recent Fitness Check of the General Food Law Regulation, and the presence of a considerable number of stakeholders highlight the need for ethical contributions aimed at steering the development and deployment of Big data applications. At the same time, general Artificial Intelligence guidelines and charters published by European Union institutions and Member States have to be discussed in light of applied contexts, including the one at stake here. This thesis aims to contribute to these goals by discussing what principles should be put forward when processing Big data in the context of agri-food safety risk assessment.
The research focuses on two narrow and intertwined topics - data ownership and data governance - by evaluating how the regulatory framework addresses the challenges raised by Big data analysis in these domains. To do so, it adopts a cross-disciplinary research methodology that takes into account both the technological advances and the policy tools adopted in the European Union, while assuming an ethical perspective when exploring potential solutions. The outcome of the project is a tentative Roadmap that identifies the principles to be observed when processing Big data in this domain and their possible implementations.

DATA PROTECTION BY DESIGN IN THE E-HEALTH CARE SECTOR: THEORETICAL AND APPLIED PERSPECTIVES
Bincoletto, Giorgia UL

Doctoral thesis (2021)


In the digital age, e-health technologies play a pivotal role in the processing of medical information. As personal health data represent sensitive information concerning a data subject, enhancing the data protection and security of systems and practices has become a primary concern. In recent years, there has been increasing interest in the concept of privacy by design (PbD), which aims at developing a product or a service in a way that supports privacy principles and rules. In the European Union, Article 25 of the General Data Protection Regulation establishes a binding obligation to implement data protection by design (DPbD) through technical and organisational measures. This thesis explores how an e-health system could be developed, and how data processing activities could be carried out, so as to apply data protection principles and requirements from the design stage. Currently, there is a lack of clarity and knowledge on the topic among developers, data controllers and stakeholders. The research attempts to bridge the gap between the legal and technical disciplines on DPbD by providing a set of guidelines for the implementation of the principle in the e-health care sector. The research is based on a literature review, legal and comparative analysis, and an investigation of existing technical solutions and engineering methodologies. The thesis thus combines legal comparison with an interdisciplinary method, and the work can be divided into theoretical and applied perspectives. First, it conducts a critical legal analysis of the principle of PbD and studies the DPbD legal obligation and the related provisions. The research then contextualises the rule in the health care field by investigating the legal framework applicable to personal health data processing.
Moreover, the research conducts a comparative analysis focused on the US legal system, since PbD is an international principle and US federal law contains a specific rule for the e-health care sector that mandates the implementation of technical and organisational safeguards. Adopting an applied perspective, the research investigates existing technical methodologies and tools for designing data protection, and proposes a set of comprehensive DPbD organisational and technical guidelines for a crucial case study, namely an Electronic Health Record system.

Mathematical Approaches to Biological Complexity in Systems Biomedicine
Ghaderi, Susan UL

Doctoral thesis (2021)


Living organisms represent perhaps the most complex systems in the universe. This complexity is rooted in the necessity of life to be robust and adaptable. During evolution, life has therefore developed diverse regulatory strategies that are implemented by interactions among a plethora of entities and driven by the indispensable non-equilibrium character of living matter. The resulting intrinsic complexity of biological systems has been a major obstacle to a deep understanding of the underlying principles of life. Systems biology and biomedicine address this challenge with interdisciplinary approaches in which mathematical modeling represents a key element to reveal and dissect the sources of complexity in large and big data sets. In this spirit, the presented thesis applies bottom-up and top-down systems biomedicine approaches to investigate biological complexity at different levels, and from different angles, for metabolism and cell differentiation. First, a bottom-up approach targets the mathematical properties of the stoichiometric matrix, which is the essential mathematical object in biochemical reaction networks and thus in metabolism. Applying graph and hypergraph theory, we present the key mathematical properties of the stoichiometric matrix and exploit biochemical properties of such networks to obtain a moiety-based decomposition of the stoichiometric matrix and, consequently, of biochemical reaction networks. These insights lay the foundation for a more descriptive characterization of metabolism. Second, a novel top-down approach is presented to identify cell differentiation properties from single-cell transcriptomic data by a combination of binarization, information theory and neural networks in the form of self-organizing maps. This distribution-based analysis of cell fate is applied to blood cell differentiation and to the differentiation of induced pluripotent stem cells (iPSCs) into dopaminergic neurons in the context of Parkinson's disease (PD).
This methodology allows for an alternative and efficient characterization of differentially expressed genes and the robust identification of critical points in cell differentiation. Comparing iPSCs carrying a PD-associated mutation in the LRRK2 gene to a healthy control cell line shows a faster maturation process in the disease context. By adapting concepts from non-equilibrium statistical physics, an entropy-based methodology in the form of the Kullback-Leibler divergence is introduced to quantify the non-equilibrium character of cell fate, revealing complementary essential biological processes of differentiation. Finally, a potential integrative approach in the form of Bayesian networks is introduced that will eventually allow for efficient and robust mechanistic inference from big data. In particular, the optimization approach is based on Markov chain Monte Carlo methods for sampling from distributions with non-smooth potential functions and uses Langevin stochastic equations for an advanced optimization strategy. The potential of the introduced approach is demonstrated by a first application to a logistic regression function. Overall, the thesis applies complementary mathematical techniques to develop new tools for the characterization of biological complexity and the identification of underlying principles that appear in living systems.
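The Kullback-Leibler divergence at the core of the entropy-based methodology can be sketched on toy data. This is not the thesis pipeline: the three-gene binary patterns, the counts, and the zero-probability smoothing below are invented for illustration of D_KL(P‖Q) = Σₓ P(x) log(P(x)/Q(x)) between binarized expression distributions at two differentiation stages.

```python
import math
from collections import Counter

def kl_divergence(p_counts, q_counts, eps=1e-9):
    """D_KL(P || Q) between two empirical distributions over patterns."""
    n_p, n_q = sum(p_counts.values()), sum(q_counts.values())
    d = 0.0
    for x in set(p_counts) | set(q_counts):
        p = p_counts.get(x, 0) / n_p
        q = max(q_counts.get(x, 0) / n_q, eps)  # smooth empty bins in Q
        if p > 0:
            d += p * math.log(p / q)
    return d

# Toy binarized profiles: tuples of on/off calls for three genes,
# counted over cells at two (hypothetical) differentiation time points.
day0  = Counter({(0, 0, 1): 50, (0, 1, 1): 30, (1, 1, 1): 20})
day30 = Counter({(0, 0, 1): 10, (0, 1, 1): 30, (1, 1, 1): 60})

print(round(kl_divergence(day30, day0), 3))  # → 0.498
```

A divergence of zero would mean the distribution over expression patterns has not moved; peaks in D_KL along the differentiation trajectory are one way to flag candidate critical points.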

La protection du consommateur dans la vente en ligne en Côte d'Ivoire: Etude à la lumière du Droit européen
Adde, Franck Olivier UL

Doctoral thesis (2021)


Like most African countries, Côte d'Ivoire is experiencing rapid growth in e-commerce. This boom brings challenges of all kinds. Among others, it is taking place in a legal context where consumer protection has not yet found its place, as evidenced by the texts of the laws governing distance selling in Côte d'Ivoire. The thesis proposes an improvement of consumer protection in distance selling in Côte d'Ivoire through a review of these laws in the light of European law. Though it refers to the European legal model, the thesis stresses that literal mimicry of European law is one of the reasons for the ineffectiveness of African laws. The premise of the thesis is that we cannot protect the consumer in Côte d'Ivoire in the same way that European law protects the European consumer. Beyond a simple legal comparison, the thesis offers an in-depth reflection on the circulation of legal models. It examines the relationship between African law and European law from a historical and contextual point of view in order to determine how European law can serve as a model for Ivorian law, so as to offer sufficient protection to consumers without threatening the growth of distance sales.

The ITU and ICAO regulating aeronautical safety services and related radio spectrum
Bergamasco, Federico UL

Doctoral thesis (2021)


The aim of the thesis is twofold. First, it has theoretical value, aiming to shed light on an unexplored area of international law - the overlap in regulatory competence between the ITU and ICAO - in light of the wider phenomenon of the fragmentation of international law. Second, it has a practical purpose, suggesting a preventive way to deal with the legal antinomies that could arise between the ITU and ICAO legal frameworks and that could ultimately represent a hazard to the safety of air navigation and to human life and property. It takes a systematic approach, analyzing both the institutional aspects of the ITU and ICAO and the legal features of their main regulatory instruments, the Radio Regulations and the Standards and Recommended Practices. Subsequently, it explores the criticalities of this interaction in the overlapping domain of aeronautical safety services and examines viable solutions for the near future, at both the legal and the institutional level. This area of law, due to its complexity and its connection with technical and engineering aspects, is largely unexplored by international lawyers. The thesis thus represents an original contribution filling a gap not covered by either generalist or specialized legal literature. It also tackles potential problems that, if left unanswered, could materialize in the near future and represent a serious threat to the safety of international civil aviation. In particular, it recalls the necessity of tighter regulatory coordination between the two international agencies in view of the increasing competition for, and congestion of, the radio spectrum.

PROBABILISTIC CONTENT POPULARITY LEARNING IN PROACTIVE CACHING SYSTEMS
Mehrizi Rahmat Abadi, Sajad UL

Doctoral thesis (2021)

The recent rapid growth of data traffic in mobile networks has stretched the capability of current network architectures. Proactively caching popular contents close to end-users has been proposed as a promising approach to mitigate the issue. A full-fledged proactive cache management mechanism encompasses two interrelated algorithms which need to be carefully designed: content popularity prediction and caching policy. Abundant research has focused on the performance of various caching policies assuming that the content popularity is perfectly known. Nonetheless, the content popularity is unknown in practice and has to be predicted from users' requests. Due to the non-deterministic and time-varying nature of the requests, the prediction is nontrivial. In this thesis, the main focus is to introduce efficient prediction algorithms from a Bayesian viewpoint. The Bayesian approach provides a powerful framework for constructing statistical models which capture uncertainty and are robust to over-fitting. Firstly, we consider the prediction problem under a stationary scenario. To enhance the accuracy of prediction, content features are leveraged and a Bayesian Poisson regressor based on a Gaussian process is proposed. The model can automatically discover hidden patterns in the feature space among the already-existing or seen contents. It also allows predicting the popularities of newly-added or unseen contents whose statistical data is not available in advance. We show that these capabilities of the model can have a significant impact on caching performance. Secondly, we formulate a cooperative content caching problem in order to optimize the aggregated network cost for delivering contents to users. An efficient caching policy requires an accurate prediction of time-varying content popularity. The requests can potentially have interactions over time, among contents, and across locations.
To exploit these patterns, a probabilistic dynamical model based on a canonical tensor decomposition is developed. Additionally, an online learning method that works with streaming data, where content requests arrive sequentially, is designed. Numerical results confirm that modeling time-content-location interactions with the proposed model can improve the performance of the cooperative caching strategy. Last but not least, we take one step further and develop a dynamical model which, besides time-content-location interactions, can also uncover a non-linear temporal trend structure in content requests, through which a more accurate prediction can be attained. Subsequently, a cooperative caching policy is designed which adaptively performs network resource allocation and optimizes content delivery according to the dynamics of content requests. The policy therefore provides a more efficient utilization of network resources. Using simulations, we show that the developed caching mechanism outperforms reference methods which ignore the temporal trend information.
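The Bayesian flavour of popularity prediction can be illustrated with the standard Gamma-Poisson conjugacy, where a Gamma prior on an unknown Poisson request rate yields a closed-form posterior; note this toy is only a building block, not the thesis's actual Gaussian-process or tensor-decomposition models, and the priors, counts, and cache size below are invented for illustration:

```python
# Minimal Bayesian popularity estimate: requests per content are modeled as
# Poisson with a Gamma(alpha, beta) prior on the unknown rate. After observing
# counts over T periods the posterior is Gamma(alpha + sum, beta + T), whose
# mean regularizes short histories -- a single burst is shrunk toward the prior.

def posterior_rate(counts, alpha=1.0, beta=1.0):
    """Posterior mean request rate under a Gamma-Poisson model."""
    return (alpha + sum(counts)) / (beta + len(counts))

def cache_top_k(histories, k):
    """Proactively cache the k contents with the highest posterior rate."""
    rates = {c: posterior_rate(h) for c, h in histories.items()}
    return sorted(rates, key=rates.get, reverse=True)[:k]

histories = {
    "a": [9, 12, 10],   # consistently popular
    "b": [0, 1, 0],     # rarely requested
    "c": [30],          # one burst, short history
}
print(cache_top_k(histories, 2))  # -> ['c', 'a']
```

The posterior-mean rates here are a = 8.0, b = 0.5, and c = 15.5, so the cache keeps "c" and "a"; the shrinkage toward the prior is what the abstract refers to as robustness to over-fitting.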

Enhanced Signal Space Design for Multiuser MIMO Interference Channels
Haqiqatnejad, Alireza UL

Doctoral thesis (2021)

Multiuser precoding techniques are critical to handle the co-channel interference, also known as multiuser interference (MUI), in the downlink of multiuser multi-antenna wireless systems. The convention in designing multiuser precoding schemes has been to treat the MUI as an undesired received signal component. Consequently, the design attempts to suppress the MUI by exploiting the channel state information (CSI), regardless of the instantaneous users’ data symbols. In contrast, it has been shown that the MUI may not always be undesired or destructive, as it is possible to exploit the constructive part of the interference, or even to convert the interfering components into constructive interference (CI), by instantaneously exploiting the users’ intended data symbols. As a result, the MUI can be transformed into a useful source of power that constructively contributes to the users’ received signals. This observation has shifted the viewpoint on multiuser precoding from conventional approaches towards more sophisticated designs that further exploit the data information (DI) in addition to the CSI, referred to as symbol-level precoding (SLP). The SLP schemes can improve the multiuser system’s overall performance in terms of various metrics, such as power efficiency, symbol error rate, and received signal power. However, such improvement comes with several practical challenges, for example, the need to set the modulation scheme in advance, increased computational complexity at the transmitter, and sensitivity to CSI and other system uncertainties. The main goal of this thesis is to address these challenges in the design of an SLP scheme. The existing design formulations for the CI-based SLP problem consider a specific signal constellation; therefore, the design needs to set the modulation scheme in advance. In this thesis, we first elaborate on optimal and relaxed approaches to exploit the CI in a novel systematic way.
This study enables us to develop a generic framework for the SLP design problem, which can be used for modulation schemes with constellations of any given shape and order. Depending on the design criterion, the proposed framework can offer significant gains in the power consumption at the transmitter side, or in the received signal power and the symbol error rate at the receiver side, without increasing the complexity compared to the state-of-the-art schemes. Next, to address the high computational complexity issue, we simplify the design process and propose approximate yet computationally efficient solutions that perform relatively close to the optimal design. We further propose an optimized accelerated FPGA design that allows the real-time implementation of our SLP technique in high-throughput communication systems. Remarkably, the accelerated design enjoys the same per-symbol complexity order as that of the zero-forcing (ZF) precoding scheme. Next, we address the problem of robust SLP design under system uncertainties. In particular, we focus on two sources of uncertainty, namely, the channel and the design process. The related problems are tackled by adopting worst-case and stochastic design approaches and appropriately redefining the precoding optimization problem. The resulting robust schemes can effectively deal with system uncertainties while preserving reliability and power efficiency in the multiuser communication system, at the cost of a slightly increased complexity. Finally, we broaden our scope to new technologies such as millimeter wave (mmWave) communications and massive multiple-input multiple-output (MIMO) systems and revisit the SLP problem for low-cost energy-efficient transmitter architectures. The precoding design problem is particularly challenging in such scenarios, as the related hardware restrictions impose additional (often intractable) constraints on the problem.
The restrictions are typically due to the use of finite-resolution digital-to-analog converters (DACs) or analog components such as switches and/or phase shifters. Two well-known design strategies are considered in this thesis, namely, quantized (finite-alphabet) precoding and hybrid analog-digital precoding. We tackle the related problems by adopting efficient design mechanisms and optimization algorithms, which are novel for the SLP schemes. The proposed techniques are shown to improve the system’s energy efficiency compared to the state-of-the-art.
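The constructive-interference idea at the heart of SLP can be sketched for M-PSK: after rotating a noiseless received point so the intended symbol lies on the positive real axis, the point is constructive if it stays inside the wedge of the symbol's decision region, pushed beyond an SNR-dependent margin. The margin value and test points below are invented for illustration and the wedge test is a common textbook form of the CI region, not necessarily the exact formulation used in the thesis:

```python
import cmath
import math

def is_constructive(r, s, margin=1.0, M=4):
    """M-PSK constructive-interference check: rotate the noiseless received
    point r so the intended symbol s sits on the positive real axis; r is
    constructive if |Im| <= (Re - margin) * tan(pi/M), i.e. it lies deeper
    inside the decision wedge than the required margin."""
    u = r * s.conjugate() / abs(s)   # rotate s onto the real axis
    return abs(u.imag) <= (u.real - margin) * math.tan(math.pi / M)

s = cmath.exp(1j * math.pi / 4)          # a unit-energy QPSK symbol
print(is_constructive(2.0 * s, s))       # scaled deeper along s: True
print(is_constructive(0.5 * s, s))       # inside the margin: False
```

Points satisfying the check contribute useful received power, which is why SLP can save transmit power relative to designs that suppress all interference.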

AGILE FERTIGUNGSSTEUERUNG FÜR (RE-)FABRIKATIONSSYSTEME
Groß, Sebastian UL

Doctoral thesis (2021)

Increasing product variance and individualisation lead to increasing demands for flexibility in production and production control. In the context of remanufacturing, these demands are further intensified by the unknown condition of the used products. Each product to be remanufactured may therefore require an individual route through the remanufacturing system. This process, which puts used products into an "as good as new or better" condition, is receiving increasing attention due to its high ecological and economic potential and legal regulations. In order to meet these requirements, a hybrid control architecture is presented. It consists of centralised and decentralised components. At the decentralised level, all physical production participants are networked with software components and controlled by them. These components can acquire the status and availability of the corresponding manufacturing participants. They can communicate with each other as well as with the central level. The central level is where the scheduling of machines and automated guided vehicles (AGVs) takes place. This is carried out simultaneously, and not sequentially as is the case with currently available control systems. A method based on Constraint Programming (CP) is developed to optimise scheduling. Simulation results show that simultaneous, as opposed to sequential, scheduling enables a reduction of makespan by 35.6 %. Compared to other state-of-the-art methods, the CP-based approach provides the best results, and in a significantly shorter computing time. The control architecture is able to react adequately to unexpected events such as machine failures or new orders. It uses real-life feedback from the shop floor for this purpose. The architecture is implemented as a multi-agent system. The approach is validated by successfully controlling a model factory in a realistic environment.
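Why scheduling machines and AGVs simultaneously can shorten the makespan is easy to see on a toy instance; the brute-force search below stands in for the thesis's CP solver, and the job durations, machine count, and the particular sequential baseline are all invented for illustration:

```python
# Toy machine/AGV scheduling: one AGV delivers each job to a machine
# (transport time), then the machine processes it (processing time).
from itertools import permutations, product

def makespan(order, assign, jobs, machines):
    agv_free, mach_free, end = 0.0, [0.0] * machines, 0.0
    for j in order:
        transport, proc = jobs[j]
        agv_free += transport                  # the single AGV is shared
        k = assign[j]
        start = max(agv_free, mach_free[k])    # wait for delivery AND machine
        mach_free[k] = start + proc
        end = max(end, mach_free[k])
    return end

jobs = [(1, 5), (4, 2), (1, 5)]                # (transport, processing)
M = 2

# Sequential baseline: balance machine loads ignoring the AGV,
# then dispatch transports in job-index order.
assign, load = {}, [0.0] * M
for j in sorted(range(len(jobs)), key=lambda j: -jobs[j][1]):
    k = load.index(min(load))
    assign[j] = k
    load[k] += jobs[j][1]
seq = makespan(range(len(jobs)), assign, jobs, M)

# Simultaneous: search AGV order and machine assignment jointly.
joint = min(makespan(o, a, jobs, M)
            for o in permutations(range(len(jobs)))
            for a in product(range(M), repeat=len(jobs)))
print(seq, joint)  # joint scheduling yields the shorter makespan
```

On this instance the sequential schedule finishes at 11 while the joint schedule finishes at 8, because reordering the AGV trips keeps the long job off the critical path; a CP solver achieves the same effect without exhaustive enumeration.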

Curriculumentwicklung in einer mehrsprachigen Gesellschaft: Das Beispiel Luxemburg
Sattler, Anna-Sabrina UL

Doctoral thesis (2021)

The study starts by shedding light on the specific language situation in Luxembourg’s schools and society and explores the ways in which the national curriculum is constructed utilizing the three official languages of Luxembourg, namely French, German and Luxembourgish. Against this backdrop, it provides a detailed discussion of how specific ideas of a national linguistic identity have evolved in the course of history, and the extent to which they act as the basis for debates on language policy in today’s Luxembourgian school system. Identity formation and curriculum making shall therefore be considered as co-constructing processes, in the sense that the curriculum anticipates future societal ideals. In this respect, the curriculum ‘fabricates’ certain kinds of people and also different kinds of people (Popkewitz, 2008, 2020). Keeping this definition in mind, curriculum design becomes challenging when the school population is highly heterogeneous and multilingual in itself: in addition to its historically and contextually determined multilingualism, the Grand Duchy of Luxembourg is home to numerous immigrant languages, and today almost 48% of the population are foreigners (STATEC, 2020a). Educational policy thus has to integrate pupils of non-Luxembourg origin and languages into the trilingual school system. While considering the different usage of languages in Luxembourgian society and the school system, I examine how certain ideas of multilingualism evolved, and with them the representation of an ideal member of Luxembourg’s society. The dissertation first gives a historical and sociopolitical overview, concentrating on the interrelation of nation building during the 19th century and the creation of a national school system. Following the historical background, the dissertation focuses on the school and curriculum reform process of 2009 in Luxembourg, in the course of which the former education act of 1912 was replaced by a new law on elementary education.
The reform was a response to the below-average performance results in large-scale assessments, first and foremost the PISA study in the year 2000. Furthermore, it was also an attempt to create a far more permeable curriculum across the entire school system, and with it equal opportunities for pupils of different origins. The reform process of 2009 is accordingly seen as a turning point that broke up previously dominant ideas about the intertwinement of language and identity. With regard to these considerations, this study claims that the process of curriculum making is both an explicit and an implicit attempt to control school, and thus social, realities. It is explicit to the extent that educational planning is used as a politically conscious means of social intervention; and implicit because this control simultaneously correlates with cultural-historical practices which create common sense and have therefore subconsciously become part of policy making. Following Ludwik Fleck’s epistemology of thought styles (Fleck, 2017 [1935]), my research analyzes the extent to which specific ways of reasoning and acting in the context of curriculum making implicitly result from the specific cultural-historical conditions underlying the trilingual Luxembourgian school curriculum. Regarding the correlations between the institutional ideal of trilingualism in Luxembourg, the orientation towards international education standards, and the extremely heterogeneous and multilingual structure of Luxembourgian society, the dissertation mainly focuses on the interrelation of the curricular paradigm and the challenges faced in classroom reality. In light of these reflections, the dissertation tackles the following central questions: Which logics of argumentation do different actors within the curriculum making process pursue, and how do they legitimize their positions on language policy?
Which conflicts arise regarding the students’ linguistic repertoire and (supra-)national standards? To what extent do (supra)national educational agendas interfere with the shaping of a Luxembourg language(s) identity? How is the Luxembourg language(s) identity produced and conceived in light of curriculum making? Methodologically, the reform process of 2009 is historicized and the research questions are addressed by a twofold research design. First, I conduct a historiographical evaluation of newspaper articles, parliamentary debates, minutes of curriculum meetings, publications of the ministry of education, and legal texts. Second, the study contains an empirical analysis of 17 expert interviews which I conducted with key figures of the reform process and with those who have been working with the reformed curriculum requirements. Based on the findings of my analyses, the dissertation shows that, and why, Luxembourg, as a kind of laboratory, is relevant to other multilingual contexts in general and in light of immigration processes in particular. The dissertation offers an innovative impetus by looking at the school reform of 2009 through a cultural-historical perspective.

Scalable computational modelling of concrete ageing and degradation
Habera, Michal UL

Doctoral thesis (2021)

The typical lifespan of concrete structures ranges from tens to hundreds of years. During such a long period of time, many external factors, including weather conditions, loading history and environmental pollution, play a crucial role in concrete health and serviceability state. Prediction (by means of computer simulation) of the long-term material properties of concrete can thus provide valuable insights and lead to better reusability of construction components. Several very complex multi-physics models were developed in past decades for this purpose. While these models usually include a wide range of phenomena, the numerical problem which has to be solved poses major challenges and significantly increases the required computational time. This makes a predictive simulation of any larger-scale structure infeasible. On the other hand, commercial codes (ABAQUS, ANSYS, etc.) either lack the material models for a more accurate creep prediction or provide custom material routines which are not computationally optimised. In addition, a specific model and discretisation approach often requires a very specific choice of solvers and preconditioners in order to achieve the good parallel scaling properties required for execution on modern HPC infrastructures. In this thesis, a 3-D material model for reinforced concrete based on the micro-prestress solidification theory (MPS) of Bažant, continuum damage mechanics, and the temperature and humidity model of Kunzel is efficiently implemented in the finite-element software FEniCS. High-performance code for the assembly of residual and tangent operators is automatically derived using the automatic differentiation (AD) capabilities of FEniCS. Seamless parallel integration with the linear algebra solver suite PETSc then offers a wide range of solvers. The combination of AD, code generation techniques (e.g.
FEniCS), and the parallel performance of PETSc solvers for predictive modelling of concrete degradation is not present in the existing literature. It is believed that the results presented here allow the study of reusability and degradation of concrete components also for larger structures, where the conventional existing approaches cannot provide a reasonable computation time.
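The creep behaviour at the core of such material models can be illustrated with a single Kelvin-Voigt unit (a spring of modulus E in parallel with a dashpot of retardation time tau), whose strain under constant stress is eps(t) = (stress/E)(1 - exp(-t/tau)); real MPS-type models chain many such units with aging parameters. The modulus, retardation time, and step size below are invented for illustration:

```python
import math

def kelvin_strain(t, stress, E, tau):
    """Closed-form strain of a Kelvin-Voigt unit under constant stress:
    eps(t) = (stress / E) * (1 - exp(-t / tau))."""
    return stress / E * (1.0 - math.exp(-t / tau))

def step(eps, dt, stress, E, tau):
    """Exponential-algorithm update: exact for constant stress over the
    step, and stable even when dt is much larger than tau, which is why
    creep codes favour it over naive forward Euler."""
    lam = math.exp(-dt / tau)
    return lam * eps + (1.0 - lam) * stress / E

eps, t = 0.0, 0.0
for _ in range(10):                       # ten coarse steps of dt = 5
    eps = step(eps, 5.0, stress=1.0, E=20.0, tau=8.0)
    t += 5.0
print(abs(eps - kelvin_strain(t, 1.0, 20.0, 8.0)) < 1e-12)  # exact update
```

The stepped solution matches the closed form to machine precision because the update integrates the unit's ODE exactly over each step; production codes apply the same idea unit-by-unit across a Kelvin chain at every quadrature point.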

Contribution à la réactualisation théorique du rapport entre le contrat et la relation contractuelle au 21e siècle à partir des travaux de Axel Honneth sur la reconnaissance
Dufour, Pascale UL

Doctoral thesis (2021)

The aim of this interdisciplinary thesis is to engage a new theoretical reflection on the function of the contract. It essentially seeks to explain the link between objective law as applied to contractual relationships and intersubjective recognition. Honneth’s theoretical framework of recognition is used in order to rethink the classic conceptual toolbox of contractual legal theory, specifically the autonomy of the will and contractual liberty. By reevaluating contractual relationships through the prism of recognition and intersubjectivity, it appears that positive law neither voids nor contradicts these principles. Indeed, a fair number of provisions exist in a different legal realm, guaranteeing the minimal individual independence and social liberty of legal parties in their concrete contractual relationship. The contract, governing individual relationships between parties through positive law, takes on an institutional function beyond its symbolic function, at the crossroads of economic, political and social considerations. The limits of the contract’s function are revealed by the critique of the common legal techniques as well as by the level of institutionalization of the contractual relationship. This argumentative decentering stays in line with liberal thinking whilst simultaneously rejecting the postulate of legal individualism, which remains the main critique regarding the principle of the autonomy of the will. This thesis is part of the civil law tradition and mainly draws on examples from Quebec law and French law. The conclusions pertaining to the function of the contract enable various explanations of certain contractual phenomena (contracts of adhesion, consumer contracts, default, pre-contractual obligations, etc.) and question many fundamental theoretical discourses in contractual theory (capacity, contractual justice, binding force of the contract, unforeseen events, etc.).

Scattering Theory for the Hodge Laplacian and Covariant Riesz Transforms
Baumgarth, Robert UL

Doctoral thesis (2021)

Direction of Arrival Estimation and Localization Exploiting Sparse and One-Bit Sampling
Sedighi, Saeid UL

Doctoral thesis (2021)

Data acquisition is a necessary first step in digital signal processing applications such as radar, wireless communications and array processing. Traditionally, this process is performed by uniformly sampling signals at a frequency above the Nyquist rate and converting the resulting samples into digital numeric values through high-resolution amplitude quantization. While the traditional approach to data acquisition is straightforward and extremely well-proven, it may be either impractical or impossible in many modern applications due to the fundamental trade-off between sampling rate, amplitude quantization precision, implementation costs, and usage of physical resources, e.g. bandwidth and power consumption. Motivated by this fact, system designers have recently proposed exploiting sparse and few-bit quantized sampling instead of the traditional way of data acquisition in order to reduce implementation costs and the usage of physical resources in such applications. However, before transitioning from the traditional data acquisition method to the sparsely sampled and few-bit quantized approach, a study on the feasibility of retrieving information from sparsely sampled and few-bit quantized data must first be conducted. This study should specifically seek answers to the following fundamental questions: (1) Is the problem of retrieving the information of interest from sparsely sampled and few-bit quantized data an identifiable problem? If so, what are the identifiability conditions? (2) Under the identifiability conditions, what are the fundamental performance bounds for the problem of retrieving the information of interest from sparsely sampled and few-bit quantized data, and how close are these bounds to those of retrieving the same information from data acquired through the traditional approach?
(3) Does there exist any computationally efficient algorithm for retrieving the information of interest from sparsely sampled and few-bit quantized data that is capable of achieving the corresponding performance bounds? My thesis focuses on finding the answers to the above fundamental questions for the problems of Direction of Arrival (DoA) estimation and localization, which are among the most important information retrieval problems in radar, wireless communication and array processing. In this regard, the first part of this thesis focuses on DoA estimation using Sparse Linear Arrays (SLAs). I consider this problem under three plausible scenarios from a quantization perspective. Firstly, I assume that an SLA quantizes the received signal with a large number of bits per sample such that the resulting quantization error can be neglected. Although the literature presents a variety of estimators under such circumstances, none of them are (asymptotically) statistically efficient. Motivated by this fact, I introduce a novel estimator for DoA estimation from SLA data employing the Weighted Least Squares (WLS) method. I analytically show that the large-sample performance of the proposed estimator coincides with the Cramér-Rao Bound (CRB), thereby ensuring its asymptotic statistical efficiency. Next, I study the problem of DoA estimation from one-bit SLA measurements. The analytical performance of DoA estimation from one-bit SLA measurements has not yet been studied in the literature, where performance analysis has been limited to simulation studies. Therefore, I study the performance limits of DoA estimation from one-bit SLA measurements by analyzing the identifiability conditions and the corresponding CRB. I also propose a new algorithm for estimating DoAs from one-bit quantized data.
I investigate the analytical performance of the proposed method by deriving a closed-form expression for the covariance matrix of its asymptotic distribution and show that it outperforms the existing algorithms in the literature. Finally, the problem of DoA estimation from low-resolution multi-bit SLA measurements, e.g. 2 or 4 bits per sample, is studied. I develop a novel optimization-based framework for estimating DoAs from low-resolution multi-bit measurements. It is shown that increasing the sampling resolution to 2 or 4 bits per sample can significantly increase the DoA estimation performance compared to the one-bit sampling case, while the power consumption and implementation costs remain much lower than in the high-resolution sampling scenario. In the second part of the thesis, the problem of target localization is addressed. Firstly, I consider the problem of passive target localization from one-bit data in the context of the Narrowband Internet-of-Things (NB-IoT). In the recently proposed NB-IoT standard, which trades off bandwidth to gain wide-area coverage, location estimation is hampered by low-sampling-rate receivers and limited-capacity links. I address both of these NB-IoT drawbacks by considering a limiting case where each node receiver employs one-bit analog-to-digital converters, and propose a novel low-complexity nodal delay estimation method. Then, to support the low-capacity links to the fusion center (FC), the range estimates obtained at individual sensors are converted to one-bit data. At the FC, I propose a novel algorithm for target localization with the aggregated one-bit range vector. My overall one-bit framework not only complements the low NB-IoT bandwidth but also supports the design goal of inexpensive NB-IoT location sensing.
Secondly, in order to reduce bandwidth usage while performing high-precision time-of-arrival-based localization, I develop a novel sparsity-aware target localization algorithm with application to automotive radars. The thesis concludes by summarizing the main research findings and with some remarks on future directions and open problems.
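A classical building block behind one-bit covariance-based DoA methods is the arcsine law: for zero-mean, unit-variance jointly Gaussian x and y with correlation rho, E[sign(x)sign(y)] = (2/pi)·arcsin(rho), so the unquantized correlation (and from it the DoAs) can be recovered from one-bit samples. A quick Monte Carlo check, with sample size and rho chosen arbitrarily (this illustrates the law itself, not the thesis's estimators):

```python
import math
import random

def one_bit_corr_estimate(rho, n=200_000, seed=1):
    """Estimate rho from one-bit (sign) samples of correlated Gaussians
    by inverting the arcsine law E[sign(x)sign(y)] = (2/pi)*arcsin(rho)."""
    rng = random.Random(seed)
    s = 0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        y = rho * x + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        s += (1 if x >= 0 else -1) * (1 if y >= 0 else -1)
    # invert the arcsine law to recover the unquantized correlation
    return math.sin(math.pi / 2 * s / n)

print(one_bit_corr_estimate(0.6))  # close to 0.6
```

The recovered value concentrates around the true rho as n grows, which is why identifiability survives one-bit quantization, at the cost of the CRB inflation the abstract refers to.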

Tight-binding perspective on excitons in hexagonal boron nitride
Galvani, Thomas UL

Doctoral thesis (2021)

Two-dimensional materials, which are systems composed of one or several angstrom-thin layers of atoms, have recently received considerable attention for their novel electronic and optical properties. In such systems, the quasi-two-dimensional confinement of electrons as well as the reduced dielectric screening lead to a strong binding of electrons and holes. These bound electron-hole excitations, termed excitons, control many of the peculiar opto-electronic properties of 2D materials. In this context we study hexagonal boron nitride (hBN) as a prototypical 2D system. hBN layers crystallize in a honeycomb lattice similar to graphene, with carbon atoms replaced by boron and nitrogen. Contrary to its carbon cousin, hBN is a wide-band-gap semiconductor, well known for its UV luminescence properties and its particularly strong excitons. We investigate theoretically the excitonic properties of single- and multilayer hBN. To describe excitons, we make use of the Bethe-Salpeter equation, which provides an effective Hamiltonian for electron-hole pairs. We show that, owing to the relatively simple electronic structure of BN systems, it is possible to construct a model that approximately maps the Bethe-Salpeter equation onto an effective tight-binding Hamiltonian with few parameters, which are in turn fitted to ab initio calculations. Using this technique, we are able to study in detail the excitonic series in single-layer hBN. We classify its excitons according to the symmetries of the point group of the crystal lattice, and thus provide precise optical selection rules. Because our model naturally preserves the crystal geometry, we are able to characterize the effects of the lattice, and show how their inclusion affects the excitonic and, in turn, optical properties of hBN compared to a continuum hydrogenoid model. Further, we can access the exciton dispersion, which is a crucial component for the understanding of indirect processes.
We thus examine the dispersion of the lowest bound state. Having established the properties of the single layer, we turn our attention to multilayers. The interaction of several layers leads to a phenomenon known as Davydov splitting. Under this lens, we investigate how the number of layers affects the excitonic properties of hBN, with particular focus on the Davydov splitting of the lowest bound exciton, which is responsible for the main feature of the absorption spectra. We discuss the effects responsible for the splitting of excitons in multilayers, and construct a simple one-dimensional model to provide a qualitative understanding of their absorption spectra as a function of the number of layers. In particular, we show that, from trilayers onwards, we can distinguish inner excitons, which are localized in the inner layers, and surface excitons, which are localized on the outer layers. Remarkably, the lowest bound bright state is found to be a surface exciton. Finally, we briefly present a comparison of tight-binding calculations with ab initio calculations of the absorption spectrum of bulk hBN. We discuss its first peaks, and how they are related to the excitons of single-layer hBN.
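The single-particle starting point of such a model is the standard two-band tight-binding Hamiltonian on the honeycomb lattice, with on-site energies ±delta on the boron and nitrogen sublattices and nearest-neighbour hopping t, giving bands ±sqrt(delta² + t²|f(k)|²) where f(k) is the nearest-neighbour phase sum. The values of delta and t below are rough illustrative numbers, not the ab-initio-fitted parameters of the thesis:

```python
import cmath
import math

# Nearest-neighbour bond vectors of the honeycomb lattice (bond length 1).
DELTAS = [(1.0, 0.0), (-0.5, math.sqrt(3) / 2), (-0.5, -math.sqrt(3) / 2)]

def f(kx, ky):
    """Structure factor sum_d exp(i k . d); it vanishes at the K point."""
    return sum(cmath.exp(1j * (kx * dx + ky * dy)) for dx, dy in DELTAS)

def gap(kx, ky, delta=2.3, t=2.6):
    """Direct band gap 2*sqrt(delta^2 + t^2 |f(k)|^2) at wavevector k
    (delta, t in eV are illustrative placeholders)."""
    return 2.0 * math.sqrt(delta**2 + t**2 * abs(f(kx, ky))**2)

K = (2 * math.pi / 3, 2 * math.pi / (3 * math.sqrt(3)))
print(gap(0.0, 0.0))  # Gamma point: |f| = 3, bands far apart
print(gap(*K))        # K point: f = 0, gap = 2*delta (band-gap minimum)
```

Unlike graphene (delta = 0, Dirac cones at K), the sublattice asymmetry of BN opens a gap 2·delta at K; the excitonic Hamiltonian of the thesis is built on top of exactly this kind of band structure.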

Securing Robots: An Integrated Approach for Security Challenges and Monitoring for the Robotic Operating System
Rivera, Sean UL

Doctoral thesis (2021)

Robotic systems are proliferating in our society due to their capacity to carry out physical tasks on behalf of human beings, with current applications in the military, industrial, agricultural, and domestic fields. The Robotic Operating System (ROS) is the de-facto standard for the development of modular robotic systems. Manufacturing and other industries use ROS for their robots, while larger companies such as Microsoft and Amazon have shown interest in supporting it, and ROS-based systems are projected to make up the majority of robotic systems within the next five years. However, a focus on security is needed, as ROS is notorious for the absence of security mechanisms, placing people in danger both physically and digitally. This dissertation presents the security shortcomings in ROS and addresses them by developing a modular, secure framework for ROS. The research focuses on three features: internal system defense, external system verification, and automated vulnerability detection. This dissertation provides an integrated approach for the security of ROS-enabled robotic systems to set a baseline for the continual development of ROS security. Internal system defense focuses on defending ROS nodes from attacks and ensuring system safety in the event of compromise. ROS-Defender, a firewall for ROS leveraging Software Defined Networking (SDN), and ROS-FM, an extension to ROS-Defender that uses the extended Berkeley Packet Filter (eBPF), are discussed. External system verification centers on when data becomes the enemy, encompassing sensor attacks, network infrastructure attacks, and inter-system attacks. In this section, the use of machine learning to address sensor attacks is demonstrated, eBPF is utilized to address network infrastructure attacks, and consensus algorithms are leveraged to mitigate inter-system attacks.
Automated vulnerability detection is perhaps the most important, focusing on detecting vulnerabilities and providing immediate mitigating solutions to avoid downtime or system failure. Here, ROSploit, an automated vulnerability scanner for ROS, and DiscoFuzzer, a fuzzing system designed for robots, are discussed. ROS-Immunity combines all the components into an integrated tool that, in conjunction with Secure-ROS, provides a suite of defenses for ROS systems against malicious attackers.
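As an illustration of the internal-defense idea described above, the following sketch shows a minimal rule-based access check for publishing on ROS topics, in the spirit of a ROS firewall. The rule format, names, and first-match semantics are hypothetical simplifications invented here; the actual ROS-Defender and ROS-FM implementations enforce policy at the SDN and eBPF level and differ substantially.

```python
# Minimal sketch of a firewall-style access check for ROS topics.
# Rule format and matching logic are hypothetical, not ROS-Defender's.
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    node: str    # node name, or "*" to match any node
    topic: str   # topic name, or "*" to match any topic
    action: str  # "allow" or "deny"


def is_allowed(rules, node, topic, default="deny"):
    """Return True if the first matching rule allows (node, topic).

    Falls back to the default policy when no rule matches
    (default-deny, as is usual for firewalls).
    """
    for r in rules:
        if r.node in ("*", node) and r.topic in ("*", topic):
            return r.action == "allow"
    return default == "allow"


rules = [
    Rule("teleop", "/cmd_vel", "allow"),  # trusted controller may drive
    Rule("*", "/cmd_vel", "deny"),        # everyone else may not
]
```

In this toy policy, only the hypothetical `teleop` node may publish velocity commands; a compromised node attempting the same is rejected by the catch-all deny rule.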

Essai sur les fonctions de la responsabilité contractuelle : l'éclairage du débat doctrinal par la pratique judiciaire française
Boyer, Julie Marie Suzie UL

Doctoral thesis (2021)


In French law, the debate regarding the functions of contractual damages still divides legal scholars. On the one hand, a unitary approach argues that contractual damages should only compensate the creditor for his loss ('réparation'). On the other hand, a dual approach argues that damages should protect the creditor's right to performance ('exécution par équivalent'). As a result, damages are not awarded because the debtor is liable, but because the contract guaranteed the performance of an obligation. However, some scholars take a position that aims to reconcile these conflicting views by recognising the dual function of contractual damages. At a time when the French Civil Code is being remodelled, our work tackles this debate with an innovative approach. Using both a comparative and an empirical method, our research verifies the viability of the theoretical arguments. The comparative perspective seeks to highlight the shortcomings in the structure of the debate in French law. The empirical perspective tests the proposals put forward in the scholarly literature in order to pave the way for a renewed and structured duality of functions of contractual damages.

Multipath Routing on Anonymous Communication Systems: Enhancing Privacy and Performance
de La Cadena Ramos, Augusto Wladimir UL

Doctoral thesis (2021)


We live in an era where mass surveillance and online tracking against civilians and organizations have reached alarming levels. This has resulted in more and more users relying on anonymous communication tools for their daily online activities. Nowadays, Tor is the most popular and most widely deployed anonymization network, serving millions of daily users around the world. Tor promises to hide the identity of users (i.e., their IP addresses) and prevents external agents from disclosing relationships between the communicating parties. However, the benefit of privacy protection comes at the cost of severe performance loss. This performance loss degrades the user experience to such an extent that many users do not use anonymization networks and forgo the privacy protection offered. On the other hand, the popularity of Tor has captured the attention of attackers wishing to deanonymize its users. In response, this dissertation presents a set of multipath routing techniques, both at the transport and at the circuit level, to improve the privacy and performance offered to Tor users. To this end, we first present a comprehensive taxonomy to identify the implications of integrating multipath routing into each design aspect of Tor. Then, we present a novel transport design to address the existing performance unfairness of Tor traffic. In Tor, traffic from multiple users is multiplexed in a single TCP connection between two relays. While this has positive effects on privacy, it negatively influences performance and leads to unfairness, as TCP congestion control gives all the multiplexed Tor traffic as little of the available bandwidth as it gives to every single TCP connection that competes for the same resource. To counter this, we propose to use multipath TCP (MPTCP) to allow for better resource utilization, which, in turn, increases the throughput of Tor traffic to a fairer extent.
Our evaluation in real-world settings shows that using out-of-the-box MPTCP leads to a 15% performance gain. We analyze the privacy implications of MPTCP in Tor settings and discuss potential threats and mitigation strategies. Regarding privacy, a malicious Tor entry node can mount website fingerprinting (WFP) attacks to disclose the identities of Tor users by merely observing patterns of data flows. In response, we propose splitting traffic over multiple entry nodes to limit the observable patterns that an adversary has access to. We demonstrate that our sophisticated splitting strategy reduces the accuracy of all state-of-the-art WFP attacks from more than 98% to less than 16%, without adding any artificial delays or dummy traffic. Additionally, we show that this defense, initially designed against WFP, can also be used to mitigate end-to-end correlation attacks. The contributions presented in this thesis are orthogonal to each other, and their synergy yields a system improved in terms of both privacy and performance. This results in a more attractive anonymization network for new and existing users, which, in turn, increases the security of all users by enlarging the anonymity set.
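The traffic-splitting defense described above can be illustrated with a batched weighted random assignment of cells to circuits: weights are redrawn after each batch so that no single entry node observes a stable traffic pattern. This is a simplified sketch under our own assumptions (function name, batch sizes, and parameters are invented here), not the thesis's actual implementation.

```python
import random


def split_cells(n_cells, n_circuits=3, batch_range=(50, 70), seed=None):
    """Assign n_cells to n_circuits entry-side circuits in batches.

    After each batch, fresh random weights are drawn and normalised,
    so the per-circuit share of traffic keeps changing over time.
    Returns a list of circuit indices, one per cell.
    """
    rng = random.Random(seed)
    assignment = []
    while len(assignment) < n_cells:
        # Fresh, normalised random weights for this batch.
        raw = [rng.random() for _ in range(n_circuits)]
        total = sum(raw)
        weights = [w / total for w in raw]
        batch = rng.randint(*batch_range)
        for _ in range(min(batch, n_cells - len(assignment))):
            assignment.append(rng.choices(range(n_circuits), weights)[0])
    return assignment
```

Because each entry node only sees its own weighted subsequence of cells, the flow pattern available to a fingerprinting adversary at any single entry is incomplete and unstable.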

La controverse constitutionnelle grecque sur l’article 120 § 4 en période de crise. Réflexions sur la compétence controversée du peuple en tant qu’organe de l’État
Mavrouli, Roila UL

Doctoral thesis (2021)


This thesis examines the emergence of two Greek doctrinal discourses during the economic crisis of 2008 concerning the (un)constitutionality of the first austerity memorandum, following the European policies for negotiating public debt. The aim is to delineate the boundaries between legal discourse, legal dogmatics, and legal science, identifying three levels of language. Doctrine, as an activity of understanding, explaining, creating, and critiquing the law, is distinct from knowledge of positive law. Yet sometimes, for fear that a sociological vision of law would deprive it of all predictability, doctrine withdraws into itself, founding its 'science' and consequently claiming knowledge of its object, the law. The question is therefore whether the pro-memorandum and anti-memorandum doctrinal discourses are not descriptive, but rather express values and lay down prescriptions; or whether doctrine, not limiting itself to an activity of knowing its object, interprets and systematises the law in its creative role as a complementary source of law, in constant dialogue with case law and the legislature; or again, whether it can be characterised by a scientific element, namely the critical description of scientific or purportedly scientific activity about the law. In this regard, the epistemological aim of this analysis is to show that legal science, today confronted with a crisis of the dominant positivist paradigm, leads one to consider either the need to change the established dogmas or the possibility that the 'anomaly' will not succeed in refuting the fruitfulness of the paradigm in place.

Human Motion Analysis Using 3D Skeleton Representation in The Context of Real-World Applications: From Home-Based Rehabilitation to Sensing In The Wild
Lemos Baptista, Renato Manuel UL

Doctoral thesis (2021)


Human motion analysis using 3D skeleton representations has been a very active research area in the computer vision community. The popularity of this high-level representation mainly results from the large variety of possible real-world applications, such as video surveillance, video conferencing, human-computer interaction, virtual reality, healthcare, and sports. Despite the effectiveness of recent 3D skeleton-based approaches, their suitability for real-world scenarios still needs to be assessed. Using these approaches in a real-world scenario can give new insights into how to improve them to reach real-world standards. In this thesis, we propose new solutions to mitigate existing constraints on the deployment of 3D skeleton-based approaches in various real-world scenarios. For that purpose, we investigate two human motion analysis applications that are based on 3D skeletons, namely, home-based rehabilitation of functional activities and human motion analysis in the wild. In the first part of this thesis, we propose a low-cost solution designed for supporting home-based rehabilitation of stroke survivors under the remote supervision of a therapist. To that end, we introduce the concept of color-based feedback proposals for guiding patients in real time while exercising. More specifically, color-based codes are visualized to inform the patient about the accuracy of the movement and the adequacy of the posture. Feedback proposals are tailored to each patient's body anthropometry. An initial clinical validation shows an improvement of the posture and of the quality of motion when using the proposed feedback proposals. In the second part of this thesis, we focus on human motion analysis in the wild in the context of cross-view action recognition. We propose and investigate different 3D human pose estimation techniques from a single RGB camera in order to take advantage of 3D skeleton-based approaches.
Indeed, given their 3D nature, 3D skeletons can overcome the challenge of viewpoint variability more easily than 2D-based approaches. To show the relevance of 3D pose estimation techniques in the context of human motion analysis, two different pipelines are proposed. The first pipeline makes use of a per-frame pose estimation approach. Per-frame pose estimation shows temporal inconsistency and small fluctuations in the skeleton joint locations over time. Considering this, the second framework is based on sequence-to-sequence pose estimation, providing temporally consistent skeleton sequences that are more robust for sensing in the wild. These two pipelines show an improvement in recognition accuracy as compared to state-of-the-art approaches on two different well-known datasets. However, despite their relevance, 3D human pose estimation methods present some limitations. For example, their accuracy drops significantly in the presence of unseen environments or situations, e.g., challenging camera locations and outdoor conditions. For that reason, we introduce the 3DBodyTex.Pose dataset, an original dataset designed to address the challenges of camera location and outdoor scenarios in the context of 3D human pose estimation. Moreover, 3DBodyTex.Pose offers the research community new possibilities for the generalization of 3D human pose estimation from monocular in-the-wild images taken from arbitrary camera viewpoints.
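The temporal inconsistency of per-frame pose estimation mentioned above can be illustrated with a simple post-processing step: a centred moving average over the estimated joint trajectories. This stdlib-only sketch (names and window size are our own choices) stands in for the far more capable sequence-to-sequence models used in the thesis.

```python
def smooth_joints(frames, window=5):
    """Temporally smooth a skeleton sequence with a centred moving average.

    frames: list of per-frame skeletons, each a list of (x, y, z) joints.
    Returns a sequence of the same length; the window is truncated at
    the sequence boundaries.
    """
    half = window // 2
    n = len(frames)
    out = []
    for t in range(n):
        lo, hi = max(0, t - half), min(n, t + half + 1)
        skeleton = []
        for j in range(len(frames[t])):
            # Average the j-th joint over the temporal window.
            window_joints = [frames[k][j] for k in range(lo, hi)]
            skeleton.append(tuple(
                sum(joint[d] for joint in window_joints) / len(window_joints)
                for d in range(3)
            ))
        out.append(skeleton)
    return out
```

A moving average damps high-frequency jitter in the joint locations but cannot recover structure the way a learned sequence-to-sequence estimator can, which is the motivation for the second pipeline.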

Exploring the potential of citizen science for more adaptive and sustainable surface water governance in Luxembourg
Pickar, Karl Arthur UL

Doctoral thesis (2021)


This Ph.D. thesis describes a research project that aimed to explore the potential of environmental citizen science to contribute to more adaptive surface water governance in Luxembourg and beyond. Citizen science projects are research projects marked by the active engagement of members of the public. Adaptive governance refers to a type of governance based on the engagement of diverse types of knowledge, perspectives, and stakeholders, on building adaptive capacity in the face of unforeseen change, and on coordination across levels and scales. The research contributes to the conceptual development of citizen science in the context of adaptive governance and provides an example of a co-design process focused on building a citizen science tool for the exploration of social-ecological systems. In addition, the thesis contributes to practical development by identifying a set of opportunities for changing current data collection and meaning-making towards more adaptive surface water governance in Luxembourg, and by gaining first experience with surface water citizen science in Luxembourg while engaging multiple place-based, regional, and national partners. Towards this goal, the research project first examined the current data collection programmes and meaning-making approaches for the governance of surface water bodies in Luxembourg. Prevailing practices are discussed against key criteria for adaptive governance drawn from the relevant academic literature. The research project then examined different approaches to environmental citizen science as alternative and complementary data collection programmes and meaning-making approaches in view of their potential to contribute to more adaptive surface water governance. The research project set out to do so by taking a transdisciplinary sustainability science research approach.
The methodology encompassed (1) semi-structured qualitative interviews with specialists in the water domain and documentary review to gain insights into the current data collection programmes and meaning-making approaches in Luxembourg, (2) the trialling of two contributory surface water citizen science projects based on the Freshwater Watch citizen science tool by Earthwatch, an approach, in which volunteers are called upon to engage in data collection designed by scientists, and (3) the co-creation of surface water citizen science projects with interested groups based in Luxembourg centred around co-design workshops, in which the co-design partners were invited to explore changes and challenges and to develop sets of parameters for investigating the state of surface water bodies based on their research interests. In line with other studies, the findings show that citizen science can, indeed, constitute new sources of data on surface water bodies and, thus, increase data availability. Citizen science can lead to datasets on multiple temporal and spatial levels, and may increase overall transparency (of, for example, data on water quality). It can also contribute to more transparency in the meaning of data and increase the capacity for individual meaning-making. The findings show, in particular, that citizen science can increase the diversity of approaches to data collection and meaning-making, as projects constitute channels for the engagement of different knowledge types and can utilise new funding sources with alternative funding criteria. In addition, the case studies have shown that citizen science is particularly useful for complementing current official data collection, in particular, with respect to data from smaller water bodies, and for linking ecological data with social and technological data for a faster detection of changes in the system and a better grasp of the evolution of drivers of change. 
Interestingly, the study suggests that contributory citizen science may be better suited for the initial engagement of those who are not specialised or professionally engaged in the water domain. Specialists and professionals, in turn, showed a greater interest in engaging in co-design.

Implications of Blockchain-Based Smart Contracts on Contract Law
Bomprezzi, Chantal UL

Doctoral thesis (2021)

Digital History als ‚experimental space‘: Handels- und Transportnetzwerke in Gallien und Germanien sowie die Transportverbindung zwischen Mosel und Saône
Lotz, Jan Philipp UL

Doctoral thesis (2021)


This dissertation consists of two parts. In the first part, the focus lies on the study of trade and transport networks in the Gaulish and German provinces during the Roman Empire based on inscriptions. Different approaches are used to tackle this topic, e.g. networks between different people and families, organisations, and cities. The results show that networks between merchants or merchant families likely existed with the aim of securing and improving one’s own position in the business world. The networks between organisations show the close connection of these organisations among themselves and to the social and political elite. The spatial networks emphasize the important role of Lyon and show long-range connections of the merchants. The second part focuses on the reconstruction of the Roman road between the upper Saône and the Moselle with the help of ancient, medieval, and modern sources as well as digital methods like least-cost path analysis. The results show that the road between Corre, Escles and Portieux is the most likely candidate, but it is also possible that other options existed. The dissertation is also part of the Doctoral Training Unit (DTU) ‚Digital History & Hermeneutics‘. One of the basic ideas of the DTU was a critical reflection on the epistemological and methodological challenges of doing historical research in the digital age and a critical and self-reflexive use of the new digital tools and technologies. Especially during the first part of the study, it became evident that the fragmented nature of the sources leads to several problems when using the new digital methods and tools. This applies even more to Ancient History. Dealing with these problems became another important aspect of the dissertation. There is no point in denying the opportunities of the digital turn or the fact that it will change today’s academic landscape. Digital methods can be useful in historical research, but at the same time, they are still a journey into the unknown.
It is of the utmost importance to keep a critical mindset towards the new developments, methods, and tools. Detailed knowledge of the sources, and especially of their shortcomings, is key. This also applies to their communication and to the documentation of the research process, since digital methods can quickly produce impressive-looking results. Furthermore, not only methodological knowledge is required but, especially and probably even more importantly, methodological awareness.
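The least-cost path analysis mentioned above can be sketched as a shortest-path search over a cost raster derived from the terrain. The following stdlib-only example uses Dijkstra's algorithm on a toy 4-connected grid; real analyses in GIS tools use anisotropic, slope-dependent cost functions and larger neighbourhoods, so this is only a conceptual illustration.

```python
import heapq


def least_cost_path(cost, start, goal):
    """Dijkstra over a 2D cost raster with a 4-neighbourhood.

    Entering a cell costs that cell's value (the start cell is paid too).
    cost: list of rows of non-negative costs; start/goal: (row, col).
    Returns the total cost of the cheapest path, or inf if unreachable.
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

On a raster where the middle row is expensive (e.g. a steep ridge), the cheapest route detours around it rather than crossing directly, which is exactly the behaviour exploited when reconstructing plausible Roman road corridors.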

Comprehensive peptidomic analysis of cell culture models and exosomes
Fougeras, Miriam Rebecca UL

Doctoral thesis (2021)


Cellular endogenous peptides may harbour important signalling functions regulating health and disease. The identification of functional peptides by mass spectrometry is very challenging, but it allows identification and quantification of peptides at large scale and can provide experimental evidence of bioactive peptides. Quantitative approaches can provide insights into the regulation of endogenous peptides; however, they require a robust peptide extraction methodology. We demonstrated that different approaches to peptidomic sample preparation introduce variation and strongly influence the identification of different sets of peptides, showing the need for a critical evaluation of peptide extraction protocols. We identified a global peptidome that is stable across different cell lines and organisms, and the high numbers of identifications allow us to target cellular fractions. An interesting source of bioactive peptides are extracellular vesicles (EVs). EVs transfer signalling molecules for cell-to-cell communication with surrounding but also distant cells. In cancer, EVs are involved in tumour progression, metastasis, and treatment resistance. Their purity is crucial for proteomic and peptidomic analysis, and we consequently described strategies to assess vesicle purity by mass spectrometry and selected a robust sample preparation method for further application. We identified many nuclear proteins enriched in EVs upon hypoxic treatment, even though general cellular markers were overall strongly reduced in EV samples compared to cell samples. We thus conclude that the enriched cargo of the EVs is the result of targeted exocytosis of proteins important for tumour progression, rather than of random co-exportation. An increased export of nuclear proteins involved in DNA replication, repair, or chromatin remodelling suggests a proliferation-related signature of hypoxic EVs.
So far, most EV studies are limited to the EVs’ nucleic acid, lipid, or proteomic cargo, but little is known about their peptidomic cargo. The challenges of EV peptidomics lie in the low amount of starting material EVs provide, compared to the high amount usually needed for peptidomic studies. However, exploiting the peptidomic cargo of EVs might pave the way for a better understanding of signalling pathways. Thus, we aimed at comprehensively studying the cellular and EV peptidome. By combining different peptide extraction methodologies, we detected a vesicle-origin- and a cell-origin-related signature in the peptidome, and we showed the feasibility of parallel peptide and protein extraction and its advantage for precious EV-enriched samples.
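The observation that different extraction protocols identify different sets of peptides can be quantified with a simple set-overlap measure. The sketch below computes the Jaccard similarity between two identification lists; the peptide sequences and protocol names are invented placeholders, not data from the thesis.

```python
def jaccard(ids_a, ids_b):
    """Jaccard similarity between two collections of identified peptides.

    Defined as |intersection| / |union|; returns 1.0 for two empty sets.
    """
    a, b = set(ids_a), set(ids_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0


# Hypothetical identifications from two extraction protocols:
protocol_1 = {"LVNEVTEFAK", "AEFAEVSK", "DLGEEHFK"}
protocol_2 = {"LVNEVTEFAK", "AEFAEVSK", "YLYEIAR", "QTALVELVK"}
overlap = jaccard(protocol_1, protocol_2)  # 2 shared of 5 total -> 0.4
```

A low Jaccard score between protocols is one concrete way to show that sample preparation, not biology, is driving part of the observed peptidome, which is the point of the protocol evaluation described above.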

SIGNALING, FUNCTION AND MODULATION OF CXCR3 VARIANTS
Reynders, Nathan UL

Doctoral thesis (2021)

REFERENCE MEASUREMENTS AND SIMULATIONS OF STATIC AND DYNAMIC CHARACTERISTICS OF PRESTRESSED CONCRETE BRIDGES UNDER OUTDOOR CONDITIONS FOR STRUCTURAL HEALTH MONITORING
Kebig, Tanja UL

Doctoral thesis (2021)


Today’s traffic infrastructure, including its engineering structures such as bridges, is stressed not only by natural ageing and corrosion but also by fatigue. Material fatigue is accelerated by the steadily growing traffic volume and heavier vehicles. Many bridges were built after World War II using the prestressed concrete construction method that emerged at that time. Some bridges are close to the end of their planned service life and show damage such as spalling, cracking, and corrosion. In addition, some bridges have not yet reached the end of their planned service life and already exhibit damage. These bridges require special attention and control, knowing that this issue is highly safety- and cost-relevant at the same time. Structural Health Monitoring (SHM) of bridges aims to detect and localise damage as early as possible in order to take countermeasures to reach at least the planned service life, or even exceed it. Therefore, control systems are needed to support the engineers in addition to visual bridge inspection. Permanent control systems allow real-time monitoring of the bridge behaviour but generate high effort and cost. One approach in SHM is damage detection based on stiffness changes. Damage can alter both the static and the modal properties of a structure. It leads to a loss of stiffness and, consequently, to greater static deflection and, in dynamics, to a decrease of the eigenfrequencies. A prerequisite for early damage detection is essential information about the bridge structure: the best possible knowledge and understanding of the individual bridge behaviour, already in the undamaged state, in order to track changes. This information can be obtained with experiments on a bridge and, in parallel, by simulation with a Finite Element (FE) model. In the next step, the FE model is updated to the measurements so that the reference state of the structure is well matched.
The aim of the simulation is not an ultimate load-bearing analysis, but the simulation of changes in the deflection line, eigenfrequencies, mode shapes and, in the best case, also in the static or dynamic flexibility matrix, or even better the stiffness matrix, due to damage. For this purpose, recurring measurements and simulations are compared with the initial measurements. If changes in the properties occur, model updating can be used to detect, localise, and quantify damage. For the described approach, the detection and localisation of damage depend on the best possible acquisition of the reference state’s characteristics. The most commonly used construction method for bridges is prestressed concrete. Therefore, the main focus of this work is the recording of the undamaged reference state of a post-tensioned bridge beam. The test object was a 26 m long prestressed concrete T-beam, which was saved before the demolition of the real bridge. It was subsequently installed outdoors on the campus of the University of Luxembourg as a simply supported real-size test beam. Since changes in static and dynamic system properties are not only due to damage but can also occur, for example, as a result of temperature fluctuations, the work also focuses on the recording and assessment of influences arising from the bearing and real environmental conditions in the undamaged reference state. For the temperature acquisition, the tests were carried out over around two years. Moreover, the influence of the bearing conditions was tested with three interchangeable movable bearing types. The reference condition was recorded by static, quasi-static, and dynamic tests. Throughout the observation period, the temperature and deflection of the bridge were continuously measured at different positions. For the deflection measurements, a commercial system was used that requires contact with the bridge. In addition, two new non-contact measurement approaches were tested.
One is a camera-based system and the other is a laser-based system. The laser-based measurement method was improved during the recordings and verified with a second laser-based system at the beam. Through the various tests, the deflections, eigenfrequencies, and mode shapes of the bridge were determined. With the information from the experimental part, an FE model was created and fitted as closely as possible to the reference state. The FE model consists mainly of solid elements. For future model updating, a special FE model was created, offering a slice-by-slice adjustment of the beam stiffness. The model was used to perform static deformation and modal analyses. Then, the static and dynamic flexibility matrices were calculated and compared based on the experimental and numerical results. Finally, and in view of the subsequent artificial damaging of the beam, damage scenarios are proposed based on the calculated cracking moment.
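The link between stiffness loss and eigenfrequency decrease exploited above can be made concrete with the analytic Euler-Bernoulli solution for a simply supported beam, where the n-th bending frequency scales with the square root of the bending stiffness EI. The numeric values below are illustrative placeholders, not the measured properties of the 26 m test beam.

```python
import math


def eigenfrequency(n, length, ei, mu):
    """n-th bending eigenfrequency [Hz] of a simply supported beam.

    Euler-Bernoulli theory: f_n = (n*pi/L)^2 * sqrt(EI/mu) / (2*pi),
    with L the span [m], EI the bending stiffness [N m^2] and
    mu the mass per unit length [kg/m].
    """
    return (n * math.pi / length) ** 2 * math.sqrt(ei / mu) / (2 * math.pi)


# Illustrative placeholder values (NOT the real test-beam properties):
L, EI, MU = 26.0, 5.0e9, 2.0e3
f1_reference = eigenfrequency(1, L, EI, MU)
f1_damaged = eigenfrequency(1, L, 0.9 * EI, MU)  # global 10 % stiffness loss
```

Because the frequency scales with sqrt(EI), a global 10 % stiffness loss lowers every eigenfrequency by only about 5 %, which is why precise reference measurements and temperature compensation matter so much for this damage-detection approach.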

Load bearing mechanisms of headed stud shear connections in profiled steel sheeting transverse to the beam
Vigneri, Valentino UL

Doctoral thesis (2021)


Composite steel-concrete floor solutions have become popular in the design of buildings thanks to the efficient combination of the high tensile strength and ductility of steel with reinforced concrete elements in compression. To ensure the longitudinal shear transfer between the downstand steel beam and the concrete slab in composite beams, headed stud shear connections are generally employed with profiled steel sheeting transverse to the supporting beam. However, whilst the steel deck enhances the bending resistance of the slab, the performance of the shear connection decreases. Based on the evaluation of a large database of push-out tests carried out in the last 40 years, several design models have been proposed in recent decades to predict the resistance of the studs, but none of them provides safe and reliable results. This is related to the fact that the proposed design equations do not always appropriately consider the actual resistance mechanisms activated in the shear connection. Also, as the failure modes are typically observed at high displacements, no information on the resistance components at lower displacements is given. Therefore, a deep investigation of the sequence of load-bearing resistance mechanisms of headed stud shear connections was performed, supported by an experimental campaign of 21 full-scale push-out tests and numerical simulations. From the analysis of the experimental results, it was seen that all the samples experienced rib punching at low displacements, followed by concrete pull-out failure or stud rupture. The influence of several structural parameters was also assessed by comparing different test series. It was found that a 200 mm wide recess and the slab depth have a minor impact on the performance of the connection. Instead, the addition of waveform rebars increased the resistance by 26% as well as the slip capacity, whereas the different positions of the wire mesh did not show an important influence.
To specifically investigate the behaviour of the shear connections, the distribution of the compressive stresses in the rib and the plastic hinges developed in the stud connector were evaluated by means of a validated finite element model. From the outcomes of the experimental and numerical study, three main load-bearing phases were distinguished. At low displacements (Phase 1), the concrete is not damaged until the typical cone crack initiates at the edge of the rib, and the stud deforms in bending. Subsequently (Phase 2), while the cracks propagate, the internal forces in the rib redistribute and the resistance is governed by the bearing stresses of the concrete in front of the connector. At large displacements (Phase 3), the front side of the concrete rib is highly damaged, whereas the tension stresses in the stud increase significantly due to pulling forces. For further slip, this can lead to concrete pull-out or stud rupture, as confirmed by the experimental studies. These insights were taken as a basis for the development of three respective mechanical models: a cantilever model, a modified strut-and-tie model (MSTM), and a strut-and-tie model (STM). Whilst the first considers the system as a cantilever beam, the other two reproduce the concrete as a system of compression struts, with the steel sheeting modelled as tie elements. All the resistance functions were analytically derived in consideration of the experimental and numerical results in order to estimate the capacity of the shear connection at different displacements. As the STM focuses on the behaviour at large deformations, only the first two models were considered to predict the actual capacity of the shear connection. The design resistance of these two proposed models was finally calibrated according to the statistical procedure of EN 1990.
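The EN 1990 calibration mentioned in closing starts from a comparison of experimental resistances with model predictions. The sketch below implements only the first steps of the Annex D procedure under a lognormal error assumption: the least-squares correction factor b and the coefficient of variation V_delta of the model error. The subsequent design-value step with the fractile factors k_d,n is omitted, and the function name is our own.

```python
import math


def calibration_stats(r_exp, r_theory):
    """First steps of the EN 1990 Annex D "design assisted by testing" check.

    r_exp: experimental resistances; r_theory: model predictions.
    Returns (b, v_delta) where
      b       = sum(re*rt) / sum(rt^2)      (least-squares slope),
      delta_i = re_i / (b * rt_i)           (error terms),
      v_delta = sqrt(exp(s^2) - 1) with s^2 the sample variance of ln(delta).
    """
    b = sum(e * t for e, t in zip(r_exp, r_theory)) / \
        sum(t * t for t in r_theory)
    log_deltas = [math.log(e / (b * t)) for e, t in zip(r_exp, r_theory)]
    mean = sum(log_deltas) / len(log_deltas)
    s2 = sum((d - mean) ** 2 for d in log_deltas) / (len(log_deltas) - 1)
    return b, math.sqrt(math.exp(s2) - 1.0)
```

A slope b close to 1 with a small V_delta indicates that the mechanical model tracks the push-out test results closely, which is the condition for it to yield an economical design resistance after the fractile step.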

STUDY OF RESORCINOL FORMALDEHYDE LATEX ADHESIVE IN FLEXIBLE RUBBER COMPOSITES: Multiscale characterization of initial structure and its evolution upon thermal treatment
Enganati, Sachin Kumar UL

Doctoral thesis (2021)

Polymeric cord-rubber composites play a vital role in the performance of a tire. They provide dimensional stability, give strength to the sidewall, absorb shock, and serve further functions in the tire. The polymer cords used in these composites are dipped in resorcinol–formaldehyde–latex (RFL) adhesive to improve cord-rubber adhesion. Few studies have been performed to fully understand the RFL interfacial region and its structural changes in the course of tire usage. Moreover, its detailed structure and its evolution during thermal exposure have not been fully resolved. This study is dedicated to understanding the structural properties of the RFL interfacial layer and its evolution upon thermal treatment in cord-rubber composites. Firstly, the properties of pure RFL adhesive were investigated under accelerated thermal exposure. DMA and AFM measurements highlighted an increase in the modulus of the latex phase and the resin phase during thermal treatment in the presence of curatives. However, no changes occurred in the latex phase of the RFL samples without curatives during thermal treatment. These results demonstrated that the increase in modulus of the latex phase during thermal treatment was mainly due to the presence of curatives, through the co-vulcanization process highlighted by NMR. Then, a model composite system comprising RFL-dipped polyamide monofilaments embedded in a rubber matrix was developed. A multiscale methodology was implemented to study the RFL interfacial region when the model composite was subjected to thermal treatment. While the macroscopic interfacial adhesion properties decreased with thermal treatment, an increase in the local modulus of the RF resin and latex phases of the interfacial region was observed by AFM. The SEM-EDX results indicated the presence of oxygen in the RFL region, which facilitated resin hardening in the RF phase. 
Further crosslinking in the latex phase, due to the presence of sulfur curatives, resulted in an increase of the latex phase modulus. Finally, the methodology developed on the model system was applied to multifilament tire composites (polyamide cords embedded in a rubber matrix) to establish a multiscale connection between macroscale adhesion behavior and the local evolution of the RFL interfacial region.

La reformulation de la figure du leader d’opinion au prisme de la réception de l’information des jeunes adultes via les réseaux socionumériques
Lukasik, Stéphanie UL

Doctoral thesis (2021)

Social-digital networks are linked to the user-receiver activity theorized by the Columbia school. In their two-step flow model of communication, the Columbia researchers focused on non-digital social networks. The link between the media system and the social system that the Columbia school anticipated seems all the more relevant with the collection of information via social networks. Henceforth, the media must reckon with social networks and consequently with users-receivers. By sharing information, each user-receiver can become a short-term opinion leader by influencing their secondary groups and by arousing gratifications. The one-off act of sharing materializes this new filter, which symbolizes the passage to the second step of the flow of communication. Sharing is therefore the circumstantial reification of personal influence, which transforms the user-receiver into an opinion leader. In this 2.0 user-receiver model of the new media ecosystem of digital-social networks, 2.0 opinion leaders can be compared to opinion sharers. 2.0 user-receivers are no longer only influenced by discussions led by opinion leaders within the groups to which they belong; they are also influenced, as soon as they receive information on social-digital networks, by the filter operated by these same 2.0 opinion leaders. By mobilizing European and North American scientific literature, we wish to show the relevance of a canonical theoretical framework for the analysis of the uses and practices of social-digital networks through the prism of the reception of information by young adults. The goal of our approach is to link information (which is diffused by the media) to the communication of information (by users-receivers). 
In order to understand the situations of opinion influence at work in circulation and reception activities, the information filter processes will be studied by taking up the structural elements of the model proposed by the Columbia school; in this model, it is the opinion leaders who are the real relays and filters of information. Our approach, both theoretical and methodological, is a return to the original Columbia school literature. In accordance with this literature, we aim to deploy an empirical social analysis steeped in both quantitative and qualitative methodology. We are interested in what "real people of everyday life" choose and do with media on social-digital networks, like the Columbia school, which was interested in the people's choice and in particular in the part played by people in the flow of mass communications. The objective of this research is thus to transpose the Columbia model to the context of social-digital networks in order to update it and redefine, within it, the notion of opinion leader, whose accepted meaning has been altered. Our contribution is therefore a social analysis, within the human and social sciences, of the human communication of information via social-digital networks.

TUMOUR-ASSOCIATED MICROGLIA/MACROPHAGE HETEROGENEITY IN GLIOBLASTOMA
Pires Afonso, Yolanda Sofia UL

Doctoral thesis (2021)

Glioblastoma (GBM) is the most common and aggressive primary brain tumour in adults, characterized by high degrees of both inter- and intra-tumour heterogeneity. GBM cells secrete numerous factors promoting the recruitment and infiltration of cellular players into the local tumour microenvironment (TME). Tumour-associated microglia/macrophages (TAMs) represent the major cell type of the stromal compartment in GBM, playing important roles throughout tumour development. As GBM progresses, these cells are thought to be geared towards a tumour-supportive phenotype; TAMs are therefore pursued as key targets for the development of novel strategies aimed at re-educating them towards anti-tumour phenotypes. However, it is still unclear how these immunosuppressive properties are acquired and whether TAM subsets contribute differently, phenotypically and functionally, to tumour development. Hence, the main goal of this PhD project was to elucidate TAM diversity under defined temporal and spatial settings in GBM. Taking advantage of the syngeneic GL261 and patient-derived orthotopic xenograft GBM mouse models, we comprehensively studied the cellular and transcriptional heterogeneity of TAMs by combining single-cell RNA-sequencing, multicolour flow cytometry, and immunohistological and functional analyses. We demonstrated that, as observed in patients, the myeloid compartment is the most affected and heterogeneous stromal compartment, with microglia and macrophage-like cells acquiring key transcriptional differences and rapidly adapting along GBM progression. Specifically, we uncovered that TAM transcriptional programmes converge over time, suggesting a context-dependent symbiosis mechanism characterized by decreased antigen-presenting cell signatures at late tumour stages. 
In the absence of Acod1/Irg1, a key gene involved in the metabolic reprogramming of macrophages towards an anti-inflammatory phenotype, we detected higher TAM diversity in the TME, displaying increased immunogenicity and correlating with increased lymphocytic recruitment to the tumour site. Additionally, we uncovered that TAMs exhibit niche-specific functional adaptations in the tumour microenvironment, with microglia in the invasive landscapes displaying more immune-reactive profiles than the corresponding cells in the angiogenic tumour phenotypes. Taken together, our data provide insights into the spatial and molecular heterogeneity of TAMs, which dynamically adapt along tumour progression and across specific tumour sites, revealing potentially reactive anti-tumorigenic cell subsets that may be harnessed for therapeutic intervention in GBM.

Mining App Lineages: A Security Perspective
Gao, Jun UL

Doctoral thesis (2021)

Direct inter-app code invocation in Android apps and its evolution: the Android ecosystem offers different facilities to enable communication among app components and across apps, so that rich services can be composed through functionality reuse. At the heart of this system is the inter-component communication (ICC) scheme, which has been studied extensively in the literature. Less known in the community is another powerful mechanism that allows direct inter-app code invocation, which opens up different reuse scenarios, both legitimate and malicious. In this dissertation, we expose the general workflow of this mechanism, which, beyond ICC, enables app developers to access and invoke functionalities (entire Java classes, methods, or object fields) implemented in other apps using official Android APIs. We experimentally showcase how this reuse mechanism can be leveraged to "plagiarize" supposedly protected functionalities. Typically, we could leverage this mechanism to bypass security guards that a popular video broadcaster had placed to prevent access to its video database from outside its own app. We further contribute a static analysis toolkit, named DICIDer, for detecting direct inter-app code invocations in apps. An empirical analysis of the usage prevalence and evolution of this reuse mechanism is then conducted.
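As a toy illustration of what detecting this mechanism can look like, the sketch below checks source text for the co-occurrence of the official Android APIs involved (Context.createPackageContext with the CONTEXT_INCLUDE_CODE flag, followed by getClassLoader). DICIDer itself performs static analysis of app bytecode, not regex matching; this sketch and the victim package name in it are purely hypothetical.

```python
import re

# Android APIs that, used together, enable direct inter-app code invocation:
# Context.createPackageContext(pkg, CONTEXT_INCLUDE_CODE | CONTEXT_IGNORE_SECURITY)
# followed by getClassLoader()/loadClass() and reflective invocation.
INVOCATION_PATTERNS = [
    r"createPackageContext\s*\(",
    r"CONTEXT_INCLUDE_CODE",
    r"getClassLoader\s*\(\s*\)",
]

def flags_direct_invocation(source: str) -> bool:
    """Return True if all tell-tale API patterns co-occur in the source."""
    return all(re.search(p, source) for p in INVOCATION_PATTERNS)

snippet = """
Context ctx = createPackageContext("com.victim.app",
        Context.CONTEXT_INCLUDE_CODE | Context.CONTEXT_IGNORE_SECURITY);
ClassLoader cl = ctx.getClassLoader();
Class<?> c = cl.loadClass("com.victim.app.VideoDb");
"""
print(flags_direct_invocation(snippet))  # True
```

A real detector must additionally resolve string constants and data flow to confirm that the loaded class actually belongs to a different app, which is why bytecode-level static analysis is required.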

Thermodynamics of Quantum Collisions
Lourenço Jacob, Samuel UL

Doctoral thesis (2021)

In this thesis, we show how the thermodynamic concepts of heat and work arise at the level of a quantum collision. We consider the collision of a particle travelling in space, described by a wave packet, with a fixed system having internal energy states. Our main finding is that the energy width of the wave packet, which can be narrow or broad with respect to the system energies, plays a crucial role in identifying energy exchanges due to collisions as heat or work. While heat is generated by narrow wave packets effusing with thermal momenta, work is generated by fast and broad wave packets. We compare our results with models of repeated interactions, which, despite being inspired by quantum collisions, still present shortcomings when used as a basis for a thermodynamic theory. We show how these difficulties are circumvented by a proper application of quantum scattering theory.
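The narrow/broad distinction can be made concrete with a one-line estimate of a Gaussian packet's energy width compared to the system's level spacing. All numbers below are illustrative order-of-magnitude choices, not taken from the thesis.

```python
def energy_width(p0: float, dp: float, m: float) -> float:
    """Energy spread of a nonrelativistic Gaussian wave packet:
    E = p^2 / (2m)  =>  dE ~ (dE/dp) * dp = (p0 / m) * dp  (order of magnitude)."""
    return (p0 / m) * dp

def exchange_regime(dE: float, level_spacing: float) -> str:
    """Narrow packets (dE below the system's level spacing) induce heat-like
    energy exchange; fast, broad packets (dE above it) act as a work source."""
    return "heat-like (narrow)" if dE < level_spacing else "work-like (broad)"

# Illustrative numbers: an electron probing a system with ~1 eV level spacing.
hbar, m_e, spacing = 1.055e-34, 9.11e-31, 1.6e-19
p_thermal = 8.7e-26                 # kg*m/s, order of a thermal momentum at 300 K
dp_narrow = hbar / (2 * 1e-9)       # packet localized to ~1 nm
dp_broad = hbar / (2 * 1e-11)       # packet localized to ~10 pm
print(exchange_regime(energy_width(p_thermal, dp_narrow, m_e), spacing))
print(exchange_regime(energy_width(1e-22, dp_broad, m_e), spacing))  # fast & broad
```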

Large Scale Parallel Simulation For Extended Discrete Element Method
Mainassara Chekaraou, Abdoul Wahid UL

Doctoral thesis (2020)

Numerical models are commonly used to simulate physical processes such as weather forecasts, fluid dynamics, rocket trajectories, building designs, or biomass combustion. These simulations are immensely complex and require a hefty amount of time and computation, making it impossible to run them on a standard modern laptop in a reasonable period. This research work targets large-scale and parallel simulations of DEM and DEM-CFD couplings using high-performance computing techniques and optimizations. This thesis aims to analyze, contribute to, and apply the DEM approach using the XDEM multi-physics toolbox to physical processes that have seen little adoption due to the computational resources and time they require. The first step of this work is to analyze and investigate the performance bottlenecks of the XDEM software. The latter has therefore been profiled, and critical parts such as contact detection were identified as the main bottlenecks of the software. A benchmark has also been set up to assess the performance of each bottleneck using a baseline case. This step is crucial as it defines the general guidelines to follow in optimizing any application. A complete framework has been developed from scratch to test and compare several contact detection algorithms and implementations. The framework, which also has a parallel version, has been used to select an appropriate algorithm and implementation for the XDEM software. The link-cell approach, combined with a new Verlet list concept, proved to be the best option for significantly reducing the computational time of contact detection. The Verlet buffer concept developed during this thesis takes the particle flow regime into account when selecting the skin margin, to further enhance the algorithm's efficiency. 
In order to target high-performance computers for large-scale simulations, a fully hybrid distributed-shared memory parallelization has been introduced by adding a fine-grain OpenMP implementation layer to the existing MPI approach. A shared memory parallelization allows taking full advantage of personal workstations with modern CPU architectures. On the other hand, a hybrid approach is one of the best ways to fully exploit the computing node capacities of modern CPU clusters, which mostly have a NUMA architecture. Macro-benchmarking performance analysis showed that we could exploit 85 computing nodes, representing 2380 cores, on the ULHPC supercomputer at 80% parallel efficiency (speed-up). Finally, a life-size biomass combustion furnace is developed and used as an application test to demonstrate the complex and heavy cases that the XDEM software can accommodate at this time. The furnace is the combustion chamber of a 16 MW geothermal steam super-heater, part of the Enel Green Power "Cornia 2" power plant located in Italy. It proves that DEM in general, and XDEM in particular, can be used for real-world applications that discourage users due to their complexity and especially the time required to deliver results.
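The link-cell idea behind the contact detection optimization can be sketched in a few lines. This 2-D toy is not XDEM's implementation; the cell size and the fixed skin margin are simplifying assumptions (the thesis's Verlet buffer adapts the skin to the flow regime).

```python
from collections import defaultdict

def build_verlet_lists(positions, radius, skin):
    """Link-cell neighbour search with a Verlet 'skin' margin (2-D toy).

    Particles are binned into square cells of side (2*radius + skin), so
    candidate contact pairs need only be searched in the 3x3 block of
    neighbouring cells, reducing the naive O(N^2) sweep to ~O(N). The skin
    lets the resulting pair list be reused over several time steps, until
    some particle has moved more than skin/2.
    """
    cell = 2 * radius + skin
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // cell), int(y // cell))].append(i)

    pairs = set()
    cutoff2 = cell * cell  # contact distance plus skin, squared
    for (cx, cy), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), ()):
                    for i in members:
                        if i < j:
                            xi, yi = positions[i]
                            xj, yj = positions[j]
                            if (xi - xj) ** 2 + (yi - yj) ** 2 <= cutoff2:
                                pairs.add((i, j))
    return pairs

print(build_verlet_lists([(0.0, 0.0), (0.5, 0.0), (10.0, 10.0)], 0.3, 0.1))
```

The trade-off the thesis tunes is visible here: a larger skin means fewer list rebuilds but more candidate pairs to check at every step.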

Unlawful Content Online: Towards A New Regulatory Framework For Online Platforms
Ullrich, Carsten UL

Doctoral thesis (2020)

The thesis reviews the online intermediary liability framework of the E-Commerce Directive (Articles 12-15) along two research questions. 1) Is the current legal framework regulating content liability of online platforms under the ECD still adequate when it comes to combating illegal content? 2) Are there alternative models for intermediary regulation that are better suited to include internet intermediaries in the fight against illegal content? These questions were formulated against the premises that unlawful content online has been a persistent and growing problem and that the position of online intermediaries today makes enhanced responsibilities on their part necessary. The thesis undertakes to analyse the nature of the enforcement challenges in the EU when trying to engage online platforms under the current liability framework, and charts out an alternative approach to holding online platforms responsible. Chapter 3 reviews the current intermediary framework in the EU and the horizontal challenges of holding internet intermediaries liable. This is analysed against the backdrop of the proliferation of the internet and online platforms, sketched out in the preceding Chapter 2. Due to the ambiguity and outdatedness of the ECD provisions, on the one hand, and different national secondary liability traditions, on the other, the liability protections of online platforms have been interpreted and applied differently by EU Member States, and most importantly by courts, leading to an uneven and ineffective enforcement landscape. Chapter 4 analyses sectoral provisions that cover different kinds of offences related to unlawful content and their interactions with the ECD and national legislation on intermediary liability. The thesis evaluates enforcement efforts in the areas of defamation, hate speech, terrorist content, copyright, trademarks, product safety and food safety. 
While none of the national (sectoral) approaches reviewed appears to be effective in enlisting intermediaries in the fight against unlawful content, the platforms themselves have built up powerful private enforcement systems that have come to rival and run counter to public interests and fundamental rights. Chapter 5 introduces case studies of online enforcement in the areas of product and food safety, based on interviews conducted with market surveillance authorities in the EU. The specific enforcement system of EU product regulation poses particular challenges, but also offers some useful lessons for the framework proposed in Chapter 6. This framework eschews today's liability cornerstones and the reliance on self-regulatory tools favoured by EU and national legislators so far. Instead, it proposes an enhanced responsibility system based on harmonised technical standards, as used in the EU's New Approach regulatory method. Technical standards would define duty-of-care obligations in the guise of risk management approaches, which focus on defined (sectoral) harms that arise from the business practices of online platforms. They incorporate prospective responsibilities, such as safety by design for user onboarding, user empowerment, or (algorithmic) content management, as well as specific retrospective responsibilities relating to, e.g., notice and takedown or content identification systems. The standard can be adapted to the type of harm/violation, thus taking account of the specific fundamental rights and public interests involved at the sectoral level.

METABOLIC MODELLING BASED APPROACH TO IDENTIFY TAILORED METABOLIC DRUGS
Bintener, Tamara Jean Rita UL

Doctoral thesis (2020)

Cancer is one of the leading causes of death worldwide, with no efficient cure. Even though currently existing cancer therapies show promising results in early-detected cancers, the treatment of late-stage cancers and metastases still presents many obstacles. The development of treatment resistance and non-responsive cancers hampers the already time-consuming endeavour of drug development. To overcome these hurdles, researchers have started to turn to computational approaches to unravel the mechanisms of cancer in terms of onset, development, resistance mechanisms, metastasis, and immune evasion, among others. The advent of systems biology came not only with the ability to generate large amounts of data but also with novel approaches and techniques to analyse them. For example, metabolic modelling approaches can integrate -omics data into a computational representation of metabolism and reconstruct context-specific metabolic models via model-building algorithms such as FASTCORE (Vlassis et al., 2014), FASTCORMICS (Pacheco et al., 2015), and rFASTCORMICS (Pacheco et al., 2019b). Since then, context-specific metabolic models have found a broad range of applications, from understanding the basic metabolism of microbes and the in silico engineering and optimisation of bacterial strains to applications in human diseases, such as biomarker identification and drug target prediction in several diseases, including cancer. In this thesis, I give an overview of metabolic modelling approaches with a focus on cancer, and of different methods of drug discovery based on the prediction of in silico targets, which enable finding cancer-specific targets to advance personalized medicine. To this end, I have developed a drug prediction workflow that has been successfully used to predict and validate three drugs for repurposing in colorectal cancer, using context-specific metabolic models reconstructed via rFASTCORMICS (Pacheco et al., 2019b). 
Furthermore, we have reconstructed 10,005 context-specific models for cancer and matched controls to investigate the metabolic differences and rewiring strategies used in cancer. A second application of the workflow, for melanoma, is currently in progress: several drugs have been predicted for repurposing in melanoma and will be tested in vitro on different melanoma cell lines. Additionally, the most promising drugs will also be tested in combination with current melanoma treatments.
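As a cartoon of how context-specific extraction works, the sketch below thresholds expression data into a "core" reaction set and keeps only the non-core reactions that the core depends on. The real algorithms (FASTCORE, rFASTCORMICS) instead solve flux-consistency linear programs over a genome-scale network; the mini-network, gene names, and threshold here are hypothetical.

```python
def extract_context_model(reactions, expression, threshold=5.0):
    """Toy sketch of context-specific model extraction.

    Reactions whose associated gene expression exceeds the threshold form
    the 'core' set; non-core reactions are kept only if they produce a
    metabolite that some core reaction consumes. This crude one-step
    support rule stands in for the flux-consistency criterion used by
    FASTCORE-family algorithms.
    """
    core = {r for r, (gene, _, _) in reactions.items()
            if expression.get(gene, 0.0) >= threshold}
    needed = {m for r in core for m in reactions[r][1]}  # substrates of core
    support = {r for r, (_, _, products) in reactions.items()
               if r not in core and needed & set(products)}
    return core | support

# Hypothetical mini-network: reaction -> (gene, substrates, products)
net = {
    "R1": ("gA", [], ["m1"]),      # uptake producing m1
    "R2": ("gB", ["m1"], ["m2"]),  # highly expressed core reaction
    "R3": ("gC", ["m2"], []),      # lowly expressed, not required by core
}
expr = {"gA": 1.0, "gB": 9.0, "gC": 0.5}
print(sorted(extract_context_model(net, expr)))  # ['R1', 'R2']
```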

On reductions of local and global Galois representations modulo prime powers
Torti, Emiliano UL

Doctoral thesis (2020)

Taxation, Data and Destination - An Analysis of the Compatibility of a Digitalized Destination-Based Corporate Tax and a Destination-Based Cash-Flow Tax with International and EU Tax and Data Protection Law Frameworks
Sinnig, Julia Ruth UL

Doctoral thesis (2020)

The digitalized economy poses challenges to traditional corporate taxation that require modifications in the way companies are taxed under international tax law. The thesis discusses two tax proposals to address some of the main issues, such as the intangibility of business assets and operations, the lack of physical presence of businesses in market jurisdictions, and non-taxation under both corporate income tax and value added tax in these jurisdictions. One of the proposals is the destination-based cash-flow tax, which has been elaborated mainly in the economics literature. The other proposal has been drafted by the author and is named the "digitalized destination-based corporate tax". The thesis analyzes the business and legal context in which the two taxes would operate: beyond testing the two model taxes against several digitalized business models, it analyses the relevant legal frameworks in taxation, composed of double taxation conventions, EU law and WTO law. Moreover, considering that the place of destination is determined by reference to the place of customers or users, corporate taxpayers will likely need to collect and process (personal) data of these third parties in order to determine their place of tax liability. Thus, the thesis also examines potential data protection law interferences.

Pressure Sensing with Nematic Liquid Crystal and Carbon Nanotube Networks
Murali, Meenu UL

Doctoral thesis (2020)

The study of colloidal dispersions of nanoparticles in liquid crystals (LCs) is well established. In most works, the particles are mixed into the LC to form suspensions of well-dispersed particles. However, when nanoparticles are physically connected to form networks, the overall macroscopic properties of the ensemble are directly linked to the specific properties of the nanoparticles. Carbon nanotubes (CNTs) are excellent electrical conductors possessing an extremely high aspect ratio, which results in a very low concentration threshold for percolation. Therefore, they form conductive networks from extremely small amounts of CNTs. Another advantage of carbon nanotubes is their capability to transport large current densities without damage by electromigration, maintaining a stable resistance and providing scattering-less paths across several microns. Moreover, the electromechanical properties of CNTs make them an ideal candidate for pressure-sensing technology. The doctoral thesis presented here describes two different approaches to integrating and utilising CNTs in an LC matrix. The first is a template-based assembly of dispersed CNTs onto defect lines in LCs: we show that a variety of nanoparticles dispersed in an LC can be attracted and assembled onto an LC defect line generated in a predetermined location, thereby creating a vertical interconnect of nanoparticles. The second consists of CNT sheets mechanically drawn from a CNT forest, on top of which an LC cell is then built. In this case, we study the electrical and optical properties of the CNT sheets in the presence and absence of liquid crystals, based on DC electrical characterization with distributed electrical contacts. Finally, we discuss how these two approaches can be used to successfully fabricate pressure-sensing devices. 
The pressure response in both sensors is achieved through the change in resistance of the CNTs induced by structural variations under externally applied pressure. Both pressure sensors developed here are easy to fabricate, cost-effective, and recoverable owing to the elasticity and softness of the LC.
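The link between aspect ratio and the very low percolation threshold can be quantified with the standard excluded-volume estimate for randomly oriented slender rods; the 0.7 prefactor is the commonly quoted order-of-magnitude value (it varies in the literature), and the CNT dimensions below are illustrative.

```python
def percolation_threshold(length: float, diameter: float) -> float:
    """Excluded-volume estimate of the critical volume fraction for randomly
    oriented slender rods: phi_c ~ 0.7 * d / L. Order of magnitude only;
    the prefactor depends on the model and waviness of the rods."""
    return 0.7 * diameter / length

# A hypothetical CNT with diameter 10 nm and length 10 um (aspect ratio 1000)
# percolates at a volume fraction of order 0.07%.
print(percolation_threshold(10e-6, 10e-9))
```

This inverse dependence on aspect ratio is why CNT networks become conductive at such small loadings compared to spherical filler particles.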

DEMAND-SIDE-MANAGEMENT MIT WÄRMEPUMPEN IN LUXEMBURG - POTENZIALE UND HERAUSFORDERUNGEN DER WÄRMEPUMPENFLEXIBILITÄT FÜR DIE SYSTEMINTEGRATION DER ERNEUERBAREN ENERGIEN
Bechtel, Steffen UL

Doctoral thesis (2020)

In 2020 the European Union introduced the “Green Deal” and declared the target of climate neutrality until 2050. The necessary measures will lead to a massive roll-out of fluctuating renewable energies such as wind power and photovoltaic. This in turn will lead to an increasing need for flexibility in the energy system. The design of the future European internal market for electricity intends to let end-consumers actively participate by managing their consumption based on variable electricity prices and in that way contributing to the flexibility demand. For private households, these Demand-Side-Management measures target heat pumps in particular. This work analyzes the flexibility potential of heat pumps in residential buildings and addresses challenges in the Luxembourgish context. The time horizon for the evaluation is defined as 2030. The methodology presented in this work is applicable to similar regions in Europe. The research questions are investigated by the means of thermal simulation. The software TRNSYS is used for the building models and heating systems. A Model-Predictive-Control, developed in MATLAB, is sending control signals to the heat pump that are based on variable electricity tariffs. The heat extraction of the thermal energy storage tank is determined by a neural network, so that the Model-Predictive-Control in itself works without an integrated building model. The suitability of the approach is validated by the simulation results. Based on the national developments in the building stock, there is a theoretical heat pump potential of 236-353 MWel that can offer flexibility. The band with arises because of different suppositions for the yearly refurbishment rate. The technical potential is significantly lower and is determined by the developments of the national heat pump market. 
As the data availability for Luxemburg was insufficient, a heat market study was initiated that investigated sales numbers for the period of 2014-2018 and derived scenarios until 2030. The technical potential in conclusion amounts to 30-73 MWel. The insights of the national context are used for the design of the simulation models. The concept of Demand-Side-Management is tested with numerous simulation cases and is then evaluated on aspects of energy efficiency, profitability and load shifting. In total there are three reference buildings, one single-family and one multi-family house, each according to the energetic standard of a new construction, and one single-family house that meets the legal requirements for energetic refurbishment in Luxembourg. In order to demonstrate the influence of the heat source there are simulations with air-to-water as well as geothermal heat pumps. The analysis furthermore considers six different thermal energy storage capacities. The influence of the predictive control strategy is demonstrated by a comparison with reference cases that work with a common control. The flexible electricity tariffs are based on real market data of the EPEX-Spot Day-Ahead auction and is completed with grid fees and taxes in Luxembourg. The simulation results confirm the suitability of the Model-Predictive-Control approach without integrated building model. Air-to-water heat pumps achieve better efficiency and cost reduction than geothermal heat pumps, as they have two ways to reduce the costs: via the variable electricity tariffs and via a performance optimization of the heat pump itself. The performance optimization is the preferred choice of the control strategy if the price profile consists of mainly static components. Buildings with high insulation level show a sharper reaction to price signals than buildings with lower insulation standard. For the latter in return the absolute cost reduction potential is better as the overall energy demand is higher. 
With low capacity thermal energy storage, the energy efficiency and cost reduction potential are limited since the reaction to price signals immediately leads to a temperature rise in the tank counteracting the overall objective by increasing the heat pump consumption. With increasing tank capacity, this aspect improves. Nevertheless, there is a limit where the increasing heat losses of the tank compensate the positive aspects of bigger tanks. As the heating systems are usually not equipped with larger thermal energy storage tanks, there is an extra investment for the end-consumer that needs to be compensated by the cost reduction of the Demand-Side-Management. This profitability is only given for the multi-family house and the less insulated single-family house, equipped with an air-to-water heat pump and small to medium sized storage tanks. Two alternative price profiles are tested in order to demonstrate the influence of the price signals. In the first case, a higher volatility of the prices is presumed, to reflect a higher market share of renewable energies. In the second case variable grid fees are added to the volatile prices to further increase the incentive of Demand-Side-Management. In all simulation cases the cost reduction increases so that that buildings with high thermal insulation and air-to-water heat pump are profitable with medium sized thermal energy storage. At the same time a change of behavior of the predictive controller can be observed as the price signals become more attractive than the aspect of performance optimization, leading to an increased electricity consumption in comparison to the previous price profile. An overall economic potential of 22-53 MWel can be concluded. The numerous constraints for the heat pump operation lead to an implicit load management effect that is difficult to interpret. 
Nevertheless, there is a clear systemic benefit of demand-side management that results from the better performance of air-to-water heat pumps and the highly probable reaction to extreme price signals. Whether grid operators can rely on a large number of heat pumps to stabilize the electricity grid is questionable. The main counterarguments are the limited reliability given the operational constraints and the low electric power compared to e-mobility, which will be the major challenge for low-voltage grids in the near future. Concepts in which energy providers or direct marketers use the flexibility to optimize procurement strategies seem more promising. In this context, profitability is the main open question, which cannot be verified based on the findings unless there is added value stemming from synergy effects that were not considered in this work. In relation to the peak demand of the Luxembourgish energy system, there is a relevant heat pump potential for demand-side management. In the near future the subject should be investigated further, keeping in mind the findings and sensitivities presented in this work.
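The core demand-side-management idea described in this abstract — shifting heat pump runtime into the cheap hours of a day-ahead price profile — can be sketched as a toy scheduler. This is illustrative only, not the thesis' model-predictive controller; the price values, runtime requirement and power rating below are made-up assumptions.

```python
def cheapest_hours_plan(prices, run_hours):
    """Place the required heat pump runtime in the cheapest hours
    of a 24-value day-ahead price profile (EUR/kWh)."""
    cheapest = sorted(range(len(prices)), key=lambda h: prices[h])[:run_hours]
    on = set(cheapest)
    return [1 if h in on else 0 for h in range(len(prices))]

def daily_cost(plan, prices, power_kw=3.0):
    """Electricity cost in EUR of a 1-hour-resolution on/off plan."""
    return sum(on * power_kw * p for on, p in zip(plan, prices))
```

Comparing `daily_cost` of this plan against a plan that runs in fixed hours shows the cost-reduction potential created by flexible tariffs; a real model-predictive controller would additionally model tank temperature limits and storage heat losses.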

Detailed reference viewed: 375 (33 UL)
Full Text
Recycling of gravel wash mud for manufacturing CO2-reduced cement
Thapa, Vishojit Bahadur UL

Doctoral thesis (2020)

The present research project “CO2REDCEM” is carried out at the Laboratory of Solid Structures (LSS) of the University of Luxembourg, in close collaboration with Luxembourgish industrial partners (Cimalux S.A., Carrières Feidt S.A. and Contern S.A.). The project aims to reduce CO2 emissions during cement production by minimising the use of cement clinker or replacing it completely with new binder compositions and concepts containing novel material resources derived from local, unused industrial waste products. One such potential raw material is gravel wash mud (GWM), a waste product of gravel mining. This clayey mud is collected from a sludge reservoir located in the north-west of Luxembourg. Currently, this waste product is landfilled without any further use. However, this prime material offers very promising properties, which require thorough characterisation and verification before its revalorisation as a viable supplementary cementitious material (SCM). Reusing or recycling waste into goods has been among the great ambitions of our and earlier generations, and it will take on a more important role in the future economy. One primary goal of this project is to replace the “end-of-life” concept of gravel wash mud by reusing it as a new raw material. This endeavour brings a double benefit to the environment: the waste is diverted from landfill, and it is revalorised as a prime resource in another system. This research work shares the outcomes of assessing the performance of the prime material GWM within the following binder concepts and binder reaction mechanisms: • The use of gravel wash mud (GWM) powders as a precursor material for the synthesis of alkali-activated binders: a “cementless” binder is synthesised by alkaline activation of processed and calcined GWM powders.
The mitigation of CO2 emissions is achieved through the calcination of the clayey gravel wash mud, which requires less thermal energy than cement clinker production. • Substitution of Ordinary Portland Cement (OPC) by calcined GWM powders: cement and concrete mixtures are prepared based on partial replacement of Portland cement by calcined GWM powders. This study presents the investigations on the involved reaction mechanisms (pozzolanic and cementitious hydration reactions), the optimal mixture configurations and the optimal material treatment processes. • The development of lime-metakaolin-GWM binder concepts: mixtures without cement are developed using GWM and other constituents classified as industrial by-products. This research includes the mineralogical and microstructural characterisation of the constituents, the understanding of the reaction mechanisms, and the optimisation of the mixtures to enhance the performance of the novel cementitious products. This thesis made it possible to assess the performance of the waste product GWM as a valid pozzolanic prime material and to understand the physical, chemical and mineralogical requirements any potential raw material must meet to qualify as an alternative supplementary cementitious material (SCM).

Detailed reference viewed: 262 (14 UL)
Full Text
Boosting Automated Program Repair for Adoption By Practitioners
Koyuncu, Anil UL

Doctoral thesis (2020)

Automated program repair (APR) attracts huge interest from research and industry as the ultimate target in the automation of software maintenance. Towards realizing this automation promise, the research community has explored various ideas and techniques, which increasingly demonstrate that APR is no longer fictional. Although literature techniques constantly set new records in fixing a significant fraction of defects within well-established benchmarks, we are not aware of large-scale adoption of APR in practice. Meanwhile, open-source and commercial organizations have started to reflect on the potential of integrating some automated steps into the software development cycle. Indeed, many current development settings already use a number of tools to automate and systematize various tasks such as code style checking, bug detection, and systematic patching. Our work is motivated by this fact. We advocate that a systematic and empirical exploration of current practice that leverages tools to automate debugging tasks would provide valuable insights for rethinking and boosting the APR agenda towards its acceptance by developer communities. We have identified three investigation axes in this dissertation. First, mining software repositories towards understanding code change properties that could be valuable to guide program repair. Second, analyzing communication channels in software development in order to assess to what extent they could be relevant in a real-world program repair scenario. Third, exploring generic concepts of patching in the literature to establish a common foundation for program repair pipelines that can be integrated with industrial settings.
This dissertation makes the following contributions to the community: • An empirical study of tool support in a real development setting, providing concrete insights on the acceptance, stability and nature of bugs fixed by manually-crafted patches vs. tool-supported patches, and revealing opportunities for improving automated repair techniques. • A novel information-retrieval-based bug localization approach that learns how to compute the similarity scores of various types of features. • An automated mining strategy to infer fix patterns that can be integrated into automated program repair pipelines. • A practical bug-report-driven program repair pipeline.

Detailed reference viewed: 194 (18 UL)
Full Text
Robust Real-time Sense-and-Avoid Solutions for Remotely Piloted Quadrotor UAVs in Complex Environments
Wang, Min UL

Doctoral thesis (2020)

UAV teleoperation is a demanding task: successfully accomplishing the mission without collision requires skill and experience. In real-life environments, current commercial UAVs are to a large extent remotely piloted by amateur human pilots. Due to a lack of teleoperation experience or skills, they often drive UAVs into collisions. Therefore, in order to ensure the safety of the UAV as well as its surroundings, the UAV needs the capability of detecting emergency situations and acting on its own when facing imminent threats. However, the majority of UAVs currently available on the market are not equipped with such a capability. To fill this gap, in this work we present 2D-LIDAR-based sense-and-avoid solutions which are able to actively assist an unskilled human operator in obstacle avoidance, so that the operator can focus on high-level decisions and global objectives in UAV applications such as search and rescue, farming, etc. Specifically, with our novel 2D-LIDAR-based obstacle detection and tracking algorithm, perception-assistive flight control design, progressive emergency evaluation policies, and optimization-based and adaptive virtual cushion force field (AVCFF) based avoidance strategies, our proposed UAV teleoperation assistance systems are capable of obstacle detection and tracking, as well as automatic obstacle avoidance in complex environments where both static and dynamic objects are present. While the optimization-based solution is validated in Matlab, the AVCFF-based avoidance system has been fully integrated with the sensing system and the perception-assistive flight controller on the basis of the Hector Quadrotor open-source framework, and the effectiveness of the complete sense-and-avoid solution has been demonstrated and validated on a realistic simulated UAV platform in Gazebo simulations, where the UAV is operated at high speed.
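A virtual-force avoidance strategy of the kind named above can be pictured with the classic potential-field repulsion term. This is only an illustration of the general idea — the thesis' adaptive virtual cushion force field is a different, adaptive design, and the gain and influence radius below are arbitrary assumptions.

```python
import math

def repulsive_force(robot_xy, obstacle_xy, influence=3.0, gain=1.0):
    """Khatib-style repulsion: zero outside the influence radius,
    growing rapidly as the obstacle gets closer, pointing away from it."""
    dx = robot_xy[0] - obstacle_xy[0]
    dy = robot_xy[1] - obstacle_xy[1]
    d = math.hypot(dx, dy)
    if d == 0.0 or d >= influence:
        return (0.0, 0.0)
    magnitude = gain * (1.0 / d - 1.0 / influence) / (d * d)
    return (magnitude * dx / d, magnitude * dy / d)
```

Summing such forces over all tracked obstacles and feeding the result into the flight controller yields the assistive "push" away from obstacles that the operator experiences.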

Detailed reference viewed: 94 (6 UL)
Full Text
Actionable knowledge for sustainability at the water-land nexus: An inquiry into governance and social learning in two river basins in Luxembourg
Hondrila, Kristina UL

Doctoral thesis (2020)

The thesis offers in-depth empirical insights into diverse factors that foster or hinder collective capacities of actors to address sustainability challenges at the water-land nexus. It focuses on how relations, knowledge, and practices in diverse organisations and professions engaged in governance and social learning processes in the Syr and Upper Sûre river basins in Luxembourg have changed since the EU Water Framework Directive entered into force in 2000. Finding that contradictions in water and land systems grow while spaces for self-organisation and meaning-making shrink, the thesis raises fundamental questions concerning both dominant supply- and productivity-oriented paradigms and managerial approaches to sustainability. New governance approaches are needed to foster social learning and actionable knowledge, embracing interrelations between ecological and social dimensions of sustainability.

Detailed reference viewed: 325 (77 UL)
Full Text
Tertium non datur: Various aspects of value-added (VA) models used as measures of educational effectiveness
Levy, Jessica UL

Doctoral thesis (2020)

Value-added (VA) models are used as measures of educational effectiveness which aim to quantify the “value” that teachers or schools add to students’ achievement, independent of students’ backgrounds. Statistically speaking, teacher or school VA scores are calculated as the part of an outcome variable that cannot be explained by the covariates in the VA model (i.e., the residual). Teachers or schools are classified as effective (or ineffective) if they have a positive (or negative) effect on students’ achievement compared to a previously specified norm value. Although VA models have gained popularity in recent years, there is a lack of consensus concerning various aspects of VA scores. The present dissertation aims to shed light on these aspects, including the state of the art of VA research in the international literature, covariate choice, and model selection for the estimation of VA scores. In a first step, a systematic literature review was conducted, in which 370 studies from 26 countries were classified with a focus on methodological issues (Study 1 of the present dissertation). Results indicated no consensus concerning the applied statistical model type (the majority applied linear regression, followed by multilevel models). Concerning covariate choice, most studies used prior achievement as a covariate, cognitive and/or motivational student data were hardly considered, and there was no consensus on the inclusion or exclusion of students’ background variables. Based on these findings, it was suggested that VA models are better suited to improving the quality of teaching than to accountability and decision-making purposes. Secondly, based on one of the open questions resulting from Study 1 (i.e., covariate choice), the aim of Study 2 was to systematically compare different covariate combinations in the estimation of school VA models.
Based on longitudinal data from primary school students participating in the Luxembourg School Monitoring Programme in Grades 1 and 3, three covariate sets were found to be essential when calculating school VA scores with math or language achievement as dependent variables: prior language achievement, prior math achievement, and students’ sociodemographic and sociocultural background. However, the evaluation of individual schools’ effectiveness varied widely depending on the covariate set that was chosen, casting further doubt on the use of VA scores for accountability purposes. Thirdly, the aim of Study 3 was to investigate statistical model selection, as Study 1 showed no consensus on which model types are most suitable for the estimation of VA scores, with the majority of studies applying linear regression or multilevel models. These classical linear models, along with nonlinear models and different types of machine learning models, were systematically compared to each other. Covariates were kept constant (based on the results from Study 2) across models. Multilevel models led to the most accurate prediction of students’ achievement. However, as school VA scores varied depending on specific model choices, and as these results can only be generalized to a Luxembourgish sample, it was suggested for future research that the model selection process be made transparent and include different specifications in order to obtain ranges of potential VA scores. In conclusion, all three studies imply that the application of VA models for decision-making and accountability should be critically discussed and that VA scores should not be used as the only measure for accountability or high-stakes decisions. In addition, it can be concluded that VA scores are more suitable for informative purposes.
Thus, the findings from the present dissertation prepare the ground for future research, in which schools with stably high VA scores can be investigated further (both qualitatively and quantitatively) to study their pedagogical strategies and learn from them.
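The residual-based definition of a VA score given in this abstract can be made concrete with a toy computation. This is illustrative only — the dissertation compares much richer model families (multilevel, nonlinear, machine learning), and the single prior-achievement covariate and data here are invented.

```python
def school_va_scores(prior, outcome, school):
    """Toy value-added computation: regress outcome on prior achievement
    (simple least squares), then average each school's residuals."""
    n = len(prior)
    mx = sum(prior) / n
    my = sum(outcome) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(prior, outcome))
             / sum((x - mx) ** 2 for x in prior))
    intercept = my - slope * mx
    residuals = [y - (intercept + slope * x) for x, y in zip(prior, outcome)]
    per_school = {}
    for s, r in zip(school, residuals):
        per_school.setdefault(s, []).append(r)
    return {s: sum(rs) / len(rs) for s, rs in per_school.items()}
```

A school whose students outperform the prediction from their prior achievement gets a positive mean residual (positive VA score); adding or removing covariates changes the prediction and hence the scores, which is exactly the sensitivity Study 2 examines.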

Detailed reference viewed: 176 (25 UL)
Full Text
A newform theory for Katz modular forms
Mamo, Daniel Berhanu UL

Doctoral thesis (2020)

In this thesis, a strong multiplicity one theorem for Katz modular forms is studied. We show that a cuspidal Katz eigenform which admits an irreducible Galois representation lies in the level and weight old space of a uniquely associated Katz newform. We also establish multiplicity one results for Katz eigenforms which have reducible Galois representations.

Detailed reference viewed: 68 (3 UL)
Full Text
Immersions of surfaces into SL(2,C) and into the space of geodesics of Hyperbolic space
El Emam, Christian UL

Doctoral thesis (2020)

This thesis mainly treats two developments of the classical theory of hypersurfaces inside pseudo-Riemannian space forms. The former - a joint work with Francesco Bonsante - consists in the study of immersions of smooth manifolds into holomorphic Riemannian space forms of constant curvature -1 (including SL(2,C) with a multiple of its Killing form): this leads to a Gauss-Codazzi theorem, suggests an approach to the holomorphic transitioning of immersions into pseudo-Riemannian space forms and a trick to construct holomorphic maps into the PSL(2,C)-character variety, and leads to a restatement of Bers' theorem. The latter - a joint work with Andrea Seppi - consists in the study of immersions of n-manifolds into the space of geodesics of hyperbolic (n+1)-space. We give a characterization, in terms of the para-Kähler structure of this space of geodesics, of the Riemannian immersions which turn out to be Gauss maps of equivariant immersions into hyperbolic space.

Detailed reference viewed: 44 (2 UL)
Full Text
van der Waals Dispersion Interactions in Biomolecular Systems: Quantum-Mechanical Insights and Methodological Advances
Stoehr, Martin UL

Doctoral thesis (2020)

Intermolecular interactions are paramount for the stability, dynamics and response of systems across chemistry, biology and materials science. In biomolecules they govern secondary structure formation, assembly, docking, regulation and functionality. van der Waals (vdW) dispersion contributes a crucial part to those interactions. As part of the long-range electron correlation, vdW interactions arise from Coulomb-coupled quantum-mechanical fluctuations in the instantaneous electronic charge distribution and are thus inherently many-body in nature. Common approaches to describe biomolecular systems (i.e., classical molecular mechanics) fail to capture the full complexity of vdW dispersion by adopting a phenomenological, atom-pairwise formalism. This thesis explores beyond-pairwise vdW forces and the collectivity of intrinsic electronic behaviors in biomolecular systems and discusses their role in the context of biomolecular processes and function. To this end, the many-body dispersion (MBD) formalism parameterized from density-functional tight-binding (DFTB) calculations is used. The investigation of simple molecular solvents with particular focus on water gives insights into the vdW energetics and electronic response properties in liquids and solvation as well as emergent behavior for coarse-grained models. A detailed study of intra-protein and protein–water vdW interactions highlights the role of many-body forces during protein folding and provides a fundamental explanation for the previously observed “unbalanced” description and over-compaction of disordered protein states. Further analysis of the intrinsic electronic behaviors in explicitly solvated proteins indicates a long-range persistence of electron correlation through the aqueous environment, which is discussed in the context of protein–protein interactions, long-range coordination and biomolecular regulation and allostery.
Based on the example of a restriction enzyme, the potential role of many-body vdW forces and collective electronic behavior for the long-range coordination of enzymatic activity is discussed. Introducing electrodynamic quantum fluctuations into the classical picture of allostery opens up the path to a more holistic view of biomolecular regulation beyond the traditional focus on merely local structural modifications. Building on top of the MBD framework, which describes vdW dispersion within the interatomic dipole limit, a practical extension to higher-order terms is presented. The resulting Dipole-Correlated Coulomb Singles account for multipolar as well as dispersion-polarization-like contributions beyond the random phase approximation by means of first-order perturbation theory over the dipole-coupled MBD state. It is shown that Dipole-Correlated Coulomb Singles become particularly relevant for larger systems and can alter qualitative trends in the long-range interaction under (nano-)confinement. Bearing in mind the frequent presence of confinement in biomolecular systems due to cellular crowding, in ion channels or for interfacial water, this so-far neglected contribution is expected to have broad implications for systems of biological relevance. Ultimately, this thesis introduces a hybrid approach of DFTB and machine learning for the accurate description of large-scale systems on a robust, albeit approximate, quantum-mechanical level. The developed DFTB-NN_rep approach combines the semi-empirical DFTB Hamiltonian with a deep tensor neural network model for localized many-body repulsive potentials. DFTB-NN_rep provides an accurate description of energetic, structural and vibrational properties of a wide range of small organic molecules, much superior to standard DFTB or machine learning.
Overall, this thesis aims to extend the current view of complex (bio)molecular systems as governed by local, (semi-)classical interactions and develops methodological steps towards an advanced description and understanding that includes non-local interaction mechanisms enabled by quantum-mechanical phenomena such as long-range correlation forces arising from collective electronic fluctuations.
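For orientation, the MBD formalism referred to in this abstract models each atom as a charged quantum harmonic oscillator and obtains the dispersion energy from the eigenvalues of the dipole-coupled Hamiltonian. In the standard coupled-oscillator formulation (stated here as general background, not as this thesis' specific DFTB parameterization) the interaction energy reads:

```latex
% \lambda_p: the 3N eigenvalues of the dipole-coupled matrix C;
% \omega_i: atomic characteristic frequencies; \alpha_i^0: static
% polarizabilities; T_{ij}: dipole--dipole interaction tensor.
E_{\mathrm{MBD}} = \frac{1}{2}\sum_{p=1}^{3N}\sqrt{\lambda_p}
  - \frac{3}{2}\sum_{i=1}^{N}\omega_i ,
\qquad
C_{ij} = \omega_i^2\,\delta_{ij}\,\mathbb{1}
  + (1-\delta_{ij})\,\omega_i\,\omega_j
    \sqrt{\alpha_i^0\,\alpha_j^0}\;T_{ij}.
```

The square roots of the eigenvalues are the collective oscillation frequencies, which is why the energy is many-body rather than a sum over atom pairs.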

Detailed reference viewed: 185 (25 UL)
Full Text
End-to-end Signal Processing Algorithms for Precoded Satellite Communications
Krivochiza, Jevgenij UL

Doctoral thesis (2020)

The benefits of full frequency reuse in satellite communications include increased spectral efficiency, physical-layer security, enhanced coverage, and improved Quality of Service. This is made possible by novel digital signal processing techniques for interference mitigation as well as signal predistortion in non-linear high-performance amplifiers. Advanced linear precoding and symbol-level precoding can jointly address the signal processing demands of next-generation satellite communications. Real-time signal precoding increases the computational complexity handled at the gateway, thus requiring low-complexity, high-performance algorithms. Additionally, extensive in-lab and field tests are required to increase the technology readiness level and the rate of industrial adoption. In this thesis, we focus on low-complexity precoding design and in-lab validations. We study the state-of-the-art linear and symbol-level precoding techniques and multi-user MIMO test-beds available in the literature. First, we present a novel low-complexity algorithm for sum-power-minimization precoding design. This technique reduces the transmitted power in a multi-beam satellite system and improves the quality of the received signal at the user terminals. Next, we demonstrate an FPGA-accelerated high-throughput precoding design. The FPGA precoding design is scalable to different numbers of beams and operates in a real-time processing regime on a commercially available software-defined radio platform. One of the highlights of this research is the creation of a real-time in-lab precoding test-bed. The test-bed consists of a DVB-S2X precoding-enabled gateway prototype, a MIMO channel emulator, and user terminals. By using radio frequency to transmit and receive the precoded signals, we can test the performance of different precoding techniques under realistic scenarios and channel impairments.
We demonstrate an end-to-end symbol-level precoded real-time transmission, in which user terminals can acquire and decode the precoded signals, showing an increase in performance and throughput. The in-lab validations confirm the numerical results obtained in this work.
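As background to the interference mitigation mentioned in this abstract, the simplest linear precoder is channel inversion (zero-forcing); a toy 2x2 complex example follows. This is purely illustrative — the thesis develops power-minimizing and symbol-level designs — and the channel values in the usage example are invented.

```python
def zf_precode_2x2(H, s):
    """Zero-forcing precoding for a 2x2 channel: transmit x = H^{-1} s,
    so each user receives its own symbol free of inter-user interference."""
    (a, b), (c, d) = H
    det = a * d - b * c  # assumes an invertible channel matrix
    inv = ((d / det, -b / det), (-c / det, a / det))
    return [inv[0][0] * s[0] + inv[0][1] * s[1],
            inv[1][0] * s[0] + inv[1][1] * s[1]]
```

After transmission, user k observes y_k = sum_j H[k][j] x[j] = s_k. Plain inversion ignores per-antenna power constraints and can boost transmit power on ill-conditioned channels, which is precisely where optimized precoding designs improve on it.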

Detailed reference viewed: 150 (19 UL)
Full Text
Argument Acceptance and Commitment in Formal Argumentation
Dauphin, Jérémie UL

Doctoral thesis (2020)

Detailed reference viewed: 54 (8 UL)
Full Text
MICROGLIA IN PARKINSON'S DISEASE: IDENTITY, HETEROGENEITY AND THEIR CONTRIBUTION TO NEURODEGENERATION
Uriarte Huarte, Oihane UL

Doctoral thesis (2020)

Parkinson's disease (PD) is the most common movement disorder caused by dopamine deficiency owing to a loss of dopaminergic neurons within the substantia nigra (SN). So far there is no cure available, hence understanding the mechanisms by which dopaminergic neurons degenerate is essential for the development of future treatment strategies. Recently, a potential role of neuroinflammation, and especially the activation of microglial cells, was suggested in PD, not as secondary to neuronal death but as primarily implicated in PD pathogenesis. Hence, we set out to study neuroinflammation and microglial activation in the context of PD using in vivo and in vitro mouse models. Firstly, we addressed microglial heterogeneity in the healthy nigrostriatal pathway, the primary circuit affected in PD. Using single-cell RNA sequencing, we identified four different microglial immune subsets within the midbrain and the striatum. Notably, we were able to distinguish a microglial subset with an immune-alerted phenotype, which was mainly composed of microglial cells from the midbrain. The transcriptomic identity of this subset partially resembled that of inflammatory microglia. Additionally, in situ morphological studies, such as 3D reconstruction, revealed that microglia located within the midbrain are less complex than microglia of striatal origin. Secondly, we studied the potential role of neuroinflammation and microglia in PD progression using a PD-like mouse model of α-synuclein (α-syn) seeding and spreading. In this study, pre-formed fibrils (PFF) were injected into the mouse striatum, and a combined neuropathological and transcriptomic analysis was performed at two time points that show distinct and increasing levels and distribution of α-syn pathology across different brain regions (13 and 90 days post-injection).
Interestingly, neuropathological quantifications at 90 days post-injection uncovered that neuroinflammation and microglial reactivity are linked to neurodegeneration. However, the pathology correlates neither with neurodegeneration nor with α-syn aggregation. Importantly, at 13 days post-injection, the transcriptomic analysis of the midbrain revealed the dysregulation of several inflammatory pathways and pointed to the overexpression of neurotoxic inflammatory mediators. Furthermore, at this time point, the presence of α-syn oligomers was detected in certain areas of the brain. Subsequently, we hypothesised that at early stages of PD pathogenesis the presence of α-syn oligomeric forms induces a robust inflammatory response of microglia, which can be further associated with neurodegeneration. Thirdly, to understand whether α-syn oligomers are the main inducers of microglial activation, we further examined the microglial inflammatory response to other α-syn conformations, monomers and fibrils (PFF1 and PFF2). For that, BV2 and primary microglial cells were exposed to the α-syn moieties at different concentrations and incubation times. Electron microscopy revealed some heterogeneity across the synthetic α-syn fibrils, suggesting that PFF1 and PFF2 were composed of different structures. Microglial reactivity to α-syn monomers and fibrils was then investigated by RT-PCR, and no specific response of microglia to α-syn was found. Also, only one of the α-syn fibrils, PFF1, decreased microglial phagocytic activity and reduced the expression of Il1b by microglia after LPS stimulation. Concomitantly with the findings in the α-syn seeding and spreading model, we attempted to elucidate the molecular profile of microglia associated with neurodegeneration. In this particular study, RNA sequencing was performed on isolated microglial cells at an early stage of pathology progression.
In contrast with our previous results, no differences in the microglial profile were found between the PFF and the control mice. Lastly, we investigated potential neuroprotective mechanisms associated with the counter-regulation of microglial reactivity. Considering previous observations that microglia express dopaminergic receptors, we investigated whether apomorphine, a dopamine agonist with anti-oxidant properties, could govern microglial activation. The effect of apomorphine enantiomers was analysed in primary microglia cultures activated by exposure to mutated A53T monomeric α-syn. Herein, we demonstrated that microglial activation can be dampened by apomorphine via the recruitment of Nrf2 to the nucleus, which results in a decreased release of proinflammatory mediators, such as TNFα or PGE2. Taken together, this study provides an additional characterisation of neuroinflammation and microglial cells in the context of PD, which ultimately contributes to a better understanding of their relationship with neurodegeneration.

Detailed reference viewed: 135 (20 UL)
Full Text
Instruction Coverage for Android App Testing and Tuning
Pilgun, Aleksandr UL

Doctoral thesis (2020)

For many people, mobile apps have already become an indispensable part of modern life. Apps entertain, educate, assist us in our daily routines and help us connect with others. However, the advanced capabilities of the devices running these apps, together with the sensitive user data they hold, also make mobile devices an attractive attack target. To get access to sensitive data, adversaries tend to conceal malicious functionality in freely distributed, legitimate-looking apps. The problem of low-quality and malicious apps, spreading at an enormous scale, is especially relevant for one of the biggest software repositories – Google Play. The Android apps distributed through this platform undergo a validation process by Google; however, that is insufficient to confirm their benign nature. To identify dangerous apps, novel frameworks for testing and app analysis are being developed by the Android community. Code coverage is one of the most common metrics for evaluating the effectiveness of these frameworks, and in some of them it is used as an internal metric to guide code exploration. However, when analyzing apps without source code, the Android community relies mostly on method coverage, since there are no reliable tools for measuring finer-grained code coverage in third-party Android app testing. Another stumbling block for testing frameworks is the inability to test an app exhaustively. While a code coverage measurement can indicate an improvement in testing, it is neither possible to reach 100% coverage nor to identify the maximum reachable coverage value for the app. Despite testing, the app still contains large amounts of unexecuted code, which makes it impossible to confirm the absence of potentially malicious code in the part of the app that has not been tested. Existing static debloating approaches aim at app size minimization rather than security and simply debloat unreachable code. However, there is currently no approach to debloat apps based on dynamic analysis information, i.e.
to cut out not-executed code. In this dissertation, we solve these two problems by, first, proposing an efficient approach and a tool to measure code coverage at the instruction level, and second, a dynamic binary shrinking methodology for deleting not executed code from the app. We support our solutions by the following contributions: - An instrumentation approach to measure code coverage at the instruction level. Our technique instruments smali representation of Android bytecode to allow code coverage measurement at the finest level. - An implementation of the instrumentation approach. ACVTool is a self-contained package containing 4K lines of Python code. It is publicly available and can be integrated into different testing frameworks. - An extensive empirical evaluation that shows the high reliability and versatility of our approach. ACVTool successfully executes on 96.9% of apps from our dataset, introduces a negligible instrumentation time and runtime overheads, and its results are complaint to the results of JaCoCo (source code coverage) and Ella (method coverage) tools. - A detailed study on the influence of code coverage metric granularity on automated testing. We demonstrate the usefulness of ACVTool for automated testing techniques that rely on code coverage data in their operation. - A dynamic debloating approach based on ACVTool instruction coverage. We propose Dynamic Binary Shrinking System, a novel methodology created to shrink 3rd-party Android apps towards observed benign functionality on executed code. - An implementation of the dynamic debloating technique incorporated into the ACVCut tool. The tool demonstrates the viability of the Dynamic Shrinking System on two examples. It allows us to cut out not executed code and, thus, provide 100% instruction coverage on explored app behaviors. [less ▲]
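The instruction-level probe idea described in this abstract can be sketched in a few lines. The smali snippet, the probe scheme, and all function names below are invented for illustration and greatly simplified relative to ACVTool's actual instrumentation:

```python
# Toy sketch of instruction-level coverage instrumentation in the spirit
# of ACVTool: every instruction in a (simplified) smali method body is
# paired with a probe index, and executing an instruction flips a bit in
# a coverage array. The smali subset and all names are illustrative, not
# ACVTool's actual implementation.

def instrument(method_lines):
    """Assign a probe index to each instruction; labels/directives get none."""
    instrumented, probes = [], 0
    for line in method_lines:
        stripped = line.strip()
        if not stripped or stripped.startswith((".", ":")):
            instrumented.append((None, line))      # directive or label
        else:
            instrumented.append((probes, line))    # real instruction
            probes += 1
    return instrumented, probes

def run(instrumented, n_probes, executed_pcs):
    """Simulate a run that executed the given probe indices."""
    coverage = [False] * n_probes
    for probe, _ in instrumented:
        if probe is not None and probe in executed_pcs:
            coverage[probe] = True
    return coverage

method = [
    ".method public onCreate()V",
    "    invoke-super {p0}, Landroid/app/Activity;->onCreate()V",
    "    const/4 v0, 0x1",
    "    if-eqz v0, :skip",
    "    return-void",
    ":skip",
    "    return-void",
    ".end method",
]
ins, n = instrument(method)
cov = run(ins, n, executed_pcs={0, 1, 2, 3})   # the :skip branch is never taken
ratio = sum(cov) / n                           # 4 of 5 instructions covered
```

The unexecuted `return-void` after `:skip` is exactly the kind of code a dynamic debloater would cut.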

Foundations of an Ethical Framework for AI Entities: the Ethics of Systems
Dameski, Andrej UL

Doctoral thesis (2020)


The field of AI ethics during the current and previous decade is receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to two main research questions: first, what theory can explain moral scenarios in which AI entities are participants? And second, what theory can explain the process of moral reasoning, decision, and action for AI entities in virtual, simulated, and real-life moral scenarios? This thesis answers these two research questions with its two main contributions to the field of AI ethics: a substantial (ethico-philosophical) contribution and a methodological one. The substantial contribution is a coherent and novel theory named the Ethics of Systems Framework, as well as the possible inception of a new field of study: ethics of systems. The methodological contribution is the creation of its main methodological tool, the Ethics of Systems Interface. The second part of the research effort was focused on testing and demonstrating the capacities of the Ethics of Systems Framework and Interface in modeling and managing moral scenarios in which AI and other entities participate. Further work can focus on building on top of the foundations of the Framework provided here, increasing the scope of moral theories and simulated scenarios, improving the level of detail and parameters to reflect real-life situations, and field-testing the Framework on actual AI systems.

A multifaceted formal analysis of end-to-end encrypted email protocols and cryptographic authentication enhancements
Vazquez Sandoval, Itzel UL

Doctoral thesis (2020)


Largely owing to cryptography, modern messaging tools (e.g., Signal) have reached a considerable degree of sophistication, balancing advanced security features with high usability. This has not been the case for email, which, however, remains the most pervasive and interoperable form of digital communication. As sensitive information (e.g., identification documents, bank statements, or the message in the email itself) is frequently exchanged by this means, protecting the privacy of email communications is a justified concern which has been emphasized in recent years. A great deal of effort has gone into the development of tools and techniques for providing email communications with privacy and security, requirements that were not originally considered. Yet, drawbacks across several dimensions hinder the development of a global solution that would strengthen security while maintaining the standard features that we expect from email clients. In this thesis, we present improvements to security in email communications. Relying on formal methods and cryptography, we design and assess security protocols and analysis techniques, and propose enhancements to implemented approaches for end-to-end secure email communication. In the first part, we propose a methodical process relying on code reverse engineering, which we use to abstract the specifications of two end-to-end security protocols from a secure email solution (called pEp); then, we apply symbolic verification techniques to analyze these protocols with respect to privacy and authentication properties. We also introduce a novel formal framework that enables a system's security analysis aimed at detecting flaws caused by possible discrepancies between the user's and the system's assessment of security. Security protocols, along with user perceptions and interaction traces, are modeled as transition systems; socio-technical security properties are defined as formulas in computation tree logic (CTL), which can then be verified by model checking. Finally, we propose a protocol that aims at securing, against code-corruption attacks, a password-based authentication system designed to detect the leakage of its password database. In the second part, the insights gained from the analysis in Part I allow us to propose both theoretical and practical solutions for improving security and usability aspects, primarily of email communication, but from which secure messaging solutions can benefit too. The first enhancement concerns the use of password-authenticated key exchange (PAKE) protocols for entity authentication in peer-to-peer decentralized settings, as a replacement for out-of-band channels; this brings provable security to a process that was so far empirical, and enables the implementation of further security and usability properties (e.g., forward secrecy, secure secret retrieval). A second idea concerns the protection of weak passwords at rest and in transit, for which we propose a scheme based on the use of a one-time password; furthermore, we consider potential approaches for improving this scheme. The research presented here was conducted as part of an industrial partnership between SnT/University of Luxembourg and pEp Security S.A.
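The transition-system-plus-CTL approach described above can be illustrated with a toy explicit-state check. The states, transitions, and the `EF mitm` property below are invented for illustration; the thesis works with full CTL over much richer socio-technical models:

```python
# Explicit-state sketch of the model-checking approach described above:
# a transition system plus a least-fixpoint computation of the CTL
# property "EF bad" (can a state violating a security expectation be
# reached?). States and transitions are invented for illustration.

def ef(states, trans, bad):
    """States satisfying EF bad: least fixpoint of backward reachability."""
    sat = set(bad)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in sat and any(t in sat for t in trans.get(s, ())):
                sat.add(s)
                changed = True
    return sat

# A user may believe the channel is authenticated while the system state
# still admits a man-in-the-middle; checking EF mitm exposes the mismatch.
states = {"init", "key_exchanged", "verified", "mitm"}
trans = {
    "init": ["key_exchanged", "mitm"],
    "mitm": ["key_exchanged"],
    "key_exchanged": ["verified"],
}
reach_bad = ef(states, trans, bad={"mitm"})
property_violated = "init" in reach_bad    # EF mitm holds in the initial state
```

Real model checkers compute the same fixpoints symbolically rather than by enumerating states.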

Scalable Control of Asynchronous Boolean Networks
Su, Cui UL

Doctoral thesis (2020)


Direct cell reprogramming has been garnering attention for its therapeutic potential for treating the most devastating diseases characterised by defective cells or a deficiency of certain cells. It is capable of reprogramming any kind of abundant cells in the body into the desired cells to restore functions of the diseased organ. It has shown promising benefits for clinical applications, such as cell and tissue engineering, regenerative medicine and drug discovery. A major obstacle in the application of direct cell reprogramming lies in the identification of effective reprogramming factors. Experimental approaches are usually laborious, time-consuming and enormously expensive. Mathematical modelling of biological systems paves the way to study mechanisms of biological processes and identify therapeutic targets with computational reasoning and tools. Among several modelling frameworks, Boolean networks have apparent advantages. They provide a qualitative description of biological systems and thus evade the parametrisation problem, which often occurs in quantitative models. In this thesis, we focus on the identification of reprogramming factors based on asynchronous Boolean networks. This problem is equivalent to the control of asynchronous Boolean networks: finding a subset of nodes, whose perturbations can drive the dynamics of the network from the source state (the initial cell type) to the target attractor (the desired cell type). Before diving into the control problems, we first develop a near-optimal decomposition method and use this method to improve the scalability of the decomposition-based method for attractor detection. The new decomposition-based attractor detection method can identify all the exact attractors of the network efficiently, such that we can select the proper attractors corresponding to the initial cell type and the desired cell type as the source and target attractors and predict the key nodes for the conversion. 
Depending on whether the source state is given or not, we obtain two control problems: source-target control and target control. We develop several methods to solve the two problems using different control strategies. All the methods are implemented in our software CABEAN. Given a control problem, CABEAN can provide a rich set of realistic solutions that manipulate the dynamics in different ways, such that biologists can select suitable ones to validate with biological experiments. We believe our work can contribute to a better understanding of the regulatory mechanisms of biological processes and greatly facilitate the development of direct cell reprogramming.
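The attractor-detection task described above can be illustrated by brute force on a tiny network. The two-node update functions below are invented, and real networks are far too large for this enumeration, which is exactly why the thesis's decomposition-based method matters:

```python
from itertools import product

# Brute-force sketch of attractor detection in an asynchronous Boolean
# network: build the asynchronous state-transition graph of a 2-node toy
# network and keep the states that every reachable state can reach back
# (i.e. states lying in terminal SCCs, the attractors).

def successors(state, funcs):
    """Asynchronous semantics: at most one node is updated per transition."""
    out = set()
    for i, f in enumerate(funcs):
        v = f(state)
        if v != state[i]:
            out.add(state[:i] + (v,) + state[i + 1:])
    return out or {state}                  # fixpoints self-loop

def attractors(n, funcs):
    states = list(product((0, 1), repeat=n))
    reach = {}
    for s in states:                       # forward-reachable set of each state
        seen, stack = {s}, [s]
        while stack:
            for t in successors(stack.pop(), funcs):
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        reach[s] = seen
    # s is in an attractor iff every state reachable from s can reach s back
    return {s for s in states if all(s in reach[t] for t in reach[s])}

funcs = [lambda s: s[1], lambda s: s[0]]   # x1* = x2, x2* = x1
att = attractors(2, funcs)                 # two stable states: (0,0) and (1,1)
```

Source-target control then asks which node perturbations steer the dynamics from one of these attractors to the other.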

Emotion Regulation and Perceived Competence in Dyslexia and ADHD: Analyzing Predictors of Academic and Mental Health Outcomes in Adolescents
Battistutta, Layla UL

Doctoral thesis (2020)


Youths with dyslexia and ADHD are at risk of developing not only academic but also mental health problems. As these negative outcomes are, however, not found equally among all adolescents with dyslexia or ADHD, this dissertation aimed at getting a better understanding of certain predictors and/or consequences of two mediating self-regulating mechanisms. Whereas study 1 focused on perceived competence as an important contributor to academic success or failure, studies 2, 3 and 4 analyzed the role of emotion regulation (ER) in the development of psychopathological symptoms. Study 1 showed that, within a group of adolescents with dyslexia, adolescents with a late diagnosis hold lower general and academic perceived competency beliefs, with potentially negative academic outcomes. Study 2 gave a first insight into ER in dyslexia and revealed that, while dyslexia might not be directly associated with ER difficulties, higher ADHD symptoms contribute to more ER difficulties not only in youths with clinical ADHD but also in youths with dyslexia. These findings were taken a step further in study 3, which showed that ER difficulties mediate the association between ADHD symptoms and further anxiety, depression and conduct disorder symptoms for youths with dyslexia, ADHD and comorbid dyslexia/ADHD. Moreover, study 4 demonstrated that underlying working memory deficits and, to a lesser extent, attentional control and inhibitory deficits are linked with ADHD symptoms, which in turn are associated with ER difficulties and further anxiety and depression symptoms. The findings are discussed within the larger context of perceived competence, ER, and academic and psycho-social outcomes, and potential implications for the conceptualization, diagnosis, prevention and treatment of these disorders are considered.

JOINT DESIGN OF USER SCHEDULING AND PRECODING IN WIRELESS NETWORKS: A DC PROGRAMMING APPROACH
Bandi, Ashok UL

Doctoral thesis (2020)


These scenarios are of relevance and are already being considered in current and upcoming standards, including 4G and 5G. This thesis begins by presenting, in chapter 1, the necessity of the joint design of scheduling and precoding for the aforementioned scenario in detail. Further, the coupled nature of scheduling and precoding, which prevails in many other designs, is discussed. Following this, a detailed survey of the literature dealing with the joint design is presented. Chapter 2 considers the joint design of scheduling and precoding in the unicast scenario for multiuser MISO downlink channels, optimizing network metrics including sum-rate, max-min SINR, and power. Thereafter, different challenges in terms of the problem formulation and subsequent reformulations for different metrics are discussed. Different algorithms, each focusing on optimizing the corresponding metric, are proposed, and their performance is evaluated through numerical results. In chapter 3, the joint design of user grouping, group scheduling, user scheduling, and precoding is considered for MGMC. In contrast to chapter 2, the optimization of a novel metric called multicast energy efficiency (MEE) is considered. This new paradigm for joint design in MGMC poses several additional challenges that cannot be dealt with by the design in chapter 2. Therefore, towards addressing these additional challenges, a novel algorithm is proposed for MEE maximization, and its efficacy is demonstrated through simulations. In chapters 2 and 3, the joint design is considered within a given transmit slot, and temporal design is not considered. In chapter 4, the joint design of scheduling and precoding is considered over a block of multiple time slots for a unicast scenario. Unlike the single-slot design, the multi-slot joint design makes it possible to address users' latency directly in terms of time slots. Noticing this, joint design across multiple slots is considered with the objective of minimizing the number of slots needed to serve all the users, subject to users' QoS and latency constraints. Further, this multi-slot joint design problem is modeled as a structured group sparsity problem. Finally, by rendering the problem as a difference-of-convex (DC) program, high-quality stationary points are obtained through an efficient CCP-based algorithm. In chapter 5, the joint scheduling and precoding schemes proposed in previous chapters are applied to satellite systems. Finally, the thesis concludes with the main research findings and the identification of new research challenges in chapter 6.
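The DC-programming step mentioned above can be shown on a one-dimensional toy problem. The objective below is invented for illustration; the thesis applies the convex-concave procedure (CCP) to far larger scheduling-and-precoding programs:

```python
# One-dimensional sketch of the convex-concave procedure (CCP) for a DC
# program: f(x) = g(x) - h(x) with g(x) = x**4 and h(x) = x**2 both
# convex. Each CCP step linearizes h at the current iterate x_k and
# minimizes the resulting convex majorizer, which here has the closed
# form argmin_x [x**4 - 2*x_k*x] = (x_k / 2) ** (1/3) for x_k >= 0.

def ccp(x0, iters=60):
    x = x0
    for _ in range(iters):
        x = (x / 2.0) ** (1.0 / 3.0)   # closed-form convex subproblem (x >= 0)
    return x

x_star = ccp(1.0)
# Stationary points of f: 4x**3 - 2x = 0, i.e. x in {0, 1/sqrt(2)} for x >= 0;
# starting from x0 = 1, CCP converges to the stationary point x = 1/sqrt(2).
```

As the abstract notes, CCP guarantees convergence to a stationary (KKT) point of the DC problem, not to a global optimum.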

GeoGebraTAO: Geometry Learning using a Dynamic Adaptive ICT-Enhanced Environment to Promote Strong Differentiation of Children’s Individual Pathways
Dording, Carole UL

Doctoral thesis (2020)


In our project, we investigate the scientific validity of a specific self-built Adaptive Learning Tool in the field of dynamic geometry, with a focus on the individual learning pathways of a highly diverse student population. A total of 164 children in Luxembourgish elementary schools, aged between 10 and 13 years, acted as the test group and explored elementary geometric concepts in a sequence of learning assignments created with GeoGebra, a dynamic mathematics system integrated into the computer-assisted testing framework TAO. They actively built new knowledge in an autonomous way and at their own pace, with only minor support interventions by their teacher. Based on easily exploitable data collected within a sequence of exploratory learning assignments, the GEOGEBRATAO tool analyses the answers provided by the child and performs a diagnostic of the child’s competencies in geometry. Based on this outcome, the tool identifies children struggling with geometry concepts and subsequently proposes a differentiated individual pathway through scaffolding and feedback practices. The children can voluntarily watch short video clips intended to help them better understand any task they might have difficulty with. A spaced-repetition feature is another highly useful component of the tool. Pre- and post-test results show that the test group (working with the GEOGEBRATAO tool) and a parallel control group (following a traditional paper-and-pencil geometry course) both increased their geometry skills and knowledge through the training program, with the test group performing even better on items related to dynamic geometry. In addition, a more precise analysis within clusters, based on similar performances in both pre- and post-tests and the child’s progress within the GEOGEBRATAO activities, provides evidence of some common ways of working with our educational technology tool, leading to overall improvement at an individualized level.

Problems in nonequilibrium fluctuations across scales: A path integral approach
Cossetto, Tommaso UL

Doctoral thesis (2020)


In this thesis we study stochastic systems evolving under Markov jump processes. In a first work, we discuss different representations of the stochastic evolution: the master equation, the generalized Langevin equation, and their path integrals. The description is used to derive the generating functions for out-of-equilibrium observables, together with the typical approximation techniques. In a second work, the path integral is used to enforce thermodynamic consistency across scales. The description of identical units with all-to-all interactions is reduced from a micro- to a meso- to a macroscopic level. A suitable scaling of the dynamics and of the thermodynamic observables makes it possible to preserve the thermodynamic structure at the different levels. In a third work, we focus on the large deviation properties of chemical networks. The path integral allows us to compute the dominant trajectories that constitute macroscopic fluctuations. For bi-stable systems, the existence of multiple macroscopic contributions results in a phase transition for the macroscopic current. In a fourth work, we study the response of such chemical currents to external perturbations. Out of equilibrium, the system can display negative differential response, a feature that offers different strategies to minimize external or internal disturbances. Finally, in a fifth work, we start from a quantum system in which part of the system can be traced out to act as multiple reservoirs at different temperatures. Using the Schwinger-Keldysh contour and Green's functions, we obtain the generating function for the different parts of the Hamiltonian. The statistics of thermodynamic observables is accessible even in the strong-coupling regime, while the semi-classical approximation is in agreement with the classical counterpart.
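The Markov jump processes underlying this thesis can be illustrated with the simplest possible example: a two-state system simulated with the Gillespie algorithm. The rates below are arbitrary toy values:

```python
import random

# Gillespie simulation of a two-state Markov jump process with transition
# rates k01 (0 -> 1) and k10 (1 -> 0). The empirical occupation of state 1
# should approach the stationary value k01 / (k01 + k10).

def gillespie(k01, k10, t_end, seed=0):
    rng = random.Random(seed)
    t, state, time_in_1 = 0.0, 0, 0.0
    while t < t_end:
        rate = k01 if state == 0 else k10
        dt = min(rng.expovariate(rate), t_end - t)   # clip the last waiting time
        if state == 1:
            time_in_1 += dt
        t += dt
        state = 1 - state                            # jump to the other state
    return time_in_1 / t_end

occ1 = gillespie(k01=2.0, k10=1.0, t_end=5000.0)     # stationary value: 2/3
```

Single trajectories like this one are exactly the paths whose weights the path-integral representation organizes into generating functions.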

Homomorphic encryption and multilinear maps based on the approximate-GCD problem
Lima Pereira, Hilder Vitor UL

Doctoral thesis (2020)


Cryptographic schemes are constructed on top of problems that are believed to be hard. In particular, recent advanced schemes, such as homomorphic primitives and obfuscators, use the approximate greatest common divisor (AGCD) problem, which is simple to describe and easy to implement, since it requires neither complex algebraic structures nor hard-to-sample probability distributions. However, in spite of its simplicity, the AGCD problem generally yields inefficient schemes, usually with large ciphertext expansion. In this thesis, we analyze the AGCD problem and several existing variants thereof and propose a new attack on the multi-prime AGCD problem. Then, we propose two new variants:
1. The vector AGCD problem (VAGCD), in which AGCD instances are represented as vectors and randomized with a secret random matrix;
2. The polynomial randomized AGCD problem (RAGCD), which consists of representing AGCD samples as polynomials and randomizing them with a secret random polynomial.
We show that these new variants cannot be easier than the original AGCD problem and that all the known attacks, when adapted to the VAGCD and RAGCD problems, are more expensive in terms of both time and memory, allowing us to choose smaller parameters and to improve the efficiency of the schemes that use the AGCD as the underlying problem. Thus, by combining techniques from multilinear maps and indistinguishability obfuscation with the VAGCD problem, we provide the first implementation of an N-party non-interactive key exchange resistant against all known attacks. Still aiming to show that the VAGCD problem can lead to performance improvements in cryptographic primitives, we use it to construct a homomorphic encryption scheme that can natively and efficiently operate with vectors and matrices. For instance, for 100 bits of security, we can perform a sequence of 128 homomorphic products between 128-dimensional vectors and 128x128 matrices in less than one second. We also use our scheme in two applications: homomorphic evaluation of nondeterministic finite automata and a Naïve Bayes classifier. Finally, using the RAGCD problem, we construct a new homomorphic scheme for polynomials and propose new fast bootstrapping procedures for fully homomorphic encryption (FHE) schemes over the integers. Therewith, we can for the first time bootstrap AGCD-based FHE schemes in less than one second on a common personal computer. To the best of our knowledge, only FHE schemes based on the LWE problem had subsecond bootstrapping procedures, while AGCD-based schemes required several seconds or even minutes to bootstrap.
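The AGCD-based encryption the abstract builds on can be shown with a toy DGHV-style bit scheme. The parameters below are deliberately tiny and insecure; only the mechanics and the homomorphic properties matter:

```python
import random

# Toy DGHV-style bit encryption built on the approximate-GCD problem:
# a ciphertext is c = q*p + 2*r + m for a secret odd modulus p, random q,
# and small noise r. Recovering p from many such samples is exactly the
# AGCD problem; decryption with p is trivial while the noise stays small.

rng = random.Random(1)
p = 1000003                        # secret odd modulus (toy size)

def enc(m):
    q = rng.randrange(1, 2 ** 40)
    r = rng.randrange(0, 16)       # small non-negative noise, 2*r + m << p
    return q * p + 2 * r + m

def dec(c):
    return (c % p) % 2             # correct while the noise stays below p

c0, c1 = enc(0), enc(1)
xor_bit = dec(c0 + c1)             # ciphertext addition -> XOR of the bits
and_bit = dec(enc(1) * enc(1))     # ciphertext product -> AND (noise grows fast)
```

The rapid noise growth under multiplication is what makes the bootstrapping procedures mentioned above necessary.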

Analysis, Detection, and Prevention of Cryptographic Ransomware
Genç, Ziya Alper UL

Doctoral thesis (2020)


Cryptographic ransomware encrypts files on a computer system, thereby blocking access to the victim’s data until a ransom is paid. The quick return in revenue, together with the practical difficulty of accurately tracking the cryptocurrencies victims use to pay the ransom, has made ransomware a preferred tool for cybercriminals. In addition, exploiting zero-day vulnerabilities found in Windows Operating Systems (OSs), the most widely used OS on desktop computers, has enabled ransomware to extend its threat and have detrimental effects at a worldwide level. For instance, WannaCry and NotPetya have affected almost all countries and many organizations, and the latter alone caused damage costing more than $10 billion. In this thesis, we conduct a theoretical and experimental study of cryptographic ransomware. In the first part, we explore the anatomy of ransomware and, in particular, analyze the key management strategies employed by notable families. We verify that, for long-term success, ransomware authors must acquire good random numbers to seed Key Derivation Functions (KDFs). The second part of this thesis analyzes the security of current anti-ransomware approaches, both in the academic literature and in real-world systems, with the aim of anticipating how future generations of ransomware will work, in order to start planning how to stop them. We argue that among them there will be some which will try to defeat current anti-ransomware; thus, we can speculate about their working principles by studying the weak points in the strategies that six of the most advanced anti-ransomware tools currently implement. We support our speculations with experiments, proving at the same time that those weak points are in fact vulnerabilities and that the future ransomware we have imagined can be effective.
Next, we analyze existing decoy strategies, define a set of metrics to measure their robustness, and discuss how effective they are in countering current ransomware. To demonstrate how ransomware can identify existing deception-based detection strategies, we implement a proof-of-concept decoy-aware ransomware that successfully bypasses decoys by using a decision engine with a few rules. We also discuss existing issues in decoy-based strategies and propose practical solutions to mitigate them. Finally, we look for vulnerabilities in antivirus (AV) programs, the de facto security tool installed on computers against cryptographic ransomware. In our experiments with 29 consumer-level AVs, we discovered two critical vulnerabilities. The first one consists in simulating mouse events to control AVs, namely sending them mouse “clicks” to deactivate their protection. We prove that 14 out of 29 AVs can be disabled in this way, and we call this class of attacks Ghost Control. The second one consists in controlling whitelisted applications, such as Notepad, by sending them keyboard events (such as “copy-and-paste”) to perform malicious operations on behalf of the malware. We prove that the anti-ransomware protection feature of AVs can be bypassed if we use Notepad as a “puppet” to rewrite the content of protected files as a ransomware would do. Playing with words, and recalling the cat-and-mouse game, we call this class of attacks Cut-and-Mouse. In the third part of the thesis, we propose a strategy to mitigate cryptographic ransomware attacks. Based on our insights from the first part of the thesis, we present UShallNotPass, which works by controlling access to secure randomness sources, i.e., Cryptographically Secure Pseudo-Random Number Generator (CSPRNG) Application Programming Interfaces (APIs).
We tested UShallNotPass against 524 real-world ransomware samples and observed that it stops 94% of them, including WannaCry, Locky, CryptoLocker and CryptoWall. Remarkably, it also nullifies NotPetya, the offspring of the family which so far has eluded all defenses. Next, we present NoCry, which shares the same defense strategy but implements an improved architecture. We show that NoCry is more secure (with components that are not vulnerable to known attacks), more effective (with fewer false negatives in the class of ransomware addressed) and more efficient (with a minimal false positive rate and negligible overhead). To confirm that the new architecture works as expected, we tested NoCry against a new set of 747 ransomware samples, of which NoCry could stop 97.1%, bringing its security and technological readiness to a higher level. Finally, in the fourth part, we examine the potential future of cryptographic ransomware. We identify new possible ransomware targets inspired by cybersecurity incidents that have occurred in real-world scenarios. In this respect, we describe possible threats that ransomware may pose by targeting critical domains, such as the Internet of Things and socio-technical systems, which would worrisomely amplify the effectiveness of ransomware attacks. Next, we looked into whether ransomware authors re-use the work of others, available on public platforms and repositories, and produce insecure code (which might enable the building of decryptors). By methodically reverse-engineering malware executables, we found that, out of 21 ransomware samples, 9 contain copy-pasted code from public resources. From this fact, we recall critical cases of code disclosure in the recent history of ransomware and reflect on the dual-use nature of this research by arguing that ransomware can be a component of cyber-weapons.
We conclude by discussing the benefits and limits of cyber-intelligence and counter-intelligence strategies that could be used against this threat.
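The access-control idea behind UShallNotPass can be sketched as a guarded resource. The allow-list and the string-based "caller" identity below are mocked for illustration; the real system interposes on OS-level CSPRNG APIs and authenticates the requesting process:

```python
import os

# Toy sketch of the UShallNotPass access-control idea: the CSPRNG is a
# guarded resource, and only allow-listed callers may draw from it. A
# ransomware process that cannot obtain good randomness cannot securely
# seed its key derivation, as discussed in the first part of the thesis.

AUTHORIZED = {"ssh-agent", "browser.exe"}   # illustrative allow-list

class AccessDenied(Exception):
    pass

def guarded_random_bytes(caller, n):
    if caller not in AUTHORIZED:
        raise AccessDenied(f"{caller} may not access the CSPRNG")
    return os.urandom(n)

key = guarded_random_bytes("ssh-agent", 32)           # legitimate caller succeeds
try:
    guarded_random_bytes("cryptolocker.exe", 32)      # unknown caller is denied
    blocked = False
except AccessDenied:
    blocked = True
```

In the real systems, the interposition happens below the application, so malware cannot simply skip the guard.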

Machine Learning Techniques for Suspicious Transaction Detection and Analysis
Camino, Ramiro Daniel UL

Doctoral thesis (2020)


Financial services must monitor their transactions to prevent their being used for money laundering and to combat the financing of terrorism. Initially, organizations in charge of fraud regulation were only concerned about financial institutions such as banks. Nowadays, however, the Fintech industry, online businesses, and platforms involving virtual assets can also be affected by similar criminal schemes. Regardless of the differences between the entities mentioned above, the malicious activities affecting them share many common patterns. This dissertation's first goal is to compile and compare existing studies involving machine learning to detect and analyze suspicious transactions. The second goal is to synthesize methodologies from the surveyed studies for tackling different use cases in an organized manner. Finally, the third goal is to assess the applicability of deep generative models for enhancing existing solutions. In the first part of the thesis, we propose an unsupervised methodology for detecting suspicious transactions, applied to two case studies. One is related to transactions from a money remittance network, and the other to a novel payment network based on distributed ledger technologies. Anomaly detection algorithms are applied to rank user accounts based on recency, frequency, and monetary features. The results are manually validated by domain experts, confirming known scenarios and finding unexpected new cases. In the second part, we carry out an analogous analysis employing supervised methods, along with a case study in which we classify Ethereum smart contracts into honeypots and non-honeypots. We take features from the source code, the transaction data, and a characterization of the funds' flow. The proposed classification models proved to generalize well to unseen honeypot instances and techniques and allowed us to characterize previously unknown techniques.
In the third part, we analyze the challenges that tabular data, the type of data used to represent financial transactions in the previous two parts, brings to the domain of deep generative models. We propose a new model architecture that adapts state-of-the-art methods to output multiple variables from mixed-type distributions. Additionally, we extend the evaluation metrics used in the literature to the multi-output setting and show empirically that our approach outperforms existing methods. Finally, in the last part, we apply the presented models to enhance the classification tasks from the second part, which commonly suffer from severe class imbalance. We introduce a multi-input architecture that, alongside our previously proposed multi-output architecture, extends existing models. We compare three techniques for sampling from deep generative models, defining a transparent and fair large-scale experimental protocol together with informative visual analysis tools. We show that general machine learning detection and visualization techniques can help address the many challenges of the fraud detection domain. In particular, deep generative models can add value to the classification task given the imbalanced nature of the fraudulent class, in exchange for implementation and time complexity. Promising future applications of deep generative models include missing data imputation and the sharing of synthetic data or data generators that preserve privacy constraints.
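The rank-then-review workflow described above — scoring accounts on recency, frequency, and monetary (RFM) features and handing the top-ranked ones to domain experts — can be sketched in a few self-contained lines. The account names, feature values, and the simple z-score-based scoring below are purely illustrative, not the thesis's actual pipeline:

```python
from statistics import mean, stdev

# Recency, frequency, monetary (RFM) features per user account (illustrative values).
accounts = {
    "acct_a": {"recency_days": 2,  "tx_per_week": 5,   "avg_amount": 120.0},
    "acct_b": {"recency_days": 1,  "tx_per_week": 4,   "avg_amount": 95.0},
    "acct_c": {"recency_days": 3,  "tx_per_week": 6,   "avg_amount": 110.0},
    "acct_d": {"recency_days": 90, "tx_per_week": 300, "avg_amount": 9500.0},  # outlier
}

def anomaly_scores(accounts):
    """Score each account by the sum of absolute z-scores over its RFM features."""
    features = list(next(iter(accounts.values())))
    stats = {f: (mean(a[f] for a in accounts.values()),
                 stdev(a[f] for a in accounts.values())) for f in features}
    return {name: sum(abs((feats[f] - stats[f][0]) / stats[f][1]) for f in features)
            for name, feats in accounts.items()}

# Rank accounts from most to least anomalous for manual expert review.
scores = anomaly_scores(accounts)
ranking = sorted(scores, key=scores.get, reverse=True)
```

A real deployment would use proper anomaly detectors (e.g. isolation forests) over far richer features; the point here is only the ranking step that precedes expert validation.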

AN NLP-BASED FRAMEWORK TO FACILITATE THE DERIVATION OF LEGAL REQUIREMENTS FROM LEGAL TEXTS
Sleimi, Amin

Doctoral thesis (2020)

Information systems in several regulated domains (e.g., healthcare, taxation, labor) must comply with the applicable laws and regulations. To demonstrate compliance, several techniques can be used to assess whether such systems meet their specified legal requirements. Since requirements analysts do not have the required legal expertise, they often rely on the advice of legal professionals. This paramount activity is therefore expensive, as it involves numerous professionals. Compounding the cost is the communication gap between the involved stakeholders: legal professionals, requirements analysts, and software engineers. Several techniques attempt to bridge this communication gap by streamlining the process. A promising way to do so is to automate the extraction of legal semantic metadata and the elicitation of legal requirements from legal texts. Typically, one has to search legal texts for the information relevant to the IT system at hand, extract the legal requirements entailed by the pertinent legal statements, and validate the conclusiveness and correctness of the finalized set of legal requirements. Nevertheless, automating legal text processing raises several challenges, especially when applied to IT systems. Existing Natural Language Processing (NLP) techniques are not built to handle the peculiarities of legal texts. On the one hand, NLP techniques are far from perfect at handling several linguistic phenomena, such as anaphora, word sense disambiguation, and delineating the addressee of a sentence. Moreover, the performance of these NLP techniques decreases when they are applied to languages other than English. On the other hand, legal text differs considerably from the formal language used in journalism, against which the most prominent NLP techniques are developed and tested (typically on selections of newspaper articles).
In addition, legal text introduces cross-references and legalese that are paramount to proper legal analysis. There also remains work to be done on topicalization, which must be considered when judging the relevance of legal statements. Existing techniques for streamlining the compliance checking of IT systems often rely on code-like artifacts with no intuitive appeal to legal professionals. Consequently, one has no practical way to double-check with legal professionals that the elicited legal requirements are indeed correct and complete for the IT system at hand. Further, manually eliciting legal requirements is an expensive, tedious, and error-prone activity. The challenge is to propose a knowledge representation that can be easily understood by all the involved stakeholders while remaining cohesive and conclusive enough to enable the automation of legal requirements elicitation. In this dissertation, we investigate to what extent one can automate legal processing in the Requirements Engineering context. We focus exclusively on legal requirements elicitation for IT systems that have to conform to prescriptive regulations. All our technical solutions have been developed and empirically evaluated in close collaboration with a government entity.
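As a toy illustration of one kind of legal semantic metadata mentioned above, a cross-reference extractor over legal statements might look as follows. The regular expression and the example sentence are invented for illustration; the thesis's actual NLP pipeline handles far richer phenomena than this pattern can:

```python
import re

# Toy pattern for article cross-references in legal text (illustrative only;
# real legal texts need much richer grammars, as the thesis argues).
CROSS_REF = re.compile(r"(?:Article|Art\.)\s+(\d+)(?:\((\d+)\))?", re.IGNORECASE)

def extract_cross_references(statement):
    """Return (article, paragraph) pairs cited by a legal statement."""
    return [(int(a), int(p) if p else None) for a, p in CROSS_REF.findall(statement)]

refs = extract_cross_references(
    "The controller shall comply with Article 6(1) and Art. 13 of this Regulation."
)
```

Even this tiny example shows why rule-based extraction alone is brittle: abbreviations, nested references, and relative references ("the preceding paragraph") all escape a single pattern.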

Learning of Control Behaviours in Flying Manipulation
Manukyan, Anush

Doctoral thesis (2020)

Machine learning is an ever-expanding field of research with a wide range of potential applications. It has been increasingly used in different robotics tasks, enhancing their autonomy and intelligent behaviour. This thesis presents how machine learning techniques can enhance the decision-making ability for control tasks in aerial robots as well as improve their safety, thus broadly raising their autonomy levels. The work starts with the development of a lightweight approach for identifying degradations of UAV hardware-related components, using traditional machine learning methods. By analysing the flight data stream from a UAV following a predefined mission, it predicts the level of degradation of components at early stages. In that context, real-world experiments have been conducted, showing that such an approach can serve as a safety system in experiments where the flight path of the vehicle is defined a priori. The next objective of this thesis is to design intelligent control policies for flying robots with highly nonlinear dynamics, operating in a continuous state-action setting, using model-free reinforcement learning methods. To achieve this objective, the nuances and potentials of reinforcement learning were first investigated. As a result, numerous insights and strategies are pointed out for crafting efficient reward functions that lead to successful learning performance. Finally, a learning-based controller is provided for controlling a hexacopter UAV with 6-DoF, performing stable navigation and hovering actions by directly mapping observations to low-level motor commands. To increase the complexity of the given objective, the degrees of freedom of the robotic platform are increased to 7-DoF, using a flying manipulator as the learning agent. In this case, the agent learns to perform a mission composed of take-off, navigation, hovering, and end-effector positioning tasks.
Later, to demonstrate the effectiveness of the proposed controller and its ability to handle a higher number of degrees of freedom, the flying manipulator was extended to a robotic platform with 8-DoF. To overcome several challenges of reinforcement learning, the RotorS Gym experimental framework was developed, providing a safe and close-to-reality simulated environment for training multirotor systems. To handle the steadily growing complexity of learning tasks, the Cyber Gym Robotics platform was designed, which extends the RotorS Gym framework with several core functionalities. For instance, it offers an additional mission controller that decomposes complex missions into several subtasks, thus accelerating and facilitating the learning process. Yet another advantage of the Cyber Gym Robotics platform is its modularity, which allows both learning algorithms and agents to be switched seamlessly. To validate these claims, real-world experiments were conducted, demonstrating that a model trained in simulation can be transferred onto a real physical robot with only minor adaptations.
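The reward-function crafting discussed above can be sketched for a hovering task. The exponential distance term, the effort penalty, and the crash penalty below are illustrative design choices, not the thesis's exact formulation:

```python
import math

def hover_reward(position, target, motor_effort, crashed):
    """Shaped reward for hovering: reward proximity to the target position,
    penalize control effort, and heavily penalize crashes (illustrative
    coefficients)."""
    if crashed:
        return -100.0
    dist = math.dist(position, target)          # Euclidean distance to target
    return math.exp(-dist) - 0.01 * motor_effort

# Staying close to the target with low effort earns more reward than drifting away.
near = hover_reward((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), motor_effort=1.0, crashed=False)
far = hover_reward((2.0, 2.0, 1.0), (0.0, 0.0, 1.0), motor_effort=1.0, crashed=False)
```

Shaping terms like these matter because a sparse "reached the target" reward gives the agent almost no gradient to learn from in a continuous state-action space.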

Discursive Input/Output Logic: Deontic Modals, and Computation
Farjami, Ali

Doctoral thesis (2020)

Staging the Nation in an Intermediate Space: Cultural Policy in Luxembourg and the State Museums (1918-1974)
Spirinelli, Fabio

Doctoral thesis (2020)

Cultural policy has been analysed from various perspectives, ranging from sociology and cultural studies to political science. Historians have also been interested in cultural policy, but they have barely reflected on a theoretical framework. In addition, cultural policy has not been thoroughly researched in Luxembourg. The present thesis aims to fill this gap and examines how national cultural policy in Luxembourg evolved from the 1920s to the early 1970s. It investigates the presence of the national idea in cultural policy, and possible tensions and connections between the idea of the nation and the use or inclusion of foreign cultural references. Drawing on the concept of Zwischenraum (intermediate space) coined by the historian Philipp Ther, the study considers Luxembourg as a nationalised intermediate space, with the tensions that this status entails. Furthermore, it investigates how the State Museums, particularly the history section, evolved in the cultural policy context. To analyse the evolution of cultural policy, three interconnected aspects are considered: structures, actors, and discourses. Three main periods are examined in chronological fashion: the interwar period, marked by efforts of nation-building and an increasingly interventionist state; the Nazi occupation of Luxembourg (1940-1944), when the idea of an independent nation-state was turned into its opposite; and the post-war period until the early 1970s, subdivided into an immediate post-war phase marked by restitution and reconstruction, and the 1950s and 1960s, characterised by a state-administrator role and a conservative cultural policy. These periods, however, are not always neatly separable and reveal continuities. For each period, the State Museums are analysed in their cultural policy context: from their construction in the age of nation-building, through their ambiguous situation during the Nazi occupation, to their new missions in the post-war period.

Analyzing Sustainable and Emerging Cities. The Inter-American Development Bank and Spatial Transformations
Mejia Idarraga, Santiago

Doctoral thesis (2020)

This research analyzes the Inter-American Development Bank's (IDB) Emerging and Sustainable Cities Initiative (ESCI) through an examination of the transfer of innovation between an influential city, Medellín (Colombia), and a host city, Xalapa (Mexico), which participated in the ESCI. It uses categories of quality of democracy to evaluate decision-making in the regionalization process of urban transformation initiatives. The study illustrates how Medellín's experience of social urbanism is not exportable, owing to particular existing conditions that are not repeated in other Latin American cities such as Xalapa. Furthermore, this research demonstrates the existence of a dysfunctional standardized region embodied by the processes proposed by the IDB. The Inter-American Development Bank developed the Emerging and Sustainable Cities Initiative between 2012 and 2019 in 77 cities of the American continent. This initiative was influenced by Medellín, which institutionalized a model of spatial intervention known as 'social urbanism' or the 'transformation of Medellín.' The IDB exported the publicized success of the Medellín model to intermediate cities in various countries, with varying results. In the case of Xalapa, Mexico, the initiative had a negative effect because it did not go beyond the implementation stage. The causes of non-execution are symptoms of a problem in the design of the regionalization strategy, which fails to homogenize urban planning techniques across diverse territorialities. The objective of the research was to analyze a transfer of urban development programs between territories at the nano level and regional institutions at the macro level, which together create a new regional integration system through urban planning projects. The analysis of the implementation of macro-regional programs in nano urban regions was carried out through a multilevel analysis and a comparative study, combining qualitative and quantitative approaches in Medellín and Xalapa.
Data collection included a literature review following the PRISMA method, the elaboration of a map of actors, and semi-structured interviews. Data were analyzed through the categories of Quality of Democracy; as a result, I developed categories extracted from the Quality of Democracy for the analysis of urban projects. The triangulation of interview sources, a review of indicators, and press releases yielded values that show no incidence of democratic quality in decision-making processes for the implementation of regionalized projects. This opens discussions on legality, accountability, freedom, equity, and auditing in the implementation of regional initiatives. I conclude that there is a parabola of regionalization of citizen initiatives whose origin lies in nano territories. Such an initiative is regionalized by the Inter-American Development Bank through the Emerging and Sustainable Cities Initiative, in a dysfunctional standard regionalization process. This process fails due to structural divergences in political culture, normative design, decision-making processes, and normative incoherence among the cities participating in the parabola.

From Secure to Usable and Verifiable Voting Schemes
Zollinger, Marie-Laure

Doctoral thesis (2020)

Elections are the foundations of democracy. To uphold democratic principles, researchers have proposed systems that ensure the integrity of elections. It is a highly interdisciplinary field, as it can be studied from technical, legal, or societal points of view. While lawyers give a legal framework to voting procedures, security researchers translate these rules into technical properties that operational voting systems must satisfy, notably privacy and verifiability. Privacy aims to protect vote secrecy and provide coercion resistance to the protocol, while verifiability allows voters to check that their vote has been taken into account in the general outcome, contributing to assurance of the integrity of the election. To satisfy both properties in a voting system, we rely on cryptographic primitives such as encryption, signatures, commitment schemes, and zero-knowledge proofs. Many protocols, paper-based or electronic, have been designed to satisfy these properties. Although the security of some protocols, and their limits, have been analysed from a technical perspective, their usability has often been shown to have very low rates of effectiveness. The necessary cryptographic interactions have already been shown to be one contributor to this problem, but the design of the interface can also contribute by misleading voters. As elections happen rarely, voters must be able to understand the system they use quickly and mostly without training, which brings the user experience to the forefront of protocol design. In this thesis, the first contribution is to redefine privacy and verifiability in the context of tracker-based verifiable schemes. These schemes, which use a so-called tracking number for individual verification, require additional user steps that must be considered in the security evaluation.
These security definitions are applied to the boardroom voting protocol F2FV used by the CNRS and to the e-voting protocol Selene, both of which use a tracker-based procedure for individual verifiability. We provide proofs of security in the symbolic model using the Tamarin prover. The second contribution is an implementation of the Selene protocol as a mobile and a web application, tested in several user studies. The goal is to evaluate the usability and overall user experience of the verifiability features, as well as voters' understanding of the system, through the evaluation of mental models. The third contribution concerns the evaluation of voters' understanding of the coercion mitigation mechanism provided by Selene, through a unique study design that uses game theory. Finally, the fourth contribution is the design of a new voting scheme, Electryo, which is based on the Selene verification mechanisms but provides a user experience close to that of standard paper-based voting protocols.
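The voter-facing lookup in tracker-based individual verification can be caricatured in a few lines. This omits everything that makes Selene interesting cryptographically (the commitments and trapdoors that let trackers be revealed late and deniably, which are what enable coercion mitigation); it only shows the extra user step that the security definitions above must account for. The board contents and tracker values are invented:

```python
# Toy bulletin board publishing (tracker, vote) pairs after tallying.
# Each voter privately learns a tracker and looks up "their" entry.
bulletin_board = {
    "T-4821": "candidate_A",
    "T-1390": "candidate_B",
    "T-7754": "candidate_A",
}

def verify_vote(board, tracker, cast_vote):
    """A voter checks that the vote published under their tracker matches
    what they remember casting."""
    return board.get(tracker) == cast_vote

ok = verify_vote(bulletin_board, "T-1390", "candidate_B")
tampered = verify_vote(bulletin_board, "T-1390", "candidate_A")
```

Because verification hinges on the voter performing this lookup correctly, the usability and mental-model studies described above are not an afterthought but part of the security argument.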

Analyzing and Improving Very Deep Neural Networks: From Optimization, Generalization to Compression
Oyedotun, Oyebade

Doctoral thesis (2020)

Learning-based approaches have recently become popular for various computer vision tasks such as facial expression recognition, action recognition, banknote identification, image captioning, and medical image segmentation. The learning-based approach allows the constructed model to learn features, which results in high performance. Recently, the backbone of most learning-based approaches has been deep neural networks (DNNs). Importantly, it is believed that increasing the depth of DNNs invariably leads to improved generalization performance. Thus, many state-of-the-art DNNs have over 30 layers of feature representations; in fact, it is not uncommon to find DNNs with over 100 layers in the literature. However, training very deep DNNs, those with over 15 layers, is not trivial. On the one hand, such DNNs generally suffer from optimization problems. On the other hand, very deep DNNs are often overparameterized, such that they overfit the training data and hence incur generalization loss. Moreover, overparameterized DNNs are impractical for applications that require low latency, a small Graphics Processing Unit (GPU) memory footprint for operation, and small memory for storage. Interestingly, skip connections of various forms have been shown to alleviate the difficulty of optimizing very deep DNNs. In this thesis, we propose to improve the optimization and generalization of very deep DNNs, with and without skip connections, by reformulating their training schemes. Specifically, the different modifications proposed allow the DNNs to achieve state-of-the-art results on several benchmark datasets. The second part of the thesis presents theoretical analyses of DNNs with and without skip connections, based on several concepts from linear algebra and random matrix theory. The theoretical results obtained provide new insights into why DNNs with skip connections are easy to optimize and generalize better than DNNs without skip connections.
Ultimately, the theoretical results are shown to agree with practical DNNs via extensive experiments. The third part of the thesis addresses the problem of compressing large DNNs into smaller models. Following the identified drawbacks of the conventional group LASSO for compressing large DNNs, the debiased elastic group least absolute shrinkage and selection operator (DEGL) is employed. Furthermore, layer-wise subspace learning (SL) of latent representations in large DNNs is proposed, the objective being to learn a compressed latent space for large DNNs. In addition, SL is observed to improve the performance of LASSO, which is popularly known not to work well for compressing large DNNs. Extensive experiments are reported to validate the effectiveness of the different model compression approaches proposed in this thesis. Finally, the thesis addresses the problem of multimodal learning using DNNs, where data from different modalities are combined into useful representations for improved learning results. Different multimodal learning frameworks are applied to the problems of facial expression and object recognition. We show that, under the right scenarios, the complementary information from multimodal data leads to better model performance.
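For reference, the conventional group LASSO penalty whose drawbacks motivate DEGL can be written in a few lines: it sums the L2 norm of each parameter group, so entire groups (e.g. all weights of one filter or neuron) are driven to zero together, yielding structured sparsity. The grouping and coefficients below are illustrative; DEGL's debiasing and elastic-net terms are not shown:

```python
import math

def group_lasso_penalty(weight_groups, lam=0.01):
    """Group LASSO regularizer: lam * sum over groups of the group's L2 norm.
    Unlike plain LASSO (which zeroes individual weights), this zeroes whole
    groups at once, which is what enables pruning entire units or filters."""
    return lam * sum(math.sqrt(sum(w * w for w in g)) for g in weight_groups)

# Two 'filters' as weight groups: the pruned-out (all-zero) group contributes nothing.
penalty = group_lasso_penalty([[3.0, 4.0], [0.0, 0.0]], lam=0.1)
```

The known bias of this penalty, shrinking surviving groups as well as pruned ones, is exactly what the debiasing in DEGL is meant to counteract.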

Data Analytics and Consensus Mechanisms in Blockchains
Feher, Daniel

Doctoral thesis (2020)

Blockchains, and especially Bitcoin, have soared in popularity since their inception. This thesis furthers our knowledge of blockchains and their uses. First, we analyze transaction linkability in the privacy-preserving cryptocurrency Zcash, based on the currency-minting (mining) transactions. Using predictable usage patterns and clustering heuristics on mining transactions, an attacker can link over 84% of the privacy-preserving transactions to publicly visible addresses. We then further analyze privacy issues in Zcash: we study privacy-preserving transactions and show ways to fingerprint user transactions, including active attacks, introducing two new attacks, which we call the Danaan-gift attack and the Dust attack. Next, we investigate the generic landscape and hierarchy of miners, as exemplified by Ethereum and Zcash. Both chains used application-specific integrated circuit (ASIC) resistant proofs-of-work that favor GPU mining in order to keep mining decentralized. This, however, changed with the introduction of ASIC miners for these chains. This transition allows us to develop methods that might detect hidden ASIC mining in a chain (if it exists) and to study how the introduction of ASICs affects the decentralization of mining power. We also describe how an attacker might use public blockchain information to invalidate miners' privacy, deducing the mining hardware of individual miners and their mining rewards. We then analyze the behavior of cryptocurrency exchanges on the Bitcoin blockchain and compare the results to the exchange volumes reported by the same exchanges, showing that in multiple cases these two values are close to each other, which supports the integrity of their reported volumes. We also present a heuristic for classifying large clusters of addresses in the blockchain and determining whether such clusters are controlled by an exchange.
Finally, we describe how to couple reputation systems with distributed consensus protocols to provide a scalable permissionless consensus protocol with a low barrier to entry, while still providing strong resistance against Sybil attacks for large peer-to-peer networks of untrusted validators. We introduce the reputation module ReCon, which can be laid on top of various consensus protocols such as PBFT or HoneyBadger. The protocol takes an external reputation ranking as input, ranks nodes based on the outcomes of consensus rounds run by a small committee, and adaptively selects the committee based on the current reputation.
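The adaptive committee selection described above can be sketched in a stripped-down form. The node names, weights, and the additive update rule are invented for illustration; ReCon's actual ranking and update logic is more involved than this:

```python
import random

def select_committee(reputation, k, seed=0):
    """Pick a k-node committee, weighting random selection by current reputation,
    so that well-behaved nodes are chosen more often (illustrative sketch)."""
    rng = random.Random(seed)  # seeded for reproducibility in this example
    nodes, weights = zip(*reputation.items())
    committee = set()
    while len(committee) < k:
        committee.add(rng.choices(nodes, weights=weights)[0])
    return committee

def update_reputation(reputation, node, agreed, step=0.1):
    """Nudge a node's reputation up or down after a consensus round outcome,
    keeping it strictly positive so it remains a valid sampling weight."""
    reputation[node] = max(0.01, reputation[node] + (step if agreed else -step))

rep = {"n1": 0.9, "n2": 0.8, "n3": 0.1, "n4": 0.05}
committee = select_committee(rep, k=2)
```

Weighting selection by reputation is what raises the cost of a Sybil attack: freshly created identities start with negligible weight and are rarely seated on the committee that runs the underlying protocol (e.g. PBFT or HoneyBadger).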

Essays on Tax Competition
Paulus, Nora

Doctoral thesis (2020)
