References of "Dissertations and theses"
Gegenkulturelle Tendenzen im postdramatischen Theater (Countercultural Tendencies in Postdramatic Theatre)
Nonoa, Koku Gnatuloma UL

Doctoral thesis (in press)

Investigation of condensation process inside inclined tube
Zhang, Yu UL

Doctoral thesis (2020)

Generation III+ reactor designs partially rely on passive safety systems, which aim to increase plant safety standards and to reduce investment costs. Passive decay heat removal systems, such as the Emergency Condenser (EC) of the KERENA reactor design, play an important role in the safety of nuclear power plants. As part of the emergency cooling chain, the EC removes decay heat from the reactor pressure vessel and transfers it to the flooding pool. For a successful EC design, reliable prediction of condensation heat transfer inside inclined pipes is one of the important factors. One-dimensional (1D) codes, such as ATHLET, RELAP and TRACE, are widely used today by engineers to predict the thermal-hydraulic behavior of nuclear power plant systems. However, state-of-the-art 1D codes are mainly validated for active components, and the qualification of passive systems remains an open problem. The goal of this thesis is therefore to investigate the condensation phenomena in the EC using the advanced 1D code ATHLET (Analysis of Thermal-hydraulics of Leaks and Transients). The performance of ATHLET in predicting condensation in a slightly inclined tube was assessed, and the results showed that the standard models in the ATHLET code have significant deficiencies in predicting condensation heat transfer coefficients. A new empirical model was therefore derived using experimental data from COSMEA (COndenSation test rig for flow Morphology and hEAt transfer studies), condensation experiments on flow morphology and heat transfer in a single slightly inclined tube conducted by HZDR (Helmholtz-Zentrum Dresden-Rossendorf), together with data from the literature. The new model, which consists of an upper liquid-film condensation part and a bottom convective condensation part, was developed using a machine-learning regression analysis methodology in MATLAB.
It was further implemented in ATHLET with the Python programming language, and the modified ATHLET code was used to recalculate the COSMEA experiments. The post-calculation results were compared to the experiments in three respects: heat flux, condensation rate and void fraction along the whole pipe. The outcomes showed that the modified ATHLET code can reproduce the relevant heat transfer values of the experiments under different pressure and mass flow rate conditions.
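The abstract does not disclose the regression model itself; as a minimal sketch of the underlying idea (fitting an empirical heat-transfer correlation to measured data), the following fits a hypothetical power-law correlation Nu = a·Re^b by linear regression in log-log space. The data points and the correlation form are assumptions for illustration, not the thesis model.

```python
import math

# Hypothetical illustration: fit Nu = a * Re**b to condensation
# heat-transfer data via least squares in log-log space.
data = [  # (Reynolds number, measured Nusselt number) - synthetic points
    (5_000, 45.0), (10_000, 78.0), (20_000, 135.0), (40_000, 230.0),
]
xs = [math.log(re) for re, _ in data]
ys = [math.log(nu) for _, nu in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
# slope and intercept of the ordinary least-squares line in log space
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

def nu_pred(re):
    """Predicted Nusselt number from the fitted correlation."""
    return a * re ** b

print(f"Nu = {a:.4f} * Re^{b:.3f}")
```

In the thesis this role is played by a MATLAB regression over COSMEA and literature data, with separate film and convective condensation parts; the sketch only shows the fitting mechanics.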

Smart Electrical and Thermal Energy Supply for Nearly Zero Energy Buildings
Rafii-Tabrizi, Sasan UL

Doctoral thesis (2020)

The European Union (EU) intends to reduce greenhouse gas emissions to 80-95 % below 1990 levels by 2050. To achieve this goal, the EU focuses on higher energy efficiency, mainly within the building sector, and on a share of renewable energy sources (RES) of around 30 % in gross final energy consumption by 2030. In this context, the concept of nearly zero-energy buildings (nZEB) is both an emerging and relevant research area. Balancing energy consumption with on-site renewable energy production in a cost-effective manner requires the development of suitable energy management systems (EMS) using demand-side management strategies. This thesis develops an EMS using certainty-equivalent (CE) economic model predictive control (EMPC) to operate the building energy system optimally with respect to varying electricity prices. The proposed framework is a comprehensive mixed-integer linear programming model that uses suitable linearised grey-box models and purely data-driven model approaches to describe the system dynamics. For this purpose, a laboratory prototype is available that covers the most building-relevant types of energy, namely thermal and electrical energy. Thermal energy for space heating, space cooling and domestic hot water is buffered in thermal energy storage systems. A dual-source heat pump provides thermal energy for space heating and domestic hot water, whereas an underground ice storage covers space cooling. The environmental energy sources of the heat pump are the ice storage or wind-infrared-sensitive collectors; the collectors are further used to regenerate the ice storage. Photovoltaic panels produce electrical energy, which can be stored in a battery storage system. The electrical energy system can sell electricity to and buy electricity from the public power grid. The laboratory test bench interacts with a virtual building model integrated into the building simulation software TRNSYS Simulation Studio.
The EMS prototype is tested and validated in various simulations and under close-to-real-life laboratory conditions. The different test scenarios are generated using the typical-day approach for each season.
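The certainty-equivalent idea behind economic MPC can be sketched in a few lines: treat the price forecast as if it were certain, optimise the storage schedule over that horizon, and apply the first decision. The toy battery model, prices and demand below are invented for illustration (the thesis uses a full mixed-integer model of the laboratory system, not this exhaustive search).

```python
from itertools import product

# Toy certainty-equivalent schedule: choose charge/discharge actions that
# minimise electricity cost over a short price forecast.
PRICES = [0.30, 0.10, 0.12, 0.35]   # forecast price per kWh for 4 steps
CAP, STEP = 4.0, 1.0                # battery capacity and kWh per action
DEMAND = 1.0                        # fixed load per step, in kWh

def cost(schedule, soc0=2.0):
    """Grid cost of a schedule; actions are -1 (discharge), 0, +1 (charge)."""
    soc, total = soc0, 0.0
    for act, price in zip(schedule, PRICES):
        soc += act * STEP
        if not 0.0 <= soc <= CAP:          # infeasible state of charge
            return float("inf")
        grid_energy = DEMAND + act * STEP  # load plus battery flow
        total += max(grid_energy, 0.0) * price
    return total

# exhaustive search stands in for the MILP solver on this tiny horizon
best = min(product((-1, 0, 1), repeat=len(PRICES)), key=cost)
print("best schedule:", best, "cost:", round(cost(best), 3))
```

The optimiser discharges in expensive steps and recharges in the cheap one, which is exactly the price-responsive behaviour the EMS is designed to exploit.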

Generalized Langevin equations and memory effects in non-equilibrium statistical physics
Meyer, Hugues UL

Doctoral thesis (2020)

The dynamics of many-body complex processes is a challenge that many scientists from various fields have to face. Reducing the complexity of systems involving a large number of bodies in order to reach a simple description for observables capturing the main features of the process is a difficult task, for which different approaches have been proposed over the past decades. In this thesis we introduce new tools to describe the coarse-grained dynamics of arbitrary observables in non-equilibrium processes. Following the projection operator formalisms introduced first by Mori and Zwanzig, and later on by Grabert, we first derive a non-stationary Generalized Langevin Equation that we prove to be valid in a wide spectrum of cases. This includes in particular driven processes as well as explicitly time-dependent observables. The equation exhibits a priori memory effects, controlled by a so-called non-stationary memory kernel. Because the formalism does not provide extensive information about the memory kernel in general, we introduce a set of numerical methods aimed at evaluating it from Molecular Dynamics simulation data. These procedures range from simple dimensionless estimations of the strength of the memory to the determination of the entire kernel. Again, the methods introduced are very general and require as input a small number of quantities directly computable from numerical or experimental time series. We finally conclude this thesis by using the projection operator formalisms to derive an equation of motion for work and heat in dissipative processes. This is done in two different ways, either by using well-known integral fluctuation theorems, or by explicitly splitting the dynamics into adiabatic and dissipative parts.
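In the simpler stationary case, the flavour of such kernel-extraction methods can be illustrated numerically: the autocorrelation function C(t) obeys the Volterra equation dC/dt = -∫₀ᵗ K(s) C(t-s) ds, which can be inverted time step by time step. The kernel, discretisation and rectangle rule below are my own choices for illustration, not the thesis procedures (which handle the harder non-stationary case).

```python
import math

# Generate C(t) from a known exponential kernel, then recover the kernel
# from C alone by inverting the discretised Volterra equation.
DT, N = 0.01, 100
K_true = [2.0 * math.exp(-i * DT) for i in range(N)]

# Forward pass: integrate dC/dt = -conv(K, C) with a rectangle rule.
C = [1.0]
for i in range(N - 1):
    conv = sum(K_true[j] * C[i - j] for j in range(i + 1)) * DT
    C.append(C[i] - DT * conv)

# Inverse pass: peel off K[i] step by step from the same discretisation.
K_rec = []
for i in range(N - 1):
    rhs = -(C[i + 1] - C[i]) / (DT * DT)
    partial = sum(K_rec[j] * C[i - j] for j in range(i))
    K_rec.append((rhs - partial) / C[0])

err = max(abs(a - b) for a, b in zip(K_true, K_rec))
print("max kernel reconstruction error:", err)
```

Because the inverse pass mirrors the forward discretisation exactly, the kernel is recovered to round-off; with real simulation data the same inversion is applied to measured correlation functions.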

Constant curvature surfaces and volumes of convex co-compact hyperbolic manifolds
Mazzoli, Filippo UL

Doctoral thesis (2020)

We investigate the properties of various notions of volume for convex co-compact hyperbolic 3-manifolds, and their relations with the geometry of Teichmüller space. We prove a first-order variation formula for the dual volume of the convex core, as a function over the space of quasi-isometric deformations of a convex co-compact hyperbolic 3-manifold. For quasi-Fuchsian manifolds, we show that the dual volume of the convex core is bounded from above by a linear function of the Weil-Petersson distance between the pair of hyperbolic structures on the boundary of the convex core. We prove that, as we vary the convex co-compact structure on a fixed hyperbolic 3-manifold with incompressible boundary, the infimum of the dual volume of the convex core coincides with the infimum of the Riemannian volume of the convex core. We study various properties of the foliation by constant Gaussian curvature surfaces (k-surfaces) of convex co-compact hyperbolic 3-manifolds. We present a description of the renormalized volume of a quasi-Fuchsian manifold in terms of its foliation by k-surfaces. We show the existence of a Hamiltonian flow over the cotangent space of Teichmüller space whose flow lines correspond to the immersion data of the k-surfaces sitting inside a fixed hyperbolic end, and we determine a generalization of McMullen's Kleinian reciprocity, again by means of the foliation by constant Gaussian curvature surfaces.

Blockchain-enabled Traceability and Immutability for Financial Applications
Khan, Nida UL

Doctoral thesis (2020)

The dissertation explores the efficacy of exploiting the transparency and immutability characteristics of blockchain platforms in a financial ecosystem. It elaborates on blockchain technology in a succinct manner, which serves as the foundation for comprehending the contributions of the present research work. The dissertation gives a verified mathematical model, derived using Nash equilibrium, to serve as a framework for blockchain governance. The work elucidates the design, implementation and evaluation of a management plane to monitor and manage blockchain-based decentralized applications. The dissertation also addresses the problem of data privacy through the development and evaluation of a management plane for differential privacy preservation through smart contracts. Further, the research work discusses the compliance of the privacy management plane with the GDPR using a permissioned blockchain platform. The dissertation pioneers an implementation-based, comparative and exploratory analysis of the tokenization of ethical investment certificates. It also verifies the utility of blockchain in solving some prevalent issues in social finance, accomplished through the development and testing of a blockchain-based donation application. A qualitative review of the economic impact of blockchain-based micropayments has also been conducted, including a proposition for extending access to blockchain-based financial services to the underbanked and unbanked. The work concludes with a hypothetical model of a financial ecosystem, depicting the deployment of the major contributions of this dissertation.
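To give a flavour of the game-theoretic angle mentioned above, the toy game below finds the pure-strategy Nash equilibria of a two-validator "follow or deviate" interaction. The payoffs are invented and unrelated to the dissertation's governance model; the sketch only shows what a Nash equilibrium check computes.

```python
from itertools import product

# Toy two-player game: each validator either follows the protocol or
# deviates. payoff[(a, b)] = (payoff to player 1, payoff to player 2).
ACTIONS = ("follow", "deviate")
PAYOFF = {
    ("follow", "follow"): (3, 3),
    ("follow", "deviate"): (0, 2),
    ("deviate", "follow"): (2, 0),
    ("deviate", "deviate"): (1, 1),
}

def is_nash(a, b):
    """True if neither player gains by unilaterally switching action."""
    u1, u2 = PAYOFF[(a, b)]
    best1 = all(PAYOFF[(alt, b)][0] <= u1 for alt in ACTIONS)
    best2 = all(PAYOFF[(a, alt)][1] <= u2 for alt in ACTIONS)
    return best1 and best2

equilibria = [(a, b) for a, b in product(ACTIONS, repeat=2) if is_nash(a, b)]
print("pure-strategy equilibria:", equilibria)
```

With these payoffs the game is a stag hunt: both "everyone follows" and "everyone deviates" are self-enforcing, which is why governance design cares about which equilibrium the incentives select.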

Blockchain Technology for Data Sharing in the Banking Sector
Norvill, Robert UL

Doctoral thesis (2020)

Inkjet-printed piezoelectric films for transducers
Godard, Nicolas UL

Doctoral thesis (2020)

Lead zirconate titanate (PZT) thin films are a popular choice for piezoelectric devices such as microelectromechanical systems, micro-pumps, micro-mirrors or energy harvesters. Various fabrication techniques exist for the deposition of PZT in the form of thin films. Physical vapor deposition (PVD) methods are particularly cost-intensive, as they require vacuum conditions and expensive infrastructure. Fabrication costs can be decreased by the use of chemical solution deposition (CSD), where the metal precursors are dispersed in a solvent medium and coated onto a substrate. Thermal treatments convert the liquid precursor into a functional solid film. Spin coating is a conventional coating technique that allows the deposition of homogeneous layers over large-area substrates. However, it is inherently wasteful, as most of the precursor material is spun off the substrate during coating. In addition, as spin coating results in complete coverage of the substrate, layer patterning requires lithography, which adds extra steps and costs to the overall process. Inkjet printing is an additive manufacturing technique that has the potential to address both of these issues, thus further decreasing manufacturing costs and the associated ecological footprint. The working principle of inkjet printing is the deposition of individual ink droplets at digitally determined locations on the substrate surface, where they merge into a continuous film. Inkjet printing is compatible with CSD processing of PZT thin films, as demonstrated by previous work in the field. However, adapting standard CSD processing for inkjet printing comes with several challenges, which have to be addressed to obtain state-of-the-art functional PZT layers.
In the present work, we explore several issues related to the processing of PZT thin films by inkjet printing and provide possible solutions that had not yet been described in the state of the art. In particular, we describe a novel strategy that uses inkjet-printed alkanethiolate-based self-assembled monolayers for direct patterning of PZT thin films on platinized silicon. We then present a systematic study of the pyrolysis step of the process, which enabled us to print dense and textured layers with state-of-the-art electrical properties. We also developed a proof-of-concept piezoelectric energy harvesting device based on inkjet-printed PZT films. Finally, we present a comparative study in which we identified an alternative solvent for CSD processing of PZT thin films.

A Real-World Flexible Job Shop Scheduling Problem With Sequencing Flexibility: Mathematical Programming, Constraint Programming, and Metaheuristics
Tessaro Lunardi, Willian UL

Doctoral thesis (2020)

In this work, the online printing shop scheduling problem is considered. This challenging real-world scheduling problem, which emerged in the present-day printing industry, corresponds to a flexible job shop scheduling problem with sequencing flexibility that includes several complicating specificities, such as resumable operations, periods of machine unavailability, sequence-dependent setup times, partial overlapping between operations with precedence constraints, and fixed operations, among others. In the present work, a mixed integer linear programming model, a constraint programming model, and heuristic methods such as local search and metaheuristics for the minimization of the makespan are presented. Modeling the problem serves two purposes. On the one hand, the problem is precisely defined. On the other hand, the capabilities and limitations of a commercial solver in handling the models are analyzed. Numerical experiments show that the commercial solver is able to optimally solve only a fraction of the small-sized instances when considering the mixed integer linear programming formulation. With the constraint programming formulation, medium-sized instances are solved optimally, and feasible solutions for large-sized instances are found. Ad hoc heuristic methods, such as local search and metaheuristic approaches that fully exploit the structure of the problem, are proposed and evaluated. Based on a common representation scheme and neighborhood function, trajectory and population-based metaheuristics are considered. Extensive numerical experiments with large-sized instances show that the proposed metaheuristic methods are suitable for solving practical instances of the problem, and that they outperform the half-heuristic-half-exact off-the-shelf constraint programming solver.
Numerical experiments with classical instances of the flexible job shop scheduling problem show that the introduced methods are also competitive when applied to this particular case.

Waveform Design for Automotive Joint Radar-Communication System
Dokhanchi, Sayed Hossein UL

Doctoral thesis (2020)

Optical Defect Spectroscopy in CuInS2 Thin Films and Solar Cells
Lomuscio, Alberto UL

Doctoral thesis (2020)

Pure-sulphide Cu(In,Ga)S2 solar cells have reached a certified power conversion efficiency as high as 15.5 %. While this record performance was achieved by growing the semiconducting absorber at very high temperature with a copper-deficient composition, all previous records were based on chalcopyrite films deposited under Cu excess. Still, this world record is far from the theoretical power conversion achievable in a single-junction solar cell for this semiconductor (about 30 %), which has a tunable band gap between 1.5 and 2.4 eV. This thesis aims to gain insight into the optoelectronic properties of this semiconductor, particularly CuInS2, examining their variation as a function of the deposition temperature and of the absorber composition. The investigations are carried out mainly by photoluminescence (PL) spectroscopy, which allows measuring the quasi-Fermi level splitting (QFLS), an upper limit of the maximum open-circuit voltage (VOC) an absorber is capable of. PL spectroscopy is also used to gain insight into the electronic defects, both the shallow ones, which contribute to the doping, and the deep ones, which enhance non-radiative recombination. By increasing the Cu content in the as-grown composition, the morphology and microstructure of the thin films improve, as they show larger grains and fewer structural defects than films deposited with Cu deficiency. The composition affects the QFLS as well, which is significantly higher for samples deposited under Cu excess, in contrast to the observations in selenide chalcopyrites. Increasing the process temperature also improves the QFLS, although absorbers grown under Cu deficiency are less affected, likely because of a lower sodium content in the high-temperature glass used as substrate. The QFLS increase correlates with the lowering of a deep-defect-related band, which manifests itself as a peak maximum at around 0.8 eV in room-temperature PL spectra.
In the literature, the low efficiencies exhibited by Cu(In,Ga)S2-based solar cells are often attributed to interface problems at the p-n junction, i.e. at the absorber-buffer layer interface. In this work, the comparison of the QFLS and VOC of pure sulphides with those measured on selenides clearly shows that the lower efficiencies of the former are also caused by the intrinsically lower optoelectronic quality of Cu(In,Ga)S2 films. To shed light on the electronic structure, high-quality CuInS2 films are investigated in depth by means of low-temperature PL. Four shallow defects are detected: one shallow donor at about 30 meV from the conduction band and three shallow acceptors at about 105, 145 and 170 meV from the valence band. The first of these acceptors dominates the band-edge luminescence of samples grown with composition close to stoichiometry, whereas the second, deeper acceptor is characteristic of absorbers deposited in the Cu-rich regime. The deepest of these acceptors seems to be present over a wide range of compositions, although its luminescence is observable only for slightly Cu-poor samples with sodium incorporated during deposition. The quality of the examined films allows the observation of phonon coupling of these shallow defects for the first time in this semiconductor. These observations on shallow defects and their phonon-coupling behaviour allowed a revision of the defect model for this semiconductor. The findings of this thesis reveal a strong similarity of the shallow defect structure with selenium-based compounds. On the other hand, the presence of deep defects in CuInS2 strongly limits the optoelectronic quality of the bulk material, causing the gap in power conversion efficiency compared to low-band-gap Cu(In,Ga)Se2 solar cells, which show efficiencies above 23 %.

Cultural Psychological Re-Formulation of Ego-Defence into Ego-Construction
Mihalits, Dominik Stefan UL

Doctoral thesis (2020)

The developing self is a complex concept that recurrently occupies a variety of academic disciplines, and that has yet to be clarified from a holistic, transdisciplinary standpoint. For instance, psychoanalytical theories offer detailed insight into the intraindividual psychodynamics of personal development. Cultural psychological theories, on the other hand, stress a culture's influence on a person's day-to-day development and advance a detailed account of semiotic, i.e. culturally mediated, sign construction that underlies psychological processes and results from them at the same time. Importantly, what a cultural psychological standpoint offers is a view on culture that refrains from conceiving it as an entity of its own (e.g., one that could be calculated as an external factor), and instead views it as deeply entangled with the formation of personality development. Both theory strands thus each address, in complex ways, sides of the same coin, namely the phenomenon of the developing self, but have not yet been systematically linked with each other from a holistic perspective. Therefore, this thesis addresses the question of how an integrative perspective on psychoanalytical psychodynamics can be synthesized with cultural psychological metatheory on development. More precisely, I theoretically explore how psychoanalytical theories of ego defence mechanisms help further an analysis of ego construction. Using the concept of ego construction, I argue that cultural psychological construction processes, entangled with people's engagement with their culturally laden environments, can further elaborate psychoanalytical theories of ego defence. To approach ego defence, this project departs from Freudian psychoanalytic theory. It draws on the differentiation between needs and wishes that leads to an inner tension, where defence mechanisms help in understanding the tension upon delayed gratification.
Pushing beyond this traditional perspective, and assuming a strong entanglement of needs and wishes, defence needs to be recognized as an ongoing process, conceptualized as continuous and recurring rather than as a set of mechanisms. It is a central conclusion of this Ph.D. project that concepts of defence must therefore leave their descriptive level to overcome the problem of cause and effect, allowing an understanding of development as an open psychodynamic and cultural system.

Investigating the potential of investing in fine stringed instruments as an alternative investment asset
Ortiz-Munoz, Angela UL

Doctoral thesis (2020)

Often seen as a passion project or part of a philanthropic venture, rare and fine stringed instruments offer an exciting option for diversifying one's investment portfolio while providing an opportunity for an exceptional long-term investment. Though historically rare violins have not been widely recognized as investment assets, this category is gaining interest due to its steady increase in value, a lively international market, and a finite and diminishing supply. This study demonstrates that fine stringed instruments offer a steady return of approximately 3.7-6.9 % per annum, with a dramatic percentage increase since the 1980s. In this thesis, the public auction and private dealer markets for stringed instruments are reviewed, the price dynamics are studied, and some fundamental market-specific limitations are tackled in order to observe the true underlying returns of this asset. To build solid conclusions, the largest fine stringed instrument auction database has been developed, encompassing the period from 1850 until today, although the analysis focuses on the period from the 1980s until today, as this is when demand, and consequently the market for violins, boomed.
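Per-annum return figures like the 3.7-6.9 % above are compound annual growth rates between two sale prices. The prices and dates below are made up for illustration, not drawn from the auction database.

```python
# Compound annual growth rate between two sale prices.
def cagr(price_then, price_now, years):
    return (price_now / price_then) ** (1.0 / years) - 1.0

# hypothetical example: an instrument bought in 1985 and resold in 2020
# at ten times the purchase price
r = cagr(price_then=50_000, price_now=500_000, years=35)
print(f"annualised return: {r:.2%}")
```

A tenfold price increase over 35 years corresponds to roughly 6.8 % per annum, i.e. at the upper end of the range reported in the study.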

On the practical security of white-box cryptography
Wang, Junwei UL

Doctoral thesis (2020)

Cryptography studies how to secure communications and information. The security of a cryptosystem depends on the secrecy of the underlying key. White-box cryptography explores methods to hide a cryptographic key in software deployed in the real world. Classical cryptography only assumes that the adversary accesses the target cryptographic primitive in a black-box manner, in which she can only observe or manipulate the input and output of the primitive, but cannot know or tamper with its internal details. The gray-box model further allows an adversary to exploit key-dependent sensitive information leaked from the execution of physical implementations. All sorts of side-channel attacks exploit some physical information leakage, such as the power consumption of the device. The white-box model considers the worst-case scenario, in which the adversary has complete control over the software and its execution environment. The goal of white-box cryptography is to securely implement a cryptographic primitive against such a powerful adversary. Although the scientific community has proposed some candidate solutions to build white-box cryptography, all have proven ineffective. Consequently, this problem has remained open for almost two decades since the concept was introduced. The continuous growth in market demand and the emerging potential applications have driven the industry to deploy secretly designed proprietary solutions. Although this paradigm of achieving security through obscurity contradicts the widely accepted Kerckhoffs' principle in cryptography, it is currently the only option for white-box cryptography. Security experts have reported how gray-box attacks can be used to extract keys from several publicly available white-box implementations.
In a gray-box attack, the adversary adapts side-channel analysis techniques to the white-box context, i.e., she targets computation traces made of noise-free runtime information instead of noisy physical leakage. Gray-box attacks are generic, since they do not require any a priori knowledge of the implementation and hence avoid costly reverse engineering. Some non-publicly scrutinized industrial white-box schemes on the market are believed to be under threat from gray-box attacks. This thesis focuses on the analysis and improvement of gray-box attacks and the associated countermeasures for white-box cryptography. We first provide an in-depth analysis of why gray-box attacks are capable of breaking the classical white-box design based on table encodings. Next, we introduce a new gray-box attack named linear decoding analysis and show that linearly encoding sensitive information is insufficient to protect the cryptographic software. Afterward, we describe how to combine state-of-the-art countermeasures to resist gray-box attacks and comprehensively elaborate on the (in)effectiveness of these combined countermeasures in terms of computational complexity. Finally, we introduce a new attack technique that exploits the data dependency of the targeted implementation to substantially lower the complexity of the existing gray-box attacks on white-box cryptography. In addition to the theoretical analyses and new attack techniques introduced in this thesis, we report some attack experiments against practical white-box implementations. In particular, we could break the winning implementations of two consecutive editions of the well-known WhibOx white-box cryptography competition.
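The noise-free nature of computation traces can be illustrated on a toy target (the S-box and "implementation" below are invented; real gray-box attacks target obfuscated AES binaries and use statistical distinguishers rather than exact matching). Each trace records the bits of an intermediate value, and the key guess whose predicted bit agrees with the trace on every input is recovered without any reverse engineering.

```python
import random

# Toy cipher round: intermediate value is SBOX[plaintext ^ key].
random.seed(1)
SBOX = list(range(256))
random.shuffle(SBOX)
SECRET_KEY = 0x5A

def trace(plaintext):
    """One 'software execution trace': the bits of the S-box output."""
    v = SBOX[plaintext ^ SECRET_KEY]
    return [(v >> b) & 1 for b in range(8)]

plaintexts = random.sample(range(256), 64)
traces = [trace(p) for p in plaintexts]

def matches(guess, bit):
    """Does the prediction under this key guess fit the traces exactly?"""
    pred = [(SBOX[p ^ guess] >> bit) & 1 for p in plaintexts]
    return all(pr == tr[bit] for pr, tr in zip(pred, traces))

candidates = [g for g in range(256) if matches(g, bit=0)]
print("recovered key candidates:", [hex(g) for g in candidates])
```

Because the traces carry no noise, 64 inputs already isolate the secret key byte; a wrong guess survives only with negligible probability.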

Demountable composite beams: Analytical calculation approaches for shear connections with multilinear load-slip behaviour
Kozma, Andras UL

Doctoral thesis (2020)

The work carried out throughout the thesis focused on the behaviour of demountable composite beams in order to facilitate the integration of steel-concrete composite construction into the concept of circular economy. There are several hindrances in the way of reuse when considering traditional composite structures. One of them is the method that current construction practice applies for connecting the concrete deck to the steel beam. The traditionally applied welded studs are advantageous in terms of structural performance; however, they do not allow dismounting. In order to overcome this issue, different demountable shear connection types were investigated that use pretensioned bolted connections. The investigations included laboratory experiments in the form of push-out tests and full-scale beam tests. The experiments were complemented by numerical simulations and parametric studies. The experiments showed that the developed shear connections have a highly nonlinear load-slip behaviour. When these types of connections are applied in a composite beam, the nonlinearity of the shear connection causes a nonlinear load-deflection response already in the elastic phase. Analytical equations were derived for the description of the elastic properties of composite beams with nonlinear shear connection. For the calculation of the elastic deflections, an iterative procedure was developed. This method is capable of capturing the nonlinear load-deflection response. With the developed iterative method, the elastic deflections can be determined using spreadsheet calculations with an accuracy similar to that of nonlinear finite element simulations. Due to the highly nonlinear behaviour of the tested shear connections, the basic assumptions of Eurocode 4 for the determination of the plastic moment resistance of composite beams with partial shear connection are no longer valid.
The code does not permit equidistant shear connector spacing in this case, and the design would need to be conducted using a fully elastic analysis. This would make the use of demountable shear connections complicated and uneconomic. In the face of these issues, the probability of practical application of demountable and reusable composite structures would be very low. On the other hand, experiments and numerical simulations show that composite beams can develop plasticity even if a non-ductile shear connection is applied. In order to overcome these issues, a new calculation method was developed for the prediction of the plastic moment resistance of demountable composite beams. A simplified method was proposed based on the developed procedure by defining an effective shear resistance for the demountable shear connections. The effective shear resistance allows the current calculation method to be extended to demountable shear connections. In this way, the benefits of composite construction can be maintained while providing the possibility of reuse.

Genetic regulators of ventral midbrain gene expression and nigrostriatal circuit integrity
Gui, Yujuan UL

Doctoral thesis (2020)

Complex traits are a fundamental feature of diverse organisms. Understanding the genetic architecture of a complex trait is arduous but paramount because heterogeneity is prevalent in populations and often disease-related. Genome-wide association studies have identified many genetic variants associated with complex human traits, but they can only explain a small portion of the expected heritability. This is partially because human genomes are highly diverse, with large inter-personal differences. It has been estimated that any two humans differ by at least 5 million variants. Moreover, many common variants with small effects can contribute to complex traits, but they cannot survive stringent statistical cutoffs given currently available sample sizes. Mice are an ideal substitute. They are maintained in controlled conditions to minimize variation introduced by the environment. Each mouse of an inbred strain is genetically identical, but different strains bear innate genetic heterogeneity between each other, mimicking human diversity. Hence, in this work we used inbred mouse strains to study the genetic variation of complex traits. We focused on the ventral midbrain, the brain region controlling motor functions and behaviors such as anxiety and fear learning that differ profoundly between inbred mouse strains. Such phenotypic diversity is directed by differences in gene expression that are controlled by cis- and trans-acting regulatory variants. A profound understanding of the genetic variation of the ventral midbrain and its related phenotypic differences could pave the way to apprehending the whole genetic makeup of its associated disease phenotypes such as Parkinson's disease and schizophrenia. Therefore, we set out to investigate the cis- and trans-acting variants affecting the mouse ventral midbrain by coupling tissue-level and cell type-specific transcriptomic and epigenomic data.
Transcriptomic comparison of the ventral midbrains of C57BL/6J, A/J and DBA/2J, three inbred strains segregated by ~6 million genetic variants, pinpointed PTTG1 as the only transcription factor significantly altered at the transcriptional level between the three strains. Pttg1 ablation on the C57BL/6J background caused the midbrain transcriptome to shift closer to A/J and DBA/2J during aging, suggesting Pttg1 is a novel regulator of the ventral midbrain transcriptome. As the ventral midbrain is a mixture of cell types, a tissue-level transcriptome cannot always reveal cell type-specific regulatory variation. Therefore, we set out to generate single-nuclei chromatin accessibility profiles of the ventral midbrains of C57BL/6J and A/J, providing a rich resource to study the transcriptional control of cellular identity and genetic diversity in this brain region. Data integration with existing single-cell transcriptomes predicted the key transcription factors controlling cell identity. Putative regulatory variants showed differential accessibility across cell types, indicating that genetic variation can direct cell type-specific gene expression. Comparing chromatin accessibility between mice revealed potential trans-acting variation that can affect strain-specific gene expression in a given cell type. The diverse transcriptome profiles in the ventral midbrain can lead to phenotypic variation. The nigrostriatal circuit, bridging the ventral midbrain to the dorsal striatum via dopaminergic neurons, is an important pathway controlling motor activity. To search for phenotypes related to dopaminergic neurons, we measured the dopamine concentration in the dorsal striatum of eight inbred mouse strains. Interestingly, dopamine levels varied among strains, suggesting this is a complex trait linked to genetic variation in the ventral midbrain.
To understand the genetic variation contributing to dopamine level differences, we conducted quantitative trait locus (QTL) mapping with 32 CC strains and found a QTL significantly associated with the trait on chromosome X. As expression changes are likely to underlie the phenotypic variation, we leveraged our previous transcriptomic data from C57BL/6J and A/J to search for genes differentially expressed in the QTL locus. Col4a6 is the most likely QTL gene because of its 9-fold expression difference between C57BL/6J and A/J. Indeed, COL4A6 has been shown to regulate axogenesis during brain development. This coincides with our observation that A/J had less axon branching in the dorsal striatum than C57BL/6J, prompting us to propose that Col4a6 can regulate the axon formation of dopaminergic neurons in embryonic stages. Our study provides a comprehensive overview of cis- and trans-regulatory variants affecting expression phenotypes in the ventral midbrain, and of how they could introduce phenotypic differences associated with this brain region. In addition, our single-nuclei chromatin landscapes of the ventral midbrain are a rich resource for analyses of gene regulation and cell identity. Our work paves the way to apprehending the full genetic makeup of gene expression control in the ventral midbrain, which is important for understanding the genetic background of midbrain-associated phenotypes.

Microstructure-based multiscale modeling of mechanical response for materials with complex microstructures
Kabore, Brice Wendlassida UL

Doctoral thesis (2020)

Complex microstructures are found in several materials, especially in biological tissues, geotechnical materials and many manufactured materials including composites. These materials are difficult to handle with classical numerical analysis tools, and the need to incorporate more detail on the microstructure has become apparent. This thesis focuses on the microstructure-based multi-scale modeling of the mechanical response of materials with complex microstructures, whose mechanical properties are inherently dependent on their internal structure. The conditions of interest are large displacements and high-rate deformation. This work contributes to the understanding of the relevance of microstructure information to the macroscopic response. A primary application of this research is the investigation and modeling of snow behavior; it has been extended to modeling the impact response of concrete and composites. In the first part, a discrete approach for fine-scale modeling is applied to study the behavior of snow under the conditions mentioned above. Applications of this modeling approach to concrete and composites can be found in the appendices. The fine-scale approach presented herein is based on the coupling of the Discrete Element Method with aspects of beam theory. This fine-scale approach has proven successful in modeling micro-scale processes found in snow: mainly intergranular friction, intergranular bond fracture, creep, sintering, cohesion, and grain rearrangement. These processes not only influence the overall response of the material but also induce permanent changes in its internal structure. Therefore, the initial geometry considered during numerical analysis should be updated after each time or loading increment before further loading.
Moreover, when the material matrix is partly granular and partly continuum, the fluctuating grain micro-inertia caused by debonding, cracking and contact has a significant effect on the macroscopic response, especially under dynamic loading. Consequently, the overall rate- and history-dependent behavior of the material is more easily captured by discrete models. Discrete modeling has proven to be an efficient approach for acquiring profound scientific insight into the deformation and failure processes of many materials. While important details can be obtained using discrete models, a high computational cost and an intensive calibration process are required for a good prediction of material behavior in real-case scenarios. Therefore, in order to extend the abovementioned fine-scale model to real engineering cases, a coarse-scale continuum model has been developed using an upscaling approach. This upscaled model is based on the macroscopic response of the material, with special regard to the microstructure information of the material. Different strategies are presented for incorporating the microstructure information in the model. Micro-scale dissipation mechanisms have been incorporated in the coarse-scale model through viscoplasticity and fracture in a finite strain formulation. The thesis is divided into nine chapters, each of which is an independent paper published or submitted as a refereed journal article.

A Formal Approach to Ontology Recommendation for Enhanced Interoperability in Open IoT Ecosystems
Kolbe, Niklas UL

Doctoral thesis (2020)

The vision of the Internet of Things (IoT) promises novel, intelligent applications to improve services across all industries and domains. Efficient data and service discovery are crucial to unfold the potential value of cross-domain IoT applications. Today, the Web is the primary enabler for integrating data from distributed networks, with more and more sensors and IoT gateways connected to the Web. However, the semantic data models, standards and vocabularies used by IoT vendors and service providers are highly heterogeneous, which makes data discovery and integration a challenging task. Industrial and academic research initiatives increasingly rely on Semantic Web technologies to tackle this challenge. Ongoing research efforts emphasize the development of formal ontologies for the description of Things, sensor networks, IoT services and domain-dependent observations to annotate and link data on the Web. Within this context, there is a research gap in investigating and proposing ontology recommendation approaches that foster the reuse of the ontologies most suitable for semantically annotating IoT data sources. Improved ontology reuse in the IoT enhances semantic interoperability and thus facilitates the development of more intelligent and context-aware systems. In this dissertation, we show that ontology recommendation can form a key building block for achieving semantic interoperability in the IoT. In particular, we consider large-scale IoT systems, also referred to as IoT ecosystems, in which a wide range of stakeholders and service providers have to cooperate. In such ecosystems, semantic interoperability can only be efficiently achieved when a high degree of consensus on relevant ontologies exists among data providers and consumers. This dissertation includes the following contributions. First, we conceptualize the task of ontology recommendation and evaluate existing approaches with regard to IoT ecosystem requirements.
We identify several limitations in ontology recommendation, especially concerning the IoT, which motivates the main focus on ontology ranking in this dissertation. Second, we propose a novel approach to ontology ranking that offers a fairer scoring of ontologies when their popularity is unknown and thus helps provide better recommendations in the current state of the IoT. We employ a 'learning to rank' approach to show that qualitative ranking features can improve the ranking performance and potentially substitute for an explicit popularity feature. Third, we propose a novel ontology ranking evaluation benchmark to address the lack of comparison studies for ontology ranking approaches, a general issue in the Semantic Web. We develop a large, representative evaluation dataset derived from the collected user click logs of the Linked Open Vocabularies (LOV) platform. It is the first dataset of its kind capable of comparing learned ontology ranking models as proposed in the literature under real-world constraints. Fourth, we present an IoT ecosystem application to support data providers in semantically annotating IoT data streams with integrated ontology term recommendation, and perform an evaluation based on a smart parking use case. In summary, this dissertation presents advancements of the state of the art in the design of ontology recommendation and its role in establishing and maintaining semantic interoperability in highly heterogeneous and evolving ecosystems of inter-related IoT services. Our experiments show that ontology ranking features that are well designed with regard to the underlying ontology collection and the respective user behavior can significantly improve the ranking quality and, thus, the overall recommendation capabilities of related tools.

Navigating the narrow circle: Rawls and Stout on justification, discourse and institutions
Burks, Deven Kent UL

Doctoral thesis (2020)

Life in political society unfolds within the bounds of a narrow circle, epistemic and moral. A person has only finite faculties and restricted moral motivation. When formulating projects, the person ought to recognize these limits but also to check them. Accordingly, she seeks a deliberative ideal which is sensitive both to good epistemic practice and to respectful relations. How might the person best justify the shape of her society's institutions, statutes and policies? What reflexive attitudes and dispositions ought she to adopt towards her justificatory resources? The person might work through the sequence of standpoints from John Rawls's "political liberalism": a first-person, action-guiding framework of deliberation and reflection. Alternatively, she might model the exploratory discourse and personal virtues characteristic of Jeffrey Stout's "democratic traditionalism". This work reconstructs Rawls's and Stout's approaches to justification, discourse and institutions and compares their differing methods in search of the most adequate deliberative ideal for democratic society.

The exclusive choice-of-court agreement under the Hague Convention: a qualified effectiveness
Mchirgui, Zohra UL

Doctoral thesis (2020)

The exclusive choice-of-court agreement, as a means of conferring jurisdiction in international commercial disputes, forms part of the economy of the international contract. It is an indispensable component of party autonomy as a principle governing international commercial relations. In this respect, the promotion of international trade and investment requires the development of an international regime that provides certainty and ensures the effectiveness of exclusive choice-of-court agreements. Such is the stated objective of the drafters of the Hague Convention on Choice of Court Agreements. An analysis of the Convention's provisions reveals that the effectiveness championed by this instrument is qualified. This finding holds both for the validity of the agreement and for its effects.

Routing Strategies and Content Dissemination Techniques for Software-Defined Vehicular Networks
di Maio, Antonio UL

Doctoral thesis (2020)

Over the past years, vehicular networking has enabled a wide range of new applications that improve vehicular safety, efficiency, comfort, and environmental impact. Vehicular networks, however, normally operate in communication-hostile environments and are characterized by dynamic topologies and volatile links, making it challenging to guarantee Quality of Service (QoS) and reliability of vehicular applications. To this end, the present work explores how the centralized coordination offered by Software-Defined Networking can improve the Quality of Service in vehicular networks, particularly for Vehicle-to-Vehicle (V2V) unicast routing and content dissemination. With regard to V2V routing, this work motivates the case for centralized network coordination by studying the performance of traditional MANET routing protocols when applied to urban VANETs, showing that they cannot provide satisfactory performance for modern vehicular applications because of their limited global network awareness, slow convergence, and high signaling. Hence, this work proposes and validates a centralized Multi-Flow Congestion-Aware Routing (MFCAR) algorithm to allocate multiple data flows on V2V routes. The first novelty of MFCAR is its SDN-based node-busyness estimation technique. The second novelty is the enhanced path-cost formulation as a linear combination of path length and path congestion, allowing the user application to fine-tune its QoS requirements between throughput and delay. Concerning content dissemination, this work proposes ROADNET, a fairness- and throughput-enhanced scheduling strategy for content dissemination in VANETs: a centralized approach to improve the trade-off between data throughput and user fairness in deterministic vehicular content dissemination. ROADNET's main novelties are the design of a graph-based multi-channel transmission scheduler and the enforcement of a transmission-priority policy that prevents user starvation.
As additional contributions, the present work proposes a heuristic for the centralized selection of opportunistic content-dissemination parameters and discusses the main security issues in Software-Defined Vehicular Networking (SDVN) along with possible countermeasures. The proposed techniques are evaluated in realistic scenarios (LuST), using discrete-event network simulators (OMNeT++) and microscopic vehicular-mobility simulators (SUMO). It is shown that MFCAR can improve packet delivery ratio (PDR), throughput and delay for unicast V2V routing, with up to a five-fold gain over traditional algorithms. ROADNET can increase content-dissemination throughput by 36% and user fairness by 6% compared to state-of-the-art techniques, balancing the load over multiple channels with a variance below 1%.
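MFCAR's path cost is described as a linear combination of path length and path congestion. The sketch below illustrates that idea only, under assumptions: the function names, the toy graph, the per-node `busy` scores, and the single weight `alpha` are all invented for the example, not taken from the thesis.

```python
import heapq

def mfcar_route(graph, busy, src, dst, alpha=0.5):
    """Cheapest path where every hop into node v costs
    alpha * 1 + (1 - alpha) * busy[v]: alpha weights path length (delay)
    against node congestion (throughput). Plain Dijkstra underneath."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v in graph[u]:
            nd = d + alpha * 1.0 + (1 - alpha) * busy[v]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst                  # walk predecessors back to the source
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# Two candidate routes: A-B-D is shorter but B is congested; A-C-E-D is longer.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "E": ["D"], "D": []}
busy = {"A": 0.0, "B": 0.9, "C": 0.1, "E": 0.1, "D": 0.0}
print(mfcar_route(graph, busy, "A", "D", alpha=0.9))  # delay-sensitive flow
print(mfcar_route(graph, busy, "A", "D", alpha=0.1))  # congestion-averse flow
```

With `alpha` near 1 the search favors the short route through the congested node; near 0 it detours around congestion, mirroring the throughput/delay trade-off the abstract describes.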

Surface Energy Modification of Filter Media to achieve optimal Performance Characteristics in select Applications
Staudt, Johannes UL

Doctoral thesis (2020)

The surface modification of modern filter media is examined from the perspective of energetic properties and how they influence select filtration applications. In contrast to the known mechanical filtration mechanisms, which are mainly applicable to solid-liquid separations, new findings strongly suggest that direct interaction forces between the filter and the functional fluids must be taken into account in order to achieve sufficient efficiencies. Separation processes of liquid phases such as liquid-liquid coalescence (LLC) or the treatment of process gases with liquid-gas coalescence (LGC) require special properties of the filter media with regard to the degree of interaction with these phases. These include, but are not limited to, surface energy, wettability and chemical resistance. The focus falls increasingly on eliminating undesired interactions of modern filters with the fluid to be filtered. Filtration with modern fine filter media can result in undesired additive removal (ADDREM), particularly of those additives that are not fully dissolved in the carrier fluid. Specifically, this refers to the removal of antifoamants from gear oils, which leads to serious consequential damage to those systems. The interfacial interactions between the filter media and the functional fluids are also responsible for other effects, such as the highly undesirable phenomenon of electrostatic charging/discharging (ESC/ESD) during the filtration of low-conductivity oils. In this work, the effect of surface energy modification in particular is examined in greater detail. Ultimately, the surface energy of modern filter media is characterized and modified in order to optimize their performance in select applications. The work also presents some examples that illustrate the importance of surface energy in highly challenging filtration applications.

The D²Rwanda mixed-methods study including a cluster-randomised controlled clinical trial
Lygidakis, Charilaos UL

Doctoral thesis (2020)

Diabetes mellitus prevalence has been estimated at 5.1% in Rwanda. Several factors, including an increase in screening and diagnosis programmes, the urbanization of the population, and changes in lifestyle, are likely to contribute to a sharp increase in the prevalence of diabetes mellitus in the next decade. Patients with low health literacy levels are often unable to recognise the signs and symptoms of diabetes mellitus, and may access their health provider late, hence presenting with more complications. The Rwandan health care system is facing a severe shortage of human resources. In response to the need for better management of non-communicable diseases at the primary health care level, a new type of community health worker was introduced: the home-based care practitioners (HBCPs). Approximately 200 HBCPs were trained and deployed in selected areas ("cells") in nine hospitals across the country. There is growing evidence for the efficacy of interventions using mobile devices in low- and middle-income countries. In Rwanda, there is an urgent call for mobile health interventions for the prevention and management of non-communicable diseases. The D²Rwanda (Digital Diabetes in Rwanda) research project aims to respond to this call. The overall objectives of the D²Rwanda project are: a) to determine the efficacy of an integrated programme for the management of diabetes in Rwanda, which includes monthly patient assessments by HBCPs and an educational and self-management mobile health patient tool; and b) to qualitatively explore the ways these interventions are enacted, their challenges and effects, and changes in the patients' health behaviours and the HBCPs' work satisfaction. The project employed a mixed-methods sequential explanatory design consisting of a one-year cluster-randomised controlled trial with two interventions, followed by focus group discussions with patients and HBCPs.
The dissertation presents three studies from the D²Rwanda project. The first study describes the protocol of the research project, reporting the research questions, inclusion and exclusion criteria, primary and secondary outcomes, measurements, power calculation, randomisation methods, data collection, analysis plan, implementation fidelity and ethical considerations. The aim of the second study was to report on the translation and cultural adaptation of the Problem Areas in Diabetes (PAID) questionnaire and the evaluation of its psychometric properties. First, the questionnaire was translated following a standard protocol. Second, 29 participants were interviewed before a final version was produced. Third, we examined a sample of 266 adult patients living with diabetes to determine the psychometric characteristics of the questionnaire. The full scale showed good internal reliability (Cronbach's α = 0.88). A four-factor model with subdimensions of emotional, treatment, food-related and social-support problems was found to be an adequately approximate fit (RMSEA = 0.056; CFI = 0.951; TLI = 0.943). The mean total PAID score of the sample was high (48.21). Important cultural and contextual differences were noted, urging a more thorough examination of conceptual equivalence with other cultures. The third study aimed to report on the disease-related quality of life of patients living with diabetes mellitus in a non-representative sample in Rwanda and to identify potential predictors. This cross-sectional study was part of the baseline assessment of the controlled clinical trial. Between January and August 2019, 206 adult patients living with diabetes were recruited. Disease-specific quality of life was measured using the Kinyarwanda version of the Diabetes-39 (D-39) questionnaire, which had been translated and cross-culturally adapted beforehand by the same group of researchers. A haemoglobin A1c (HbA1c) test was performed on all patients.
Socio-demographic and clinical data were collected, including medical history, disease-related complications and comorbidities. "Anxiety and worry" and "sexual functioning" were the two most affected dimensions. Hypertension was the most frequent comorbidity (49.0% of participants). The duration of the disease and HbA1c values were not correlated with any of the D-39 dimensions. The five dimensions of quality of life were predicted differentially by gender, age, years of education, marital status, achieving an HbA1c of 7%, hypertension, presence of complications and hypoglycaemic episodes. A moderating effect was identified between the use of insulin and achieving a target HbA1c of 7% on the "diabetes control" scale. Further prospective studies are needed to determine causal relationships.
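The internal-reliability figure quoted for the PAID scale (Cronbach's α = 0.88) follows the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal self-contained computation on made-up toy scores (the values below are illustrative only, not the study's data):

```python
def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, respondents aligned.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items)

    def var(xs):                                   # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col) for col in zip(*items)]     # each respondent's total score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Toy data: 3 items answered by 4 respondents (illustrative values only).
toy = [[2, 4, 3, 5], [1, 4, 3, 5], [2, 5, 3, 4]]
print(round(cronbach_alpha(toy), 3))
```

Items that move together across respondents inflate the total-score variance relative to the item variances, pushing α toward 1; uncorrelated items push it toward 0.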

Essays on Agglomeration, Resilience, and Regional Innovation
Kalash, Basheer UL

Doctoral thesis (2020)

On idempotent n-ary semigroups
Devillet, Jimmy UL

Doctoral thesis (2020)

This thesis, which consists of two parts, focuses on characterizations and descriptions of classes of idempotent n-ary semigroups, where n >= 2 is an integer. Part I is devoted to the study of various classes of idempotent semigroups and their link with certain concepts stemming from social choice theory. In Part II, we provide constructive descriptions of various classes of idempotent n-ary semigroups. More precisely, after recalling and studying the concepts of single-peakedness and rectangular semigroups in Chapters 1 and 2, respectively, in Chapter 3 we provide characterizations of the classes of idempotent semigroups and totally ordered idempotent semigroups, in which the latter two concepts play a central role. Then, in Chapter 4, we particularize these characterizations to the classes of quasitrivial semigroups and totally ordered quasitrivial semigroups. We generalize these results to the class of quasitrivial n-ary semigroups in Chapter 5. Chapter 6 is devoted to characterizations of several classes of idempotent n-ary semigroups satisfying quasitriviality on certain subsets of the domain. Finally, Chapter 7 focuses on characterizations of the class of symmetric idempotent n-ary semigroups. Throughout the thesis, we also provide several enumeration results which led to new integer sequences that are now recorded in The On-Line Encyclopedia of Integer Sequences (OEIS). For instance, one of these enumeration results led to a new definition of the Catalan numbers.
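As context for the enumeration results mentioned above, the Catalan numbers (OEIS A000108) satisfy C_0 = 1 and C_{m+1} = Σ_{i=0}^{m} C_i·C_{m−i}; a minimal computation of the sequence (standard background, not the thesis's new definition):

```python
def catalan(n):
    """n-th Catalan number via C_0 = 1, C_{m+1} = sum_{i=0..m} C_i * C_{m-i}."""
    c = [1]                                        # c[0] = C_0
    for m in range(n):
        c.append(sum(c[i] * c[m - i] for i in range(m + 1)))
    return c[n]

print([catalan(i) for i in range(8)])
```

The recurrence reflects the classic decomposition of, e.g., balanced parenthesizations around their first matching pair, which is why Catalan numbers appear in so many enumeration problems.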

Formal Framework for Verifying Implementations of Byzantine Fault-Tolerant Protocols Under Various Models
Vukotic, Ivana UL

Doctoral thesis (2020)

The complexity of the critical systems our lives depend on (such as water supplies, power grids, blockchain systems, etc.) is constantly increasing. Although many different techniques can be used to prove the correctness of these systems, errors still exist, because these techniques are either incomplete or can only be applied to some parts of these systems. This is why fault and intrusion tolerance (FIT) techniques, such as those following the well-known Byzantine Fault-Tolerance (BFT) paradigm, should be used. BFT is a general FIT technique of the active replication class, which enables the seamless, correct functioning of a system even when some parts of that system are not working correctly or are compromised by successful attacks. Although powerful, since it systematically masks any errors, standard (i.e., "homogeneous") BFT protocols are expensive in terms of the messages exchanged, the required number of replicas, and the additional burden of ensuring that replicas are diverse enough to enforce failure independence. For example, standard BFT protocols usually require 3f+1 replicas to tolerate up to f faults. In contrast to these standard protocols based on homogeneous system models, the so-called hybrid BFT protocols are based on architectural hybridization: well-defined and self-contained subsystems of the architecture (the hybrids) follow system-model and fault assumptions that differ from those of the rest of the architecture (the normal part). This way, they can host one or more components trusted to provide, in a trustworthy way, stronger properties than would be possible in the normal part. For example, it is typical that, whilst the normal part is asynchronous and suffers arbitrary faults, the hybrids are synchronous and fail-silent. Under these favorable conditions, they can reliably provide simple but effective services such as perfect failure detection, counters, ordering, signatures, voting, global timestamping, and random numbers.
Thanks to the systematic assistance of these trusted-trustworthy components in protocol execution, hybrid BFT protocols dramatically reduce the cost of BFT. For example, hybrid BFT protocols require 2f+1 replicas instead of 3f+1 to tolerate up to f faults. Although hybrid BFT protocols significantly decrease message/time/space complexity compared with homogeneous ones, they also increase structural complexity, and as such the probability of finding errors in these protocols increases. Another fundamental correctness issue, not formally addressed previously, is ensuring that the safety and liveness properties of trusted-trustworthy component services, besides being valid inside the hybrid subsystems, are made available, or lifted, to user components at the normal asynchronous and arbitrary-on-failure distributed-system level. This thesis presents a theorem-prover-based, general, reusable and extensible framework for implementing and proving the correctness of synchronous and asynchronous homogeneous FIT protocols, as well as hybrid ones. Our framework comes with: (1) a logic to reason about homogeneous/hybrid fault models; (2) a language to implement systems as collections of interacting homogeneous/hybrid components; and (3) a knowledge theory to reason about crash/Byzantine homogeneous and hybrid systems at a high level of abstraction, thereby allowing proofs to be reused and capturing the high-level logic of distributed systems. In addition, our framework supports the lifting of properties of trusted-trustworthy components, first to the level of the local subsystem the trusted component belongs to, and then to the level of the distributed system. As case studies and proofs of concept of our findings, we verified seminal protocols from each of the relevant categories: the asynchronous PBFT protocol, two variants of the synchronous SM protocol, and two versions of the hybrid MinBFT protocol.
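The replica bounds quoted above can be made concrete in a few lines. The helper below is an illustrative sketch (not part of the thesis framework): it returns the minimum number of replicas needed to tolerate f Byzantine faults, in the homogeneous setting (3f+1, as in PBFT) and the hybrid setting (2f+1, as in MinBFT).

```python
def min_replicas(f, hybrid=False):
    """Minimum replica count to tolerate up to f Byzantine faults."""
    if f < 0:
        raise ValueError("f must be non-negative")
    # Hybrid protocols (e.g., MinBFT) rely on a trusted component to
    # prevent equivocation, lowering the bound from 3f+1 to 2f+1.
    return 2 * f + 1 if hybrid else 3 * f + 1

for f in (1, 2, 3):
    print(f, min_replicas(f), min_replicas(f, hybrid=True))
# f=1: 4 vs 3 replicas; f=2: 7 vs 5; f=3: 10 vs 7
```

The gap of f replicas per tolerated fault is what makes architectural hybridization attractive at scale.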

Reconciling data privacy with sharing in next-generation genomic workflows
Fernandes, Maria UL

Doctoral thesis (2020)

Privacy attacks reported in the literature have alerted the research community to the serious privacy issues present in current biomedical processing workflows. Since sharing biomedical data is vital for the advancement of research and the improvement of medical healthcare, reconciling sharing with privacy is of paramount importance. In this thesis, we state the need for effective privacy-preserving measures for biomedical data processing and study solutions to the problem in one of the harder contexts, genomics. The thesis focuses on the specific properties of the human genome that make critical parts of it privacy-sensitive, and tries to prevent the leakage of such critical information throughout the several steps of the sequenced genomic data analysis and processing workflow. To achieve this goal, it introduces efficient and effective privacy-preserving mechanisms, namely at the level of read filtering right upon sequencing, and of alignment. Human individuals share the majority of their genome (99.5%), the remaining 0.5% being what distinguishes one individual from all others. However, that information is only revealed after two costly processing steps, alignment and variant calling, which today are typically run in clouds for performance efficiency, but with the corresponding privacy risks. Reaping the benefits of cloud processing, we set out to neutralize the privacy risks by identifying the sensitive (i.e., discriminating) nucleotides in raw genomic data and acting upon that. The first contribution is DNA-SeAl, a systematic classification of genomic data into different levels of sensitivity with regard to privacy, leveraging the output of a state-of-the-art automatic filter (SRF) that isolates the critical sequences.
The second contribution is a novel filtering approach, LRF, which undertakes the early protection of sensitive information in the raw reads right after sequencing, for sequences of arbitrary length (long reads), improving on SRF, which only dealt with short reads. The last contribution of this thesis is MaskAl, an SGX-based privacy-preserving alignment approach built on the filtering method developed. These contributions entailed several findings. The first finding is the improvement of the performance × privacy product achieved by implementing multiple sensitivity levels. The proposed example of three sensitivity levels shows the benefits of mapping progressively more sensitive levels to classes of alignment algorithms with progressively higher privacy protection (albeit at the cost of a performance tradeoff). We demonstrate the effectiveness of the proposed sensitivity-level classification, DNA-SeAl. Just by considering three levels of sensitivity and taking advantage of three existing classes of alignment algorithms, the performance of privacy-preserving alignment improves significantly compared with state-of-the-art approaches. For reads of 100 nucleotides, 72% have low sensitivity, 23% have intermediate sensitivity, and the remaining 5% are highly sensitive. With this distribution, DNA-SeAl is 5.85× faster and requires 5.85× fewer data transfers than a binary classification with two sensitivity levels. The second finding is the improvement in sensitive genomic information filtering obtained by replacing per-read classification with per-nucleotide classification. With this change, the filtering approach proposed in this thesis (LRF) allows the filtering of sequences of arbitrary length (long reads), instead of the classification limited to short reads provided by the state-of-the-art filtering approach (SRF). This thesis shows that around 10% of an individual's genome is classified as sensitive by the developed LRF approach.
This improves on the 60% achieved by the previous state of the art, the SRF approach. The third finding is the possibility of building a privacy-preserving alignment approach based on read filtering. The sensitivity-adapted alignment relies on hybrid environments, in particular ones composed of common (e.g., public cloud) and trustworthy (e.g., SGX enclave) execution environments in clouds, and gets the best of both worlds: it enjoys the resource and performance optimization of cloud environments, while providing a high degree of protection to genomic data. We demonstrate that MaskAl is 87% faster than existing privacy-preserving alignment algorithms (Balaur), with similar privacy guarantees. On the other hand, MaskAl is 58% slower than BWA, a highly efficient non-privacy-preserving alignment algorithm. In addition, MaskAl requires 95% less RAM and between 5.7 GB and 15 GB fewer data transfers in comparison with Balaur. This thesis breaks new ground on the simultaneous achievement of two important goals of genomic data processing: availability of data for sharing, and privacy preservation. We hope to have shown that our work, being generalisable, takes a significant step in the direction of, and opens new avenues for, wider-scale, secure, and cooperative efforts and projects within the biomedical information processing life cycle.
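The three-level idea can be sketched as a simple cost comparison. The code below is a hypothetical illustration (the per-level costs are invented, not DNA-SeAl's measured figures; only the 72/23/5 read distribution comes from the abstract): reads routed to progressively more protected, and more expensive, alignment back ends are compared against a binary scheme that sends everything above "low" to the most protected back end.

```python
# Hypothetical per-read alignment costs (arbitrary units) for three
# back ends with increasing privacy protection.
COST = {"low": 1.0, "intermediate": 4.0, "high": 20.0}

# Read-sensitivity distribution reported for 100-nucleotide reads.
SHARE = {"low": 0.72, "intermediate": 0.23, "high": 0.05}

def expected_cost(shares, costs):
    """Expected per-read cost under a given routing of sensitivity levels."""
    return sum(shares[level] * costs[level] for level in shares)

three_level = expected_cost(SHARE, COST)

# Binary scheme: everything that is not low-sensitivity is handled by
# the most protected (most expensive) back end.
binary = SHARE["low"] * COST["low"] + (1 - SHARE["low"]) * COST["high"]

print(f"three-level: {three_level:.2f}, binary: {binary:.2f}, "
      f"speedup: {binary / three_level:.2f}x")  # → speedup: 2.39x
```

With these made-up costs the three-level scheme is about 2.4× cheaper; the thesis reports 5.85× under its real cost model, but the mechanism, matching protection (and cost) to actual sensitivity, is the same.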

From drug resistance mechanisms to microRNA function in melanoma
Kozar, Ines UL

Doctoral thesis (2020)

Cutaneous melanoma is an aggressive skin cancer that emerges from the unrestrained proliferation of melanocytes, the pigment-producing cells in the basal layer of the epidermis. Although it accounts for only approximately 5% of all skin cancers, melanoma is responsible for the vast majority of skin cancer-related deaths. As more than half of the patients with sporadic melanoma harbour activating mutations in the protein kinase BRAF, the development of small kinase inhibitors targeting mutated BRAF has led to an increased overall survival of patients with metastatic melanoma. Despite the initially promising results, the rapidly emerging resistance to these targeted therapies remains a serious clinical issue. To investigate the mechanisms underlying resistance to targeted therapies, we used in vitro BRAF-mutant drug-sensitive and drug-resistant melanoma cell models generated in our laboratory. First, we performed a kinase inhibitor library screen with the aim of identifying novel kinase inhibitor combinations that circumvent or delay BRAF inhibitor-induced resistance. We characterised synergistic kinase inhibitors targeting the MAPK pathway and the cell cycle that show promising effects in BRAF-mutant drug-sensitive and -resistant cells and could serve as an effective sequential or alternative treatment option for late-stage melanoma patients. Additionally, we investigated the impact of BRAF inhibitors at the transcriptional level by comparing miRNome and transcriptome changes in drug-sensitive and -resistant melanoma cells. We identified miRNAs (e.g. miR-509, miR-708) and genes (e.g. PCSK2, AXL) that were distinctly differentially expressed in resistant compared with sensitive cells. Subsequent co-expression analyses revealed a low MITF/AXL ratio in a subset of resistant cell lines, suggesting that miRNAs might be involved in the switch from one molecular phenotype to another, thus conferring tolerance to targeted therapies.
Finally, we applied a method based on cross-linking, ligation and sequencing of hybrids (qCLASH) to provide a comprehensive snapshot of the miRNA targetome in our BRAF-mutant melanoma cells. To our knowledge, this is the first application of a CLASH-based method to cancer cells; we identified over 8,000 direct and distinct miRNA-target interactions in melanoma cells, including many with non-predicted and non-canonical binding characteristics, thus expanding the pool of known miRNA-target interactions. Taken together, these results provide new insights into the complex and heterogeneous responses to BRAF inhibition, adding a further level of complexity to drug-induced (post-)transcriptional network rewiring in melanoma.

Non-localized contact between beams with circular and elliptical cross-sections
Magliulo, Marco UL

Doctoral thesis (2020)

Numerous materials and structures are aggregates of slender bodies: consider struts in metal foams, yarns in textiles, fibers in muscles or steel wires in wire ropes. To predict the mechanical performance of these materials and structures, it is important to understand how the mechanical load is distributed between the different bodies. If one can predict which slender body is the most likely to fail, the design can be changed to enhance its performance. As aggregates of slender bodies are highly complex, simulations are required to numerically compute their mechanical behaviour. The most widely employed computational framework is the Finite Element Method, in which each slender body is modeled as a series of beam elements. On top of an accurate mechanical representation of the individual slender bodies, the contact between the slender bodies must often be accurately modeled. In the past couple of decades, contact between beam elements has received widespread attention. However, the focus was mainly directed towards beams with circular cross-sections, whereas elliptical cross-sections are also relevant for numerous applications. Only two works have considered contact between beams with elliptical cross-sections, and they are limited to point-wise contact, which restricts their applicability. This thesis proposes different contact frameworks for beams with elliptical cross-sections for cases in which a point-wise contact treatment is insufficient. It also reports a framework for contact scenarios in which one beam is embedded inside another beam, in contrast to conventional contact frameworks for beams, where penetrating beams are actively repelled from each other. Finally, two of the three contact frameworks are enhanced with frictional sliding, where friction occurs not only due to sliding in the beams' longitudinal directions but also in the transversal directions.

Condition assessment of bridge structures by damage localisation based on the DAD-method and close-range UAV photogrammetry
Erdenebat, Dolgion UL

Doctoral thesis (2020)

This dissertation presents the "Deformation Area Difference (DAD)" method for the condition assessment of existing bridges, especially for the detection of stiffness-reducing damage. The method is based on the one hand on conventional static load-deflection experiments and on the other hand on a high-precision measurement of the structural deflection. The experimental load on the bridge should be generated within the serviceability limit state in order to enable a non-destructive inspection. In the course of the laboratory tests, the most innovative measuring techniques were applied, whereby photogrammetry delivered promising results. With the help of additional studies on the influence of camera quality and calibration, the measurement precision of photogrammetry could be pushed to its limits. Both the theoretical investigations and the laboratory tests demonstrated the successful use of the DAD method for the identification of local damage. Subsequently, the first in-situ experiment was carried out on a single-span prestressed bridge in Luxembourg. The knowledge gained from this was combined with statistical investigations based on finite element calculations and artificially generated measurement noise in order to determine the application limits, such as the achievable measurement precision, the identifiable degree of damage, the required number of measurement repetitions, the influence of the damage position, and the optimal size of the structural deformation. The development of the DAD method to application readiness usefully supplements the state of the art and contributes to the reliable assessment of bridge condition.
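The underlying idea can be roughly illustrated as follows (a simplified sketch with synthetic data, not the thesis's actual algorithm): a measured deflection line is compared against an undamaged reference, and the span segment where the difference between the two curves accumulates most is flagged as the damage candidate.

```python
import numpy as np

# Synthetic deflection lines of a simply supported beam (span = 1),
# sampled at 200 points.
x = np.linspace(0.0, 1.0, 200)
dx = x[1] - x[0]
w_ref = np.sin(np.pi * x)  # reference deflection shape (undamaged)

# "Measured" line: a small extra deflection around x ≈ 0.35 mimics a
# local stiffness reduction there.
w_meas = w_ref + 0.02 * np.exp(-((x - 0.35) / 0.05) ** 2)

# Integrate the deflection difference over equal segments of the span;
# the segment with the largest accumulated difference is the candidate.
n_seg = 10
diff = np.abs(w_meas - w_ref)
areas = [seg.sum() * dx for seg in np.array_split(diff, n_seg)]
print("damage candidate segment:", int(np.argmax(areas)))  # → 3 (x ≈ 0.3-0.4)
```

In practice the method works on real load-test deflections measured photogrammetrically, where measurement noise, which this sketch omits, is the limiting factor studied in the thesis.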

Collective Effects in Stochastic Thermodynamics
Herpich, Tim UL

Doctoral thesis (2020)

Rapid Automatized Naming and Phonological Awareness: The predictive effect for learning to read and write and their relationship with developmental dyslexia
Botelho da Silva, Patrícia UL

Doctoral thesis (2020)

Rapid automatized naming (RAN) and phonological awareness (PA) are among the best predictors of reading. The predictive effect of these abilities differs, and they predict different aspects of reading, depending on the orthographic regularity of the language as well as the student's level or grade in school. The double-deficit theory describes these two components as impaired in people with dyslexia and reading disabilities. Longitudinal studies that analyze the cognitive processes supporting the development of reading and literacy are important for understanding these processes in good readers and will help mitigate the effects of dyslexia and reading disability. The present thesis pursues two major aims. The first aim is to analyze the structure of RAN and the predictive effect of RAN and PA skills on reading and writing tasks in Brazilian Portuguese, in two studies. Study 1 investigated the structure of RAN tests for Brazilian Portuguese across development by age. The results were important in establishing the bidimensional model (alphanumeric and non-alphanumeric) across age and literacy development. In addition, the results showed that the period between kindergarten and elementary school may show the greatest development of RAN skills in conjunction with literacy learning. In Study 2, we investigated the predictive effect of PA and RAN on the development of reading and writing ability in Brazilian Portuguese. The results showed that RAN was a better predictor than PA of reading and writing skills in Brazilian Portuguese with respect to reading and writing speed. In addition, the type of RAN stimulus influenced the predictive effect: alphanumeric RAN better predicts reading, while non-alphanumeric stimuli better predict writing.
The second aim is to compare the performance of children and adolescents with and without developmental dyslexia on RAN, PA, and reading tests, and to verify the predictive effect of RAN in participants with dyslexia. Study 3 showed that the cognitive profile of dyslexic children was compatible with a single deficit in RAN according to the double-deficit theory. Impairment was found only in RAN ability and in processes such as visual attention, which underlies RAN skills. Therefore, despite the importance of PA for the development of reading and writing, both in good readers and in those with reading impairments, RAN proved to be a good predictor for both groups in Brazilian Portuguese.
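The notion of a "predictive effect" can be illustrated with a toy regression (entirely synthetic data, not the thesis's dataset): reading speed is modeled from standardized RAN and PA scores, and the fitted coefficients indicate each predictor's relative contribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
ran = rng.normal(size=n)   # standardized RAN score
pa = rng.normal(size=n)    # standardized PA score

# Synthetic outcome: RAN weighted more heavily than PA, plus noise,
# mirroring the kind of pattern the thesis reports for reading speed.
reading_speed = 0.6 * ran + 0.2 * pa + rng.normal(scale=0.5, size=n)

# Ordinary least squares fit: intercept, RAN coefficient, PA coefficient.
X = np.column_stack([np.ones(n), ran, pa])
beta, *_ = np.linalg.lstsq(X, reading_speed, rcond=None)
print("intercept, RAN, PA coefficients:", np.round(beta, 2))
```

The recovered coefficients land near the generating weights (0.6 for RAN, 0.2 for PA), which is the regression-based sense in which one predictor "better predicts" an outcome than another.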

ART LAUNDERING: PROTECTING CULTURAL HERITAGE THROUGH CRIMINAL LAW
Mosna, Anna UL

Doctoral thesis (2020)

UNTERSUCHUNG DER METALLURGISCHEN PHASENBILDUNG UND DEREN EINFLUSS AUF DIE VERBINDUNGSEIGENSCHAFTEN SOWIE AUF DIE VERSAGENSURSACHEN VON LASERGESCHWEISSTEN HARTMETALL-STAHL-VERBUNDEN
Schiry, Marc UL

Doctoral thesis (2020)

Laser beam welding of hard metal to steel offers multiple advantages regarding resource saving, mechanical strength of the joint, and automation capability. The present work focuses on fundamental research into, and development of, the laser-based process for welding tungsten carbide-cobalt hard metals to a tempering steel. Metallurgical analysis of the welding process showed that the formation of intermetallic and/or intermediate phases has a significant influence on the properties and mechanical strength of the dissimilar joint. The amount of molten hard metal in the steel melt bath plays a key role in the formation of the different phases. Therefore, a new parameter dy was defined, which correlates with the hard metal content in the melt pool. It is shown that, for hard metals with 12 wt.% cobalt binder, the phase transformation in the weld seam starts at a relative hard metal content of 10 vol.%. This threshold depends on the relative cobalt concentration in the hard metal, whereas the tungsten carbide grain size has little influence on the phase transformation in the weld seam. Steel melt pools with a hard metal content lower than 10 vol.% show a martensitic/bainitic microstructure under metallographic observation. Simulation of the stress formation in the joint showed that, due to the volume expansion of martensite during the transformation, tensile stress forms in the hard metal part. Under shear load, these tensile stresses are compensated by the induced compressive stresses, resulting in an almost stress-free interface, so high shear strengths of the dissimilar joints are possible. A higher percentage of hard metal melting during the welding process increases the carbon and tungsten content in the melt bath. Consequently, the martensite start temperature decreases significantly. When the martensite start temperature falls below room temperature, the weld seam transforms into an austenitic microstructure.
Because the volume expansion during cooling of the weld seam is then absent, only low stresses are generated in the hard metal. Under shear load of the joint area, however, high tensile stresses appear in the sintered part. These stress concentrations decrease the shear strength of the weld and lead to premature failure. For the industrial use case, high mechanical strength and a robust manufacturing process are needed. Therefore, the laser welding process of hard metal to steel was optimized. The joint properties strongly depend on the weld bead geometry: weld seams with x- or v-shaped profiles enable locally concentrated metallurgical bonding of the sintered part to the steel sheet. Reducing the horizontal focal distance of the laser beam to the interface increases the bonding ratio, but also intensifies the melting of the hard metal part and leads to the metallurgical transformation. By tilting a v-shaped weld seam, it was possible to optimize the bonding behavior and to minimize the amount of liquefied hard metal in the melt bath. Hard metals with low binder contents showed a high temperature sensitivity: after laser welding of these grades, hot cracks were found in the sintered material. These cracks formed due to the high stresses generated during cooling of the dissimilar joint. Therefore, a laser-based heat treatment process was developed and applied. With a defined pre- and post-heating of the joint area, the cooling rate was reduced significantly and the stresses in the hard metal part were minimized, resulting in high shear strengths.
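The link between carbon pickup and the martensite start temperature can be illustrated with a well-known empirical relation from the literature (the Andrews equation, not a result of this thesis; coefficients in °C per wt.%):

```python
def martensite_start(c, mn=0.0, ni=0.0, cr=0.0, mo=0.0):
    """Andrews (1965) linear estimate of the martensite start
    temperature (degrees C) from alloy composition in wt.%."""
    return 539 - 423 * c - 30.4 * mn - 17.7 * ni - 12.1 * cr - 7.5 * mo

# Carbon picked up from molten hard metal depresses Ms; once Ms drops
# below room temperature, austenite is retained on cooling.
for carbon in (0.2, 0.6, 1.0, 1.3):
    print(f"C = {carbon} wt.%  ->  Ms ~ {martensite_start(carbon):.0f} C")
```

Note that the Andrews fit is calibrated for low-alloy steels; tungsten-rich melt pools lie outside its calibration range, so this only shows the qualitative trend the abstract describes.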

Improving the understanding of binge-watching behavior: An exploration of its underlying psychological processes
Flayelle, Maèva UL

Doctoral thesis (2020)

The advent of the digital age, with its progress in on-demand viewing technology, has been associated in recent years with a dramatic increase in binge-watching (i.e., watching multiple episodes of a TV series in one session), to the point that this practice has become the new normative way to consume TV shows. Nevertheless, along with its massive rise have come concerns about the associated mental and physical health outcomes, with initial studies even assuming its addictive nature. At a time when the psychological investigation of this behavior was only in its infancy, the current PhD thesis therefore aimed at improving the understanding of binge-watching by clarifying the psychological processes involved in its development and maintenance. To this end, six empirical studies were conducted along two main research axes: 1) the conceptualization and assessment of binge-watching behaviors, and 2) the exploration of binge-watchers' psychological characteristics. Study 1 consisted of a preliminary qualitative exploration of the phenomenological characteristics of binge-watching. Capitalizing on these pilot findings, Study 2 reported on the development and psychometric validation of two assessment instruments, measuring TV series watching motivations (the "Watching TV Series Motives Questionnaire", WTSMQ) and binge-watching engagement and symptoms (the "Binge-Watching Engagement and Symptoms Questionnaire", BWESQ). Study 3 then cross-culturally validated the WTSMQ and BWESQ in nine languages (English, French, Spanish, Italian, German, Hungarian, Persian, Arabic, Chinese). Following this first line of investigation, Study 4 explored potential subtypes of binge-watchers by considering three key psychological factors: motivations for binge-watching, impulsivity traits, and emotional reactivity.
Study 5 was a pre-registered experimental study aimed at ascertaining differences in behavioral and self-reported impulsivity between non-problematic and problematic binge-watchers. Finally, Study 6 carried out the first systematic review of the literature on binge-watching correlates. Beyond providing two theoretically and psychometrically sound binge-watching measures that may enable widespread expansion of international research on the topic, this doctoral research also yielded important insights into the heterogeneous and complex nature of binge-watching, as well as into its underlying psychological mechanisms. Centrally, by revealing that high – but healthy – and problematic engagement in binge-watching are underpinned by distinct motivational and dispositional psychological processes, the overall findings of this PhD thesis offer an alternative etiological understanding of problematic binge-watching as a maladaptive coping or emotion regulation strategy for dealing with negative affective states.

Public Hearings in Investor-State Treaty Arbitration: Revisiting the Principle
Harvey Geb. Koprivica, Ana UL

Doctoral thesis (2020)

This thesis examines the scope, role and contemporary application of the traditional principle of public hearings, with a particular focus on the specific dispute resolution system of investor-state arbitration. Whereas there have been extensive discussions in recent years surrounding developments which aim to increase the procedural openness and transparency of this traditionally private dispute resolution system, the emergence of a distinct requirement of holding public hearings in such contexts has not yet been given much attention. This thesis seeks to provide a better understanding of public hearings in investment treaty arbitration. By going beyond the usual narrative of the legitimacy and policy objectives of transparency, a more systematic and comprehensive approach to the issue of public hearings in investor-state arbitration is adopted. In addressing existing gaps in the literature, this thesis contends that current developments related to the principle of public hearings should not be analysed as a phenomenon specific to investor-state arbitration, but should instead be analysed within the broader context of the analogous developments at both the domestic and international level. In conducting such an investigation, this thesis situates the debate surrounding public hearings in investment treaty arbitration within a broader legal landscape, encompassing both national and international courts and tribunals. By examining the evolution of public hearings, it is argued that a steady shift in the understanding of the principle of public hearings over time may be detected. Public hearings have gone from serving merely as a means of protecting the individual from the secrecy and arbitrariness of the state, to becoming a democratic tool which the public is entitled to use not only in order to monitor and evaluate the administration of justice, but also as a platform for facilitating further public debate. 
In other words, the thesis demonstrates a shift in the understanding of the principle of public hearings from being a mere right of an individual to be heard in an open court, to the (additional) right of the general public to have an insight into what goes on in the courtroom and the active duty of the courts to ensure that this right is respected. This latter aspect of the principle of public hearings is subject to comparative analysis which examines the normative and practical solutions adopted by national and international courts when applying the principle of public hearings. While detecting a divergent legal landscape when it comes to providing public access to hearings, this thesis reveals a general trend towards greater regulation of the ways in which the public may obtain such access. What is more, it shall be shown that, in an era of expanded media coverage of public hearings, the subsequent enlargement of the audiences for such hearings, and the possibility to instantly disseminate information about proceedings through various technologies creates new paths for procedural openness and new challenges for the courts. Based on the findings of this comparative analysis, the thesis argues that it is not only the principle of public hearings which has been renewed and transformed. In seeking to adapt to the principle, the procedures in which the principle of public hearings operates have also started to change. This comparative analysis then forms the basis upon which a critical analysis and in-depth assessment is provided within the context of public hearings in investor-state arbitration. From a more “dispute-oriented” perspective, the thesis looks into the considerations and challenges that ought to be taken into account by arbitral tribunals and parties when organising a public hearing. 
By not losing sight of the implications for the system as a whole, however, the thesis addresses the future impact that the introduction of public hearings into the system of investor-state arbitration may have on that system and, notably, upon its procedures. The key finding here is that the increasing relevance of public hearings in investor-state arbitration constitutes merely one part of the overall evolution of the “public” dimension of the requirement of public hearings. Taking these developments together, the thesis concludes that the debate on what constitutes a truly public hearing has entered a new epoch, with new actors and new challenges.

Detailed reference viewed: 237 (8 UL)
DESIGN AND OPTIMIZATION OF SIMULTANEOUS WIRELESS INFORMATION AND POWER TRANSFER SYSTEMS
Gautam, Sumit UL

Doctoral thesis (2020)

The recent trends in the domain of wireless communications indicate severe upcoming challenges, both in terms of infrastructure and the design of novel techniques. At the same time, the world population witnesses new generations of mobile/wireless technologies every half to one decade. While wireless communication systems have certainly enabled the exchange of information without physical cables, mobile devices still depend on power cables. Each passing year unveils critical challenges related to increasing capacity and performance needs, power optimization in complex hardware circuitry, user mobility, and the demand for ever better energy-efficiency algorithms at the wireless devices. Moreover, the continuous battery drainage of these power-limited devices under ever-growing demands raises an additional issue. Optimal performance at any device is thus heavily constrained by continuous recharging of the equipment, whether wired or via inductive wireless charging. This process is very inconvenient, and the problem is foreseen to persist in the future, irrespective of the wireless communication method used. A promising idea, the simultaneous wireless radio-frequency (RF) transmission of information and energy, came into the spotlight during the last decade. This technique not only offers a more flexible recharging alternative, but also ensures its co-existence with any of the existing (RF-based) or alternatively proposed methods of wireless communications, such as visible light communications (VLC) (e.g., Light Fidelity (Li-Fi)), optical communications (e.g., LASER-equipped communication systems), and far-envisioned quantum-based communication systems. 
In addition, this scheme is expected to cater to the needs of many current and future technologies such as wearable devices, sensors used in hazardous areas, and 5G and beyond. This Thesis presents a detailed investigation of several scenarios in this direction, specifically concerning the design and optimization of such RF-based power transfer systems. The first chapter provides a detailed overview of the topic, which serves as the foundation: it highlights the main contributions, discusses the adopted mathematical (optimization) tools, and outlines the organization of the Thesis. Following this, a detailed survey on wireless power transmission (WPT) techniques is provided, covering the historical developments of WPT up to its present forms, the combination of WPT with wireless communications, and its compatibility with existing techniques. Moreover, a review of various types of RF energy harvesting (EH) modules is incorporated, along with a brief overview of the system modeling, the modeling assumptions, and recent industrial considerations. The main body of the Thesis is divided into three research topics, as follows. Firstly, the notion of simultaneous wireless information and power transmission (SWIPT) is investigated within a cooperative systems framework consisting of a single source, multiple relays and multiple users. In this context, aspects such as relay selection, multi-carrier operation, and resource allocation are considered, along with problem formulations dealing with the maximization of throughput, the maximization of harvested energy, or both. Secondly, the Thesis builds on the idea of transmit precoder design for wireless multigroup multicasting systems in conjunction with SWIPT. 
Herein, the advantages of adopting separate multicasting and energy precoder designs are illustrated, and the benefits of multiple-antenna transmitters are investigated by exploiting the similarities between broadcasting information and wirelessly transferring power. The proposed design not only facilitates the SWIPT mechanism, but may also serve as a potential candidate to complement separate waveform designs with exclusive RF signals meant for information and power transmission, respectively. Lastly, a novel mechanism is developed to establish a relationship between SWIPT and cache-enabled cooperative systems. In this direction, the benefits of adopting the SWIPT-caching framework are illustrated, with special emphasis on an enhanced rate-energy (R-E) trade-off in contrast to traditional SWIPT systems. The common notion in the context of SWIPT revolves around the transmission of information and the storage of power. In this vein, the proposed work investigates a system wherein both information and power can be transmitted and stored. The Thesis concludes with insights on future directions and open research challenges associated with the considered framework.
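The rate-energy (R-E) trade-off at the heart of SWIPT can be illustrated with the textbook power-splitting receiver model (a generic sketch with assumed parameter values, not the specific designs of this thesis): a received signal of power P is split with ratio rho between the information decoder and the energy harvester.

```python
import numpy as np

def rate_energy_tradeoff(P_rx=1e-3, snr_full=100.0, eta=0.6, n=5):
    """Sketch of the power-splitting SWIPT rate-energy trade-off.

    P_rx     : received RF power in watts (assumed value)
    snr_full : SNR if all received power fed the information decoder (assumed)
    eta      : RF-to-DC conversion efficiency of the harvester (assumed)
    """
    results = []
    for rho in np.linspace(0.0, 1.0, n):      # fraction routed to the decoder
        rate = np.log2(1.0 + rho * snr_full)  # bits/s/Hz from the rho share
        energy = eta * (1.0 - rho) * P_rx     # harvested DC power from the rest
        results.append((rho, rate, energy))
    return results

for rho, r, e in rate_energy_tradeoff():
    print(f"rho={rho:.2f}  rate={r:5.2f} bit/s/Hz  harvested={e*1e3:.3f} mW")
```

Sweeping rho from 0 to 1 makes the trade-off explicit: the achievable rate grows while the harvested energy shrinks, which is exactly the curve that SWIPT designs such as those in the thesis seek to push outward.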

Detailed reference viewed: 218 (17 UL)
Urban Green Amenity and City Structure
Tran, Thi Thu Huyen UL

Doctoral thesis (2020)

One of the main components that make cities attractive to their residents is their system of public parks and gardens. Green urban areas, from a small community garden to famous parks such as the `Jardin du Luxembourg' in Paris, not only shape the face of the city but are a quintessential aspect of the quality of life of local inhabitants. They offer places for local recreation, beautiful views, cleaner air and many other advantages. Recent research has validated the connection between urban parks and the well-being of a city's inhabitants. Although green urban areas might seem meagre in comparison with other natural ecosystems such as wetlands or forests, the value of the environmental, recreational, and other services they offer is likely to be disproportionately high due to their strategic locations. This dissertation studies the optimal provision of green urban areas and the welfare effects of a substantial change in green provision policies in the presence of other types of land use and adverse shocks. It comprises four papers (chapters).

Detailed reference viewed: 71 (23 UL)
Taking Language Out Of The Equation: The Assessment Of Basic Math Competence Without Language
Greisen, Max UL

Doctoral thesis (2020)

Although numeracy, next to literacy, is an essential skill in many knowledge-based societies of the 21st century, between 5 and 10% of the population suffer from more or less severe mathematics learning disorders or dyscalculia. Mathematical ability, however, is not a pure construct; it maintains a complex relationship with linguistic abilities. This relationship has significant implications for the assessment of a person's mathematical ability in multilingual contexts. The research project presented here addresses the consequences of that relationship in the context of Luxembourg, a highly multilingual country at the center of Europe. The aim of the project was to tackle the psychometric issues that arise when the test taker does not sufficiently master the language of the test, and to offer an alternative to available assessment batteries based on verbal instructions and tasks. In the first study, we demonstrate the role of reading comprehension in the language of instruction on third graders' performance in mathematics in Luxembourg and show that non-native speakers' underachievement in mathematics can be largely or entirely explained by their insufficient reading comprehension in the language of assessment. In the next study we report on the first two pilot studies with NUMTEST, an assessment battery that aims to measure children's basic mathematical competence by replacing verbal instructions and task content with video instructions and animated tasks. The findings of these studies show that children's basic mathematical competence can indeed be reliably assessed using this new paradigm. Opportunities and limitations of the paradigm are discussed. The third and final study of this project addresses the psychometric characteristics of the newly developed assessment battery. Its findings show that the NUMTEST battery provides good reliability and concurrent validity while being language-neutral. 
In summary, the presented project provides an encouraging proof of concept for the video instruction method while offering preliminary evidence for its validity as an early screener for math learning difficulties.

Detailed reference viewed: 62 (8 UL)
A Defence of Moral Revisionism
Lamothe, Christopher Laurence UL

Doctoral thesis (2020)

This dissertation examines the implications of J. L. Mackie’s moral error theory. Rather than attempting to prove that moral error theory is true, I analyze the responses to moral error theory, in order to highlight the various problems that arise when we believe that there are no moral facts. Some of the problems for moral error theorists relate to moral language, moral attitudes and moral desert. The positions that I analyze include: moral fictionalism, moral conservationism, moral negotiationism, moral conversionism, moral propagandism and various forms of weak and strong abolitionism. Ultimately, I conclude that the responses that have been offered thus far have been problematic. In addition to explaining all of the established responses to moral error theory, I offer my own response, called moral revisionism. Moral revisionism is a version of weak abolitionism that recommends abolishing moral attitudes. It also recommends reframing moral discourse into a discourse based on the satisfaction of desires. As such, it is heavily dependent on hypothetical imperatives. Even though moral error theory poses numerous challenges, I argue that these challenges are not insurmountable if we adopt moral revisionism. Moreover, a society that adopts moral revisionism could be even more instrumentally valuable than a moral one.

Detailed reference viewed: 73 (3 UL)
MAGRID - FROM DEVELOPING A LANGUAGE-NEUTRAL LEARNING APPLICATION TO PREDICTIVE LEARNING ANALYTICS
Pazouki, Tahereh UL

Doctoral thesis (2020)

Mathematical proficiency serves as one of the foundations that must be solid if learners are to succeed in the classroom. The hierarchical nature of mathematical development means that basic math skills must form the groundwork for subsequent mathematical constructs; a fragile mathematical foundation leads to unavoidable roadblocks further ahead. Early childhood education and the preschool years are therefore highly foundational, and it is crucial to train early mathematical abilities. Since teaching and testing in schools rely on communication between the teacher and learner, fluency in the language of instruction exerts a tangible influence on this process. Consequently, students who are not proficient in the language of instruction are set up to have a poor mathematical foundation that will likely hold them back relative to their peers. This shortcoming can drive a gap between learners within heterogeneous school settings, which becomes a hurdle for students and teachers alike. Because digital devices are now more available and present in today's classrooms, digital interventions (e.g., computer software and tablet applications) are one channel that can be used to bridge such a gap through the introduction of language-neutral training and testing programs. Students who are not proficient in the language of instruction would be enabled to acquire and maintain mastery of early mathematical skills when the emphasis is placed on visual content rather than language skills in the teaching of mathematical concepts. Moreover, when digital devices are used as learning platforms, learners' interactions (behavior) with the training content can be recorded in log files. Analyzing these log data can provide teachers with detailed information on learners' progress, which may ultimately remove the need to administer tests and examinations. 
The present dissertation explores the potential of educational technologies in the teaching and testing of early math abilities. In this regard, we address two main areas of focus. The first is to examine the possibility of developing a language-neutral application for teaching mathematical abilities to young learners, so that they do not need to depend on their language proficiency to acquire mathematical skills. Findings of empirical studies demonstrate that students who participated in the early mathematics training program using the MaGrid application performed significantly better on various measures of early mathematical abilities. The MaGrid application can therefore provide an efficient way to reinforce the math knowledge of all preschoolers, including segments (second-language learners) that have traditionally been underserved. The second focus is on inferring young learners' levels of mathematical competence from their interactions with a learning platform. The idea behind analyzing log data of learners' behavior was to evaluate an alternative, behavioral-based approach to formative assessment for measuring students' improvements and competency levels. Along these lines, we derived system-specific behavioral indicators using the presented systematic approach and then evaluated their predictive power. The findings illustrate that assessing learners' level of knowledge can be carried out through the analysis of the traces students leave in an e-learning environment, without interrupting their learning process. The results further unveil how digital devices can be used in schools to enhance learning outcomes.
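The behavioral-indicator idea can be sketched with a toy example (the indicator names and all numbers here are invented for illustration and are not MaGrid's actual log schema): aggregate each learner's log events into a few features and fit a simple linear predictor of a competence score.

```python
import numpy as np

# Toy log-derived indicators per learner (invented for illustration):
# columns = [success_rate, mean_response_time_s, hints_used]
X = np.array([
    [0.90, 3.1, 0],
    [0.75, 4.0, 1],
    [0.60, 5.2, 3],
    [0.40, 6.5, 5],
    [0.85, 3.5, 1],
    [0.55, 5.8, 4],
], dtype=float)
y = np.array([88, 74, 60, 41, 83, 55], dtype=float)  # synthetic test scores

# Ordinary least squares with an intercept column appended
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 on the toy data: {r2:.3f}")
```

On data this well-behaved the fit is nearly perfect; the dissertation's point is that once such indicators predict competence reliably, log analysis can substitute for interrupting learners with a formal test.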

Detailed reference viewed: 69 (10 UL)
SOCIAL CLAUSES IN OUTSOURCING PROCESSES. SOCIAL RIGHTS VERSUS ECONOMIC FREEDOMS IN THE PRISM OF THE MULTILEVEL LEGAL ORDER
Marchi, Giulia UL

Doctoral thesis (2020)

The aim of the thesis is to study the different provisions included in the notion of social clause and to offer an in-depth and systematic analysis of some types of social clauses in the Italian and European Union legal orders: their characteristics, their regulation, the legal interests they aim to protect, and their classification in the multilevel legal order. This study is a preliminary step towards assessing the legitimacy of such clauses in relation to economic freedoms. Indeed, the application of social clauses is one of the fields in which the contrast between fundamental social rights and economic freedoms emerges most clearly, and in relation to which differences occur in the reconciliation of interests made by the Courts at national and EU level. The most relevant distinction for this research rests on the classification developed by Italian scholars, who, on the basis of the contents and interests protected by social clauses, usually distinguish between equal-treatment (first-generation) social clauses and rehiring (second-generation) social clauses. The nature of the procurement contract, public or private, may also condition the judgment on the legitimacy of social clauses with regard to economic freedoms, due to the different regulations applicable in the two fields and the tension between the protection of competition and social objectives which historically characterizes the action of the public administration in the field of public procurement. The thesis investigates the legislation and the case law concerning first- and second-generation social clauses, both statutory and contractual, at the European and Italian level, with reference to public and private procurement. 
From this study, it is clear that there are problems of effectiveness and applicability concerning first- and second-generation social clauses, as well as conflicts of interest: social clauses may hinder competition, negatively condition the entrepreneur’s freedom to conduct a business, affect entrepreneurs’ freedom of association, and therefore risk conflicting with the constitutional and European provisions protecting those freedoms. In this reasoning, one must consider the different levels of protection and importance accorded to social rights and economic freedoms in the Italian and European Union legal systems, the legal framework generated by their interaction, and the type of balancing in the two legal orders. In the search for a reasonable balance between economic freedoms and social rights, the thesis explores whether the various interests protected by the different types of social clauses can fall within the notion of employment protection which constitutes an overriding reason relating to the public interest justifying a limitation of economic freedoms, and investigates to what extent the prevention of unfair competition and social dumping, the protection of workers’ rights, and employment stability can justify a restriction of fundamental economic freedoms. In conclusion, the study investigates whether it is possible to achieve a fair balance between the opposing interests of social rights, on the one hand, and economic freedoms and free competition, on the other, and how the interests at stake in the case of social clauses can be balanced.

Detailed reference viewed: 49 (6 UL)
National Banking Law in the European Single Supervisory Mechanism
Voordeckers, Olivier UL

Doctoral thesis (2020)

In 2013, the European Central Bank acquired the competence to apply national banking law in a direct manner to banks established in the euro area, as part of its supervisory competence under the Single Supervisory Mechanism. This marks the first instance of direct administration of national law by a European institution. The thesis scrutinises the salient features as well as the strengths and weaknesses of this new administrative system in the landscape of European Union law.

Detailed reference viewed: 69 (2 UL)
PERFORMANCE EVALUATION AND MODELLING OF SAAS WEB SERVICES IN THE CLOUD
Ibrahim, Abdallah Ali Zainelabden Abdallah UL

Doctoral thesis (2020)

This thesis studies the problem of performance evaluation and assurance of communications and service quality in cloud computing. The cloud computing paradigm has significantly changed the way of doing business. With cloud computing, companies and end-users can access the vast majority of services online through a virtualized environment. The three main services typically consumed by cloud users are Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). Cloud Service Providers (CSPs) deliver cloud services to customers on a pay-per-use model, while the quality of the provided services is defined using Service Level Agreements (SLAs). Unfortunately, no standard mechanism exists to automatically verify and assure that delivered services satisfy the signed SLA, which impedes accurate measurement of the Quality of Service (QoS). In this context, this thesis offers an automatic framework to evaluate the QoS and SLA compliance of Web Services (WSs) offered across several CSPs. Unlike other approaches, the framework quantifies the performance and scalability of the delivered WS in a fair and stealthy way. Stealthiness refers to the capacity to evaluate a given cloud service through multiple workload patterns that make the evaluation indistinguishable from regular user traffic from the provider's point of view. This thesis work is motivated by recent scandals in the automotive sector, which demonstrate the capacity of solution providers to adapt the behavior of their product when submitted to an evaluation campaign in order to improve the performance results. The framework defines a set of common performance metrics handled by a set of agents within customized clients for measuring the behavior of cloud applications on top of a given CSP. 
Once modeled accurately, the agent behavior can be dynamically adapted to hide the true nature of the framework client from the CSP. In particular, the following contributions are proposed: • A new framework of performance metrics for the communication systems of cloud SaaS, which evaluates and classifies in a fair and stealthy way the performance and scalability of the delivered WS across multiple CSPs. • An analysis of the performance metrics for cloud SaaS Web Services, covering all the metrics that could be used to evaluate and monitor the behavior of cloud applications. • Benchmarking of cloud SaaS applications and web services using referenced benchmarking tools and frameworks. • Modelling of the SaaS WS through a set of Gaussian models. These models can help other researchers generate data representing a CSP's behavior under high load and under normal usage in a couple of minutes, without any experiments. • A novel optimization model to obfuscate the testing from the CSP and achieve stealthiness. The optimization process relies on meta-heuristic and machine learning algorithms, namely a Genetic Algorithm and Gaussian Process Regression, respectively. • A virtual QoS aggregator and SLA checker which evaluates the QoS and SLA compliance of the WS offered across the considered CSPs. • A ranking of multiple CSPs based on multi-criteria decision analysis.
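A one-dimensional sketch of the Gaussian-modelling idea (all numbers are invented for illustration; the thesis's actual models cover more metrics): characterise a service's response time under normal usage by a fitted mean and standard deviation, then generate synthetic samples from that model instead of re-running experiments.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend these are measured response times (ms) under normal usage.
measured = rng.normal(loc=120.0, scale=15.0, size=500)

# "Fitting" the Gaussian model is just estimating mean and std deviation.
mu, sigma = measured.mean(), measured.std(ddof=1)

# Generate synthetic response times from the fitted model -- the
# "in a couple of minutes, without any experiments" idea from the abstract.
synthetic = rng.normal(loc=mu, scale=sigma, size=500)

print(f"fitted model: mu={mu:.1f} ms, sigma={sigma:.1f} ms")
print(f"synthetic sample mean: {synthetic.mean():.1f} ms")
```

Once such a model is in hand, a separate model fitted under high load lets one compare the two regimes, or feed realistic traffic into an SLA checker, without touching the live CSP again.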

Detailed reference viewed: 43 (7 UL)
Experimentelle Untersuchungen und analytischen Modellierung von adiabaten Siedevorgängen in Naturumlaufsystemen
Haag, Michel UL

Doctoral thesis (2020)

Modern nuclear reactors increasingly use passive safety systems to ensure the integrity of the containment in the event of an accident. Natural circulation systems allow passive heat removal from the containment to the environment. One of their disadvantages is that instabilities develop as soon as evaporation occurs in the system. This thesis investigates instabilities in two-phase natural circulation systems. After a comprehensive literature review of existing test facilities, the first part of this work presents experimental investigations of the stability behaviour of natural circulation systems. To conduct these investigations, the INTRAVIT test facility was designed and constructed at the University of Luxembourg. INTRAVIT offers both a high degree of flexibility in the design of the pipelines and the advantage of a direct, electrically controllable heat supply. Two measurement campaigns were carried out. During the first campaign, the influence of the heating tube inclination angle on the instabilities was investigated at constant riser pipe length. During the second campaign, the influence of the riser pipe length and of the flow resistance in the downcomer pipe were investigated at a constant inclination angle of the heating tube. For these investigations, temperatures, mass flow and the void distribution in the riser pipe were analysed. In addition, the pressure response during the instabilities was measured to investigate the pressure shocks caused by water hammer. The second part of this work develops an analytical model to describe evaporation processes in adiabatic pipes. The model consists of several sub-models that calculate the interfacial area density and the evaporation rate as a function of the flow pattern. An integrated nucleation model calculates the onset of evaporation. The derived model was implemented in the system code ATHLET.
Experiments from the INTRAVIT measurement campaigns were then modelled using both the new evaporation model and the standard evaporation model, and both were compared with the measurement data. The design and construction of the INTRAVIT test facility provide a foundation for future research on instabilities in natural circulation systems. Moreover, a new evaporation model is presented, which can easily be adapted and refined by modifying individual sub-models.
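The flow-pattern-dependent structure of such an evaporation model can be illustrated with a minimal sketch. The function names, the bubbly/annular interfacial-area correlations, and the default values below are illustrative assumptions, not the sub-models actually implemented in ATHLET:

```python
import math

def interfacial_area_density(flow_pattern, void_fraction, pipe_diameter,
                             bubble_diameter=1e-3):
    """Interfacial area density a_i [1/m] as a function of the flow pattern."""
    if flow_pattern == "bubbly":
        # spherical bubbles of diameter d_b: a_i = 6 * alpha / d_b
        return 6.0 * void_fraction / bubble_diameter
    if flow_pattern == "annular":
        # liquid film on the wall: interface is roughly an inner cylinder
        return 4.0 * math.sqrt(void_fraction) / pipe_diameter
    raise ValueError(f"unknown flow pattern: {flow_pattern}")

def evaporation_rate(a_i, h_i, t_liquid, t_sat, h_fg):
    """Volumetric evaporation rate Gamma [kg/(m^3 s)] from interfacial heat flux."""
    q_i = h_i * (t_liquid - t_sat)     # interfacial heat flux [W/m^2]
    return max(a_i * q_i / h_fg, 0.0)  # no condensation branch in this sketch
```

The pattern-dependent `a_i` is what lets a single evaporation-rate expression adapt to bubbly versus annular flow, mirroring the sub-model decomposition described above.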

Machine Learning-based Methods for Driver Identification and Behavior Assessment: Applications for CAN and Floating Car Data
Jafarnejad, Sasan UL

Doctoral thesis (2020)

The exponential growth of car-generated data, increased connectivity, and advances in artificial intelligence (AI) enable novel mobility applications. This dissertation focuses on two use-cases of driving data, namely distraction detection and driver identification (ID). Low- and medium-income countries account for 93% of traffic deaths; moreover, a major contributing factor to road crashes is distracted driving. Motivated by this, the first part of this thesis explores the possibility of an easy-to-deploy solution for distracted driving detection. Most of the related work uses sophisticated sensors or cameras, which raises privacy concerns and increases cost. Therefore, a machine learning (ML) approach is proposed that uses only signals from the CAN-bus and the inertial measurement unit (IMU). It is then evaluated against a hand-annotated dataset of 13 drivers and delivers reasonable accuracy. This approach is limited in detecting short-term distractions but demonstrates that a viable solution is possible. In the second part, the focus is on the effective identification of drivers from their driving behavior. The aim is to address the shortcomings of state-of-the-art methods. First, a driver ID mechanism based on discriminative classifiers is used to find a set of suitable signals and features. It uses five signals from the CAN-bus with hand-engineered features, an improvement over the current state-of-the-art, which mainly focused on external sensors. The second approach is based on Gaussian mixture models (GMMs); although it uses only two signals and fewer features, it shows improved accuracy. In this system, the enrollment of a new driver does not require retraining of the models, which was a limitation of the previous approach. In order to reduce the amount of training data, a Triplet network is used to train a deep neural network (DNN) that learns to discriminate drivers.
The training of the DNN does not require any driving data from the target set of drivers. The DNN encodes pieces of driving data into an embedding space so that examples of the same driver appear close to each other and far from examples of other drivers. This technique reduces the amount of data needed for accurate prediction to under a minute of driving. These three solutions are validated against a real-world dataset of 57 drivers. Lastly, the possibility of a driver ID system that uses only floating car data (FCD), in particular GPS data from smartphones, is explored. A DNN architecture is designed that encodes the routes, origin and destination coordinates, as well as various other features computed from contextual information. The proposed model is evaluated against a dataset of 678 drivers and shows high accuracy. In a nutshell, this work demonstrates that proper driver ID is achievable. The constraints imposed by the use-case and data availability negatively affect performance; in such cases, the efficient use of the available data is crucial.
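The embedding-space idea behind the Triplet-network approach can be sketched with the standard triplet loss and a nearest-centroid matcher. This is a generic construction, not the thesis's exact architecture; the `identify` helper and the enrolled-centroid scheme are illustrative assumptions:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss: pull same-driver embeddings together, push other drivers apart."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # squared dist., same driver
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # squared dist., other driver
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

def identify(embedding, enrolled):
    """Nearest-centroid driver ID: enrolling a new driver needs no retraining,
    only the centroid of a few of their embeddings."""
    return min(enrolled, key=lambda d: np.linalg.norm(embedding - enrolled[d]))
```

Because identification happens by distance in the embedding space, the network never needs training data from the target drivers, matching the enrollment property described above.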

The ALMA-Yactul Ecosystem: A Holistic Approach for Student-centered Integration of Learning Material
Grevisse, Christian UL

Doctoral thesis (2020)

Digital learning resources play a key role in technology enhanced learning, yet their organization poses a challenge to both learners and teachers. Students are confronted with an ever-growing amount of available resources in an open, heterogeneous corpus. Finding relevant learning material for a given context or task, such as an exercise, is non-trivial, especially for novices in a complex domain, who often lack specific search skills. In addition, there is often no direct link between the learning material and the task at hand, and the constant interruption of the task in order to search for resources may have an impact on the cognitive load. Moreover, from the perspective of teachers and instructional designers, authoring high-quality learning material is a time-intensive task. Hence, reusing the authored material in multiple contexts would be beneficial. This dissertation addresses these issues by proposing the ALMA-Yactul ecosystem, a holistic approach for student-centered integration of learning material. Learners can benefit from scaffolding support to retrieve learning material relevant to their current context at a fine-grained level and across the boundaries of individual courses. This integration is showcased in multiple applications and domains, such as a plugin for an Integrated Development Environment or an enhanced sketchnoting app. While the former provides novices in computer programming the necessary tools to scaffold the search for heterogeneous documents on fine-grained syntactical elements, the latter allows for suggesting further information while taking notes in class. In both cases, it is not necessary for learners to leave their current study environment. To implicitly link learning resources and tasks, Semantic Web technologies such as ontologies are used to annotate documents. For this purpose, an extensible and lightweight modular domain ontology for programming languages has been created. 
While the main study domain in this work is computer science with a special focus on programming, the transferability of the proposed approach to other domains is demonstrated through multiple examples. Furthermore, to foster the active engagement of students in the learning process, Yactul, a game-based platform for continuous active learning, has been developed. Apart from its use in the classroom, the platform also provides formative assessment to the individual learner by keeping track of their performance on a per-concept basis. Yactul activities are a key element in the ecosystem, both benefitting from and contributing to the integration of learning material. Finally, teachers are assisted in semantically enhancing their resources through semi-automatic annotation support within popular authoring tools. A Knowledge Graph-based approach is employed for core concept identification. Apart from analysing the usage of this ecosystem and evaluating user satisfaction in university courses, an experiment with high school pupils lacking prior knowledge in programming yielded positive results with respect to the proposed scaffolding support.
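The fine-grained linking of syntax elements to annotated learning material can be sketched as a concept lookup. The concept IRIs, the token map, and the resource annotations below are invented placeholders standing in for the actual ontology and Semantic Web annotations:

```python
# Hypothetical mini ontology: syntax token -> concept IRI.
ONTOLOGY = {
    "for":   "prog:ForLoop",
    "while": "prog:WhileLoop",
    "def":   "prog:FunctionDefinition",
}

# Hypothetical learning resources, each annotated with ontology concepts.
RESOURCES = {
    "slides/loops.pdf":   {"prog:ForLoop", "prog:WhileLoop"},
    "notes/functions.md": {"prog:FunctionDefinition"},
}

def suggest_resources(code_snippet):
    """Map fine-grained syntax elements in the learner's current context to
    concepts, then return resources annotated with any matching concept."""
    concepts = {ONTOLOGY[t] for t in code_snippet.split() if t in ONTOLOGY}
    return sorted(r for r, tags in RESOURCES.items() if tags & concepts)
```

This is the mechanism that lets an IDE plugin surface relevant documents for the code a novice is currently writing, without the learner leaving the study environment.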

SYMBOL LEVEL PRECODING TECHNIQUES FOR HARDWARE AND POWER EFFICIENT WIRELESS TRANSCEIVERS
Domouchtsidis, Stavros UL

Doctoral thesis (2020)

Large-scale antenna arrays are crucial for next-generation wireless communication systems, as they improve spectral efficiency, reliability and coverage compared to traditional systems employing arrays with only a few elements. However, the large number of antenna elements leads to a steep increase in the power consumption of conventional fully digital transceivers due to the requirement of one Radio Frequency (RF) chain per antenna element. The RF chains include a number of different components, among which are the Digital-to-Analog Converters (DACs) and Analog-to-Digital Converters (ADCs), whose power consumption increases exponentially with the resolution they support. Motivated by this, in this thesis a number of different architectures are proposed with a view to reducing the power consumption and the hardware complexity of the transceiver. In order to optimize data transmission through them, corresponding symbol-level precoding (SLP) techniques were developed for the proposed architectures. SLP is a technique that mitigates multi-user interference (MUI) by designing the transmitted signals using the Channel State Information and the information-bearing symbols. The cases of both frequency-flat and frequency-selective channels were considered. First, three power-efficient transmitter designs for transmission over frequency-flat channels and their respective SLP schemes are considered. The considered systems tackle the high hardware complexity and power consumption of existing SLP techniques by reducing the number of fully digital RF chains or eliminating them completely. The precoding design is formulated as a constrained least-squares problem, and efficient algorithmic solutions are developed via the Coordinate Descent method. Next, the case of frequency-selective channels is considered. To this end, Constant Envelope precoding in a Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing system (CE MIMO-OFDM) is considered.
In CE MIMO-OFDM the transmitted signals for each antenna are designed to have constant amplitude regardless of the channel realization and the information symbols that must be conveyed to the users. This facilitates the use of power-efficient components, such as phase shifters and non-linear power amplifiers. The precoding problem is first formulated as a least-squares problem with a unit-modulus constraint and solved using an algorithm based on the cyclic coordinate descent (CCD) optimization framework; then, after reformulating the problem into an unconstrained non-linear least-squares problem, a more computationally efficient solution using the Gauss-Newton algorithm is presented. Next, CE MIMO-OFDM is considered for a system with low-resolution DACs. The precoding design problem is formulated as a mixed discrete-continuous least-squares optimization problem, which is NP-hard. An efficient low-complexity solution is developed, also based on the CCD optimization framework. Finally, a precoding scheme is presented for OFDM transmission in MIMO systems based on one-bit DACs and ADCs at the transmitter's and the receiver's ends, respectively, as a way to reduce the total power consumption. The objective of the precoding design is to mitigate the effects of one-bit quantization; the problem is formulated and then split into two NP-hard least-squares optimization problems, for which algorithmic solutions are developed based on the CCD framework.
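The appeal of coordinate descent for the unit-modulus least-squares problem is that each coordinate update has a closed form: with all other entries fixed, the optimal unit-modulus entry aligns with the correlation between its column and the residual. The following is a generic cyclic coordinate descent sketch of this idea, not the algorithms of the thesis:

```python
import numpy as np

def ce_precode(A, y, iters=50):
    """Cyclic coordinate descent for min ||y - A x||^2 s.t. |x_i| = 1
    (constant-envelope constraint on every transmit signal entry)."""
    n = A.shape[1]
    # random unit-modulus initialization (fixed seed for reproducibility)
    x = np.exp(1j * 2 * np.pi * np.random.default_rng(0).random(n))
    r = y - A @ x                              # current residual
    for _ in range(iters):
        for i in range(n):
            r += A[:, i] * x[i]                # remove coordinate i from residual
            c = np.vdot(A[:, i], r)            # a_i^H r (vdot conjugates arg 1)
            x[i] = c / abs(c) if abs(c) > 0 else 1.0   # optimal phase, closed form
            r -= A[:, i] * x[i]                # put updated coordinate back
    return x
```

Each inner update exactly minimizes the objective over one coordinate, so the objective is monotonically non-increasing across sweeps even though the overall problem is non-convex.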

The low-dimensional algebraic cohomology of infinite-dimensional Lie algebras of Virasoro-type
Ecker, Jill Marie-Anne UL

Doctoral thesis (2020)

In this doctoral thesis, the low-dimensional algebraic cohomology of infinite-dimensional Lie algebras of Virasoro-type is investigated. The considered Lie algebras include the Witt algebra, the Virasoro algebra and the multipoint Krichever-Novikov vector field algebra. We consider algebraic cohomology, meaning we do not impose any continuity constraints on the cochains. The Lie algebras are considered as abstract Lie algebras in the sense that we do not work with particular realizations of them. The results are thus independent of any underlying choice of topology. The thesis is self-contained, as it starts with a technical chapter introducing the definitions, concepts and methods used throughout. For motivational purposes, some time is spent on the interpretation of the low-dimensional cohomology. First results include the computation of the first and the third algebraic cohomology of the Witt and the Virasoro algebra with values in the trivial and the adjoint module, the second algebraic cohomology being known already. A canonical link between the low-dimensional cohomology of the Witt and the Virasoro algebra is exhibited using the Hochschild-Serre spectral sequence. Further results are given by the computation of the low-dimensional algebraic cohomology of the Witt and the Virasoro algebra with values in general tensor-densities modules. The study consists of a mix of elementary algebra and algorithmic analysis. Finally, some results concerning the low-dimensional algebraic cohomology of the multipoint Krichever-Novikov vector field algebra are derived. The thesis concludes with an outlook containing possible short-term goals that could be achieved in the near future as well as some long-term goals.
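For orientation, the smallest of these algebras can be written down explicitly. With basis $\{L_n\}_{n \in \mathbb{Z}}$, the Witt algebra and its universal central extension, the Virasoro algebra, are given by (one common sign convention):

```latex
% Witt algebra
[L_m, L_n] = (m - n)\, L_{m+n}

% Virasoro algebra: central extension of the Witt algebra by the
% Gelfand--Fuks 2-cocycle, with central element C
[L_m, L_n] = (m - n)\, L_{m+n} + \frac{m^{3} - m}{12}\, \delta_{m+n,0}\, C,
\qquad [L_m, C] = 0
```

Up to coboundaries, the central term is the unique nontrivial 2-cocycle with values in the trivial module, which is the sense in which the second algebraic cohomology of the Witt algebra is "known already".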

Deep Pattern Mining for Program Repair
Liu, Kui UL

Doctoral thesis (2019)

Error-free software is a myth. Debugging thus accounts for a significant portion of software maintenance and absorbs a large part of software cost. In particular, the manual task of fixing bugs is tedious, error-prone and time-consuming. In the last decade, automatic bug-fixing, also referred to as automated program repair (APR), has boomed as a promising endeavor of software engineering towards alleviating developers' burden. Several potentially promising techniques have been proposed, making APR an increasingly prominent topic in both the research and practice communities. In production, APR will drastically reduce time-to-fix delays and limit downtime. In a development cycle, APR can help suggest changes to accelerate debugging. As an emergent domain, however, program repair has many open problems that the community is still exploring. Our work contributes to this momentum from two angles: the repair of programs for functionality bugs, and the repair of programs for method naming issues. The thesis starts by highlighting findings of key empirical studies that we have performed to inform future repair approaches. Then, we focus on template-based program repair scenarios and explore deep learning models for inferring accurate and relevant patterns. Finally, we integrate these patterns into APR pipelines, which yield state-of-the-art repair tools. The dissertation includes the following contributions: • Real-world Patch Study: Existing APR studies have shown that state-of-the-art techniques in automated repair tend to generate patches for only a small number of bugs, and even those suffer from quality issues (e.g., incorrect behavior and nonsensical changes). To improve APR techniques, the community should deepen its knowledge of repair actions from real-world patches, since most of the techniques rely on patches written by human developers.
However, previous investigations of real-world patches are limited to the statement level, which is not sufficiently fine-grained to build this knowledge. This dissertation starts by deepening this knowledge via a systematic and fine-grained study of real-world Java program bug fixes. • Fault Localization Impact: Existing test-suite-based APR systems are highly dependent on the performance of the fault localization (FL) technique, an early step of the widely studied APR pipeline. However, APR systems generally focus on patch generation and tend to use similar but different strategies for fault localization. To assess the impact of FL on APR, we identify and investigate a practical bias caused by the FL step in a repair pipeline. We propose to highlight the different FL configurations used in the literature, and their impact on APR systems when applied to real bugs. Then, we explore the performance variations that can be achieved by "tweaking" the FL step. • Fix Pattern Mining: Fix patterns (a.k.a. fix templates) have been studied in various APR scenarios. In particular, fix patterns have been widely used in different APR systems. To date, fix pattern mining has mainly been studied in three ways: manual summarization, transformation inference, and code-change action statistics. In this dissertation, we explore mining fix patterns for static bugs by leveraging deep learning and clustering algorithms. • Avatar: Fix-pattern-based patch generation is a promising direction in the APR community. Notably, it has been demonstrated to produce more acceptable and correct patches than those obtained with mutation operators through genetic programming. The performance of fix-pattern-based APR systems, however, depends on the fix ingredients mined from commit changes in development histories. Unfortunately, collecting a reliable set of bug fixes in repositories can be challenging.
We propose to investigate the possibility, in an APR scenario, of leveraging code changes that address violations reported by static bug detection tools. To that end, we build the Avatar APR system, which exploits fix patterns of static analysis violations as ingredients for patch generation. • TBar: Fix patterns are widely used in patch generation for APR; however, the repair performance of individual fix patterns has not been studied. We revisit the performance of template-based APR to build comprehensive knowledge about the effectiveness of fix patterns, and to highlight the importance of complementary steps such as fault localization and donor code retrieval. To that end, we first investigate the literature to collect, summarize and label recurrently-used fix patterns. Based on this investigation, we build TBar, a straightforward APR tool that systematically attempts to apply these fix patterns to program bugs. We thoroughly evaluate TBar on the Defects4J benchmark. In particular, we assess the actual qualitative and quantitative diversity of fix patterns, as well as their effectiveness in yielding plausible or correct patches. • Debugging Method Names: Beyond semantic/static bugs in programs, we note that automatically debugging inconsistent method names is important for improving program quality. In this dissertation, we propose a deep learning based approach to spotting and refactoring inconsistent method names in programs.
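The template-based repair step can be illustrated with a toy pattern. A null-check insertion template of this shape is one of the recurrently-used fix patterns in the APR literature; the function name and string-based API below are invented for illustration only:

```python
def apply_null_check_pattern(statement, expr):
    """Fix-pattern sketch: guard a suspicious Java statement with a null check.

    Template:  if (expr != null) { statement }
    """
    return f"if ({expr} != null) {{ {statement} }}"

# A repair pipeline applies such templates at fault-localized statements,
# then re-runs the test suite and keeps candidates that pass (plausible patches).
candidate = apply_null_check_pattern("s.length();", "s")
```

Real template-based tools work on abstract syntax trees rather than strings, but the generate-and-validate loop around each pattern is the same.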

DEVELOPING INDIVIDUAL-BASED GUT MICROBIOME METABOLIC MODELS FOR THE INVESTIGATION OF PARKINSON’S DISEASE-ASSOCIATED INTESTINAL MICROBIAL COMMUNITIES
Baldini, Federico UL

Doctoral thesis (2019)

The human phenotype is a result of the interactions of environmental factors with genetic ones. Some environmental factors, such as the composition of the human gut microbiota and its associated metabolic functions, are known to impact human health and have been correlated with the development of different diseases. Most importantly, disentangling the metabolic role played by these factors is crucial to understanding the pathogenesis of complex and multifactorial diseases, such as Parkinson’s disease. Microbial community sequencing has become the standard investigation technique for highlighting emerging microbial patterns associated with different health states. However, even if highly informative, this technique alone provides only limited information on the functions associated with a specific microbial community composition. The integration of a systems biology computational modeling approach termed constraint-based modeling with sequencing data (whole genome sequencing and 16S rRNA gene sequencing), together with the deployment of advanced statistical techniques (machine learning), helps to elucidate the metabolic role played by these environmental factors and the underlying mechanisms. The first goal of this PhD thesis was the development and deployment of specific methods for the integration of microbial abundance data (from microbial community sequencing) into constraint-based modeling, and the analysis of the resulting data. The result was the implementation of a new automated pipeline connecting all these methods, which enables the study of the metabolism of different gut microbial communities. Second, I investigated possible microbial differences between a cohort of Parkinson’s disease patients and controls.
I discovered microbial and metabolic changes in Parkinson’s disease patients and their relative dependence on several physiological covariates, thereby exposing possible mechanisms of pathogenesis of the disease. Overall, the work presented in this thesis represents method development for the investigation of previously unexplored functional metabolic consequences associated with microbial changes of the human gut microbiota, with a focus on specific complex diseases such as Parkinson’s disease. The resulting hypotheses could be experimentally validated and could represent a starting point for envisioning possible clinical interventions.
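At its core, constraint-based modeling predicts metabolism by solving a linear program over steady-state flux balances (flux balance analysis). The three-reaction network below is an invented toy, not an actual gut-microbe reconstruction, but it shows the shape of the computation:

```python
import numpy as np
from scipy.optimize import linprog

# Toy model: uptake of metabolite A, conversion A -> B, biomass drain of B.
# Columns: v_uptake, v_conv, v_biomass; rows: steady-state balances S v = 0.
S = np.array([[ 1.0, -1.0,  0.0],    # A: produced by uptake, consumed by conversion
              [ 0.0,  1.0, -1.0]])   # B: produced by conversion, drained as biomass
bounds = [(0, 10), (0, 1000), (0, 1000)]   # diet limits uptake to 10 flux units

# Flux balance analysis: maximize biomass flux subject to S v = 0 and bounds
# (linprog minimizes, so we minimize the negative of the biomass flux).
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
biomass_flux = -res.fun   # -> 10.0, fully limited by the uptake bound
```

Changing the uptake bounds per individual (e.g., from measured microbial abundances and diet) is what makes such models individual-specific.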

IMPROVEMENT OF THE LOAD-BEARING CAPACITY OF DRY-STACKED MASONRY
Chewe Ngapeya, Gelen Gael UL

Doctoral thesis (2019)

Mortar-bonded masonry is one of the oldest construction techniques traditionally used around the world. However, dry-stacked masonry (DSM) is a competitive system that confers significant advantages on masonry: concisely, it saves construction time, requires less skilled labour and eases construction as well as de-construction. Despite all these major benefits, the current use of DSM is hindered by the geometric imperfections of the block units and the lack of adapted design codes. Indeed, the block geometric imperfections, i.e. the bed-joint roughness and the height difference, cause a significantly uneven load distribution in DSM, which generally leads to premature cracking and a drop in the wall compressive strength. On the other hand, the lack of adapted design codes entails significant safety hazards in the construction of such masonry walls. In view of the foregoing, through systematic numerical, experimental and analytical investigations, the present thesis analyses the impact of the block bed-joint imperfections on the mechanical response of axially loaded DSM. Furthermore, it develops a strategy to overcome the block geometric imperfections and alleviate their impact on the load-bearing capacity of DSM. Finally, it develops a design model for predicting the load-bearing capacity of DSM while taking into account the effects of the block geometric imperfections for a safe design. First, a new dry-stacked masonry block was designed and labelled ‘M-Block’. The impact of the bed-joint roughness and the block height variation on the stress distribution in a DSM was analysed through numerical modelling. It is shown that the block height difference yields five potential load cases that block units may experience under axial compression of a DSM wall.
Accordingly, it is also shown that a nominal DSM wall can exhibit different load percolation paths and different damage patterns. Further, a strategy is presented to overcome the bed-joint imperfections, increase the actual contact area in the bed-joints and ultimately improve the load-bearing capacity of DSM by adding a material layer (the ‘contact layer’) on the raw DSMb. The capacity of the contact layer to increase the actual contact area and level the stress distribution was first investigated through numerical models, then evidenced through experimental tests on masonry triplets. The contact layer was also investigated for improving the load-bearing capacity of dry-stacked masonry, with satisfactory results obtained on wallets tested in the lab. As finite element modelling is cumbersome and experimental investigations onerous and laborious, an analytical model was then developed for predicting the load-bearing capacity of DSM. A statistical model was developed to determine a factor δh, which stands for the reduction of the nominal section of a DSM caused by the block height variation. Experimental tests were also performed on masonry triplets to measure the ultimate actual contact in the bed-joints and define a factor δr, which stands for the reduction of the nominal contact area caused by the block bed-joint roughness. The two parameters were then exploited to establish the design model, which takes the block imperfections into account in the prediction of the load-bearing capacity of DSM. The design model was shown to predict the load-bearing capacity of DSM quite well, with a mean accuracy of 93%-106% and a standard deviation of 10%-12%.
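The shape of such a reduction-factor design model can be sketched as a simple product of the nominal capacity with the two imperfection factors. The function and the numerical values in the example are illustrative assumptions, not the calibrated δh and δr of the thesis:

```python
def dsm_load_bearing_capacity(f_block, area_nominal, delta_h, delta_r):
    """Design-model sketch: axial capacity [N] of a dry-stacked masonry wall.

    delta_h: reduction of the nominal section due to block height variation
    delta_r: reduction of the nominal contact area due to bed-joint roughness
    Both lie in (0, 1]; calibrated values would come from the statistical model
    and the triplet tests described above (the numbers below are made up).
    """
    return delta_h * delta_r * f_block * area_nominal

# e.g. 10 MPa blocks over a 0.1 m^2 nominal bed-joint area:
n_rd = dsm_load_bearing_capacity(10e6, 0.1, 0.85, 0.6)   # ≈ 510 kN
```

The two factors act multiplicatively because each independently shrinks the area that actually carries load.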

Essays on Demographic Change and Economic Performance
Iong, Ka Kit UL

Doctoral thesis (2019)

Assessment of Fundamental Motives
Dörendahl, Jan UL

Doctoral thesis (2019)

Motivation, that is, the orientation of the momentary execution of life towards a positively evaluated target state (Rheinberg & Vollmeyer, 2012), is one of the most important psychological constructs related to success in various life domains such as school, work or family life (Pellegrino & Hilton, 2013). To be able to provide guidance to people and to support them in achieving success in these life domains, it is essential to identify those motivational aspects that are relevant for the respective life domain and to make them assessable. Here, motivational aspects that are inherent in a person, such as motives or goals, are promising candidates. Both constructs can be defined as explicit cognitive representations of desirable end states (Brandstätter, Schüler, Puca, & Lozo, 2013; Karoly, 1999). However, motives and goals are distinguished at the level of abstraction, that is, goals are a concrete representation of more abstract motives (Elliot & Church, 1997). In order to broaden our understanding in the assessment of motives and goals that are of relevance for education and work as two of the most important life domains, three studies were conducted in the context of the presented dissertation. In Paper 1, we revised the theory of Fundamental Motives (Havercamp, 1998; Reiss & Havercamp, 1998) and developed a time- and cost-efficient questionnaire to assess them in research settings. Fundamental Motives are a self-contained framework of 16 motives considered to be relevant for people in their everyday lives. The framework is appealing as it provides an approach to narrow down the plethora of motives to those relevant in a variety of life domains. In addition, the framework is already used extensively in coaching for work and other areas of life (Reyss & Birkhahn, 2009). 
First, an initial item pool was successively refined into 16 scales with three items each (named 16 motives research scales; 16mrs) across two samples with a total sample size of N = 569 representative for the German population with respect to age, sex and education. Second, we used another representative sample (N = 999) to validate the questionnaire and explore its nomological network. Results support the reliability and validity of the 16mrs for the assessment of Fundamental Motives. Investigations of the nomological network indicate, that Fundamental Motives represent aspects of personality that are different from the Big Five personality traits (e.g., Costa & McCrae, 1992) and cover motivational aspects beyond the well-established Power, Achievement, Affiliation, Intimacy, and Fear motives (Heckhausen & Heckhausen, 2010). Paper 2 was based on the results of Study 1, since we applied the framework of Fundamental Motives, as measured by the 16mrs constructed in Study 1, to the life domain of work. In coaching in work contexts, Fundamental Motives have been used extensively, not least because of their fine-grained level of abstraction that allows for a straightforward interpretation by the client. By investigating how the satisfaction of fine-grained motives supplied by characteristics of the workplace (i.e., need-supply fit) contribute to job satisfaction, we further validated the framework itself. At the same time were able to gain more detailed insights into how need-supply fit impacts job satisfaction, compared to broader motive or value clusters. To this end, we used the representative sample from Study 1 (N = 999) and selected all working people (n = 723). 
We applied polynomial regression in combination with response surface analysis (e.g., Edwards & Parry, 1993), which allows to simultaneously investigate how different levels of Fundamental Motives on the one hand and different levels of supply to satisfy these motives at the workplace on the other contribute to job satisfaction. We found that job satisfaction was highest when the level of supply by the workplace exceeded the level of the motive for Social Acceptance, Status, Autonomy, Sex, and Retention motives. When a high level of the motive and a high level of supply met, job satisfaction was highest for Curiosity, Idealism, and Social Participation motives. When the supplies fell short compared to the level of the motive, job satisfaction was negatively affected by the need-supply fit of Social Acceptance, Status, Sex, Retention, Curiosity, and Idealism motives. The results can be used in coaching and career development to uncover potential causes of low job satisfaction and provide guidance to clients on how to enhance their job satisfaction. In Paper 3, we shifted the focus to education as another major life domain. Here, achievement motivation has been identified as one of the major driving forces for progression and success (Schiefele & Schaffner, 2015). On a more concrete level compared to achievement motivation, Achievement Goals have been established as students’ cognitive representations of desired and undesired end states in educational achievement contexts (Elliot & Thrash, 2002; Hulleman, Schrager, Bodmann, & Harackiewicz, 2010). These goals typically focus on mastery, that is learning as much as possible, and performance, that is outperforming others or avoiding being outperformed. So far, Achievement Goals have been mostly conceptualized as domain-specific (Bong, 2001). Although this assumption is supported by previous studies, research on the processes operating behind the domain- specificity is scarce. 
Dimensional comparison theory (Möller & Marsh, 2013) has introduced dimensional comparisons as a potential process operating behind this domain-specificity. Dimensional comparisons describe intrapersonal comparisons of characteristics in one domain with characteristics in another domain for the sake of self-evaluation. Previous investigations indicated that dimensional comparison processes are involved in the formation of the domain-specificity of important educational constructs, such as Self-Concept or Test Anxiety. Consequently, our aim in Study 3 was to investigate whether dimensional comparison processes operate in the formation of the domain-specificity of Achievement Goals. To this end, we used a sample of N = 381 German ninth- and tenth-grade students in six German highest-track schools. Results indicate that dimensional comparison processes impact the domain-specificity of Achievement Goals. Thus, the results extend our understanding of the domain-specificity of Achievement Goals and simultaneously add to the validity of dimensional comparison theory. In conclusion, the presented scientific work adds to the assessment of motives and goals that are crucially important in various life domains, but especially in work and educational settings. The contributions of this dissertation to the existing literature include (1) the revision of a comprehensive motivational framework (Paper 1), (2) the development and validation of a questionnaire based on this revised framework (Papers 1 & 2), (3) important assessment-related insights concerning Achievement Goals in educational settings (Paper 3), and (4) practical implications for coaching and interventions in work and educational settings as two of the most important life domains (Papers 2 & 3).

Essays in Financial Stability
Gabriele, Carmine UL

Doctoral thesis (2019)

Graph-based Algorithms for Smart Mobility Planning and Large-scale Network Discovery
Changaival, Boonyarit UL

Doctoral thesis (2019)


Graph theory has become a hot topic in the past two decades, as evidenced by the increasing number of citations in research. Its applications are found in many fields, e.g. databases, clustering, routing, etc. In this thesis, two novel graph-based algorithms are presented. The first algorithm addresses the thriving carsharing service, while the second algorithm concerns large-graph discovery, to unearth the unknown graph before any analyses can be performed. In the first scenario, the automation of the fleet-planning process in carsharing is proposed. The proposed work raises the accuracy of the planning to the next level by taking advantage of the open data movement, using street networks, building footprints, and demographic data. By using the street network (a graph), it addresses a questionable aspect of many previous works, feasibility: they tended to use rasterisation to simplify the map, but that comes at the price of accuracy and feasibility. A benchmark suite for further research on this problem is also provided. Along with it, two optimisation models with different sets of objectives and contexts are proposed. Through a series of experiments, a novel hybrid metaheuristic algorithm is proposed. The algorithm, called NGAP, is based on the Reference-Point-Based Non-dominated Sorting Genetic Algorithm (NSGA-III) and Pareto Local Search (PLS), together with a novel problem-specific local search operator designed for the fleet-placement problem in carsharing, called Extensible Neighbourhood Search (ENS). The designed local search operator exploits the graph structure of the street network and utilises local knowledge to improve the exploration capability. The results show that the proposed hybrid algorithm outperforms the original NSGA-III in convergence under the same execution time. The work in smart mobility is done on city-scale graphs, which are considered to be of medium size.
However, the scale of graphs in other real-world fields can be much larger, which is why a large-graph discovery algorithm is proposed as the second algorithm. To elaborate on the definition of large, some examples are required: the internet graph has over 30 billion nodes, and a human brain network contains around 10^11 nodes. Apart from the size, there is another aspect of real-world graphs: the unknown. Given the dynamic nature of real-world graphs, it is almost impossible to have complete knowledge of the graph before performing an analysis, which is why graph traversal is crucial as a preparation process. I propose a novel memoryless chaos-based graph traversal algorithm called Chaotic Traversal (CHAT). CHAT is the first graph traversal algorithm that utilises a chaotic attractor directly. An experiment with two well-known chaotic attractors, the Lozi map and the Rössler system, is conducted. The proposed algorithm is compared against the memoryless state-of-the-art algorithm, Random Walk. The results demonstrate superior performance in coverage rate over Random Walk on five tested topologies: ring, small world, random, grid and power-law. In summary, the contribution of this research is twofold. Firstly, it contributes to the research community by introducing new study problems and novel approaches to propel the advance of the current state of the art. Secondly, it demonstrates a strong case for the transfer of research to the industrial sector to solve a real-world problem.
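The core idea of a memoryless chaos-based traversal can be sketched as follows: a chaotic map (here the Lozi map, one of the two attractors named above) supplies a deterministic but aperiodic state, and that state, rather than a random number generator, selects the next neighbour. This is a schematic reconstruction of the CHAT idea on an illustrative 200-node ring topology, not the thesis implementation:

```python
import random

def lozi_step(x, y, a=1.7, b=0.5):
    # Lozi map: a piecewise-linear chaotic attractor
    return 1.0 - a * abs(x) + y, b * x

def chaotic_traversal(adj, start, steps):
    """Memoryless traversal: the chaotic state (not a RNG) picks the
    neighbour to visit next -- a sketch of the CHAT idea."""
    x, y = 0.1, 0.1
    node, visited = start, {start}
    for _ in range(steps):
        x, y = lozi_step(x, y)
        nbrs = adj[node]
        node = nbrs[int(abs(x) * 1e6) % len(nbrs)]  # map chaotic state to a neighbour index
        visited.add(node)
    return visited

def random_walk(adj, start, steps, seed=0):
    """Memoryless baseline: uniform random neighbour selection."""
    rng = random.Random(seed)
    node, visited = start, {start}
    for _ in range(steps):
        node = rng.choice(adj[node])
        visited.add(node)
    return visited

# Hypothetical benchmark: coverage rate on a 200-node ring topology.
n = 200
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
cov_chat = len(chaotic_traversal(ring, 0, 2000)) / n
cov_rw = len(random_walk(ring, 0, 2000)) / n
```

Coverage rate here is the fraction of nodes visited after a fixed step budget, the same metric the comparison against Random Walk uses; the relative performance on any one seed and topology is, of course, illustrative.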

TOWARDS A MODELLING FRAMEWORK WITH TEMPORAL AND UNCERTAIN DATA FOR ADAPTIVE SYSTEMS
Mouline, Ludovic UL

Doctoral thesis (2019)


Self-Adaptive Systems (SAS) optimise their behaviours or configurations at runtime in response to modifications of their environments or their behaviours. These systems therefore need a deep understanding of the ongoing situation, which enables reasoning tasks for adaptation operations. Using the model-driven engineering (MDE) methodology, one can abstract this situation. However, information concerning the system is not always known with absolute confidence. Moreover, in such systems, the monitoring frequency may differ from the delay needed for reconfiguration actions to have measurable effects. These characteristics come with a global challenge for software engineers: how to represent uncertain knowledge such that it can be efficiently queried, and how to represent ongoing actions, in order to improve adaptation processes? To tackle this challenge, this thesis defends the need for a unified modelling framework which includes, besides all traditional elements, time and uncertainty as first-class concepts. A developer will thereby be able to abstract information related to the adaptation process, the environment, and the system itself. Towards this vision, we present two evaluated contributions: a temporal context model and a language for uncertain data. The temporal context model allows abstracting past, ongoing and future actions with their impacts and contexts. The language, named Ain'tea, integrates data uncertainty as a first-class citizen.

Deep Neural Networks for Personalized Sentiment Analysis with Information Decay
Guo, Siwen UL

Doctoral thesis (2019)


People make different lexical choices when expressing their opinions. Sentiment analysis, as a way to automatically detect and categorize people's opinions in text, needs to reflect this diversity. In this research, I look beyond traditional population-level sentiment modeling and leverage socio-psychological theories to incorporate the concept of personalized modeling. In particular, a hierarchical neural network is constructed, which takes related information from a person's past expressions to provide a better understanding of the sentiment from the expresser's perspective. Such personalized models can suffer from the data sparsity issue and are therefore difficult to develop. In this work, this issue is addressed by introducing the user information at the input, such that the individuality of each user can be captured without building a model per user, and the network is trained in one process. The evolution of a person's sentiment over time is another aspect to investigate in personalization. It can be suggested that recent incidents or opinions may have more effect on the person's current sentiment than older ones, and that the relatedness between the targets of the incidents or opinions plays a role in this effect. Moreover, psychological studies have argued that individual variation exists in how frequently people change their sentiments. In order to study these phenomena in sentiment analysis, an attention mechanism reshaped with the Hawkes process is applied on top of a recurrent network in a user-specific design. Furthermore, the modified attention mechanism delivers a functionality beyond conventional neural networks, offering flexibility in modeling information decay for temporal sequences with various time intervals. The developed model targets data from social platforms, and Twitter is used as an example.
After experimenting with manually and automatically labeled datasets, it was found that the input formulation for representing the concerned information and the network design are the two major factors affecting performance. With the proposed model, positive results have been observed which confirm the effectiveness of including user-specific information. The results reciprocally support the psychological theories through the real-world actions observed. The research carried out in this dissertation demonstrates a comprehensive study of the significance of considering individuality in sentiment analysis, which opens up new perspectives for future research in the area and brings opportunities for various applications.
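The information-decay idea, attention weights damped by elapsed time in the spirit of a Hawkes-process intensity kernel, can be sketched in a few lines. The decay rate beta, the toy timestamps and the function name are illustrative assumptions, not parameters or code from the thesis:

```python
import numpy as np

def decayed_attention(scores, timestamps, t_now, beta=0.1):
    """Attention weights modulated by an exponential (Hawkes-style) decay:
    older items contribute less, controlled by a decay rate beta that
    could be learned per user."""
    dt = t_now - np.asarray(timestamps, dtype=float)      # elapsed time per past item
    logits = np.asarray(scores, dtype=float) - beta * dt  # decay applied in log-space
    w = np.exp(logits - logits.max())                     # numerically stable softmax
    return w / w.sum()

# Hypothetical example: three past posts with equal content relevance,
# written 1, 10 and 50 time units ago -- recency dominates the weighting.
w = decayed_attention(scores=[1.0, 1.0, 1.0], timestamps=[49, 40, 0], t_now=50)
```

With equal content scores, the weights fall off monotonically with age; a larger beta models a user whose sentiment "forgets" past context faster, which is the kind of individual variation the thesis attributes to the Hawkes reshaping.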

PATHOGENIC ROLE OF PARKINSON’S DISEASE-ASSOCIATED MIRO1 MUTATIONS IN THE MITOCHONDRIAL-ENDOPLASMIC RETICULUM INTERPLAY
Berenguer, Clara UL

Doctoral thesis (2019)


Parkinson's disease (PD) is a chronic neurodegenerative disorder in which only 5-10% of cases are caused by genetic mutations. One of the main pathological hallmarks of PD is the loss of midbrain dopaminergic (DA) neurons in the substantia nigra pars compacta (SNpc) of diseased brains. These DA neurons require large amounts of energy for the maintenance of their pace-making activity and their complex dendritic and axonal arborizations, features that force them to rely on a fully functional mitochondrial network. In this regard, mitochondrial dyshomeostasis is a central factor in PD pathophysiology. Mitochondria are considered the powerhouse of the cell, and they are extremely dynamic organelles that are distributed throughout the entire neuronal body to meet cellular energy demands. The maintenance of mitochondrial function requires their interaction with other cellular organelles, in particular the endoplasmic reticulum (ER). Overwhelming evidence indicates that the mitochondrial-ER interface is a potential target of growing importance for the investigation of PD. Several PD-related proteins were found to be involved in the structural maintenance and signaling regulation of mitochondrial-ER contact sites (MERCs). In recent years, myriad studies have identified the mitochondrial GTPase Miro1 as a crucial player in PD pathology. The Miro1 protein is not only an adaptor for mitochondrial transport, but also acts as a cytosolic calcium sensor and a ubiquitination target for the mitochondrial quality control machinery. Moreover, Miro1 can localize to MERCs, where it functions as a regulator of the calcium exchange between both organelles. To date, no genetic link between Miro1 and PD has been identified, and the influence of Miro1 on the regulation of MERCs within the context of neurodegeneration is still underestimated.
This study explored the damaging effect of novel PD-associated heterozygous mutations in RHOT1, the gene encoding the Miro1 protein, in a diseased genetic background. We first obtained skin fibroblasts from the affected PD patients harboring Miro1 mutations, which we further differentiated into iPSC-derived neurons. The characterization of the mutations in both patient-derived cellular models unveiled important impairments in mitochondrial calcium homeostasis and sensitivity to calcium stress, associated with alterations in the abundance and functionality of the MERCs. Consequently, pathways downstream of these mechanisms were affected, such as autophagy flux and mitochondrial clearance. From our results, we can conclude that PD-associated mutant Miro1 leads to crucial alterations in MERCs, consequently affecting downstream mechanisms such as calcium homeostasis and mitophagy. These dysregulations might lead to an increased sensitivity to stress and finally cell death. Our findings strongly support the key role of MERCs in the progression of neurodegeneration and establish RHOT1 as a rare genetic risk factor in PD.

Modeling Parkinson's disease using human midbrain organoids
Monzel, Anna Sophia UL

Doctoral thesis (2019)


With increasing prevalence, neurodegenerative disorders present a major challenge for medical research and public health. Despite years of investigation, significant knowledge gaps exist, which impede the development of disease-modifying therapies. The development of tools to model both physiological and pathological human brains has greatly enhanced our ability to study neurological disorders. Brain organoids, derived from human induced pluripotent stem cells (iPSCs), hold unprecedented promise for biomedical research to unravel novel pathological mechanisms of a multitude of brain disorders. As brain proxies, these models bridge the gap between traditional 2D cell cultures and animal models. Owing to their human origin, iPSC-derived organoids can recapitulate features that cannot be modeled in animals by virtue of differences between species. Parkinson's disease (PD) is a human-specific neurodegenerative disorder. Its major manifestations are the consequence of degenerating dopaminergic neurons (DANs) in the midbrain. The disease has a multifactorial etiology and a multisystemic pathogenesis and pathophysiology. In this thesis, we used state-of-the-art technologies to develop a human midbrain organoid (hMO) model with great potential for the study of PD. hMOs were generated from iPSC-derived neural precursor cells, which were pre-patterned to the midbrain/hindbrain region. hMOs contain multiple midbrain-specific cell types, such as midbrain DANs, as well as astrocytes and oligodendrocytes. We could demonstrate features of neuronal maturation such as myelination, synaptic connections, spontaneous electrophysiological activity and neural network synchronicity. We further developed a neurotoxin-induced PD organoid model and set up a high-content imaging platform coupled with machine-learning classification to predict neurotoxicity. Patient-derived hMOs display PD-relevant pathomechanisms, indicative of neurodevelopmental deficits.
hMOs as novel in vitro models open up new avenues to unravel PD pathophysiology and are powerful tools in biomedical research.

Feldtest und dynamische Simulation der außenliegenden Wandtemperierung
Schmidt, Christoph Wilhelm UL

Doctoral thesis (2019)


The present work deals in detail with two new thermally active components for building renovation: the external wall tempering (aWT) and the external air tempering (aLT). With the help of these two components, existing buildings can be thermally activated as part of an energetic refurbishment. The installation of the two components is minimally invasive, from the outside. Due to the position of the active layer in the wall structure, the use of very low fluid temperatures is possible (low-exergy approach). Initially, the theoretical principles for both components were developed and presented in accordance with the standard literature on thermoactive component systems. Then, characteristic values for the evaluation of the components (efficiency and utilisation rates) were developed, and, based on the theoretical principles, implementation concepts for both components were subsequently derived. Finally, a large-scale implementation of the two components could be realized on a facade. The aim of the implementation was not only to demonstrate the feasibility of the components but also to generate measurement data for the subsequent considerations. In the course of its development, some sources of error could be identified and a multitude of insights was gained. For example, for warranty reasons a compromise had to be made regarding the thickness of the plaster covering the capillary tube mats. Overall, both components were successfully implemented and put into operation. In parallel with the implementation, the system costs of both components were determined. Here, similar values were achieved in the implementation as determined within the framework of the sample planning (~70 €/m²). With the help of the measurement data from the field test areas of the two components and two laboratory test benches, suitable modelling approaches could be developed, verified and finally validated.
Then, stationary as well as transient measurements were carried out and compared, and a good agreement between modelling and measurements could be determined. The comparison between idealized modelling and real-life components, which are under the influence of (partly not unambiguously attributable) environmental conditions, causes difficulties. For the aWT, a maximum useful heat flow of around 60 W/m² in over-compensatory heating mode was determined. The useful heat flow is defined as the heat flow from the tempering level into the interior of the building. In a low-exergy operating mode, however, ~15 W/m² is more realistic. For such external components, the time constants are also relevant; for the aWT these are in the daily range, with dead times of 3-4 hours. At the same time, the thermal activation of the existing structure can make it usable as a storage mass. Since validated simulation models are available after completion of the measurements, potential estimates and further considerations can be made at the simulation level. The simulation studies carried out at the building level show the potential, but also the important sticking points, of the components. In summary, it can be stated that the aWT is more suitable for binary operation than for a kind of base-load tempering. Here again, the pump power requirement in relation to the thermal input must be taken into account for long running times. The lower the heating requirement of a building, the more likely it is that the aWT can also be used as an independent system. When considering the aWT alone, control strategies adapted to the inertia of the aWT are the key to high coverage shares. The combination of aLT and aWT was found to be very suitable for the complete heating of a building. Here, the simulation achieves high coverage ratios with low flow temperatures and simple control strategies.
Thus, the feasibility of the ideas was shown, realistic system costs were determined, and the basics were established at the model level in order to investigate further interesting aspects of the components by means of simulations and on the basis of the field test areas.

Optimization assisted Designing Mechanical Elements for Direct Metal Laser Sintering
Cao, Thanh Binh UL

Doctoral thesis (2019)


The common question that many mechanical engineers have tried to answer is: how can the strength and working reliability of parts be maximized during design while minimizing the parts' weights? The associated solutions hinge on the design methods, which need to be developed to build up the parts. The better the solutions, the more reliable and lighter the parts that can be built up; hence, the fewer negative impacts on the environment are produced, and the closer we step towards a more sustainable future. Under the influence of the Fourth Industrial Revolution, many optimization methods have been developed, aimed at supporting engineers in answering the above question. Despite substantial developments in recent years, both the optimization methods and their applications in the mechanical design field are still far from being fully exploited. To evaluate the potential use of the methods, specific product developments must be considered and investigated. This thesis dealt with investigations of optimization-assisted design methods, employed to develop the structures of several mechanical elements. These constructed elements were expected to have higher performance than traditionally designed ones and to be practically producible. To study and evaluate the design processes step by step, it was proposed to divide the work into five separate phases. In the initial phase, the first scheme of optimization-assisted design was theoretically investigated. This scheme relied on the combination of topology optimization and lattice optimization and was considered in association with the redesign of a motorcycle frame. The frame was selected for this starting phase due to the convenient definition of the design volume subjected to the optimizations.
By handling the investigations dealing with (i) the first resonance frequency, (ii) the mass, (iii) the buckling load factor, and (iv) the equivalent stress of the newly designed frame and those of the original one, the potential of the design approach was revealed. In addition, the investigations pointed out that further studies are needed to search for more appropriate ways to apply this approach to the design of novel complex structures. During the next three consecutive phases, more complicated optimization schemes were proposed and studied. The schemes were composed of three optimization steps: free shape optimization, topology optimization, and lattice optimization. The studies were conducted in conjunction with the process of innovating the hydrogen valve structure, which has an unexposed design space. Different novel configurations were developed for the valve within these phases, targeting the reduction of mass, the prolongation of fatigue life, as well as the structural compatibility of the designed valves with DMLS. In addition, the design of a test channel for the valve, performed via a fatigue-based approach, was also introduced in one of the three phases. It was aimed at providing a means to detect multiple early valve damages. All of the built structures were then virtually evaluated to point out the effectiveness of the design work. In the last phase, experimental tests were carried out. The best possible valve structure was selected and produced by DMLS, followed by post-machining. Upon completion of the fabrication, in-house fatigue tests were carried out on the produced valves until damage occurred or 2E5 cycles were reached. The data obtained from the tests provided further evidence to support the theoretical studies presented in the first four phases.

Epistemic Nonconceptualism. Nonconceptual Content and the Justification of Perceptual Beliefs
Orlando, Andy UL

Doctoral thesis (2019)


The questions of whether the content of perception is nonconceptual and, if so, whether it can serve as the justificatory basis for perceptual beliefs have been at the epicentre of wide-ranging debates in recent philosophy of mind and epistemology. The present dissertation sets out to answer these questions. It will be argued that the content of perception is not necessarily conceptual; that is, a specific understanding of nonconceptual content will be laid out and defended. Starting from the presentation and criticism of conceptualism, it will be concluded that the arguments brought forth against nonconceptualism can successfully be met. A specific version of nonconceptualism will be developed on this basis and will serve as the necessary framework for the remainder of the discussion. Building on these arguments, this specific version of nonconceptualism will then be put to the test by clarifying, analysing and specifying its epistemological commitments and options. Several problem sets will have to be introduced and evaluated, such as the divide between externalists and internalists, the phenomena surrounding epistemic defeaters, and examinations pertaining to reasoning, specifically whether experiences could be the output states of inferential processes. These reflections will not only provide a more in-depth investigation of some of the most pressing epistemological questions surrounding nonconceptual content, but will also allow for a seamless transition into the problem of which specific epistemological theory best bears out the epistemological role of nonconceptual content. Specifically, disjunctivism, capacity approaches and phenomenal conservatism will be assessed as to their capacity to vindicate nonconceptual content. Phenomenal conservatism will be identified as the theory that best integrates nonconceptualism.
While phenomenal conservatism will thus be defended, the closing sections of the present dissertation will mainly focus on questions surrounding rationality. Indeed, if perception and/or perceptual experiences could be classified as rational, or, more accurately put, if arguments pertaining to the evaluability of a perceptual experience's aetiology are tenable, it can reasonably be asked whether phenomenal conservatism can satisfactorily meet this challenge. Ultimately, it will be concluded that there is room for a specific notion of nonconceptual content as the justificatory basis for basic perceptual beliefs.

Six regards sur la master-classe de piano : phénoménologie et sémiotique de la rencontre musicale
Kim, Seong Jae UL

Doctoral thesis (2019)


In this thesis, I suggest new ways of grasping the affective dimension of musical experience, which traditional semiotics and musicology take little into account. Inspired by a dynamistic modelling approach, which developed from the 1960s and has since been influential in the domain of semiolinguistic disciplines, I sketch out the fluctuating phases of semiogenesis within the field of piano masterclasses. The term 'semiogenesis' here is taken in a broad sense, encompassing any deployment of sign-forms, whether vague or articulated, diffuse or well-defined. Such forms are conceived as being strained between expressiveness and normativity. They are also valorized in that they call the subject to participate in his or her own 'lines of life' which, in turn, may come to exist through those forms. A piano masterclass is given by a genuine master to highly accomplished students, both of whom truly testify to their own lives, to their own ways of ethical feeling, in the search for a unique musical praxis. In recent years, the field of the masterclass has begun to attract the attention of the scientific community, especially in areas related to musical teaching and experience, such as psychology, aesthetics and epistemology, or even sociology. Yet most of the problem framings adopted in these frameworks incorporate little or nothing in terms of the metamorphosis of sensitivity and the play of musical feeling in the characterization of their research objects. Nevertheless, the field of the piano masterclass seems to be a particularly interesting and promising object of research in that the horizon of affect is preeminent in all the semiotic activities tied to it.
Thus, I attended several masterclasses in order to closely follow the praxis of the musicians (e.g., active and passive participation in masterclasses, audiovisual recordings, interviews, conversations and debates on music, etc.), in the spirit of restoring the full genetic depth of the semiotic activity, by approaching it from the perspective of an encounter and an orientation of musical sensibilities. One of the main tasks of my approach in designing the descriptions of this particular musical praxis consists in understanding the acoustic, gestural and linguistic phenomena as giving birth to the semiogenetic conditions of the constitution of musical meaning. In this way, it is a fundamentally descriptive method, inspired by philosophical (Shaftesbury, Kierkegaard, Wittgenstein, Merleau-Ponty) and semiotic (Peirce, Saussure) minds, which takes up the semiotic preoccupation from the very initial levels of a microgenesis and promotes it immediately into a hermeneutical and existential phenomenology. The perceptive and semiogenetic issues of musical sensitivity allow us to remodel the notion of a musical motive, understood both as a motive-of-praxis and as an existential motive. I tried to grasp the idea of a certain listening of the musical praxis by finding there a constant passage between an ethical perception and the search, through the playing of the music and its motives, for a musical personality engaged in the musical praxis. Such conceptions of motive and personality proved fruitful to the extent that they make it possible to suggest a certain ethic of musical feeling, without reducing it to a skill, a psychology or a ritual. I have thus managed to redefine the notion of a musical 'sign' by playing on the 'motival' horizon of this semiotic activity, understood, in the formal and sensitive nature of musical practices, as participation (i.e., desire and commitment to participate) in a certain regime of human existence.
In this way, I believe I am paving the way for a new conception of musical praxis, one which interweaves aesthetics and ethics. The thesis addresses these problems through six successive contemplations: Piano Masterclass; Feeling, Knowing and Doing; Field and Form; Motive and Form; Lines of Life; Enchantment. [less ▲]

Detailed reference viewed: 61 (15 UL)
See detailThe impact of macro-substrate on micropollutant degradation in activated sludge systems
Christen, Anne UL

Doctoral thesis (2019)

Wastewater treatment plants are designed as a first barrier to reduce xenobiotic emission into rivers. However, they are not sufficient to fully prevent environmental harm from emerging substances in ... [more ▼]

Wastewater treatment plants are designed as a first barrier to reduce xenobiotic emission into rivers. However, they are not sufficient to fully prevent environmental harm from emerging substances in the water body. Therefore, advanced treatment processes are currently being investigated, but their implementation is cost-intensive. The optimisation of the activated sludge treatment to enhance biological micropollutant removal could reduce operating costs and material. Although the impact of operational parameters, such as sludge retention time and hydraulic retention time, on xenobiotic removal has been investigated, the influence of macro-substrate composition and load on micropollutant elimination remains subject to a high degree of uncertainty. This study focuses on the latter by analysing 15 municipal wastewater treatment plants, where variations in load and composition of the macro-substrate were expected. Assuming that the macro-substrate shapes the biomass and triggers its activity, the impact of macro-substrate composition and load on xenobiotic degradation by microorganisms was analysed. It was hypothesised that, on the one hand, a high dissolved organic carbon concentration might lead to enhanced xenobiotic degradation for certain substances due to a high microbial activity. The latter is assumed to be caused by a high labile dissolved organic carbon portion and the tendency towards a shorter sludge retention time. On the other hand, a low dissolved organic carbon concentration, probably containing a predominant recalcitrant substrate portion, tends towards a longer sludge retention time. Consequently, slow-growing and specialised microorganisms may develop that are able to degrade certain xenobiotics. As a second question, the contribution of the autotrophic biomass to xenobiotic degradation was tested by inhibiting the autotrophic microorganisms during the degradation test. 
To additionally test the hypothesis, the impact of a readily biodegradable substrate (acetate) on xenobiotic degradation was tested, and the sensitivity of the fluorescence signal of tryptophan was used to analyse the impact of tryptophan on xenobiotic degradation. Degradation tests focusing on the removal of macro-substrate and micropollutants within 18 hours of incubation in the OxiTop® system were performed. The OxiTop® system is known as a fast and easy method for organic matter analysis in wastewater. To assess the macro-substrate composition prior to and after the degradation test, three characterisation methods were applied. Firstly, to determine the labile and the rather recalcitrant portion in the dissolved organic carbon, absorbance was measured at 280 nm and further analysed. This was verified by the characterisation of both portions based on oxygen consumption measurements. Secondly, to analyse the organic matter concerning its fluorescent properties, excitation-emission scans were run and analysed using the parallel factor analysis approach. Lastly, the chromophoric and fluorescent organic matter was separated via size-exclusion chromatography to investigate the macro-substrate composition. Micropollutant elimination efficiency was followed by measuring initial and final concentrations of the targeted substances using liquid chromatography tandem-mass spectrometry and calculating pseudo-first-order degradation rates. To distinguish the contributions of the heterotrophic biomass and the total biomass to xenobiotic degradation, allylthiourea was added to inhibit the autotrophic biomass. No significant composition changes of the chromophoric macro-substrate were observed. A higher initial dissolved organic carbon concentration led to higher chromophoric and fluorescent properties. The same was found for the degraded dissolved organic carbon amount and the loss of signal within the chromophoric and fluorescent portions. 
Variations in the macro-substrate load, or rather concentration, were tracked. Derived from the oxygen consumption measurements, a prominent labile and non-chromophoric portion was present at higher dissolved organic carbon levels, impacting the microbial activity. However, the non-chromophoric macro-substrate composition was not characterised within this study. Regarding the micropollutant removal, varying elimination rates were observed. For 4 out of 17 substances, distinct degradation dynamics were found, suggesting a possible impact of the present macro-substrate load. However, no overall impact of the macro-substrate on xenobiotic removal was observed. Atenolol, bezafibrate and propranolol showed a negative correlation with the initial dissolved organic carbon concentration, meaning higher degradation rates at a lower substrate load. This might indicate the presence of specialised microorganisms and a higher microbial diversity. Furthermore, inhibition studies using allylthiourea suggest a contribution of the autotrophic biomass to xenobiotic degradation. Sulfamethoxazole showed a positive trend with the initial dissolved organic carbon concentration, possibly indicating co-metabolic degradation of sulfamethoxazole by the autotrophic and heterotrophic biomass. Thus, it seems that the removal efficiency of sulfamethoxazole benefited from higher substrate loads. With respect to the short-term experiments with acetate, higher degradation efficiencies were observed for several substances in the presence of acetate. Ketoprofen and bezafibrate showed enhanced removal efficiencies in all tested wastewaters. The tryptophan test indicated the presence of tryptophan in wastewater, but no clear contribution to xenobiotic degradation was seen. The presented findings substantially contribute to the understanding of the parameters influencing xenobiotic degradation in activated sludge systems. 
By using the OxiTop® application for xenobiotic degradation tests, an easy and fast method was established. Absorbance and fluorescence measurements proved to be a sufficient method for the characterisation and biodegradability estimation of organic matter, which could be further applied as online measurements at wastewater treatment plants. Thus, the current study will serve as a basis for future work investigating the parameters influencing xenobiotic degradation pathways and focusing on the optimisation of the biological and advanced treatment processes to overcome current limitations. [less ▲]
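The pseudo-first-order rates used in the abstract above follow from an exponential decay assumption, k = ln(C0/Ct)/t. A minimal sketch of that calculation (the concentrations, the 40 % removal figure and the normalisation below are hypothetical illustrations, not the study's data):

```python
import math

def pseudo_first_order_rate(c0, ct, hours):
    """Estimate a pseudo-first-order degradation rate constant k (1/h)
    from initial and final concentrations, assuming C(t) = C0 * exp(-k * t)."""
    if c0 <= 0 or ct <= 0:
        raise ValueError("concentrations must be positive")
    return math.log(c0 / ct) / hours

# Hypothetical example: 40 % of a substance removed over an 18 h incubation.
k = pseudo_first_order_rate(c0=1.0, ct=0.6, hours=18.0)
removed_fraction = 1.0 - math.exp(-k * 18.0)  # recovers the removed fraction
```

A higher k then corresponds directly to a faster elimination, which is what makes the rate constants comparable across plants with different substrate loads.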

Detailed reference viewed: 134 (29 UL)
Full Text
See detailTowards an understanding of the language–integration nexus: a qualitative study of forced migrants’ experiences in multilingual Luxembourg
Kalocsanyiova, Erika UL

Doctoral thesis (2019)

This cumulative thesis offers insights into the under-researched area of linguistic integration in multilingual societies. It is a collection of four papers that seek to address key questions such as: How ... [more ▼]

This cumulative thesis offers insights into the under-researched area of linguistic integration in multilingual societies. It is a collection of four papers that seek to address key questions such as: How can people’s existing language resources be validated and used to aid language learning? What are the politics of language and integration in settings of complex linguistic diversity? What role do language ideologies play in their creation and/or perception? What types of individual trajectories emerge? The research reported here is grounded in the Luxembourgish context, which represents an important European focal point for exploring the dynamics of linguistic integration. Taking a qualitative approach informed by linguistic ethnography (Copland & Creese 2015; Pérez-Milans 2016; Rampton 2007a; Rampton et al. 2015; Tusting & Maybin 2007), this work focuses on the language learning and integration experiences of five men who, fleeing war and violence, sought international protection in the Grand Duchy of Luxembourg. Building on theories of multilingual communication (Canagarajah & Wurr 2011), translanguaging (Creese & Blackledge 2010; García & Li Wei 2014) and receptive multilingualism (ten Thije et al. 2012), the first paper of this thesis considers the affordances of multilingual learning situations in classroom-based language training for forced migrants. The second paper moves on to scrutinise the instrumental and integrative dimensions of language (Ager 2001), as articulated and perceived by the research participants. It exposes the vagueness and contradictory logics of linguistic integration as currently practiced, and throws light on how people with precarious immigration status interpret, experience and act upon ideologies surrounding language and integration (cf. Cederberg 2014; Gal 2006; Kroskrity 2004; Stevenson 2006). 
The third paper, likewise, directs attention to the controversies and potential unwarranted adverse effects of current linguistic integration policies. Through juxtaposing the trajectories of two forced migrants – who shared similar, multi-layered linguistic repertoires (Blommaert & Backus 2013; Busch 2012, 2017) – this part of the thesis elucidates the embodied efforts, emotions, and constraints inherent in constructing a new (linguistic) belonging in contemporary societies. Taken together, these papers illustrate and expand the discussion about the language–integration nexus. Additionally, by bringing into focus multilingual realities and mobile aspirations, they seek to provide a fresh impetus for research, and contribute to the creation of language policies that recognise a larger range of communicative possibilities and forms of language knowledge (cf. Ricento 2014; Flubacher & Yeung 2016). The thesis also makes a methodological contribution, by demonstrating the value of cross-language qualitative research methods in migration and integration research. It includes a detailed discussion of the complexities of researching in a multilingual context (Holmes et al. 2013; Phipps 2013b), as well as a novel inquiry into the interactional dynamics of an interpreter-mediated research encounter (fourth paper). [less ▲]

Detailed reference viewed: 146 (12 UL)
Full Text
See detailEssays on Monetary Economics and Asset Pricing
Weber, Fabienne UL

Doctoral thesis (2019)

This dissertation consists of three chapters based on three applied theory papers, which all use microfoundations to study mechanisms behind asset prices in the context of monetary policy and financial ... [more ▼]

This dissertation consists of three chapters based on three applied theory papers, which all use microfoundations to study mechanisms behind asset prices in the context of monetary policy and financial stability. Market Fragility and the Paradox of the Recent Stock-Bond Dissonance. The objective of this study is to jointly explain stock prices and bond prices. After the Lehman-Brothers collapse, the stock index has exceeded its pre-Lehman-Brothers peak by 36% in real terms. Seemingly, markets have been demanding more stocks instead of bonds. Yet, instead of observing higher bond rates, paradoxically, bond rates have been persistently negative after the Lehman-Brothers collapse. To explain this paradox, we suggest that, in the post-Lehman-Brothers period, investors changed their perceptions on disasters, thinking that disasters occur once every 30 years on average, instead of disasters occurring once every 60 years. In our asset-pricing calibration exercise, this rise in perceived market fragility alone can explain the drop in both bond rates and price-dividend ratios observed after the Lehman-Brothers collapse, which indicates that markets mostly demanded bonds instead of stocks. Time-Consistent Welfare-Maximizing Monetary Rules. The objective of this study is to jointly explain capital prices, bond prices and money supply/demand. We analyze monetary policy from the perspective that a Central Bank conducts monetary policy serving the ultimate goal of maximizing social welfare, as dictated by a country's constitution. Given recent empirical findings that many households are hand-to-mouth, we study time-consistent welfare-maximizing monetary-policy rules within a neoclassical framework of a cash-in-advance economy with a liquidity-constrained good. The Central Bank performs open-market operations buying government bonds in order to respond to fiscal shocks and to productivity shocks. 
We formulate the optimal policy as a dynamic Stackelberg game between the Central Bank and private markets. A key goal of optimal monetary policy is to improve the mixture between liquidity-constrained and non-liquidity-constrained goods. Optimal monetary responses to fiscal shocks aim at stabilizing aggregate consumption fluctuations, while optimal monetary responses to productivity shocks allow aggregate consumption fluctuations to be more volatile. Jump Shocks, Endogenous Investment Leverage and Asset Prices: Analytical Results. The objective of this study is to jointly model leveraging and stock prices in an environment with rare stock-market disaster shocks. Financial intermediaries invest in the stock market using household savings. This investment leveraging, and its extent, affects stock price movements and, in turn, stock-price movements affect investment leveraging. If the price mechanism is unable to absorb a rare stock-market disaster, then with leverage ratios of 20 or more, financial intermediaries can go bankrupt. We model the interplay between leverage ratios and stock prices in an environment with rare stock-market disaster shocks. First, we introduce dividend shocks that follow a Poisson jump process to an endowment economy with pure exchange between two types of agents: (i) shareholders of financial intermediaries that invest in the stock market ("experts"), and (ii) savers, who deposit their savings with financial intermediaries (households). Under the assumption that the households and the so-called "experts" both have logarithmic utility, we obtain a closed-form solution for the endowment economy. This closed-form solution serves as a guide for numerically solving the model with recursive Epstein-Zin preferences in continuous-time settings. In our extension we introduce production based on capital investments, but with adjustment costs for investment changes. 
Jump shocks directly hit the productive capital stock, but the way they influence stock returns of productive firms passes through the leveraging channel, which is endogenous. The production economy also has endogenous growth, and investment adjustment costs partly influence the model's stability properties. Importantly, risk has an endogenous component due to leveraging, and this endogenous-risk component influences growth opportunities, bridging endogenous cycles with endogenous growth. This chapter is part of a broader project on financial stability. Future extensions will include an evaluation of the Basel II-III regulatory framework in order to assess their effectiveness and their impact on growth performance. [less ▲]
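The disaster mechanism running through these chapters (a rare Poisson jump superimposed on an ordinary diffusion) can be illustrated with a toy simulation. This is a sketch under stated assumptions, not the dissertation's model: log-dividends follow a Gaussian random walk plus downward jumps, and all parameter values, including the once-every-30-years disaster intensity mentioned in the first chapter, are used here only as illustrative inputs.

```python
import math
import random

def simulate_dividend_path(years, dt=1.0 / 12.0, mu=0.02, sigma=0.04,
                           jump_intensity=1.0 / 30.0, jump_size=-0.3, seed=7):
    """Simulate a dividend level as a Gaussian diffusion of log-dividends
    plus rare Poisson disaster jumps (all parameters illustrative)."""
    rng = random.Random(seed)
    log_d = 0.0
    path = [1.0]  # dividend level, normalised to 1 at time zero
    for _ in range(round(years / dt)):
        diffusion = (mu - 0.5 * sigma ** 2) * dt \
            + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        # approximate probability of one disaster arriving in this interval
        jump = jump_size if rng.random() < jump_intensity * dt else 0.0
        log_d += diffusion + jump
        path.append(math.exp(log_d))
    return path

path = simulate_dividend_path(years=60)  # 60 years of monthly steps
```

Doubling `jump_intensity` from 1/60 to 1/30 in such a setting is the "rise in perceived market fragility" that the first chapter uses to reconcile falling bond rates with falling price-dividend ratios.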

Detailed reference viewed: 54 (9 UL)
Full Text
See detailThe Development and Utilization of Scenarios in European Union Energy Policy
Scheibe, Alexander UL

Doctoral thesis (2019)

Scenarios are a strategic planning tool, which essentially enables decision-makers to identify future uncertainties and to devise or adjust organizational strategies. Increasingly, scenario building has ... [more ▼]

Scenarios are a strategic planning tool, which essentially enables decision-makers to identify future uncertainties and to devise or adjust organizational strategies. Increasingly, scenario building has been applied as a planning instrument by public policymakers. At the European Union (EU) level, scenarios are widely used in various policy areas and for different purposes. However, the development and utilization of scenarios in policymaking as well as their concrete impact on the decision process remain an under-explored research field. The academic literature focuses on scenarios in the business domain, where they are a well-established strategic planning component. In public policy, however, the development and use of scenarios conceivably differ from the private sector. In the case of the EU, the potential impact of its distinctive multi-stakeholder and multi-level policymaking environment on the development of scenarios is not sufficiently accounted for in the literature. Moreover, it is uncertain how scenarios are situated in the wider EU political context. This thesis seeks to explain how scenarios are developed and utilized in the EU’s policymaking process. To that end, an institutionalized scenario development exercise from the Union’s energy policy (the Ten-Year Network Development Plan, TYNDP) is investigated as a case study. Drawing from empirical evidence primarily based on elite interviews, the research applies a qualitative-interpretative research framework that combines the analytical concepts of policy networks, epistemic communities, and strategic constructivism. The combination facilitates the design of a theoretical model of inner and outer spheres in EU energy policymaking, accounting for both the role of scenarios in policymaking and the impact of political goals on their development. 
The research concludes that the wider EU political context of the outer sphere shapes the development of scenarios in the inner sphere and determines how they are utilized in the policymaking process. The expectations of political actors frame the technical expertise in the scenario development process. With regard to the application of scenarios in wider public policy, the research demonstrates that the closer the scenario building is to the decision-making process, the stronger the political impact on the scenarios is likely to be. This is because political actors and decision-makers seek to align the scenario outcomes to their respective preferences. [less ▲]

Detailed reference viewed: 88 (10 UL)
Full Text
See detailCONFIDENCE-BASED DECISION-MAKING SUPPORT FOR MULTI-SENSOR SYSTEMS
Neyens, Gilles UL

Doctoral thesis (2019)

We live in a world where computer systems are omnipresent and are connected to more and more sensors. Ranging from small individual electronic assistants like smartphones to complex autonomous robots ... [more ▼]

We live in a world where computer systems are omnipresent and are connected to more and more sensors. Ranging from small individual electronic assistants like smartphones to complex autonomous robots, from personal wearable health devices to professional eHealth frameworks, all these systems use the sensors’ data in order to make appropriate decisions according to the context they measure. However, in addition to complete failures leading to the lack of data delivery, these sensors can also send bad data due to influences from the environment, which can sometimes be hard to detect by the computer system when checking each sensor individually. The computer system should be able to use its set of sensors as a whole in order to mitigate the influence of malfunctioning sensors, to overcome the absence of data coming from broken sensors, and to handle possibly conflicting information coming from several sensors. In this thesis, we propose a computational model based on a two-layer software architecture to overcome this challenge. In the first layer, classification algorithms check for malfunctioning sensors and attribute a confidence value to each sensor. In the second layer, a rule-based proactive engine builds a representation of the context of the system and uses it, along with empirical knowledge about the weaknesses of the different sensors, to further adjust this confidence value. Furthermore, the system then checks for conflicting data between sensors. This can be done by having several sensors that measure the same parameters or by having multiple sensors that can be used together to calculate an estimation of a parameter given by another sensor. A confidence value is calculated for this estimation as well, based on the confidence values of the related sensors. The successive design refinement steps of our model are shown over the course of three experiments. 
The first two experiments, located in the eHealth domain, have been used to better identify the challenges of such multi-sensor systems, while the third experiment, which consists of a virtual robot simulation, acts as a proof of concept for the semi-generic model proposed in this thesis. [less ▲]
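The second-layer idea summarised above (per-sensor confidence values feeding a combined estimate plus a conflict check) can be sketched as follows. The fusion rule, the conflict threshold and the heart-rate readings are hypothetical illustrations chosen for the sketch, not the thesis's actual model:

```python
def fuse_readings(readings, rel_tol=0.2):
    """Fuse redundant sensor readings, given as (value, confidence) pairs
    with confidence in [0, 1], by confidence-weighted averaging, and flag
    a conflict when a trusted sensor still deviates strongly from the
    fused estimate. Illustrative sketch only."""
    total_conf = sum(conf for _, conf in readings)
    if total_conf == 0:
        raise ValueError("no usable sensor data")
    estimate = sum(value * conf for value, conf in readings) / total_conf
    conflict = any(conf > 0.5 and abs(value - estimate) > rel_tol * abs(estimate)
                   for value, conf in readings)
    return estimate, conflict

# Hypothetical heart-rate readings: two healthy sensors, plus one
# malfunctioning sensor already down-weighted to confidence 0.2 by layer one.
estimate, conflict = fuse_readings([(72.0, 0.9), (74.0, 0.8), (120.0, 0.2)])
```

The down-weighted outlier barely moves the estimate and raises no conflict flag, which is the behaviour the two-layer design aims for: bad data is tolerated rather than silently trusted.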

Detailed reference viewed: 179 (7 UL)
Full Text
See detailFrom Persistent Homology to Reinforcement Learning with Applications for Retail Banking
Charlier, Jérémy Henri J. UL

Doctoral thesis (2019)

The retail banking services are one of the pillars of the modern economic growth. However, the evolution of the client’s habits in modern societies and the recent European regulations promoting more ... [more ▼]

Retail banking services are one of the pillars of modern economic growth. However, the evolution of clients’ habits in modern societies and the recent European regulations promoting more competition mean the retail banks will encounter serious challenges over the next few years, endangering their activities. They now face an impossible compromise: maximizing the satisfaction of their hyper-connected clients while avoiding any risk of default and remaining regulatory compliant. Therefore, advanced and novel research concepts are a serious game-changer to gain a competitive advantage. In this context, we investigate in this thesis different concepts bridging the gap between persistent homology, neural networks, recommender engines and reinforcement learning, with the aim of improving the quality of retail banking services. Our contribution is threefold. First, we highlight how to overcome insufficient financial data by generating artificial data using generative models and persistent homology. Then, we present how to perform accurate financial recommendations in multiple dimensions. Finally, we underline a model-free reinforcement learning approach to determine the optimal policy of money management based on the aggregated financial transactions of the clients. Our experimental data sets, extracted from well-known institutions where the privacy and the confidentiality of the clients were not put at risk, support our contributions. In this work, we provide the motivations of our retail banking research project, describe the theory employed to improve the quality of financial services, and evaluate our methodologies quantitatively and qualitatively for each of the proposed research scenarios. [less ▲]

Detailed reference viewed: 56 (12 UL)
Full Text
See detailTowards Optimal Real-Time Bidding Strategies for Display Advertising
Du, Manxing UL

Doctoral thesis (2019)

Detailed reference viewed: 71 (4 UL)
Full Text
See detailDevelopment of an innovative U-shaped steel-concrete composite beam solution: Experimental and numerical studies on the mechanical behaviour
Turetta, Maxime UL

Doctoral thesis (2019)

An innovative solution of steel-concrete composite beam was developed taking into consideration the fire situation and the construction stage. The beam is composed of a U-shaped steel part connected to a ... [more ▼]

An innovative steel-concrete composite beam solution was developed taking into consideration the fire situation and the construction stage. The beam is composed of a U-shaped steel part connected to a reinforced concrete part. In the construction phase, the beam supports the slab and constitutes a formwork for the reinforced concrete part. The U-shaped beam withstands the construction loads without any temporary propping system. When casting concrete, the steel beam is filled at the same time as the slab, which allows considerable time-saving on site. In the exploitation stage, the beam behaves as a steel-concrete composite beam. The connection between the two materials is made by headed studs welded on the lower part of the U-shaped beam. In fire situations, the composite beam satisfies conventional fire stability durations due to the longitudinal reinforcements inside the concrete downstand part with sufficient cover. A literature review focusing on modern solutions that fulfil the criteria of the thesis is performed in order to develop an optimised innovative solution. In the construction stage, the unrestrained U-shaped steel beam is prone to lateral torsional buckling instability. In order to characterise the stability of the beam, a full-scale test is carried out at the Laboratory of the University of Luxembourg. The test clearly showed the lateral torsional buckling of the steel beam. The test results are compared to numerical simulations and analytical studies. A parametric study, covering 200 geometrical configurations of the U-shaped beam, is carried out to validate the use of buckling curve "b" for the design of the steel beam for lateral torsional buckling according to Eurocode 3. In the exploitation phase, once the concrete hardens, the beam has a steel-concrete composite behaviour provided by the shear connection between the two materials. 
For manufacturing reasons, the connection is located in a zone where the concrete is subjected to tension forces induced by the bending of the beam. The concrete in this zone is potentially cracked; thus the efficiency of the connection, and therefore the mechanical steel-concrete composite behaviour, is investigated. Another test is therefore carried out in the Laboratory of the University of Luxembourg, this time on a specimen made of concrete and steel. The failure mode is a shear mechanism of the composite beam at very large displacements. However, the beam specimen exhibited a real steel-concrete composite behaviour with high ductility; the connection is therefore very efficient. The test results are compared to numerical simulations in order to validate the finite element model developed. From numerical and test results, an analytical method, based on EN 1994-1-1, is proposed to determine the bending resistance of this composite beam by taking into account the partial yielding of the side plates of the U-shaped steel section. A global analytical design method is proposed for the developed solution based on the Eurocodes, with additional considerations and constructional guidelines. [less ▲]
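The curve-"b" verification mentioned in the abstract above rests on the lateral torsional buckling reduction factor of EN 1993-1-1 (general case, clause 6.3.2.2), where curve "b" corresponds to imperfection factor α = 0.34. A minimal sketch of that computation (the slenderness value in the example is hypothetical, not taken from the thesis's parametric study):

```python
import math

def chi_lt_curve_b(lambda_bar, alpha=0.34):
    """Lateral-torsional buckling reduction factor per EN 1993-1-1
    (general case, 6.3.2.2) for buckling curve "b" (alpha = 0.34).
    lambda_bar is the non-dimensional slenderness for LTB."""
    phi = 0.5 * (1.0 + alpha * (lambda_bar - 0.2) + lambda_bar ** 2)
    chi = 1.0 / (phi + math.sqrt(phi ** 2 - lambda_bar ** 2))
    return min(chi, 1.0)  # reduction factor is capped at 1.0

chi = chi_lt_curve_b(1.0)  # hypothetical slenderness of 1.0
```

The design buckling moment then follows as M_b,Rd = χ_LT · W_y · f_y / γ_M1, so validating curve "b" amounts to showing that the numerically obtained resistances of the 200 configurations lie on the safe side of this χ_LT.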

Detailed reference viewed: 130 (4 UL)
Full Text
See detailLiquid Metals and Liquid Crystals Subject to Flow: From Fundamental Fluid Physics to Functional Fibers
Honaker, Lawrence William UL

Doctoral thesis (2019)

Technology over the past few decades has pushed strongly towards wearables, one such form being textiles which incorporate a functional component. There are several ways to produce polymer ... [more ▼]

Technology over the past few decades has pushed strongly towards wearables, one such form being textiles which incorporate a functional component. There are several ways to produce polymer fibers on both laboratory and industrial scales, but the implementation of these techniques to spin fibers incorporating a functional heterocore has proven challenging for certain combinations of materials. In general, fiber spinning from polymer solutions, regardless of the method, is a multifaceted process with concerns in chemistry, materials science, and physics, from both fundamental and applied standpoints, requiring the balancing of flow parameters (interfacial tension, viscosity, and inertial forces) against solvent extraction. This becomes considerably more complicated when multiple interfaces are present. This thesis explores the concerns involved in the spinning of fibers incorporating functional materials from several standpoints. Firstly, due to the importance of interfacial forces in jet stability, I present a microfluidic interfacial tensiometry technique for measuring the interfacial tension between two immiscible fluids, assembled using glass capillary microfluidics techniques. The advantage of this technique is that it can measure the interfacial tension without reliance on sometimes imprecise external parameters and data, obtaining interfacial tension measurements solely from experimental observations of the deformation of a droplet into a channel and the pressure needed to induce it. Using the knowledge gained from both microfluidic device assembly and the interfacial tension, I then present the wet spinning of polymer fibers using a glass capillary spinneret. This technique flows a polymer dope along with a coagulation bath tooled to extract solvent, leaving behind a continuous polymer fiber. 
We were able to spin both pure polymer fibers and elastomer microscale fibers containing a continuous heterocore of a liquid crystal, with the optical properties of the liquid crystal maintained within the fiber. While we were not able to spin fibers of a harder polymer containing a continuous core, whether of a liquid crystal or of a liquid metal, I present an analysis of why the spinning was unsuccessful and of the steps that will lead us towards the eventual spinning of such fibers. [less ▲]
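The tensiometry described above balances an applied pressure against interfacial curvature. As a heavily simplified illustration (assuming an idealised spherical interface, which the actual device analysis refines with the full channel geometry; the pressure and radius values are hypothetical), the Young-Laplace relation links the two:

```python
def interfacial_tension_young_laplace(delta_p, radius):
    """Estimate interfacial tension gamma (N/m) from the Young-Laplace
    relation for a spherical interface: delta_p = 2 * gamma / radius.
    A simplified sketch; the real microfluidic measurement tracks droplet
    deformation into a channel rather than a free spherical droplet."""
    if radius <= 0:
        raise ValueError("radius must be positive")
    return delta_p * radius / 2.0

# Hypothetical numbers: 500 Pa pressure jump, 100 micrometre radius of curvature
gamma = interfacial_tension_young_laplace(delta_p=500.0, radius=100e-6)
```

The resulting 25 mN/m is a plausible order of magnitude for an aqueous-oil interface, which is why pressure measurements at micrometre curvatures suffice to resolve interfacial tension in such devices.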

Detailed reference viewed: 130 (16 UL)
Full Text
See detailLe jugement par défaut dans l'espace judiciaire européen
Richard, Vincent Jérôme UL

Doctoral thesis (2019)

French judges regularly refuse to enforce foreign judgements rendered by default against a defendant who has not appeared. This finding is also true for other Member States, as many European regulations ... [more ▼]

French judges regularly refuse to enforce foreign judgements rendered by default against a defendant who has not appeared. This finding is also true for other Member States, even though many European regulations govern cross-border enforcement of decisions rendered in civil and commercial matters between Member States. The present study examines this problem in order to understand the obstacles to the circulation of default decisions and payment orders in Europe. When referring to the recognition of default judgments, it would be more accurate to refer to the recognition of decisions made as a result of default proceedings. It is indeed this (default) procedure, more than the judgment itself, which is examined by the exequatur judge to determine whether the foreign decision should be enforced. This study is therefore firstly devoted to default procedures and payment order procedures in French, English, Belgian and Luxembourgish law. These procedures are analysed and compared in order to highlight their differences, be they conceptual or simply technical in nature. Once these discrepancies have been identified, this study turns to private international law in order to understand which elements of the default procedures are likely to hinder their circulation. The combination of these two perspectives makes it possible to envisage a gradual approximation of national default procedures in order to facilitate their potential circulation in the European area of freedom, security and justice. [less ▲]

Detailed reference viewed: 136 (9 UL)
Full Text
See detailMotivation and self-regulation
Grund, Axel UL

Postdoctoral thesis (2019)

Detailed reference viewed: 16 (0 UL)
Full Text
See detailParliamentary involvement in EU affairs during treaty negotiations in a historical comparative perspective: the cases of the Austrian, Finnish and Luxembourgish parliaments
Badie, Estelle Céline UL

Doctoral thesis (2019)

Until recently, studies on the Europeanisation of national parliaments mostly tended to focus on the evolution of their institutional capacities rather than on their actual behaviour in EU affairs. This thesis seeks to identify variations in behavioural patterns between the Austrian, Finnish and Luxembourgish legislatures. The historical comparative perspective rests mainly on political and societal similarities between the countries. Drawing on Historical and Sociological Institutionalism, the thesis analyses the evolution and motivations of parliamentary involvement in the field of European affairs over a period running from the negotiations on the Treaty establishing a Constitution for Europe until the Treaty on Stability, Coordination and Governance in the EMU. By including both institutional and motivational indicators, the objective is to identify the extent to which parliamentary involvement in EU matters has been challenged in the framework of EU treaties and intergovernmental treaties on the EMU. We address the following questions: What institutional and motivational factors influenced parliamentary involvement in EU affairs? What parliamentary initiatives have been taken to improve participation in EU affairs? In which direction did institutional change happen, and who triggered it? The thesis relies primarily on qualitative data, i.e. interviews with parliamentarians, civil servants from parliamentary administrations and parliamentary group collaborators, through which we aim to produce in-depth empirical knowledge of actual parliamentary behaviour in each country studied. Thus, the assessment of parliamentary involvement in EU affairs through the lens of parliamentarians’ motivations and their institutional context helps to investigate the parliamentary “black box”.

Investigation of the immune functions of DJ-1
Zeng, Ni UL

Doctoral thesis (2019)

Entwicklung und Modellierung eines Hybrid-Solarmodulkollektor-basierten Wärmepumpensystems auf der Basis von CO2 Direktverdampfung in Mikrokanälen
Rullof, Johannes UL

Doctoral thesis (2019)

As early as the end of the 1970s, heat pumps were developed in combination with glycol-based, large-area combined radiation and ambient-heat absorbers serving as evaporators, which, in comparison with forced-convection air heat pumps, used not only ambient energy but also solar energy as an energy source. However, due to falling oil prices after the oil crisis, this technology, which was for the most part not yet economical and moreover required large absorber surfaces, could not prevail. Because the heating demand of new buildings has dropped significantly, nowadays much smaller absorber surfaces are needed in combination with a heat pump, which has led to renewed interest in combining heat pumps and absorbers. Above all, the combination of a thermal absorber, based on free convection and radiation, and photovoltaics (PV) in one module (PVT module) may be an alternative to forced-convection air heat pumps. Using solar energy as the heat source of the heat pump leads one to expect higher coefficients of performance, since higher evaporation temperatures can be achieved than with conventional forced-convection air heat pumps. Numerous publications describe the market potential of solar hybrid modules with direct evaporation (PVT-direct modules), and several theoretical studies describe constructive approaches and related calculations. However, to date there is still no practical implementation of a PVT hybrid module combined with module-integrated direct evaporation of the natural refrigerant CO2 in microchannels. So far, no experimental studies on CO2-PVT-based heat pump systems with direct evaporation have been carried out by research institutions. Thus, proof of the constructive and functional feasibility of a CO2-based PVT-direct module, as well as of the energetic feasibility of the PVT-based CO2 heat pump system, is still a desideratum. The three objectives of this work can be summarized as follows:
1. Development and production of the PVT-direct module for the analysis of its constructional feasibility
2. Experimental investigation of the PVT-direct module for the analysis of both its thermal and electrical functional feasibility
3. Analysis of the energetic feasibility of the PVT-based CO2 heat pump system
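
The expectation voiced above — that a warmer, solar-assisted evaporator yields higher coefficients of performance — can be illustrated with the ideal (Carnot) heating COP. This is a sketch only: the temperatures below are assumed illustrative values, not results from this thesis.

```python
def carnot_cop_heating(t_evap_c, t_cond_c):
    """Ideal (Carnot) heating COP: T_cond / (T_cond - T_evap), in kelvin."""
    t_evap = t_evap_c + 273.15
    t_cond = t_cond_c + 273.15
    return t_cond / (t_cond - t_evap)

# Illustrative comparison at a 35 C condensation temperature: a solar-assisted
# evaporator running warmer than ambient air raises the ideal COP.
cop_air = carnot_cop_heating(-5.0, 35.0)  # forced-convection air source
cop_pvt = carnot_cop_heating(5.0, 35.0)   # solar-assisted PVT source (assumed)
```

Raising the evaporation temperature from -5 °C to 5 °C lifts the ideal COP from about 7.7 to about 10.3, which is the qualitative advantage claimed for using solar energy as the heat source.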

Access Control Mechanisms Reconsidered with Blockchain Technologies
Steichen, Mathis UL

Doctoral thesis (2019)

IMPROVED DESIGN METHODS FOR THE BEARING CAPACITY OF FOUNDATION PILES
Rica, Shilton UL

Doctoral thesis (2019)

Pile foundations are often used for civil structures, both offshore and onshore, that are placed on soft soils. Nowadays, many different methods are used for the prediction of the pile bearing capacity. However, the resulting design values often differ from the values measured in pile load field tests. One reason for this is that many pile installation effects and (unknown) soil conditions influence the pile bearing capacity. Another problem is that, for many pile load field tests in the past, the residual stresses in the pile after installation have unfortunately been ignored. This leads to a measured tip bearing resistance that is lower than the real tip bearing capacity, and a measured pile shaft friction that is higher than the real shaft friction. The main aim of this thesis is to come to a better understanding of pile performance and especially of the pile bearing capacity.

To achieve this aim, many numerical loading simulations were computed for small displacements with the finite element program Plaxis, and many existing pile design methods were studied. The pile installation process itself was modelled and simulated with the material point method (MPM), which can handle large-displacement numerical simulations. The version of the MPM used here was recently developed at the research institute Deltares in the Netherlands. The results of the MPM simulations showed a large difference between the bearing capacity of a pre-installed pile (no installation effects taken into account) and that of a pile for which the installation effects are taken into account. This demonstrates numerically the importance of the pile installation effects for the pile bearing capacity. However, the MPM simulations were performed only for jacked piles; impact-driven piles, vibrated piles, etc. were not simulated. For this reason, there is no detailed numerical study of the effect of each specific installation method on the pile bearing capacity. That installation effects in general have an important influence on the pile bearing capacity had already been proven by field tests and centrifuge tests, and has been published before by several authors.

The performed numerical simulations show that during the loading and failure of a pile, a balloon-shaped plastic zone develops around the pile tip, which is in fact the failure mechanism. A better understanding of this zone could lead to a better estimation of the pile tip bearing capacity, because the size and position of this plastic zone are directly related to it. Therefore, this plastic zone has been studied for different soil and pile parameters, and the influence of each parameter has been studied and discussed. A similar balloon-shaped plastic zone was found for both small- and large-displacement simulations. The tip bearing capacity of a pile is regarded as depending only on the soil in a certain zone around the pile tip, called the influence zone. This influence zone is found to be similar to the plastic zone at the pile tip. Therefore, the influence of a soft soil layer near the influence zone of the pile tip has also been studied, and the numerical results have been validated against laboratory tests performed by Deltares. The influence zone extends roughly from 2 times the pile diameter, D, above the pile tip to 5 or 6 times D below the pile tip. Laboratory tests using a direct shear apparatus have been performed in order to determine the difference between the soil-pile friction angle and the soil-cone friction angle. The tests were done for different surface roughnesses and for three different sand types, and the results were compared with the roughness of the sleeve of the Cone Penetration Test (CPT) apparatus.

Based on the numerical simulations and the laboratory tests of Deltares, a new design method has been proposed for the estimation of the pile bearing capacity. Since this method takes CPT results as its main input, it is a CPT-based design method. The proposed method has been validated using pile field tests performed in Lelystad in the Netherlands. During this research, several axial and lateral pile field tests were also performed on the west coast of Mexico; their results are reported and discussed in the appendices.
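
The influence-zone finding above (roughly 2 D above the pile tip to 5–6 D below it) can be captured in a small helper. The function and its parameter names are illustrative, not part of the thesis.

```python
def influence_zone(pile_diameter, tip_depth, lower_factor=6.0):
    """Depth interval of soil assumed to govern the pile tip bearing capacity.

    Per the finding above: from about 2*D above the pile tip down to 5-6*D
    below it (lower_factor selects 5.0 or 6.0). All lengths share one unit
    (e.g. metres); depth increases downward.
    """
    top = tip_depth - 2.0 * pile_diameter
    bottom = tip_depth + lower_factor * pile_diameter
    return top, bottom
```

For a 0.5 m diameter pile with its tip at 10 m depth, the governing soil would span from 9 m down to 13 m under the conservative 6 D bound.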

DISSECTING GENETIC EPISTASIS IN FAMILIAL PARKINSON’S DISEASE USING A DIGENIC PATIENT-DERIVED STEM CELL MODEL
Hanss, Zoé UL

Doctoral thesis (2019)

Parkinson’s disease (PD) is the second most common neurodegenerative disorder worldwide. 10% of PD patients present a familial form of the disease involving genetic mutations. Variability in terms of disease expressivity, severity and penetrance can be observed among familial cases. The idea that the classical one-gene/one-trait model may not capture the full picture of the genetic contribution to PD pathophysiology is increasingly recognized. Therefore, a polygenic model in which multiple genes influence the disease risk and the phenotypic traits in PD should be investigated. Mutations in PRKN, encoding the E3 ubiquitin-protein ligase Parkin, cause young-onset autosomal recessive forms of PD. Variability in terms of clinical presentation and neuropathology has been observed in PD patients carrying mutations in Parkin. On the other hand, mutations in GBA were recently recognized as the most common genetic risk factor for developing PD. The incomplete penetrance of the disease in patients with GBA mutations may implicate other genetic factors. It can therefore be hypothesized that interactions between common PD genes such as PRKN and GBA contribute to the phenotypic heterogeneity observed in PD cases. To explore this hypothesis, we generated patient-derived cellular models from several PD patients carrying pathogenic mutations in either both PRKN and GBA (triallelic models) or in only one of them (bi- or monoallelic models). We developed a novel strategy to gene-edit the N370S mutation in GBA via CRISPR-Cas9, without interference from its pseudogene, which allows the role of GBA in the context of a PRKN mutation to be dissected on an isogenic background. We identified a specific α-synuclein homeostasis in the triallelic model. The genetic and pharmacological rescue of GBA in the triallelic model modified the observed α-synuclein phenotype, demonstrating the contribution of GBA to this phenotype. We then investigated whether Parkin also contributes to the phenotype. Modulating Parkin function in the context of a GBA mutation induced a modification of α-synuclein homeostasis. We therefore concluded that both PRKN and GBA influence α-synuclein homeostasis in the triallelic model. Nevertheless, the phenotypic outcome of the co-occurrence of these mutations was neither additive nor synergistic. We therefore suggest the existence of an epistatic interaction between mutant GCase and Parkin that may underlie the clinical heterogeneity observed in PD patients carrying these mutations.

Assessment and Improvement of the Practical Use of Mutation for Automated Software Testing
Titcheu Chekam, Thierry UL

Doctoral thesis (2019)

Software testing is the main quality assurance technique used in software engineering. In fact, companies that develop software and open-source communities alike actively integrate testing into their software development life cycle. In order to guide and set objectives for the software testing process, researchers have designed test adequacy criteria (TACs), which define the properties of a program that must be covered for a test suite to be considered thorough. Many TACs have been proposed in the literature, among them the widely used statement and branch criteria, as well as the fault-based criterion named mutation. It has been shown in the literature that mutation is effective at revealing faults in software; nevertheless, the adoption of mutation in practice is still lagging due to its cost. Ideally, the TACs most likely to lead to high fault revelation are desired for testing, and the fault revelation of test suites is expected to increase as their coverage of TAC test objectives increases. However, the question of which TAC best guides software testing towards fault revelation remains controversial and open, and the relationship between the coverage of TAC test objectives and fault revelation remains unknown. In order to increase knowledge and provide answers about these issues, we conducted, in this dissertation, an empirical study that evaluates the relationship between test objective coverage and fault revelation for four TACs (statement coverage, branch coverage, and weak and strong mutation). The study showed that fault revelation increases with coverage only beyond a certain coverage threshold, and that the strong mutation TAC has the highest fault revelation. Despite the benefit of higher fault revelation that strong mutation provides for software testing, practitioners are still reluctant to integrate it into their testing activities. This happens mainly because of the high cost of mutation analysis, which is related to the large number of mutants and the limited automation of test generation for strong mutation. Several approaches have been proposed in the literature to tackle the cost of strong mutation analysis. Mutant selection (reduction) approaches aim to reduce the number of mutants used for testing by selecting a small subset of mutation operators to apply during mutant generation, thus reducing the number of analyzed mutants. Nevertheless, those approaches are no more effective, w.r.t. fault revelation, than random mutant sampling (which leads to a high loss in fault revelation). Moreover, there is little work in the literature concerning cost-effective automated test generation for strong mutation. This dissertation proposes two techniques, FaRM and SEMu, to reduce the cost of mutation testing. FaRM statically selects and prioritizes mutants that lead to faults (fault-revealing mutants) in order to reduce the number of mutants to analyze (fault-revealing mutants represent a very small proportion of the generated mutants). SEMu automatically generates tests that strongly kill mutants and thus increases the mutation score and improves the test suites.

First, this dissertation presents an empirical study that evaluates the fault revelation (the ability to lead to tests with high fault revelation) of four TACs, namely statement, branch, weak mutation and strong mutation. The outcome of the study shows evidence that, for all four studied TACs, fault revelation increases with the coverage of TAC test objectives only beyond a certain coverage threshold. This suggests the need to attain higher coverage during testing. Moreover, the study shows that strong mutation is the only studied TAC that leads to tests with significantly the highest fault revelation.

Second, in line with mutant reduction, we study the different mutant quality indicators (used to qualify "useful" mutants) proposed in the literature, including fault-revealing mutants. Our study shows that there is large disagreement between the indicators, suggesting that the set of fault-revealing mutants is unique and differs from the other mutant sets. Thus, given that testing aims to reveal faults, one should directly target fault-revealing mutants for mutant reduction, as this dissertation does.

Third, this dissertation proposes FaRM, a mutant reduction technique based on supervised machine learning. In order to automatically discriminate, before test execution, between useful (valuable) and useless mutants, FaRM builds a machine learning model for mutant classification. The features of the classification model are static program features of the mutants, categorized as mutant types and mutant context (abstract syntax tree, control flow graph and data/control dependency information). FaRM’s classification model successfully predicted fault-revealing mutants and killable mutants. Then, in order to reduce the number of analyzed mutants, FaRM selects and prioritizes fault-revealing mutants based on the aforementioned classification model. An empirical evaluation shows that FaRM outperforms, w.r.t. the accuracy of fault-revealing mutant selection, both random mutant sampling and existing mutation-operator-based mutant selection techniques.

Fourth, this dissertation proposes SEMu, an automated test input generation technique that aims to increase the strong mutation score of test suites. SEMu is based on symbolic execution and leverages multiple cost-reduction heuristics for the symbolic execution. An empirical evaluation shows that, for a limited time budget, SEMu generates tests that successfully increase the strong mutation score and kill more mutants than tests generated by state-of-the-art techniques.

Finally, this dissertation proposes Muteria, a framework that enables the integration of FaRM and SEMu into the automated software testing process. Overall, this dissertation provides insights into how to effectively use TACs to test software and shows that strong mutation is the most effective TAC for software testing. It also provides techniques that effectively facilitate the practical use of strong mutation, together with extensive tooling that supports the proposed techniques while enabling their extension, for the practical adoption of strong mutation in software testing.
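
To make the core idea of mutation as a test adequacy criterion concrete, here is a minimal, self-contained sketch of mutant generation and mutation scoring: a single relational mutation operator applied to a toy function (Python 3.9+ for `ast.unparse`). This illustrates the criterion only; it is not FaRM, SEMu or Muteria.

```python
import ast

# Toy code under test and a small test suite of (args, expected) pairs.
SRC = (
    "def clamp(x, lo, hi):\n"
    "    if x < lo:\n"
    "        return lo\n"
    "    if x > hi:\n"
    "        return hi\n"
    "    return x\n"
)
TESTS = [((5, 0, 10), 5), ((-3, 0, 10), 0), ((12, 0, 10), 10)]

class FlipComparison(ast.NodeTransformer):
    """Relational mutation operator: flip the target-th '<' or '>'."""
    def __init__(self, target):
        self.target = target
        self.count = 0

    def visit_Compare(self, node):
        self.generic_visit(node)
        if isinstance(node.ops[0], (ast.Lt, ast.Gt)):
            if self.count == self.target:
                node.ops[0] = ast.Gt() if isinstance(node.ops[0], ast.Lt) else ast.Lt()
            self.count += 1
        return node

def suite_passes(src):
    """Run the whole test suite against one version of the code."""
    ns = {}
    exec(compile(src, "<mutant>", "exec"), ns)
    return all(ns["clamp"](*args) == want for args, want in TESTS)

def mutation_score():
    """Fraction of generated mutants 'killed' (detected) by the test suite."""
    n_sites = sum(isinstance(n, ast.Compare) for n in ast.walk(ast.parse(SRC)))
    killed = 0
    for i in range(n_sites):
        # Re-parse for each mutant so exactly one comparison is flipped.
        mutant = ast.unparse(FlipComparison(i).visit(ast.parse(SRC)))
        if not suite_passes(mutant):
            killed += 1
    return killed / n_sites
```

Here both relational mutants of `clamp` are killed by the three tests, giving a mutation score of 1.0; weakening the suite would leave the score below 1.0 and expose untested behaviour, which is exactly the signal the mutation criterion provides.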

Dynamical Modeling Techniques for Biological Time Series Data
Mombaerts, Laurent UL

Doctoral thesis (2019)

The present thesis is articulated around two main topics which have in common the modeling of the dynamical properties of complex biological systems from large-scale time-series data. On the one hand, the thesis analyzes the inverse problem of reconstructing Gene Regulatory Networks (GRNs) from gene expression data. This first topic seeks to reverse-engineer the transcriptional regulatory mechanisms involved in a few biological systems of interest, which is vital to understanding the specificities of their different responses. In the light of recent mathematical developments, a novel, flexible and interpretable modeling strategy is proposed to reconstruct the dynamical dependencies between genes from short time-series data. In addition, experimental trade-offs and optimal modeling strategies are investigated for given data availability; consistent literature on these topics was previously, and surprisingly, lacking. The proposed methodology is applied to the study of circadian rhythms, which consist of complex GRNs driving most daily biological activity across many species. On the other hand, this manuscript covers the characterization of dynamically differentiable brain states in zebrafish in the context of epilepsy and epileptogenesis. Zebrafish larvae are a valuable animal model for the study of epilepsy due to their genetic and dynamical resemblance to humans. The fundamental premise of this research is the early appearance of subtle functional changes preceding the clinical symptoms of seizures. More generally, this idea, based on bifurcation theory, can be described as a progressive loss of resilience of the brain and, ultimately, its transition from a healthy state to another state characterizing the disease. First, the morphological signatures of seizures generated by distinct pathological mechanisms are investigated. For this purpose, a range of mathematical biomarkers that characterize relevant dynamical aspects of the neurophysiological signals is considered. These mathematical markers are later used to address the subtle manifestations of early epileptogenic activity. Finally, the feasibility of a probabilistic prediction model that indicates the susceptibility to seizure emergence over time is investigated. The existence of alternative stable system states and their sudden and dramatic changes have notably been observed in a wide range of complex systems, such as ecosystems, climate and financial markets.
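
As a toy illustration of the kind of inverse problem posed by GRN reconstruction, the sketch below scores a directed dependency between two simulated expression traces by lagged correlation. This is a deliberately simple stand-in, not the modeling strategy developed in the thesis; all names, parameters and the synthetic data are illustrative.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def lagged_dependency(a, b):
    """Score the directed influence a -> b as corr(a[t], b[t+1])."""
    return pearson(a[:-1], b[1:])

# Synthetic expression traces: gene A oscillates; gene B follows A with a
# one-step delay plus small noise, so A -> B should score highest.
rng = random.Random(0)
a = [math.sin(0.3 * t) for t in range(100)]
b = [0.0] + [0.9 * a[t] + 0.05 * rng.gauss(0, 1) for t in range(99)]

forward = lagged_dependency(a, b)  # candidate edge A -> B
reverse = lagged_dependency(b, a)  # candidate edge B -> A
```

Because gene B follows gene A with a one-step delay, the forward score clearly exceeds the reverse one, suggesting the directed edge A → B; real GRN inference must additionally handle noise, confounders and short sample sizes, which is what dynamical modeling strategies address.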

Integrity and Confidentiality Problems of Outsourcing
Pejo, Balazs UL

Doctoral thesis (2019)

Cloud services enable companies to outsource data storage and computation. Resource-limited entities can use this pay-per-use model to outsource large-scale computational tasks to a cloud service provider. Nonetheless, this on-demand network access raises issues of security and privacy, which have become a primary concern in recent decades. In this dissertation, we tackle these problems from two perspectives: data confidentiality and result integrity. Concerning data confidentiality, we systematically classify the relaxations of the most widely used privacy-preserving technique, Differential Privacy. We also establish a partial ordering of strength between these relaxations and indicate whether they satisfy additional desirable properties, such as composition and the privacy axioms. Further tackling the problem of confidentiality, we design a Collaborative Learning game which helps data holders determine how to set the privacy parameter based on economic aspects. We also define the Price of Privacy, which measures the overall degradation of accuracy resulting from the applied privacy protection. Moreover, we develop a procedure called Self-Division, which bridges the gap between the game and real-world scenarios. Concerning result integrity, we formulate a Stackelberg game between outsourcer and outsourcee in which no absolute correctness is required. We provide the optimal strategies for the players and perform a sensitivity analysis. Furthermore, we extend the game by allowing the outsourcer not to verify, and we show its Nash Equilibria. Regarding integrity verification, we analyze and compare two verification methods for Collaborative Filtering algorithms: the splitting approach and the auxiliary data approach. We observe that neither method provides a full solution to the problem raised. Hence, we propose a solution which, besides outperforming both, is also applicable to both stages of the algorithms.
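
The accuracy degradation that a Price-of-Privacy-style measure captures can be illustrated with the Laplace mechanism, the textbook way to achieve epsilon-Differential Privacy for a numeric query. The sketch below is illustrative only: the function names and the mean-query setting are assumptions, not the thesis's definitions.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse CDF of a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, epsilon, sensitivity, rng):
    """epsilon-DP mean via the Laplace mechanism."""
    return sum(values) / len(values) + laplace_noise(sensitivity / epsilon, rng)

def accuracy_loss(values, epsilon, trials=2000):
    """Empirical mean absolute error of the private mean: a toy stand-in for
    a 'price of privacy' that shrinks as epsilon (privacy) is relaxed.
    Values are assumed to lie in [0, 1], so the mean's sensitivity is 1/n."""
    rng = random.Random(0)
    true = sum(values) / len(values)
    sens = 1.0 / len(values)
    return sum(abs(private_mean(values, epsilon, sens, rng) - true)
               for _ in range(trials)) / trials
```

With values in [0, 1] the mean has sensitivity 1/n, so the expected absolute error is 1/(n·epsilon): tightening privacy (smaller epsilon) directly raises the accuracy loss, which is the trade-off the data holders in the Collaborative Learning game must price.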
