References of "Doctoral thesis"
Multipath Routing on Anonymous Communication Systems: Enhancing Privacy and Performance
de La Cadena Ramos, Augusto Wladimir UL

Doctoral thesis (2021)


We live in an era in which mass surveillance and online tracking of civilians and organizations have reached alarming levels. As a result, more and more users rely on anonymous communication tools for their daily online activities. Nowadays, Tor is the most popular and widely deployed anonymization network, serving millions of daily users around the world. Tor promises to hide the identity of its users (i.e., their IP addresses) and to prevent external agents from disclosing relationships between the communicating parties. However, this privacy protection comes at the cost of severe performance loss. The loss degrades the user experience to such an extent that many users forgo anonymization networks and the privacy protection they offer. At the same time, the popularity of Tor has captured the attention of attackers wishing to deanonymize its users. In response, this dissertation presents a set of multipath routing techniques, at both the transport and the circuit level, to improve the privacy and performance offered to Tor users. To this end, we first present a comprehensive taxonomy that identifies the implications of integrating multipath routing into each design aspect of Tor. We then present a novel transport design that addresses the existing performance unfairness of Tor traffic. In Tor, traffic from multiple users is multiplexed in a single TCP connection between two relays. While this has positive effects on privacy, it harms performance: TCP congestion control gives all of the multiplexed Tor traffic as little of the available bandwidth as it gives to every single TCP connection competing for the same resource. To counter this, we propose using multipath TCP (MPTCP) to allow for better resource utilization, which in turn increases the throughput of Tor traffic to a fairer extent.
Our evaluation in real-world settings shows that using out-of-the-box MPTCP leads to a 15% performance gain. We analyze the privacy implications of MPTCP in Tor settings and discuss potential threats and mitigation strategies. Regarding privacy, a malicious Tor entry node can mount website fingerprinting (WFP) attacks to disclose the identities of Tor users by merely observing patterns of data flows. In response, we propose splitting traffic over multiple entry nodes to limit the patterns an adversary can observe. We demonstrate that our splitting strategy reduces the accuracy of all state-of-the-art WFP attacks from more than 98% to less than 16% without adding any artificial delays or dummy traffic. Additionally, we show that this defense, initially designed against WFP, can also mitigate end-to-end correlation attacks. The contributions presented in this thesis are orthogonal to each other, and their synergy yields a system improved in terms of both privacy and performance. The result is a more attractive anonymization network for new and existing users, which in turn increases the security of all users by enlarging the anonymity set.
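The traffic-splitting defense can be illustrated with a toy weighted-random scheduler. This is only a sketch under assumed names (`split_cells`, uniform default weights); the strategy evaluated in the thesis is more sophisticated than a plain per-cell random choice:

```python
import random

def split_cells(cells, n_circuits, weights=None, seed=None):
    """Assign each outgoing cell to one of n_circuits entry circuits
    by weighted random choice, so that no single entry node observes
    the full traffic pattern. Hypothetical illustration, not Tor code."""
    rng = random.Random(seed)
    if weights is None:
        weights = [1.0 / n_circuits] * n_circuits
    routes = [[] for _ in range(n_circuits)]
    for cell in cells:
        idx = rng.choices(range(n_circuits), weights=weights)[0]
        routes[idx].append(cell)
    return routes
```

Each entry node then sees only a random subsequence of the cells, which is what degrades the flow patterns that WFP classifiers rely on.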

La controverse constitutionnelle grecque sur l’article 120 § 4 en période de crise. Réflexions sur la compétence controversée du peuple en tant qu’organe de l’État
Mavrouli, Roila UL

Doctoral thesis (2021)


This thesis examines the emergence of two Greek doctrinal discourses during the economic crisis of 2008 concerning the (un)constitutionality of the first austerity memorandum, following the European policies for negotiating public debt. The aim is to bring out the boundaries between the discourse of law, legal dogmatics and legal science, while identifying three levels of language. Doctrine, as an activity of understanding, explaining, creating and criticizing law, is distinct from knowledge of positive law. Yet sometimes, for fear that a sociological vision of law would deprive it of all predictability, doctrine withdraws into itself by founding its "science" and consequently claims knowledge of its object, the law. The question is thus whether the pro-memorandum doctrinal discourse, as much as the anti-memorandum one, is not descriptive but rather expresses values and states prescriptions. Or whether doctrine, not limiting itself to an activity of knowing its object, interprets and systematizes law in its creative role as a complementary source of law, in constant dialogue with case law and the legislator. Or again whether it can be characterized by a scientific element, namely the critical description of scientific, or purportedly scientific, activity about law. In this respect, the epistemological aim of this analysis is to show that legal science, today confronted with a crisis of the dominant positivist paradigm, leads one to consider either the need to change established dogmas, or the fact that the "anomaly" will not have succeeded in refuting the fecundity of the paradigm in place.

Mining App Lineages: A Security Perspective
Gao, Jun UL

Doctoral thesis (2021)


Direct inter-app code invocation in Android apps and its evolution: the Android ecosystem offers different facilities to enable communication among app components and across apps, so that rich services can be composed through functionality reuse. At the heart of this system is the inter-component communication (ICC) scheme, which has been studied extensively in the literature. Less known in the community is another powerful mechanism that allows direct inter-app code invocation, which opens up different reuse scenarios, both legitimate and malicious. In this dissertation, we expose the general workflow of this mechanism, which, beyond ICC, enables app developers to access and invoke functionality (entire Java classes, methods, or object fields) implemented in other apps using official Android APIs. We experimentally showcase how this reuse mechanism can be leveraged to "plagiarize" supposedly protected functionality. For example, we could leverage this mechanism to bypass security guards that a popular video broadcaster had placed to prevent access to its video database from outside its own app. We further contribute a static analysis toolkit, named DICIDer, for detecting direct inter-app code invocations in apps. Finally, we conduct an empirical analysis of the usage prevalence and evolution of this reuse mechanism.
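The invocation workflow relies on official Android APIs. As a rough illustration of how call sites of such APIs might be flagged, the following sketch scans decompiled (smali-like) text for marker strings; the marker names are real Android API identifiers, but DICIDer itself performs proper static analysis, not string matching, and this function name is hypothetical:

```python
# Real Android API names, used here only as grep-style markers.
INVOCATION_MARKERS = (
    "createPackageContext",           # obtain another app's Context
    "Ldalvik/system/DexClassLoader",  # load classes from another APK
    "getClassLoader",                 # retrieve that context's class loader
)

def find_candidate_invocations(smali_text):
    """Return (line_number, marker) pairs for lines mentioning an API
    commonly associated with direct inter-app code invocation."""
    hits = []
    for lineno, line in enumerate(smali_text.splitlines(), start=1):
        for marker in INVOCATION_MARKERS:
            if marker in line:
                hits.append((lineno, marker))
    return hits
```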

Unlawful Content Online: Towards A New Regulatory Framework For Online Platforms
Ullrich, Carsten UL

Doctoral thesis (2020)


The thesis reviews the online intermediary liability framework of the E-Commerce Directive (ECD, Articles 12-15) along two research questions. 1) Is the current legal framework regulating the content liability of online platforms under the ECD still adequate when it comes to combating illegal content? 2) Are there alternative models of intermediary regulation that are better suited to include internet intermediaries in the fight against illegal content? These questions were formulated against the premises that unlawful content online has been a persistent and growing problem and that the position of online intermediaries today makes enhanced responsibilities on their part necessary. The thesis analyses the nature of the enforcement challenges in the EU when trying to engage online platforms under the current liability framework, and charts out an alternative approach to holding online platforms responsible. Chapter 3 reviews the current intermediary framework in the EU and the horizontal challenges of holding internet intermediaries liable. This is analysed against the backdrop of the proliferation of the internet and online platforms, sketched out in the preceding Chapter 2. Owing to the ambiguity and outdatedness of the ECD provisions, on the one hand, and different national secondary liability traditions, on the other, the liability protections of online platforms have been interpreted and applied differently by EU Member States, and most importantly by courts, leading to an uneven and ineffective enforcement landscape. Chapter 4 analyses sectoral provisions that cover different kinds of offences related to unlawful content and their interactions with the ECD and national legislation on intermediary liability. The thesis evaluates enforcement efforts in the areas of defamation, hate speech, terrorist content, copyright, trademarks, product safety and food safety.
While none of the national (sectoral) approaches reviewed appears effective at enlisting intermediaries in the fight against unlawful content, the intermediaries themselves have built up powerful private enforcement systems of their own, which have come to rival and run counter to public interests and fundamental rights. Chapter 5 introduces case studies of online enforcement in the areas of product and food safety, based on interviews conducted with market surveillance authorities in the EU. The specific enforcement system of EU product regulation poses particular challenges, but also offers useful lessons for the framework proposed in Chapter 6. That framework eschews today's liability cornerstones and the reliance on self-regulatory tools favoured so far by EU and national legislators. Instead, it proposes an enhanced responsibility system based on harmonised technical standards, as used in the EU's New Approach regulatory method. Technical standards would define duty-of-care obligations in the guise of risk management approaches, which focus on defined (sectoral) harms arising from the business practices of online platforms. They incorporate prospective responsibilities, such as safety by design for user onboarding, user empowerment, or (algorithmic) content management, as well as specific retrospective responsibilities relating, for example, to notice and takedown or content identification systems. The standard can be adapted to the type of harm/violation, thus taking account of the specific fundamental rights and public interests involved at the sectoral level.

Pressure Sensing with Nematic Liquid Crystal and Carbon Nanotube Networks
Murali, Meenu UL

Doctoral thesis (2020)


The study of colloidal dispersions of nanoparticles in liquid crystals (LCs) is well established. In most works, the particles are mixed into the LC to form suspensions of well-dispersed particles. However, when nanoparticles are physically connected to form networks, the overall macroscopic properties of the ensemble are directly linked to the specific properties of the nanoparticles. Carbon nanotubes (CNTs) are excellent electrical conductors with an extremely high aspect ratio, which results in a very low concentration threshold for percolation. They therefore form conductive networks from extremely small amounts of CNTs. Another advantage of carbon nanotubes is their capability to transport large current densities without damage by electromigration, maintaining a stable resistance and providing scattering-free paths across several microns. Moreover, the electromechanical properties of CNTs make them an ideal candidate for pressure-sensing technology. The doctoral thesis presented here describes two different approaches to integrating and utilising CNTs in an LC matrix. The first is a template-based assembly of dispersed CNTs onto defect lines in LCs: we show that a variety of nanoparticles dispersed in an LC can be attracted and assembled onto an LC defect line generated at a predetermined location, thereby creating a vertical interconnect of nanoparticles. The second consists of CNT sheets mechanically drawn from a CNT forest, on top of which an LC cell is then built. In this case, we study the electrical and optical properties of the CNT sheets in the presence and absence of liquid crystals through DC electrical characterization with distributed electrical contacts. Finally, we discuss how these two approaches can be used to fabricate pressure-sensing devices.
The pressure response in both sensors stems from the change in resistance of the CNTs induced by structural variations under externally applied pressure. Both pressure sensors developed here are easy to fabricate, cost-effective, and recoverable owing to the elasticity and softness of the LC.

Boosting Automated Program Repair for Adoption By Practitioners
Koyuncu, Anil UL

Doctoral thesis (2020)


Automated program repair (APR) attracts huge interest from research and industry as the ultimate target in the automation of software maintenance. Towards realizing this promise, the research community has explored various ideas and techniques, which increasingly demonstrate that APR is no longer fictional. Although literature techniques constantly set new records in fixing a significant fraction of defects within well-established benchmarks, we are not aware of large-scale adoption of APR in practice. Meanwhile, open-source and commercial organizations have started to reflect on the potential of integrating some automated steps into the software development cycle. Indeed, current practice includes several development settings that use a number of tools to automate and systematize tasks such as code style checking, bug detection, and systematic patching. Our work is motivated by this fact. We advocate that a systematic and empirical exploration of current practice that leverages tools to automate debugging tasks would provide valuable insights for rethinking and boosting the APR agenda towards acceptance by developer communities. We have identified three investigation axes in this dissertation. First, mining software repositories to understand code change properties that could guide program repair. Second, analyzing communication channels in software development to assess to what extent they could be relevant in a real-world program repair scenario. Third, exploring generic concepts of patching in the literature to establish a common foundation for program repair pipelines that can be integrated into industrial settings.
This dissertation makes the following contributions to the community:
• An empirical study of tool support in a real development setting, providing concrete insights into the acceptance, stability and nature of the bugs fixed by manually-crafted patches versus tool-supported patches, and revealing opportunities for improving automated repair techniques.
• A novel information retrieval based bug localization approach that learns how to compute the similarity scores of various types of features.
• An automated mining strategy to infer fix patterns that can be integrated into automated program repair pipelines.
• A practical bug-report-driven program repair pipeline.
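The information-retrieval-based localization idea can be sketched with a minimal TF-IDF cosine ranker. The function name and scoring details here are illustrative only; the thesis approach learns how to combine many richer similarity features rather than using a single raw cosine score:

```python
import math
from collections import Counter

def tfidf_rank(bug_report, source_files):
    """Rank source files by TF-IDF cosine similarity (restricted to the
    bug report's terms) between the report and each file's text.
    Minimal sketch: no stemming, no learned feature weighting."""
    docs = {name: Counter(text.lower().split())
            for name, text in source_files.items()}
    query = Counter(bug_report.lower().split())
    n = len(docs)
    # Smoothed inverse document frequency for each query term.
    idf = {t: math.log(n / (1 + sum(t in d for d in docs.values()))) + 1
           for t in query}

    def score(d):
        num = sum((query[t] * idf[t]) * (d.get(t, 0) * idf[t]) for t in query)
        qn = math.sqrt(sum((query[t] * idf[t]) ** 2 for t in query))
        dn = math.sqrt(sum((d.get(t, 0) * idf[t]) ** 2 for t in query)) or 1.0
        return num / (qn * dn)

    return sorted(docs, key=lambda name: score(docs[name]), reverse=True)
```

Files sharing more (rarer) terms with the report rank higher, which is the basic signal that learned localization models build on.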

Tertium non datur: Various aspects of value-added (VA) models used as measures of educational effectiveness
Levy, Jessica UL

Doctoral thesis (2020)


Value-added (VA) models are measures of educational effectiveness that aim to capture the "value" added by teachers or schools to students' achievement, independent of the students' backgrounds. Statistically speaking, teacher or school VA scores are calculated as the part of an outcome variable that cannot be explained by the covariates in the VA model (i.e., the residual). Teachers or schools are classified as effective (or ineffective) if they have a positive (or negative) effect on students' achievement compared to a previously specified norm value. Although VA models have gained popularity in recent years, there is a lack of consensus concerning various aspects of VA scores. The present dissertation aims to shed light on these aspects, including the state of the art of VA research in the international literature, covariate choice, and model selection for the estimation of VA scores. In a first step, a systematic literature review was conducted in which 370 studies from 26 countries were classified, focusing on methodological issues (Study 1 of the present dissertation). The results indicated no consensus concerning the statistical model type applied (the majority used linear regression, followed by multilevel models). Concerning covariate choice, most studies used prior achievement as a covariate, cognitive and/or motivational student data were hardly considered, and there was no consensus on the inclusion or exclusion of students' background variables. Based on these findings, it was suggested that VA models are better suited to improving the quality of teaching than to accountability and decision-making purposes. Secondly, based on one of the open questions resulting from Study 1 (i.e., covariate choice), the aim of Study 2 was to systematically compare different covariate combinations in the estimation of school VA models.
Based on longitudinal data from primary school students participating in the Luxembourg School Monitoring Programme in Grades 1 and 3, three covariate sets were found to be essential when calculating school VA scores with math or language achievement as the dependent variable: prior language achievement, prior math achievement, and students' sociodemographic and sociocultural background. However, the evaluation of individual schools' effectiveness varied widely depending on the covariate set chosen, casting further doubt on the use of VA scores for accountability purposes. Thirdly, the aim of Study 3 was to investigate statistical model selection, as Study 1 had shown no consensus on which model types are most suitable for estimating VA scores, with the majority of studies applying linear regression or multilevel models. These classical linear models, along with nonlinear models and different types of machine learning models, were systematically compared with each other, with covariates held constant (based on the results of Study 2) across models. Multilevel models led to the most accurate prediction of students' achievement. However, as school VA scores varied depending on specific model choices, and as the results can be generalized only to a Luxembourgish sample, it was suggested that future research make the model selection process transparent and include different specifications in order to obtain ranges of potential VA scores. In conclusion, all three studies imply that the application of VA models for decision-making and accountability should be critically discussed and that VA scores should not be used as the sole measure for accountability or high-stakes decisions. Rather, VA scores are more suitable for informative purposes.
Thus, the findings of the present dissertation prepare the ground for future research in which schools with stably high VA scores can be investigated further (both qualitatively and quantitatively) to study their pedagogical strategies and learn from them.
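The residual-based definition of a school VA score can be made concrete with a small sketch on synthetic data (function name and data are hypothetical; the models in the dissertation also include sociodemographic covariates and, in the best-performing case, multilevel structure):

```python
import numpy as np

def school_va_scores(prior, outcome, school_ids):
    """School VA score = mean residual of the school's students after
    regressing outcome achievement on prior achievement (plus intercept).
    Minimal sketch; covariate set and model type matter in practice."""
    X = np.column_stack([np.ones_like(prior, dtype=float), prior])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    residuals = outcome - X @ beta
    return {s: float(residuals[school_ids == s].mean())
            for s in np.unique(school_ids)}
```

A positive score means the school's students outperform what their prior achievement predicts; changing the covariate set changes the residuals, and hence the ranking, which is exactly the instability Study 2 documents.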

A newform theory for Katz modular forms
Mamo, Daniel Berhanu UL

Doctoral thesis (2020)


In this thesis, a strong multiplicity one theorem for Katz modular forms is studied. We show that a cuspidal Katz eigenform which admits an irreducible Galois representation lies in the level and weight old space of a uniquely associated Katz newform. We also establish multiplicity one results for Katz eigenforms whose Galois representation is reducible.

Immersions of surfaces into SL(2,C) and into the space of geodesics of Hyperbolic space
El Emam, Christian UL

Doctoral thesis (2020)


This thesis treats two developments of the classical theory of hypersurfaces in pseudo-Riemannian space forms. The former - a joint work with Francesco Bonsante - is the study of immersions of smooth manifolds into holomorphic Riemannian space forms of constant curvature -1 (including SL(2,C) with a multiple of its Killing form): this leads to a Gauss-Codazzi theorem, suggests an approach to the holomorphic transitioning of immersions into pseudo-Riemannian space forms and a trick for constructing holomorphic maps into the PSL(2,C)-character variety, and leads to a restatement of Bers' theorem. The latter - a joint work with Andrea Seppi - is the study of immersions of n-manifolds into the space of geodesics of hyperbolic (n+1)-space. We give a characterization, in terms of the para-Kähler structure of this space of geodesics, of the Riemannian immersions that are Gauss maps of equivariant immersions into hyperbolic space.

van der Waals Dispersion Interactions in Biomolecular Systems: Quantum-Mechanical Insights and Methodological Advances
Stoehr, Martin UL

Doctoral thesis (2020)


Intermolecular interactions are paramount for the stability, dynamics and response of systems across chemistry, biology and materials science. In biomolecules they govern secondary structure formation, assembly, docking, regulation and functionality. van der Waals (vdW) dispersion contributes a crucial part of those interactions. As part of the long-range electron correlation, vdW interactions arise from Coulomb-coupled quantum-mechanical fluctuations in the instantaneous electronic charge distribution and are thus inherently many-body in nature. Common approaches to describing biomolecular systems (i.e., classical molecular mechanics) fail to capture the full complexity of vdW dispersion by adopting a phenomenological, atom-pairwise formalism. This thesis explores beyond-pairwise vdW forces and the collectivity of intrinsic electronic behaviors in biomolecular systems and discusses their role in the context of biomolecular processes and function. To this end, the many-body dispersion (MBD) formalism parameterized from density-functional tight-binding (DFTB) calculations is used. The investigation of simple molecular solvents, with particular focus on water, gives insights into the vdW energetics and electronic response properties in liquids and solvation as well as emergent behavior for coarse-grained models. A detailed study of intra-protein and protein-water vdW interactions highlights the role of many-body forces during protein folding and provides a fundamental explanation for the previously observed "unbalanced" description and over-compaction of disordered protein states. Further analysis of the intrinsic electronic behaviors in explicitly solvated proteins indicates a long-range persistence of electron correlation through the aqueous environment, which is discussed in the context of protein-protein interactions, long-range coordination, and biomolecular regulation and allostery.
Based on the example of a restriction enzyme, the potential role of many-body vdW forces and collective electronic behavior in the long-range coordination of enzymatic activity is discussed. Introducing electrodynamic quantum fluctuations into the classical picture of allostery opens the path to a more holistic view of biomolecular regulation beyond the traditional focus on merely local structural modifications. Building on top of the MBD framework, which describes vdW dispersion within the interatomic dipole limit, a practical extension to higher-order terms is presented. The resulting Dipole-Correlated Coulomb Singles account for multipolar as well as dispersion-polarization-like contributions beyond the random phase approximation by means of first-order perturbation theory over the dipole-coupled MBD state. It is shown that Dipole-Correlated Coulomb Singles become particularly relevant for larger systems and can alter qualitative trends in the long-range interaction under (nano-)confinement. Bearing in mind the frequent presence of confinement in biomolecular systems due to cellular crowding, in ion channels or for interfacial water, this so-far neglected contribution is expected to have broad implications for systems of biological relevance. Ultimately, this thesis introduces a hybrid approach combining DFTB and machine learning for the accurate description of large-scale systems on a robust, albeit approximate, quantum-mechanical level. The developed DFTB-NNrep approach combines the semi-empirical DFTB Hamiltonian with a deep tensor neural network model for localized many-body repulsive potentials. DFTB-NNrep provides a description of the energetic, structural and vibrational properties of a wide range of small organic molecules that is much superior to standard DFTB or machine learning alone.
Overall, this thesis aims to extend the current view of complex (bio)molecular systems as governed by local, (semi-)classical interactions, and develops methodological steps towards an advanced description and understanding that includes non-local interaction mechanisms enabled by quantum-mechanical phenomena such as long-range correlation forces arising from collective electronic fluctuations.
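The dipole-coupled fluctuation picture behind MBD can be sketched as diagonalizing a set of coupled harmonic oscillators, one three-dimensional oscillator per atom. This is a schematic version in arbitrary units with the bare dipole tensor; production MBD uses range-separated, self-consistently screened polarizabilities and frequencies:

```python
import numpy as np

def mbd_energy(coords, alpha0, omega):
    """Interaction energy of dipole-coupled oscillators: build the 3N x 3N
    coupling matrix, diagonalize, and compare the coupled zero-point
    energies with the uncoupled ones. Schematic MBD, bare dipole coupling."""
    N = len(coords)
    C = np.zeros((3 * N, 3 * N))
    for i in range(N):
        C[3*i:3*i+3, 3*i:3*i+3] = omega[i] ** 2 * np.eye(3)
        for j in range(i + 1, N):
            r = coords[i] - coords[j]
            d = np.linalg.norm(r)
            # Bare dipole-dipole interaction tensor.
            T = (np.eye(3) - 3.0 * np.outer(r, r) / d**2) / d**3
            blk = omega[i] * omega[j] * np.sqrt(alpha0[i] * alpha0[j]) * T
            C[3*i:3*i+3, 3*j:3*j+3] = blk
            C[3*j:3*j+3, 3*i:3*i+3] = blk.T
    eigvals = np.linalg.eigvalsh(C)
    # Coupled zero-point energy minus the 3 uncoupled modes per oscillator.
    return 0.5 * (np.sum(np.sqrt(eigvals)) - 3.0 * np.sum(omega))
```

The resulting energy is negative (attractive) and decays with separation, and because it comes from diagonalizing the full coupling matrix it contains all many-body orders at the dipole level, unlike a pairwise sum.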

End-to-end Signal Processing Algorithms for Precoded Satellite Communications
Krivochiza, Jevgenij UL

Doctoral thesis (2020)


The benefits of full frequency reuse in satellite communications include increased spectral efficiency, physical layer security, enhanced coverage, and improved Quality of Service. This is made possible by novel digital signal processing techniques for interference mitigation as well as signal predistortion in non-linear high-performance amplifiers. Advanced linear precoding and symbol-level precoding can jointly address the signal processing demands of next-generation satellite communications. Real-time signal precoding increases the computational load handled at the gateway, thus requiring low-complexity, high-performance algorithms. Additionally, extensive in-lab and field tests are required to increase the technology readiness level and the rate of industrial adoption. In this thesis, we focus on low-complexity precoding design and in-lab validation. We study the state-of-the-art linear and symbol-level precoding techniques and the multi-user MIMO test-beds available in the literature. First, we present a novel low-complexity algorithm for sum-power-minimization precoding design. This technique reduces the transmitted power in a multi-beam satellite system and improves the quality of the signal received at the user terminals. Next, we demonstrate an FPGA-accelerated high-throughput precoding design. The FPGA precoding design scales to different numbers of beams and operates in a real-time processing regime on a commercially available software-defined radio platform. One of the highlights of this research is the creation of a real-time in-lab precoding test-bed. The test-bed consists of a DVB-S2X precoding-enabled gateway prototype, a MIMO channel emulator, and user terminals. By transmitting and receiving the precoded signals over radio frequency, we can test the performance of different precoding techniques under realistic scenarios and channel impairments.
We demonstrate an end-to-end symbol-level precoded real-time transmission in which the user terminals acquire and decode the precoded signals, showing an increase in performance and throughput. The in-lab validations confirm the numerical results obtained in this work.
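The interference-mitigation role of linear precoding can be illustrated with the classical zero-forcing precoder, used here only as a minimal stand-in; the thesis works with sum-power-minimization and symbol-level designs rather than plain zero-forcing, and power normalization is omitted:

```python
import numpy as np

def zero_forcing_precoder(H):
    """Right pseudo-inverse precoder W = H^H (H H^H)^{-1} for a
    users-by-antennas channel matrix H: the effective channel H @ W
    becomes the identity, so each user sees only its own symbol."""
    return H.conj().T @ np.linalg.inv(H @ H.conj().T)
```

Applying W at the gateway pre-inverts the multi-beam channel, which is the basic mechanism that enables full frequency reuse across beams.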

MICROGLIA IN PARKINSON'S DISEASE: IDENTITY, HETEROGENEITY AND THEIR CONTRIBUTION TO NEURODEGENERATION
Uriarte Huarte, Oihane UL

Doctoral thesis (2020)


Parkinson´s disease (PD) is the most common movement disorder caused by dopamine deficiency owing to a loss of dopaminergic neurons within the substantia nigra (SN). So far, there is no cure available, hence understanding the mechanisms by which dopaminergic neurons degenerate is essential for the development of future treatment strategies. Recently, a potential role of neuroinflammation, and especially the activation of microglial cells in PD was suggested, not being secondary to neuronal death, but rather primarily implicated in PD pathogenesis. Hence, we have ventured in to study neuroinflammation and microglia activation in the context of PD using in vivo and in vitro mouse models. Firstly, we addressed microglial heterogeneity in the healthy nigrostriatal pathway, the primary circuit affected in PD. By using single-cell RNA sequencing, we have identified four different microglial immune subsets within the midbrain and the striatum. Notably, we were able to distinguish a microglial subset with an immune alerted phenotype, which was mainly composed of microglial cells from the midbrain. The transcriptomic identity of this subset resembled partially to the one of inflammatory microglia. Additionally, in situ morphological studies, such as 3D reconstruction, revealed that microglia located within the midbrain is less complex than microglia with a striatal origin. Secondly, we studied the potential role of neuroinflammation and microglia in PD progression by using a PD-like mouse model of a-synuclein (a-syn) seeding and spreading. In this study, pre-formed fibrils (PFF) were injected into the mice striatum, and a combined neuropathological and transcriptomic analysis was performed at two time points that have distinct and increasing levels and distribution of a-syn pathology across different brain regions (13 and 90 days post-injection). 
Interestingly, neuropathological quantifications at 90 days post-injection uncovered that neuroinflammation and microglial reactivity are linked to neurodegeneration. However, this pathology correlates neither with neurodegeneration nor with a-syn aggregation. Importantly, at 13 days post-injection, the transcriptomic analysis of the midbrain revealed the dysregulation of several inflammatory pathways and pointed to the overexpression of neurotoxic inflammatory mediators. Furthermore, at this time point, the presence of a-syn oligomers was detected in certain areas of the brain. Subsequently, we hypothesised that at early stages of PD pathogenesis, the presence of a-syn oligomeric forms induces a robust inflammatory response of microglia, which can be further associated with neurodegeneration. Thirdly, to understand whether a-syn oligomers are the main inducers of microglial activation, we further examined the microglial inflammatory response to other a-syn conformations, monomers and fibrils (PFF1 and PFF2). For that, BV2 and primary microglial cells were exposed to the a-syn moieties at different concentrations and incubation times. Electron microscopy depicted some heterogeneity across the synthetic a-syn fibrils, suggesting that PFF1 and PFF2 were composed of different structures. Then, microglial reactivity to a-syn monomers and fibrils was investigated by RT-PCR, and no specific response of microglia to a-syn was observed. Also, only one of the a-syn fibrils, PFF1, decreased microglial phagocytic activity and reduced the expression of Il1b by microglia after LPS stimulation. Concomitant with the findings in the a-syn seeding and spreading model, we attempted to elucidate the molecular profile of microglia associated with neurodegeneration. In this particular study, RNA sequencing was performed on isolated microglial cells at an early stage of pathology progression.
In contrast to our previous results, no differences in the microglial profile were found between the PFF and the control mice. Lastly, we investigated potential neuroprotective mechanisms associated with the counter-regulation of microglial reactivity. Considering previous observations that microglia express dopaminergic receptors, we further investigated whether apomorphine, a dopamine agonist with anti-oxidant properties, could govern microglial activation. The effect of apomorphine enantiomers was analysed in primary microglia cultures activated by exposure to mutated A53T monomeric a-syn. Herein, we demonstrated that microglial activation can be dampened by apomorphine via the recruitment of Nrf2 to the nucleus, which results in a decreased release of proinflammatory mediators, such as TNFa or PGE2. Taken together, this study provides an additional characterisation of neuroinflammation and microglial cells in the context of PD, which ultimately contributes to a better understanding of their relationship with neurodegeneration.

Full Text
Foundations of an Ethical Framework for AI Entities: the Ethics of Systems
Dameski, Andrej UL

Doctoral thesis (2020)

The field of AI ethics has, during the current and previous decade, been receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various other organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to the two main research questions: first, what theory can explain moral scenarios in which AI entities are participants?; and second, what theory can explain the process of moral reasoning, decision and action, for AI entities in virtual, simulated and real-life moral scenarios? This thesis answers these two research questions with its two main contributions to the field of AI ethics: a substantial (ethico-philosophical) contribution and a methodological one. The substantial contribution is a coherent and novel theory named the Ethics of Systems Framework, as well as a possible inception of a new field of study: ethics of systems. The methodological contribution is the creation of its main methodological tool, the Ethics of Systems Interface. The second part of the research effort was focused on testing and demonstrating the capacities of the Ethics of Systems Framework and Interface in modeling and managing moral scenarios in which AI and other entities participate. Further work can focus on building on top of the foundations of the Framework provided here, increasing the scope of moral theories and simulated scenarios, improving the level of detail and parameters to reflect real-life situations, and field-testing the Framework on actual AI systems.

Full Text
Instruction Coverage for Android App Testing and Tuning
Pilgun, Aleksandr UL

Doctoral thesis (2020)

For many people, mobile apps have already become an indispensable part of modern life. Apps entertain, educate, assist us in our daily routines and help us connect with others. However, the advanced capabilities of modern devices running the apps, together with the sensitive user data they hold, also make mobile devices an attractive attack target. To get access to sensitive data, adversaries tend to conceal malicious functionality in freely distributed, legitimate-looking apps. The problem of low-quality and malicious apps, spreading at an enormous scale, is especially relevant for one of the biggest software repositories – Google Play. The Android apps distributed through this platform undergo a validation process by Google. However, that is insufficient to confirm their good nature. To identify dangerous apps, novel frameworks for testing and app analysis are being developed by the Android community. Code coverage is one of the most common metrics for evaluating the effectiveness of these frameworks, and some of them use it as an internal metric to guide code exploration. However, when analyzing apps without source code, the Android community relies mostly on method coverage, since there are no reliable tools for measuring finer-grained code coverage in third-party Android app testing. Another stumbling block for testing frameworks is the inability to test an app exhaustively. While code coverage measurement can indicate an improvement in testing, it is neither possible to reach 100% coverage nor to identify the maximum reachable coverage value for the app. Despite testing, the app still contains large amounts of unexecuted code, which makes it impossible to confirm the absence of potentially malicious code in the parts of the app that have not been tested. Existing static debloating approaches aim at app size minimization rather than security and simply debloat unreachable code. However, there is currently no approach to debloat apps based on dynamic analysis information, i.e., to cut out unexecuted code.
In this dissertation, we solve these two problems by, first, proposing an efficient approach and a tool to measure code coverage at the instruction level and, second, a dynamic binary shrinking methodology for deleting unexecuted code from the app. We support our solutions with the following contributions:
- An instrumentation approach to measure code coverage at the instruction level. Our technique instruments the smali representation of Android bytecode to allow code coverage measurement at the finest level.
- An implementation of the instrumentation approach. ACVTool is a self-contained package containing 4K lines of Python code. It is publicly available and can be integrated into different testing frameworks.
- An extensive empirical evaluation that shows the high reliability and versatility of our approach. ACVTool successfully executes on 96.9% of apps from our dataset, introduces negligible instrumentation-time and runtime overheads, and its results are consistent with those of the JaCoCo (source code coverage) and Ella (method coverage) tools.
- A detailed study on the influence of code coverage metric granularity on automated testing. We demonstrate the usefulness of ACVTool for automated testing techniques that rely on code coverage data in their operation.
- A dynamic debloating approach based on ACVTool instruction coverage. We propose the Dynamic Binary Shrinking System, a novel methodology created to shrink third-party Android apps towards observed benign functionality on executed code.
- An implementation of the dynamic debloating technique incorporated into the ACVCut tool. The tool demonstrates the viability of the Dynamic Shrinking System on two examples. It allows us to cut out unexecuted code and, thus, provide 100% instruction coverage on explored app behaviors.
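The probe-per-instruction idea behind instruction-level coverage can be illustrated with a small sketch. This is not ACVTool's actual smali instrumentation; it is a hypothetical toy in which each "instruction" of a method gets a probe that flips a bit in a coverage bitmap when executed:

```python
# Toy probe-based instruction coverage: every instruction is wrapped so
# that executing it sets a bit in a coverage bitmap (illustrative model,
# not ACVTool's smali transformation).

def instrument(instructions):
    hits = [False] * len(instructions)
    def run(x):
        for i, ins in enumerate(instructions):
            hits[i] = True               # the inserted probe
            x = ins(x)
            if x is None:                # early return: the rest stays uncovered
                break
        return x
    return run, hits

# A toy "method" of four straight-line instructions.
method = [lambda x: x * 2,
          lambda x: x + 1,
          lambda x: None if x < 10 else x,   # early exit for small inputs
          lambda x: x - 10]

run, hits = instrument(method)
run(1)                                        # small input takes the early exit
print(f"coverage: {sum(hits)}/{len(hits)}")   # coverage: 3/4
run(20)                                       # a second run accumulates coverage
print(f"coverage: {sum(hits)}/{len(hits)}")   # coverage: 4/4
```

Accumulating the bitmap across runs is what lets a fuzzer or test generator see whether a new input reached previously unexecuted instructions.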

Full Text
Scalable Control of Asynchronous Boolean Networks
Su, Cui UL

Doctoral thesis (2020)

Direct cell reprogramming has been garnering attention for its therapeutic potential for treating the most devastating diseases characterised by defective cells or a deficiency of certain cells. It is capable of reprogramming any kind of abundant cells in the body into the desired cells to restore functions of the diseased organ. It has shown promising benefits for clinical applications, such as cell and tissue engineering, regenerative medicine and drug discovery. A major obstacle in the application of direct cell reprogramming lies in the identification of effective reprogramming factors. Experimental approaches are usually laborious, time-consuming and enormously expensive. Mathematical modelling of biological systems paves the way to study mechanisms of biological processes and identify therapeutic targets with computational reasoning and tools. Among several modelling frameworks, Boolean networks have apparent advantages. They provide a qualitative description of biological systems and thus evade the parametrisation problem, which often occurs in quantitative models. In this thesis, we focus on the identification of reprogramming factors based on asynchronous Boolean networks. This problem is equivalent to the control of asynchronous Boolean networks: finding a subset of nodes, whose perturbations can drive the dynamics of the network from the source state (the initial cell type) to the target attractor (the desired cell type). Before diving into the control problems, we first develop a near-optimal decomposition method and use this method to improve the scalability of the decomposition-based method for attractor detection. The new decomposition-based attractor detection method can identify all the exact attractors of the network efficiently, such that we can select the proper attractors corresponding to the initial cell type and the desired cell type as the source and target attractors and predict the key nodes for the conversion. 
Depending on whether the source state is given or not, we have two control problems: source-target control and target control. We develop several methods to solve the two problems using different control strategies. All the methods are implemented in our software CABEAN. Given a control problem, CABEAN can provide a rich set of realistic solutions that manipulate the dynamics in different ways, such that biologists can select suitable ones to validate with biological experiments. We believe our work can contribute to a better understanding of the regulatory mechanisms of biological processes and greatly facilitate the development of direct cell reprogramming.
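The source-target control problem can be made concrete on a toy example. The sketch below uses a hypothetical two-gene toggle switch, not CABEAN's decomposition-based algorithms: it enumerates node-flip perturbations of the source state and keeps those after which the target fixed point is the only reachable attractor (for simplicity, only fixed-point attractors are handled):

```python
from itertools import combinations

# Toy asynchronous Boolean network (hypothetical two-gene toggle switch):
# x_A' = not x_B,  x_B' = not x_A.  Its attractors are the fixed points
# (1, 0) and (0, 1).
RULES = [lambda s: int(not s[1]),    # update rule for node A
         lambda s: int(not s[0])]    # update rule for node B

def successors(state):
    """Asynchronous semantics: one node updates at a time."""
    succ = []
    for i, f in enumerate(RULES):
        v = f(state)
        if v != state[i]:
            succ.append(state[:i] + (v,) + state[i + 1:])
    return succ or [state]           # a fixed point loops to itself

def reachable(state):
    seen, stack = {state}, [state]
    while stack:
        for t in successors(stack.pop()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def is_fixed_point(state):
    return successors(state) == [state]

def control_sets(source, target):
    """Node subsets whose flip makes `target` the only reachable attractor."""
    for k in range(len(source) + 1):
        for nodes in combinations(range(len(source)), k):
            s = list(source)
            for i in nodes:
                s[i] ^= 1
            attractors = {x for x in reachable(tuple(s)) if is_fixed_point(x)}
            if attractors == {target}:
                yield nodes

print(list(control_sets((1, 0), (0, 1))))   # [(0, 1)]: both genes must be flipped
```

Note that flipping a single gene here leaves both attractors reachable under the nondeterministic asynchronous updates, which is why the minimal guaranteed perturbation touches both nodes; real networks make this search the hard, scalability-critical step.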

Full Text
A multifaceted formal analysis of end-to-end encrypted email protocols and cryptographic authentication enhancements
Vazquez Sandoval, Itzel UL

Doctoral thesis (2020)

Largely owing to cryptography, modern messaging tools (e.g., Signal) have reached a considerable degree of sophistication, balancing advanced security features with high usability. This has not been the case for email, which, however, remains the most pervasive and interoperable form of digital communication. As sensitive information (e.g., identification documents, bank statements, or the message in the email itself) is frequently exchanged by this means, protecting the privacy of email communications is a justified concern which has been emphasized in recent years. A great deal of effort has gone into the development of tools and techniques for providing email communications with privacy and security, requirements that were not originally considered. Yet, drawbacks across several dimensions hinder the development of a global solution that would strengthen security while maintaining the standard features that we expect from email clients. In this thesis, we present improvements to the security of email communications. Relying on formal methods and cryptography, we design and assess security protocols and analysis techniques, and propose enhancements to implemented approaches for end-to-end secure email communication. In the first part, we propose a methodical process relying on code reverse engineering, which we use to abstract the specifications of two end-to-end security protocols from a secure email solution (called pEp); then, we apply symbolic verification techniques to analyze these protocols with respect to privacy and authentication properties. We also introduce a novel formal framework that enables a system's security analysis aimed at detecting flaws caused by possible discrepancies between the user's and the system's assessment of security.
Security protocols, along with user perceptions and interaction traces, are modeled as transition systems; socio-technical security properties are defined as formulas in computation tree logic (CTL), which can then be verified by model checking. Finally, we propose a protocol that aims at securing a password-based authentication system designed to detect the leakage of a password database resulting from a code-corruption attack. In the second part, the insights gained by the analysis in Part I allow us to propose both theoretical and practical solutions for improving security and usability aspects, primarily of email communication, but from which secure messaging solutions can benefit too. The first enhancement concerns the use of password-authenticated key exchange (PAKE) protocols for entity authentication in peer-to-peer decentralized settings, as a replacement for out-of-band channels; this brings provable security to the so-far empirical process, and enables the implementation of further security and usability properties (e.g., forward secrecy, secure secret retrieval). A second idea refers to the protection of weak passwords at rest and in transit, for which we propose a scheme based on the use of a one-time password; furthermore, we consider potential approaches for improving this scheme. The research presented here was conducted as part of an industrial partnership between SnT/University of Luxembourg and pEp Security S.A.
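The model-checking step can be sketched in a few lines: an explicit-state transition system with CTL operators computed as backward fixed points. The four-state model below is purely illustrative (it is not one of the pEp protocol models), with state 3 standing for a "secure" outcome and state 2 for a trap in which the user is misled:

```python
# Minimal explicit-state CTL checking: EF as a least fixed point over the
# existential preimage, AG derived by duality.  The model is a toy, not a
# protocol from the thesis.
TRANSITIONS = {
    0: {1, 2},   # start: the user may proceed correctly (1) or be misled (2)
    1: {3},      # correct interaction leads to the secure state
    2: {2},      # trap state: the mistake cannot be recovered
    3: {3},      # secure state, absorbing
}

def pre(states):
    """All states with at least one successor in `states`."""
    return {s for s, succ in TRANSITIONS.items() if succ & states}

def ef(goal):
    """EF goal: states from which some path eventually reaches `goal`."""
    reach = set(goal)
    while True:
        new = reach | pre(reach)
        if new == reach:
            return reach
        reach = new

def ag(good):
    """AG good: on all paths, always `good` (complement of EF(not good))."""
    return set(TRANSITIONS) - ef(set(TRANSITIONS) - good)

print(sorted(ef({3})))         # [0, 1, 3]: from state 2 security is unreachable
print(sorted(ag({0, 1, 3})))   # [1, 3]: only these guarantee never entering 2
```

The socio-technical twist in the framework is that "good" can encode the user's belief as well as the system's actual state, so a mismatch between the two shows up as a failed CTL formula.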

Emotion Regulation and Perceived Competence in Dyslexia and ADHD: Analyzing Predictors of Academic and Mental Health Outcomes in Adolescents
Battistutta, Layla UL

Doctoral thesis (2020)

Youths with dyslexia and ADHD are at risk of developing not only academic but also mental health problems. As these negative outcomes are, however, not found equally among all adolescents with dyslexia or ADHD, this dissertation aimed at getting a better understanding of certain predictors and/or consequences of two mediating self-regulating mechanisms. Whereas study 1 focused on perceived competence as an important contributor to academic success or failure, studies 2, 3 and 4 analyzed the role of emotion regulation (ER) in the development of psychopathological symptoms. Study 1 showed that, within a group of adolescents with dyslexia, adolescents with a late diagnosis hold lower general and academic perceived competency beliefs, with potential negative outcomes for their academic careers. Study 2 gave a first insight into ER in dyslexia and revealed that while dyslexia might not be directly associated with ER difficulties, higher ADHD symptoms contribute to more ER difficulties not only in youths with clinical ADHD but also in youths with dyslexia. These findings were taken a step further in study 3, which showed that ER difficulties mediate the association between ADHD symptoms and further anxiety, depression and conduct disorder symptoms for youths with dyslexia, ADHD and comorbid dyslexia/ADHD. Moreover, study 4 demonstrated that underlying working memory deficits and, to a lesser extent, attentional control and inhibitory deficits are linked with ADHD symptoms, which in turn are associated with ER difficulties and further anxiety and depression symptoms. The findings are discussed within the larger context of perceived competence, ER, and academic and psycho-social outcomes, and potential implications for the conceptualization, diagnosis, prevention and treatment of these disorders are considered.

Full Text
Joint Design of User Scheduling and Precoding in Wireless Networks: A DC Programming Approach
Bandi, Ashok UL

Doctoral thesis (2020)

These scenarios are of practical relevance and are already being considered in current and upcoming standards, including 4G and 5G. This thesis begins, in chapter 1, by presenting in detail the necessity of the joint design of scheduling and precoding for the aforementioned scenario. Further, the coupled nature of scheduling and precoding that prevails in many other designs is discussed. Following this, a detailed survey of the literature dealing with the joint design is presented. In chapter 2, the joint design of scheduling and precoding is investigated in the unicast scenario for multiuser MISO downlink channels, for network utility optimization considering sum-rate, max-min SINR, and power. Thereafter, different challenges in terms of the problem formulation and subsequent reformulations for the different metrics are discussed. Different algorithms, each focusing on optimizing the corresponding metric, are proposed, and their performance is evaluated through numerical results. In chapter 3, the joint design of user grouping, group scheduling, user scheduling, and precoding is considered for MGMC. In contrast to chapter 2, the optimization of a novel metric called multicast energy efficiency (MEE) is considered. This new paradigm for joint design in MGMC poses several additional challenges that cannot be dealt with by the design in chapter 2. Therefore, towards addressing these additional challenges, a novel algorithm is proposed for MEE maximization, and its efficacy is presented through simulations. In chapters 2 and 3, the joint design is considered within a given transmit slot, and temporal design is not considered. In chapter 4, the joint design of scheduling and precoding is considered over a block of multiple time slots for a unicast scenario. In contrast to the single-slot design, the multi-slot joint design makes it possible to address users' latency directly in terms of time slots.
Noticing this, joint design across multiple slots is considered with the objective of minimizing the number of slots needed to serve all the users subject to users' QoS and latency constraints. Further, this multi-slot joint design problem is modeled as a structured group sparsity problem. Finally, by rendering the problem as a DC program, high-quality stationary points are obtained through an efficient CCP-based algorithm. In chapter 5, the joint scheduling and precoding schemes proposed in the previous chapters are applied to satellite systems. Finally, the thesis concludes with the main research findings and the identification of new research challenges in chapter 6.
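The convex-concave procedure (CCP) used for such DC programs can be shown on a one-dimensional toy problem (illustrative only, not the thesis's joint scheduling and precoding design): write f = g - h with g and h convex, linearize h around the current iterate, and minimize the convex surrogate.

```python
# Toy CCP iteration for the DC program  minimize f(x) = g(x) - h(x)
# with g(x) = x**4 and h(x) = x**2 (both convex).  At each step the
# concave part -h is linearized at x_k, and the convex surrogate
# g(x) - h'(x_k) * x is minimized exactly.

def ccp(x, iters=50):
    for _ in range(iters):
        slope = 2 * x                       # h'(x_k)
        # argmin_x  x**4 - slope * x  solves  4 * x**3 = slope  in closed form
        x = (slope / 4) ** (1 / 3) if slope >= 0 else -((-slope / 4) ** (1 / 3))
    return x

x_star = ccp(1.0)
print(round(x_star, 4))   # 0.7071, i.e. 1/sqrt(2): f'(x) = 4x^3 - 2x = 0 there
```

Each surrogate upper-bounds f and touches it at the iterate, so the objective decreases monotonically and the sequence converges to a stationary point; in the thesis this machinery is applied to the (much larger) group-sparse scheduling problem rather than a scalar toy.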

Problems in nonequilibrium fluctuations across scales: A path integral approach
Cossetto, Tommaso UL

Doctoral thesis (2020)

In this thesis we study stochastic systems evolving through Markov jump processes. In a first work, we discuss different representations of the stochastic evolution: the master equation, the generalized Langevin equation, and their path integrals. The description is used to derive the generating functions for out-of-equilibrium observables, together with the typical approximation techniques. In a second work, the path integral is used to enforce thermodynamic consistency across scales. The description of identical units with all-to-all interactions is reduced from a micro- to a meso- to a macroscopic level. A suitable scaling of the dynamics and of the thermodynamic observables makes it possible to preserve the thermodynamic structure at the different levels. In a third work, we focus on the large deviation properties of chemical networks. The path integral allows us to compute the dominant trajectories that constitute macroscopic fluctuations. For bi-stable systems, the existence of multiple macroscopic contributions results in a phase transition for the macroscopic current. In a fourth work, we study the response of such chemical currents to external perturbations. Out of equilibrium, the system can display negative differential response, a feature that offers different strategies to minimize external or internal disturbances. Finally, in a fifth work, we start from a quantum system in which part of the system can be traced out to act as multiple reservoirs at different temperatures. Using the Schwinger-Keldysh contour and Green's functions, we obtain the generating function for the different parts of the Hamiltonian. The statistics of thermodynamic observables is accessible even in the strong coupling regime, while the semi-classical approximation is in agreement with the classical counterpart.
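Trajectories of the Markov jump processes studied here can be sampled exactly with Gillespie's stochastic simulation algorithm. The sketch below uses a minimal birth-death chemical network chosen for illustration (it is not one of the thesis's models): waiting times are exponential in the total rate, and the next reaction is drawn in proportion to its propensity.

```python
import random

# Gillespie simulation of the birth-death network
#   0 -> X  at rate K1,      X -> 0  at rate K2 * n,
# whose stationary distribution is Poisson with mean K1 / K2.
K1, K2 = 10.0, 1.0

def gillespie(n, t_end, seed=0):
    rng = random.Random(seed)
    t, trace = 0.0, [(0.0, n)]
    while t < t_end:
        a_birth, a_death = K1, K2 * n      # reaction propensities
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)      # waiting time ~ Exp(total rate)
        n += 1 if rng.random() < a_birth / a_total else -1
        trace.append((t, n))
    return trace

trace = gillespie(0, 100.0)
mean_n = sum(n for _, n in trace) / len(trace)
print(mean_n)   # fluctuates around the macroscopic steady state K1 / K2 = 10
```

The same jump trajectories are exactly those weighted by the path-integral action, so such simulations provide a direct numerical check on generating-function and large-deviation calculations.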

Full Text
Homomorphic encryption and multilinear maps based on the approximate-GCD problem
Lima Pereira, Hilder Vitor UL

Doctoral thesis (2020)

Cryptographic schemes are constructed on top of problems that are believed to be hard. In particular, recent advanced schemes, such as homomorphic primitives and obfuscators, use the approximate greatest common divisor (AGCD) problem, which is simple to describe and easy to implement, since it requires neither complex algebraic structures nor hard-to-sample probability distributions. However, in spite of its simplicity, the AGCD problem generally yields inefficient schemes, usually with large ciphertext expansion. In this thesis, we analyze the AGCD problem and several existing variants thereof, and propose a new attack on the multi-prime AGCD problem. Then, we propose two new variants: 1. The vector AGCD problem (VAGCD), in which AGCD instances are represented as vectors and randomized with a secret random matrix; 2. The polynomial randomized AGCD problem (RAGCD), which consists of representing AGCD samples as polynomials and randomizing them with a secret random polynomial. We show that these new variants cannot be easier than the original AGCD problem and that all the known attacks, when adapted to the VAGCD and RAGCD problems, are more expensive both in terms of time and of memory, allowing us to choose smaller parameters and to improve the efficiency of the schemes that use the AGCD as the underlying problem. Thus, by combining techniques from multilinear maps and indistinguishability obfuscation with the VAGCD problem, we provide the first implementation of an N-party non-interactive key exchange resistant against all known attacks. Still aiming to show that the VAGCD problem can lead to performance improvements in cryptographic primitives, we use it to construct a homomorphic encryption scheme that can natively and efficiently operate with vectors and matrices. For instance, for 100 bits of security, we can perform a sequence of 128 homomorphic products between 128-dimensional vectors and 128x128 matrices in less than one second.
We also use our scheme in two applications: homomorphic evaluation of nondeterministic finite automata and a Naïve Bayes classifier. Finally, using the RAGCD problem, we construct a new homomorphic scheme for polynomials, and we propose new fast bootstrapping procedures for fully homomorphic encryption (FHE) schemes over the integers. With these, we can for the first time bootstrap AGCD-based FHE schemes in less than one second on a common personal computer. To the best of our knowledge, only FHE schemes based on the LWE problem previously had sub-second bootstrapping procedures, while AGCD-based schemes required several seconds or even minutes to be bootstrapped.
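The appeal of the AGCD problem, plain integer arithmetic with no algebraic machinery, is visible in a toy DGHV-style bit encryption (the classic FHE-over-the-integers construction that this line of work builds on). The parameters below are illustrative only and far too small to be secure:

```python
import secrets

# Toy DGHV-style bit encryption resting on the AGCD assumption:
# samples of the form p*q + noise hide the secret integer p.
P_BITS, Q_BITS, R_BITS = 64, 96, 8

def keygen():
    p = secrets.randbits(P_BITS) | 1    # secret key: an odd integer
    return p | (1 << (P_BITS - 1))      # force full bit length

def encrypt(p, m):                      # m is a bit, 0 or 1
    q = secrets.randbits(Q_BITS)
    r = secrets.randbits(R_BITS)        # noise, must stay much smaller than p
    return p * q + 2 * r + m            # an AGCD sample carrying the bit

def decrypt(p, c):
    return (c % p) % 2                  # strip p*q, then the even noise

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)
print(decrypt(p, c0), decrypt(p, c1))   # 0 1
print(decrypt(p, c0 + c1))              # 1: ciphertext addition is XOR
print(decrypt(p, c0 * c1))              # 0: ciphertext multiplication is AND
```

Each multiplication roughly squares the noise term, and decryption fails once the noise outgrows p; that noise growth is exactly what the bootstrapping procedures discussed above are designed to reset.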

Full Text
Analysis, Detection, and Prevention of Cryptographic Ransomware
Genç, Ziya Alper UL

Doctoral thesis (2020)

Cryptographic ransomware encrypts files on a computer system, thereby blocking access to the victim's data until a ransom is paid. The quick return in revenue, together with the practical difficulties in accurately tracking the cryptocurrencies used by victims to perform the ransom payment, have made ransomware a preferred tool for cybercriminals. In addition, exploiting zero-day vulnerabilities found in Windows Operating Systems (OSs), the most widely used OS on desktop computers, has enabled ransomware to extend its threat and have detrimental effects at a world-wide level. For instance, WannaCry and NotPetya have affected almost all countries and impacted organizations, and the latter alone caused damage costing more than $10 billion. In this thesis, we conduct a theoretical and experimental study of cryptographic ransomware. In the first part, we explore the anatomy of a ransomware and, in particular, analyze the key management strategies employed by notable families. We verify that, for long-term success, ransomware authors must acquire good random numbers to seed Key Derivation Functions (KDFs). The second part of this thesis analyzes the security of current anti-ransomware approaches, both in the academic literature and in real-world systems, with the aim of anticipating how future generations of ransomware will work, in order to start planning how to stop them. We argue that among them there will be some that will try to defeat current anti-ransomware; thus, we can speculate on their working principles by studying the weak points in the strategies that six of the most advanced anti-ransomware tools currently implement. We support our speculations with experiments, proving at the same time that those weak points are in fact vulnerabilities and that the future ransomware that we have imagined can be effective.
Next, we analyze existing decoy strategies and discuss how effective they are in countering current ransomware, defining a set of metrics to measure their robustness. To demonstrate how ransomware can identify existing deception-based detection strategies, we implement a proof-of-concept decoy-aware ransomware that successfully bypasses decoys by using a decision engine with few rules. We also discuss existing issues in decoy-based strategies and propose practical solutions to mitigate them. Finally, we look for vulnerabilities in antivirus (AV) programs, the de facto security tools installed on computers against cryptographic ransomware. In our experiments with 29 consumer-level AVs, we discovered two critical vulnerabilities. The first one consists in simulating mouse events to control AVs, namely sending them mouse “clicks” to deactivate their protection. We prove that 14 out of 29 AVs can be disabled in this way, and we call this class of attacks Ghost Control. The second one consists in controlling whitelisted applications, such as Notepad, by sending them keyboard events (such as “copy-and-paste”) to perform malicious operations on behalf of the malware. We prove that the anti-ransomware protection feature of AVs can be bypassed if we use Notepad as a “puppet” to rewrite the content of protected files as a ransomware would do. Playing with the words, and recalling the cat-and-mouse game, we call this class of attacks Cut-and-Mouse. In the third part of the thesis, we propose a strategy to mitigate cryptographic ransomware attacks. Based on our insights from the first part of the thesis, we present UShallNotPass, which works by controlling access to secure randomness sources, i.e., Cryptographically Secure Pseudo-Random Number Generator (CSPRNG) Application Programming Interfaces (APIs).
We tested UShallNotPass against 524 real-world ransomware samples and observed that it stops 94% of them, including WannaCry, Locky, CryptoLocker and CryptoWall. Remarkably, it also nullifies NotPetya, the offspring of the family which so far has eluded all defenses. Next, we present NoCry, which shares the same defense strategy but implements an improved architecture. We show that NoCry is more secure (with components that are not vulnerable to known attacks), more effective (with fewer false negatives in the class of ransomware addressed) and more efficient (with a minimal false positive rate and negligible overhead). To confirm that the new architecture works as expected, we tested NoCry against a new set of 747 ransomware samples, of which NoCry could stop 97.1%, bringing its security and technological readiness to a higher level. Finally, in the fourth part, we present the potential future of cryptographic ransomware. We identify possible new ransomware targets inspired by cybersecurity incidents that occurred in real-world scenarios. In this respect, we describe the threats that ransomware may pose by targeting critical domains, such as the Internet of Things and socio-technical systems, which would worrisomely amplify the effectiveness of ransomware attacks. Next, we looked into whether ransomware authors re-use the work of others, available on public platforms and repositories, and produce insecure code (which might enable the building of decryptors). By methodically reverse-engineering malware executables, we found that, out of 21 ransomware samples, 9 contain copy-pasted code from public resources. From this fact, we recall critical cases of code disclosure in the recent history of ransomware and reflect on the dual-use nature of this research by arguing that ransomware are components in cyber-weapons.
We conclude by discussing the benefits and limits of cyber-intelligence and counter-intelligence strategies that could be used against this threat.
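The randomness-gating idea behind UShallNotPass can be illustrated with a minimal sketch: a guard that only serves CSPRNG requests to whitelisted callers, so that an unauthorized binary cannot derive strong encryption keys. The whitelist, exception name, and function shapes below are illustrative assumptions, not the tool's actual implementation.

```python
import os

# Hypothetical whitelist of processes authorized to call CSPRNG APIs.
AUTHORISED = {"chrome.exe", "ssh", "gpg"}

class AccessDenied(Exception):
    """Raised when a non-whitelisted caller requests secure randomness."""

def guarded_csprng(caller: str, nbytes: int) -> bytes:
    """Serve secure random bytes only to whitelisted callers.

    Sketch of the UShallNotPass idea: unauthorized processes (e.g. a
    fresh ransomware binary) are denied secure randomness and therefore
    cannot generate strong encryption keys.
    """
    if caller not in AUTHORISED:
        raise AccessDenied(f"{caller} may not use the CSPRNG")
    return os.urandom(nbytes)

key = guarded_csprng("gpg", 32)  # whitelisted caller: 32 random bytes
try:
    guarded_csprng("cryptolocker.exe", 32)
except AccessDenied:
    print("blocked")  # the unauthorized request is intercepted
```

In the real system the interception happens at the level of the operating system's CSPRNG APIs rather than a Python wrapper; the sketch only conveys the access-control policy.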

Machine Learning Techniques for Suspicious Transaction Detection and Analysis
Camino, Ramiro Daniel UL

Doctoral thesis (2020)

Financial services must monitor their transactions to prevent their being used for money laundering and to combat the financing of terrorism. Initially, organizations in charge of fraud regulation were only concerned with financial institutions such as banks. Nowadays, however, the Fintech industry, online businesses, and platforms involving virtual assets can also be affected by similar criminal schemes. Regardless of the differences between these entities, the malicious activities affecting them share many common patterns. This dissertation's first goal is to compile and compare existing studies involving machine learning to detect and analyze suspicious transactions. The second goal is to synthesize methodologies from these studies for tackling different use cases in an organized manner. Finally, the third goal is to assess the applicability of deep generative models for enhancing existing solutions. In the first part of the thesis, we propose an unsupervised methodology for detecting suspicious transactions and apply it to two case studies: one involving transactions from a money remittance network, the other a novel payment network based on distributed ledger technologies. Anomaly detection algorithms are applied to rank user accounts based on recency, frequency, and monetary features. The results are manually validated by domain experts, confirming known scenarios and uncovering unexpected new cases. In the second part, we carry out an analogous analysis employing supervised methods, along with a case study in which we classify Ethereum smart contracts into honeypots and non-honeypots. We take features from the source code, the transaction data, and the characterization of the funds' flow. The proposed classification models prove to generalize well to unseen honeypot instances and techniques, and allow us to characterize previously unknown techniques.
In the third part, we analyze the challenges that tabular data, the type of data used to represent financial transactions in the previous two parts, brings into the domain of deep generative models. We propose a new model architecture by adapting state-of-the-art methods to output multiple variables with mixed-type distributions. Additionally, we extend the evaluation metrics used in the literature to the multi-output setting and show empirically that our approach outperforms existing methods. Finally, in the last part, we extend the work from the third part by applying the presented models to enhance the classification tasks from the second part, which commonly exhibit severe class imbalance. We introduce a multi-input architecture to complement our previously proposed multi-output architecture. We compare three techniques to sample from deep generative models, defining a transparent and fair large-scale experimental protocol along with informative visual analysis tools. We show that general machine learning detection and visualization techniques can help address the many challenges of the fraud detection domain. In particular, deep generative models can add value to the classification task given the imbalanced nature of the fraudulent class, in exchange for implementation and time complexity. Promising future applications for deep generative models include missing-data imputation and the sharing of synthetic data or data generators that preserve privacy constraints.
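The unsupervised ranking of accounts by recency, frequency, and monetary (RFM) features described in the first part can be sketched as follows. Per-feature z-scores stand in for the thesis's actual anomaly detection algorithms, and the account names and figures are made up for illustration.

```python
from statistics import mean, pstdev

# Toy transaction profiles: (recency_days, tx_count, total_amount).
accounts = {
    "A": (2, 40, 1_200.0),
    "B": (3, 38, 1_150.0),
    "C": (1, 45, 1_300.0),
    "D": (30, 400, 95_000.0),  # stands out on all three features
}

def zscores(values):
    """Standardize one feature column (guard against zero deviation)."""
    mu, sd = mean(values), pstdev(values) or 1.0
    return [(v - mu) / sd for v in values]

# One z-score column per RFM feature, then score = max |z| per account.
cols = list(zip(*accounts.values()))
rows = list(zip(*(zscores(c) for c in cols)))
scores = {acc: max(abs(z) for z in row) for acc, row in zip(accounts, rows)}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking[0])  # → D
```

The top-ranked accounts would then go to domain experts for manual validation, as in the thesis's workflow.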

AN NLP-BASED FRAMEWORK TO FACILITATE THE DERIVATION OF LEGAL REQUIREMENTS FROM LEGAL TEXTS
Sleimi, Amin UL

Doctoral thesis (2020)

Information systems in several regulated domains (e.g., healthcare, taxation, labor) must comply with the applicable laws and regulations. In order to demonstrate compliance, several techniques can be used for assessing that such systems meet their specified legal requirements. Since requirements analysts do not have the required legal expertise, they often rely on the advice of legal professionals. This paramount activity is therefore expensive, as it involves numerous professionals. Added to this is the communication gap between the involved stakeholders: legal professionals, requirements analysts, and software engineers. Several techniques attempt to bridge this communication gap by streamlining the process. A promising way to do so is to automate the extraction of legal semantic metadata and the elicitation of legal requirements from legal texts. Typically, one has to search legal texts for the information relevant to the IT system at hand, extract the legal requirements entailed by the pertinent legal statements, and validate the conclusiveness and correctness of the finalized set of legal requirements. Nevertheless, the automation of legal text processing raises several challenges, especially when applied to IT systems. Existing natural language processing (NLP) techniques are not built to handle the peculiarities of legal texts. On the one hand, NLP techniques are far from perfect in handling several linguistic phenomena such as anaphora, word sense disambiguation, and delineating the addressee of a sentence. Moreover, the performance of these NLP techniques decreases when they are applied to languages other than English. On the other hand, legal text differs markedly from the formal language used in journalism, and we note that the most prominent NLP techniques are developed and tested against selections of newspaper articles.
In addition, legal text introduces cross-references and legalese that are paramount to proper legal analysis. There is also work still to be done concerning topicalization, which must be considered when assessing the relevance of legal statements. Existing techniques for streamlining the compliance checking of IT systems often rely on code-like artifacts with no intuitive appeal to legal professionals. Consequently, one has no practical way to double-check with legal professionals that the elicited legal requirements are indeed correct and complete with regard to the IT system at hand. Further, manually eliciting legal requirements is an expensive, tedious, and error-prone activity. The challenge is to propose a knowledge representation that can be easily understood by all the involved stakeholders while remaining cohesive and precise enough to enable the automation of legal requirements elicitation. In this dissertation, we investigate to what extent one can automate legal processing in the requirements engineering context. We focus exclusively on legal requirements elicitation for IT systems that have to conform to prescriptive regulations. All our technical solutions have been developed and empirically evaluated in close collaboration with a government entity.
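As a toy illustration of the kind of legal semantic metadata extraction discussed above, a cross-reference extractor might start from a simple pattern. The regex below is a deliberately naive assumption; real legal texts require far richer grammars and resolution of the referenced provisions.

```python
import re

# Hypothetical pattern for cross-references like "Article 12(3)".
XREF = re.compile(r"Articles?\s+\d+(?:\(\d+\))?")

text = ("The controller shall comply with Article 12(3) and, "
        "where applicable, Articles 4 and Article 28.")
refs = XREF.findall(text)
print(refs)  # → ['Article 12(3)', 'Articles 4', 'Article 28']
```

A practical pipeline would additionally resolve each match to the cited provision and handle chained references ("Articles 4 to 7"), anaphora ("the preceding paragraph"), and references across legal texts.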

Learning of Control Behaviours in Flying Manipulation
Manukyan, Anush UL

Doctoral thesis (2020)

Machine learning is an ever-expanding field of research with a wide range of potential applications, and it has increasingly been used in robotics to enhance autonomy and intelligent behaviour. This thesis presents how machine learning techniques can enhance the decision-making ability for control tasks in aerial robots as well as improve their safety, thus broadly raising their autonomy levels. The work starts with the development of a lightweight approach for identifying degradation of UAV hardware-related components, using traditional machine learning methods. By analysing the flight data stream from a UAV following a predefined mission, the approach predicts the level of degradation of components at early stages. In that context, real-world experiments have been conducted, showing that such an approach can be used as a safety system in experiments where the flight path of the vehicle is defined a priori. The next objective of this thesis is to design intelligent control policies for flying robots with highly nonlinear dynamics, operating in a continuous state-action setting, using model-free reinforcement learning methods. To achieve this objective, the nuances and potential of reinforcement learning were first investigated. As a result, numerous insights and strategies are pointed out for crafting efficient reward functions that lead to successful learning performance. Finally, a learning-based controller is provided for controlling a hexacopter UAV with 6 DoF, performing stable navigation and hovering actions by directly mapping observations to low-level motor commands. To increase the complexity of the given objective, the degrees of freedom of the robotic platform are upgraded to 7 DoF, using a flying manipulator as the learning agent. In this case, the agent learns to perform a mission composed of take-off, navigation, hovering, and end-effector positioning tasks.
Later, to demonstrate the effectiveness of the proposed controller and its ability to handle a higher number of degrees of freedom, the flying manipulator is extended to a robotic platform with 8 DoF. To overcome several challenges of reinforcement learning, the RotorS Gym experimental framework was developed, providing a safe and close-to-real simulated environment for training multirotor systems. To handle the steadily growing complexity of learning tasks, the Cyber Gym Robotics platform was designed, which extends the RotorS Gym framework with several core functionalities. For instance, it offers an additional mission controller that allows complex missions to be decomposed into several subtasks, thus accelerating and facilitating the learning process. Yet another advantage of the Cyber Gym Robotics platform is its modularity, which allows both learning algorithms and agents to be switched seamlessly. To validate these claims, real-world experiments were conducted, demonstrating that a model trained in simulation can be transferred onto a real physical robot with only minor adaptations.
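The reward-shaping insight mentioned above can be illustrated with a hypothetical hovering reward: penalize distance to the target hover point and angular velocity, plus a small bonus for simply staying airborne. The terms and coefficients are invented for illustration, not those used in the thesis.

```python
import math

def hover_reward(pos, target, ang_vel, alive_bonus=1.0):
    """Toy shaped reward for a hovering task (illustrative only).

    Combines an alive bonus with penalties on position error and
    angular velocity, the style of shaping that typically makes or
    breaks learning performance on such tasks.
    """
    dist = math.dist(pos, target)            # position error
    spin = sum(abs(w) for w in ang_vel)      # total angular rate
    return alive_bonus - 0.5 * dist - 0.1 * spin

# Perfect hover at the target vs. drifting and spinning:
r_good = hover_reward((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 0.0))
r_bad = hover_reward((2.0, 0.0, 1.0), (0.0, 0.0, 1.0), (1.0, 1.0, 1.0))
print(r_good > r_bad)  # → True
```

In a model-free setting the agent sees only this scalar signal, so the relative weighting of the terms directly steers which behaviour is learned.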

Discursive Input/Output Logic: Deontic Modals, and Computation
Farjami, Ali UL

Doctoral thesis (2020)

Staging the Nation in an Intermediate Space: Cultural Policy in Luxembourg and the State Museums (1918-1974)
Spirinelli, Fabio UL

Doctoral thesis (2020)

Cultural policy has been analysed from various perspectives, ranging from sociology through cultural studies to political science. Historians have also been interested in cultural policy, but they have barely reflected on a theoretical framework. In addition, cultural policy has not been thoroughly researched in Luxembourg. The present thesis aims to address this gap and examines how national cultural policy in Luxembourg evolved from the 1920s to the early 1970s. It investigates the presence of the national idea in cultural policy, and possible tensions and connections between the idea of the nation and the use or inclusion of foreign cultural references. Drawing on the concept of Zwischenraum (intermediate space) coined by the historian Philip Ther, the study considers Luxembourg as a nationalised intermediate space, with the tensions that this status entails. Furthermore, it investigates how the State Museums, particularly the history section, evolved in this cultural policy context. To analyse the evolution of cultural policy, three interconnected aspects are considered: structures, actors, and discourses. Three main periods are treated chronologically: the interwar period, marked by efforts of nation-building and an increasingly interventionist state; the Nazi occupation of Luxembourg (1940-1944), when the idea of an independent nation-state was turned into its opposite; and the post-war period until the early 1970s, subdivided into an immediate post-war phase marked by restitution and reconstruction, and the 1950s and 1960s, characterised by the state as administrator and a conservative cultural policy. These periods, however, are not always neatly separable and reveal continuities. For each period, the State Museums are analysed in their cultural policy context: from their construction in the age of nation-building, through their ambiguous situation during the Nazi occupation, to their new missions in the post-war period.

From Secure to Usable and Verifiable Voting Schemes
Zollinger, Marie-Laure UL

Doctoral thesis (2020)

Elections are the foundations of democracy. To uphold democratic principles, researchers have proposed systems that ensure the integrity of elections. This is a highly interdisciplinary field, as it can be studied from technical, legal, or societal points of view. While lawyers give a legal framework to voting procedures, security researchers translate these rules into technical properties that operational voting systems must satisfy, notably privacy and verifiability. Privacy aims to protect vote secrecy and provide coercion resistance, while verifiability allows voters to check that their vote has been taken into account in the general outcome, contributing to assurance of the integrity of the election. To satisfy both properties, voting systems rely on cryptographic primitives such as encryption, signatures, commitment schemes, and zero-knowledge proofs. Many protocols, paper-based or electronic, have been designed to satisfy these properties. Although the security of some protocols, and their limits, have been analysed from a technical perspective, their usability has often been shown to be very low. The necessary cryptographic interactions have been shown to be one contributor to this problem, but the design of the interface can also contribute by misleading voters. As elections happen rarely, voters must be able to understand the system they use quickly and mostly without training, which brings the user experience to the forefront of protocol design. In this thesis, the first contribution is to redefine privacy and verifiability in the context of tracker-based verifiable schemes. These schemes, which use a so-called tracking number for individual verification, need additional user steps that must be considered in the security evaluation.
These security definitions are applied to the boardroom voting protocol F2FV used by the CNRS and to the e-voting protocol Selene, both of which use a tracker-based procedure for individual verifiability. We provide proofs of security in the symbolic model using the Tamarin prover. The second contribution is an implementation of the Selene protocol as a mobile and a web application, tested in several user studies. The goal is to evaluate the usability and overall user experience of the verifiability features, as well as users' understanding of the system, through the evaluation of mental models. The third contribution concerns the evaluation of voters' understanding of the coercion-mitigation mechanism provided by Selene, through a unique study design that uses game theory for the evaluation of voters. Finally, the fourth contribution is the design of a new voting scheme, Electryo, which is based on the Selene verification mechanisms but provides a user experience close to standard paper-based voting protocols.
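Tracker-based individual verification, at its most basic, amounts to looking up one's tracking number on the public bulletin board and comparing the posted vote with what one cast. The sketch below is a toy model; Selene's actual mechanism additionally involves cryptographic commitments and deniable (fake) trackers for coercion resistance, none of which is shown here.

```python
# Toy bulletin board mapping tracker numbers to plaintext votes.
# Tracker values and vote strings are illustrative only.
bulletin_board = {
    174: "Yes",
    921: "No",
}

def verify_vote(tracker: int, expected: str) -> bool:
    """Individual verification: look up your tracker and compare
    the published vote with the one you believe you cast."""
    return bulletin_board.get(tracker) == expected

# A voter who privately received tracker 174 checks her "Yes" vote:
ok = verify_vote(174, "Yes")
print(ok)  # → True
```

The extra user steps this lookup entails are exactly what the thesis argues must be folded into the security and usability evaluation.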

Essays on Tax Competition
Paulus, Nora UL

Doctoral thesis (2020)

Analyzing and Improving Very Deep Neural Networks: From Optimization, Generalization to Compression
Oyedotun, Oyebade UL

Doctoral thesis (2020)

Learning-based approaches have recently become popular for various computer vision tasks such as facial expression recognition, action recognition, banknote identification, image captioning, and medical image segmentation. The learning-based approach allows the constructed model to learn features, which results in high performance. Recently, the backbone of most learning-based approaches has been deep neural networks (DNNs). Importantly, it is believed that increasing the depth of DNNs invariably leads to improved generalization performance; thus, many state-of-the-art DNNs have over 30 layers of feature representations, and it is not uncommon to find DNNs with over 100 layers in the literature. However, training very deep DNNs with over 15 layers is not trivial. On one hand, such DNNs generally suffer from optimization problems. On the other hand, very deep DNNs are often overparameterized, such that they overfit the training data and hence incur generalization loss. Moreover, overparameterized DNNs are impractical for applications that require low latency, small Graphics Processing Unit (GPU) memory for operation, and small memory for storage. Interestingly, skip connections of various forms have been shown to alleviate the difficulty of optimizing very deep DNNs. In this thesis, we propose to improve the optimization and generalization of very deep DNNs, with and without skip connections, by reformulating their training schemes. Specifically, the proposed modifications allow the DNNs to achieve state-of-the-art results on several benchmark datasets. The second part of the thesis presents theoretical analyses of DNNs with and without skip connections, based on several concepts from linear algebra and random matrix theory. The theoretical results obtained provide new insights into why DNNs with skip connections are easy to optimize and generalize better than DNNs without them.
Ultimately, the theoretical results are shown to agree with practical DNNs via extensive experiments. The third part of the thesis addresses the problem of compressing large DNNs into smaller models. Following the identified drawbacks of the conventional group LASSO for compressing large DNNs, the debiased elastic group least absolute shrinkage and selection operator (DEGL) is employed. Furthermore, layer-wise subspace learning (SL) of latent representations in large DNNs is proposed, the objective of which is to learn a compressed latent space for large DNNs. In addition, SL is observed to improve the performance of LASSO, which is widely known not to work well for compressing large DNNs. Extensive experiments are reported to validate the effectiveness of the different model compression approaches proposed in this thesis. Finally, the thesis addresses the problem of multimodal learning using DNNs, where data from different modalities are combined into useful representations for improved learning results. Different multimodal learning frameworks are applied to the problems of facial expression and object recognition, and we show that under the right scenarios, the complementary information from multimodal data leads to better model performance.
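The skip connections discussed above can be sketched in miniature: a residual block computes y = x + F(x), so an identity path always exists through the layer, which is the informal reason gradients flow easily through very deep stacks. The pure-Python toy below is a generic residual block, not any specific architecture from the thesis.

```python
import random

random.seed(0)

def linear(x, w, b):
    """y_j = sum_i x_i * w[i][j] + b[j] (plain matrix-vector product)."""
    return [sum(xi * wij for xi, wij in zip(x, col)) + bj
            for col, bj in zip(zip(*w), b)]

def relu(x):
    return [max(0.0, v) for v in x]

def residual_block(x, w, b):
    """y = x + F(x): the skip connection preserves an identity path
    around the learned transformation F."""
    return [xi + fi for xi, fi in zip(x, relu(linear(x, w, b)))]

n = 3
w = [[random.uniform(-0.1, 0.1) for _ in range(n)] for _ in range(n)]
b = [0.0] * n
x = [1.0, 2.0, 3.0]
y = residual_block(x, w, b)
print(len(y))  # → 3
```

Note the degenerate case: with all-zero weights the block reduces exactly to the identity, so stacking many such blocks can never make the input harder to recover.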

Data Analytics and Consensus Mechanisms in Blockchains
Feher, Daniel UL

Doctoral thesis (2020)

Blockchains, and especially Bitcoin, have soared in popularity since their inception. This thesis furthers our knowledge of blockchains and their uses. First, we analyze transaction linkability in the privacy-preserving cryptocurrency Zcash, based on currency-minting (mining) transactions. Using predictable usage patterns and clustering heuristics on mining transactions, an attacker can link to publicly visible addresses in over 84% of the privacy-preserving transactions. We then further analyze privacy issues in Zcash: we study privacy-preserving transactions, show ways to fingerprint user transactions, including active attacks, and introduce two new attacks, which we call the Danaan-gift attack and the Dust attack. Next, we investigate the generic landscape and hierarchy of miners, as exemplified by Ethereum and Zcash. Both chains used application-specific integrated circuit (ASIC)-resistant proofs-of-work that favor GPU mining in order to keep mining decentralized. This, however, has changed with the introduction of ASIC miners for these chains. The transition allows us to develop methods that might detect hidden ASIC mining in a chain (if it exists) and to study how the introduction of ASICs affects the decentralization of mining power. We also describe how an attacker might use public blockchain information to invalidate miners' privacy, deducing the mining hardware of individual miners and their mining rewards. Then, we analyze the behavior of cryptocurrency exchanges on the Bitcoin blockchain and compare the results to the exchange volumes reported by the same exchanges. We show that in multiple cases these two values are close to each other, which supports the integrity of the reported volumes. We also present a heuristic for classifying large clusters of addresses in the blockchain and for determining whether a cluster is controlled by an exchange.
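Clustering heuristics of the kind used above are typically variants of the multi-input heuristic: addresses that are spent together in one transaction are assumed to share an owner, and the resulting pairs are merged with a union-find structure. This is a generic sketch of that idea, not the thesis's exact heuristics for Zcash mining transactions.

```python
# Multi-input clustering heuristic: co-spending addresses get merged
# into one ownership cluster via union-find.
parent = {}

def find(a):
    """Return the cluster representative for address a (path halving)."""
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

# Toy transactions: each lists the input addresses spent together.
transactions = [
    {"inputs": ["addr1", "addr2"]},
    {"inputs": ["addr2", "addr3"]},
    {"inputs": ["addr9"]},
]
for tx in transactions:
    first, *rest = tx["inputs"]
    for other in rest:
        union(first, other)

# addr1 and addr3 never co-spend directly, yet transitively they
# fall into one cluster through addr2:
same_owner = find("addr1") == find("addr3")
print(same_owner)  # → True
```

On a real chain the same transitive merging is what lets a handful of deanonymized addresses taint an entire cluster.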
Finally, we describe how to couple reputation systems with distributed consensus protocols to obtain a scalable permissionless consensus protocol with a low barrier to entry, while still providing strong resistance against Sybil attacks for large peer-to-peer networks of untrusted validators. We introduce the reputation module ReCon, which can be laid on top of various consensus protocols such as PBFT or HoneyBadger. The protocol takes an external reputation ranking as input, ranks nodes based on the outcomes of consensus rounds run by a small committee, and adaptively selects the committee based on the current reputation.
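A minimal sketch of reputation-based committee selection in the spirit of ReCon: pick the highest-reputation nodes as the committee and nudge each node's score after every round. The scores, committee size, and update rule below are invented for illustration and are not ReCon's actual mechanism.

```python
# Toy reputation scores for validator nodes (illustrative values).
reputation = {"n1": 0.9, "n2": 0.7, "n3": 0.2, "n4": 0.8, "n5": 0.1}

def select_committee(rep, k):
    """Pick the k highest-reputation nodes as the consensus committee."""
    return sorted(rep, key=rep.get, reverse=True)[:k]

def update(rep, node, round_ok, lr=0.1):
    """Move a node's reputation toward the outcome of the round it ran."""
    rep[node] = (1 - lr) * rep[node] + lr * (1.0 if round_ok else 0.0)

committee = select_committee(reputation, 3)
print(committee)  # → ['n1', 'n4', 'n2']

# A misbehaving committee member loses reputation and will eventually
# drop out of the committee in later rounds:
update(reputation, "n1", round_ok=False)
```

Tying committee membership to observed behaviour is what raises the cost of Sybil attacks: fresh identities start with no reputation and cannot immediately influence consensus.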

A Systems biology approach to elucidate the contribution of alpha-synuclein to early in vitro phenotypes of Parkinson’s disease
Modamio Chamarro, Jennifer UL

Doctoral thesis (2020)

Although Parkinson's disease (PD) was first described more than two hundred years ago, clinical treatment options remain limited to symptom alleviation. Consequently, understanding the underlying molecular mechanisms is vital for the development of new therapeutic strategies. Most cases of PD are associated with toxic aggregation of the alpha-synuclein (α-syn) protein. However, the physiological and pathological mechanisms of α-syn aggregation are not entirely understood. One main reason for this knowledge gap is the lack of models that properly recapitulate the pathology in a human-midbrain-like context. Organoid models have emerged as an attractive model system that covers key aspects of in vivo tissue and organ complexity. Here, we present an optimized organoid protocol which recapitulates features of the human midbrain. These human midbrain organoids (hMOs) present reduced levels of cell death in the core, while exhibiting reduced variability and increased viability. Their smaller size also allowed the implementation of a time-efficient image analysis technique. Using this protocol, we generated hMOs from patient-derived induced pluripotent stem cells (iPSCs) harboring a triplication of the SNCA gene (3xSNCA). 3xSNCA hMOs exhibited twice the level of α-syn protein compared to wild-type (WT) hMOs. Transcriptional analysis of 3xSNCA hMOs showed upregulation of PD- and SNCA-associated genes, as well as transcriptional deregulation in neurogenesis, cell death, proliferation, and synapse formation. The analysis of cellular phenotypes in patient-specific hMOs supported these genetic observations: 3xSNCA hMOs presented reduced proliferation, cell death, and a reduced synapse count in mature organoids. Furthermore, 3xSNCA hMOs showed a reduced total number of neurons and impaired astrocytic differentiation. In addition, analysis of transcriptional and metabolomic data showed deregulation of metabolic pathways.
To further analyze and explain our results, we used the latest human metabolic reconstruction (Recon3D) to generate an in silico model. The results presented here constitute a systematic analysis of patient-specific phenotypes in midbrain organoids from individuals with a triplication of the SNCA gene and represent a starting point for further approaches to developing therapies.

Security and Privacy of Blockchain Protocols and Applications
Tikhomirov, Sergei UL

Doctoral thesis (2020)

Bitcoin is the first digital currency without a trusted third party. This revolutionary protocol allows mutually distrusting participants to agree on a single common history of transactions. Bitcoin nodes pack transactions into blocks and link those in a chain (the blockchain). Hash-based proof-of-work ensures that the blockchain is computationally infeasible to modify. Bitcoin has spawned a new area of research at the intersection of computer science and economics. Multiple alternative cryptocurrencies and blockchain projects aim to address Bitcoin's limitations. This thesis explores the security and privacy of blockchain systems. In Part I, we study the privacy of Bitcoin and the major privacy-focused cryptocurrencies. In Chapter 2, we explore the peer-to-peer (P2P) protocols underpinning cryptocurrencies. In Chapter 3, we show how a network adversary can link transactions issued by the same node. We test the efficiency of this novel attack in real networks, successfully linking our own transactions. Chapter 4 studies the privacy characteristics of mobile cryptocurrency wallets. We discover that most wallets do not follow the best practices aimed at protecting users' privacy. Part II is dedicated to the Lightning Network (LN). Bitcoin's architecture emphasizes security but severely limits transaction throughput. The LN is a prominent Bitcoin-based protocol that aims to alleviate this issue. It performs low-latency transactions off-chain but leverages Bitcoin's security guarantees for dispute resolution. We introduce the LN and outline the history of off-chain protocols in Chapter 5. Then, in Chapter 6, we introduce a probing attack that allows an adversary to discover user balances in the LN. Chapter 7 estimates the likelihood of various privacy attacks on the LN. In Chapter 8, we describe a limitation on the number of concurrent LN payments and quantify its effects on transaction throughput. Part III explores the security and privacy of Ethereum smart contracts. 
Bitcoin's language for defining spending conditions is intentionally restricted. Ethereum is a blockchain network allowing for more programmability: Ethereum users can write programs in a Turing-complete high-level language called Solidity. These programs, called smart contracts, are stored on-chain along with their state. Chapter 9 outlines the history of blockchain-based programming. Chapter 10 describes Findel, a Solidity-based declarative domain-specific language for financial contracts. In Chapter 11, we classify the vulnerabilities in real-world Ethereum contracts and present SmartCheck, a static analysis tool for bug detection in Solidity programs. Finally, Chapter 12 introduces an Ethereum-based cryptographic protocol for privacy-preserving regulation compliance.
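The balance-probing attack of Chapter 6 boils down to a binary search over probe payment amounts: each failed or successful probe reveals whether the channel can forward that amount. The sketch below is a simplified model in which a single boolean stands in for the network's error messages; the real attack sends unsolvable probe payments and interprets the returned errors.

```python
def probe(channel_balance: int, amount: int) -> bool:
    """Stand-in for one probe payment: in the real attack, the error
    returned by the network reveals whether `amount` fits the channel."""
    return amount <= channel_balance

def discover_balance(max_capacity: int, secret_balance: int) -> int:
    """Binary-search the channel balance with O(log capacity) probes."""
    lo, hi = 0, max_capacity
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if probe(secret_balance, mid):
            lo = mid          # mid is feasible: balance is at least mid
        else:
            hi = mid - 1      # mid failed: balance is below mid
    return lo

print(discover_balance(1_000_000, 123_456))  # → 123456
```

With channel capacities public on-chain, roughly 20 probes suffice to pin down a balance within a million-unit channel, which is what makes the attack cheap.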

For a New Hermeneutics of Practice in Digital Public History: Thinkering with memorecord.uni.lu
Lucchesi, Anita UL

Doctoral thesis (2020)

This thesis is built upon an experimental study of doing digital public history. I aim to study the interferences of the digital component in the historiographic operation as a whole. While the fields of digital and public history are advancing fast, with abundant work on the development and application of new methodologies, tools, and approaches, the discipline of history still lags behind in terms of theoretical reflection on the new practices emerging from them. Researchers have been exploring alternative forms of source criticism, storytelling, and publication for years now, yet the greatest attention still goes to the outputs, while little criticism, if any, is devoted to the process of doing digital work. By building and analysing a digital public history platform, this research aims to make a contribution in this direction. To do so, the research takes a fully hands-on approach and offers an evaluation of digital methods that to a great extent emerges from practice and the researcher's first-hand experience with the digital. The empirical study consisted of investigating memories of Italian and Portuguese immigrants in Luxembourg through the establishment of a collaboratively shaped digital memory platform. The process of building the Memorecord platform, activating the crowdsourcing through social media, and analysing the born-digital data originating from this collection informed the theoretical reflection of this thesis. While the more practical layer highlights hands-on work and collaboration, the main theoretical contribution of the more speculative layer bears on the hybridisation of old and new practices and capacities, synthesized in the emergence of a hermeneutics of practice derived from the heuristic gesture of creative and playful experimentation (i.e., thinkering) with digital tools and methods.
This specific hermeneutical approach may function as a visibility broker, assisting historians in the process of unveiling the unspoken and implicit aspects of historical inquiry in the digital age. Hermeneutics of practice, hence, should facilitate the identification of the digital interferences we encounter throughout the research process and improve the researcher's readiness to face the new research conditions created by the digital component. If a new style of reasoning of, about, and within digital and digital public history is to be stabilised, hermeneutics of practice could become an important procedure for ensuring historical objectivity in the 21st century.

Logically Centralized Security for Software-Defined Networking
Kreutz, Diego UL

Doctoral thesis (2020)


Software-Defined Networking (SDN) decouples the control and data planes of traditional networks, logically centralizing the functional properties of the network in the SDN controller. While this centralization brought advantages such as a faster pace of innovation, it also disrupted some of the natural defenses of traditional architectures against different threats. Until now, SDN research has essentially been concerned with the functional side, despite some specific works relating to non-functional properties like ‘security’, ‘dependability’, or ‘quality of service’. Security is an essential non-functional property of SDN. The lack of reliable security-by-design mechanisms can quickly lead to the compromise of the entire network. For instance, most of the current security mechanisms in SDN controllers lead to exploitable vulnerabilities that allow adversaries to easily control or even shut down the entire control plane. The growing concern regarding insider threats substantially amplifies the problem. The reason lies in the fact that current Software-Defined Networks (SDNs) (e.g., OpenFlow-enabled networks) rely on weak protection mechanisms. To address these crucial security issues in the SDN control plane, it is necessary, though not sufficient, that we start by securely identifying, authenticating, and authorizing all devices before allowing them to become part of the network. Though SDN security is the central tenet of this thesis, we believe that the problem is much more generic. In essence, there is still a lack of a systematic approach to ensuring such relevant non-functional properties as security, dependability, or quality of service. Current approaches are mostly ad-hoc and piecemeal, which has led to efficiency and effectiveness problems. This reflection led us to claim that the successful enforcement of non-functional properties as a pillar of SDN robustness calls for a systematic approach. 
We further advocate, for its materialization, the re-iteration of the successful formula behind SDN: ‘logical centralization’. In consequence, we propose ANCHOR, a subsystem architecture for SDN that promotes the logical centralization of non-functional properties. We start by presenting the general concept and architectural principles, suggesting how they can satisfactorily enhance the current state of the art with regard to any non-functional property (security, dependability, performance, quality of service, etc.). We claim and justify that centralizing such mechanisms is vital for their effectiveness, by allowing us to: define and enforce global policies for those properties; reduce the complexity of controllers and forwarding devices; ensure higher levels of robustness for critical services; foster interoperability of the non-functional property enforcement mechanisms; and finally, better foster the resilience of the architecture itself. We focus on ‘security’ as a use case in the rest of the thesis, discussing the specialization of the ANCHOR architecture to logically-centralized enforcement of security properties. However, by presenting a principled solution to the main problem of the thesis (SDN security), we also show the effectiveness of the general ANCHOR concept, opening avenues for further research on its extension to other desirable non-functional properties, such as dependability and Quality of Service (QoS). We identify the current security gaps in SDNs, and investigate the adequate security mechanisms that should populate the architecture middleware, globally and consistently. ANCHOR sets out to provide — in a homogeneous manner to all controllers and forwarding devices — essential security mechanisms such as strong entropy, resilient pseudo-random generators, secure device registration, association and recommendation, amongst other crucial services. We present the design of those mechanisms and protocols. 
With the objective of promoting generalized use of encryption and authentication in the control plane, we additionally propose and describe a secure control plane communication infrastructure, Keep It Simple and Secure (KISS), based on a novel lightweight mechanism for generating cryptographic secrets — the integrated Device Verification Value (iDVV). The iDVV can be used in a number of ways, in a number of protocols, and outperforms widely used alternatives. In the context of this thesis, the KISS infrastructure is set up by ANCHOR and used to ensure the security of interactions amongst it, the controllers and the forwarding devices. Being conceptually logically-centralized, ANCHOR presents a single-point-of-failure (SPoF) challenge, which we address through incremental measures, some of which can be selectively present in concrete designs. As a baseline, we harden the design by endowing it with robust functions in the different modules. We increase assurance by discussing and informally proving the correctness of all mechanisms and algorithms, and we also formally verify the main algorithms through a proof assistant. By only using symmetric cryptography, we make the system Post-Quantum Secure (PQS). We also embed measures to achieve Perfect Forward Secrecy (PFS) in all algorithms, protecting pre-compromise communications in the presence of successful attacks. Finally, for higher-criticality systems, we take additional algorithmic and architectural measures to mitigate the effects of possible security failures. We provide for Post-Compromise Security (PCS) through the semi-automatic restart of operation after a full compromise of ANCHOR. We also present a design of resilience mechanisms — the continued prevention of failure/compromise by automatic means — through fail-fast recovery techniques. The prototypes’ implementation aspects and the evaluation of the two fundamental pieces of our work (ANCHOR and KISS) are presented in the respective chapters. 
The above-mentioned discussion and informal proof of correctness of all mechanisms and algorithms is given in the appendices. We also formally machine-verified the main algorithms.
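The abstract describes the iDVV only at a high level. As a loose illustration of how a lightweight, symmetric-key scheme for deriving per-message verification values can work, the Python sketch below ratchets a shared secret forward with hash functions. The class name, method names and the exact construction are hypothetical — this is not the thesis's actual iDVV algorithm.

```python
import hmac, hashlib

class DeviceVerificationChain:
    """Illustrative hash-chain generator of per-message verification
    values, in the spirit of the iDVV idea described above. The
    construction and all names here are hypothetical."""

    def __init__(self, shared_key: bytes, seed: bytes):
        # Both endpoints derive the same initial state from a
        # pre-shared key and a registration-time seed.
        self._state = hmac.new(shared_key, seed, hashlib.sha256).digest()

    def next_value(self) -> bytes:
        # Each call ratchets the state forward; old states cannot be
        # recovered from newer ones, protecting past values.
        value = hashlib.sha256(b"dvv" + self._state).digest()
        self._state = hashlib.sha256(b"next" + self._state).digest()
        return value

# Two devices initialized with the same key and seed stay in sync:
dev_a = DeviceVerificationChain(b"k" * 32, b"registration-seed")
dev_b = DeviceVerificationChain(b"k" * 32, b"registration-seed")
assert dev_a.next_value() == dev_b.next_value()
```

Using only symmetric primitives, as here, is also what the thesis leverages for post-quantum security.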

MOMENT AND LONGITUDINAL RESISTANCE FOR COMPOSITE BEAMS BASED ON STRAIN LIMITED DESIGN METHOD
Zhang, Qingjie UL

Doctoral thesis (2020)


The bending and longitudinal shear design of composite beams of steel and concrete often follows the plastic design method, a simplification based on rectangular stress blocks. The application of the plastic design method requires the cross-section to have enough rotation capacity to allow most parts of the critical cross-section to reach plasticity at failure. There are different types of compact composite beams, such as slim-floor beams. For them, the neutral axis often lies deeper at failure, which reduces the rotation capacity and calls into question the bending resistance and longitudinal shear design according to the plastic design method. For a composite beam with a deep neutral axis position, advanced numerical methods such as strain-limited design and FEM simulation can provide more accurate results than the plastic cross-section resistance; however, they are challenging for general design engineers to perform. In this work, simplified non-linear strain-limited design approaches, a strain-limited design software "SL.com" and an Abaqus add-in "CivilLab" have been developed to simplify the numerical calculations. They are also applied in other chapters of this work to check the conventional plastic design results and to provide simplified design rules through parametric studies. With full shear connection, a deep neutral axis position in a composite beam under sagging bending may leave an important part of the steel section short of plasticity at concrete failure. In this case, the plastic bending resistance calculated from rectangular stress blocks can overestimate the resistance and therefore lead to an unsafe design. Thus, according to EN 1994-1-1 [22], a reduction factor β on the plastic bending resistance (Mpl,Rd) must be applied for cross-sections with steel grades S420 and S460 when the relative compression zone height (zpl/h) exceeds 0.15. 
However, with developments in industry as well as the second generation of the Eurocodes, this reduction factor needs to be updated to cover new types of composite beams and wider ranges of steel grades. While the conventional plastic design method has its limitations and is only applicable when the beam cross-section has enough rotation capacity to allow full plastic development, the more advanced strain-limited numerical calculation and FEM can be used over a much wider range, regardless of the position of the neutral axis. The investigations in this dissertation, comparing the plastic bending resistance with advanced numerical calculation results, have confirmed that besides cross-sections with high steel grades (S420, S460), certain cross-sections with lower steel grades can also have an overestimated plastic bending moment resistance. This effect is more pronounced for compact cross-section types such as slim-floor sections and composite beams with asymmetrical structural steel profiles or a small concrete slab effective width. Therefore, extensive parametric studies based on the strain-limited method and FEM have been carried out to examine, among other topics, the limits of the plastic design method for different types of composite beams. Furthermore, new reduction factor β functions on Mpl,Rd, covering a much wider variety of composite beam cross-sections, have been derived for engineering practice. For design with partial shear connection, the partial shear diagram developed from plastic analysis is widely used. As discussed above, plastic design may not be suitable when the neutral axis lies too deep, and similar problems can occur for the partial shear diagram. This problem is especially significant for slim-floor beams, for which, due to the compact cross-section, the relative compression zone height (zpl/h) is usually much higher than in conventional composite beams. 
Thus, the limits of using the partial shear diagram for slim-floor beams are identified, and additional simplified engineering design rules are proposed. Plastic development inside the cross-section increases the longitudinal shear force in the plastic zones; furthermore, with ductile shear connectors and the minimum degree of shear connection respected, the non-linear redistribution of longitudinal shear force allows shear connectors to be arranged at equal spacing in conventional design. Here, full plastic development of the cross-section (allowing the plastic bending moment resistance) and ductile shear connectors (allowing a non-linear longitudinal shear force distribution) are the two fundamental conditions. A deep neutral axis position directly calls the first assumption into question, as full plastic development of the cross-section may not be reached. Thus, the impact of a deep neutral axis position on the longitudinal shear force distribution in composite beams has been analysed: the influence of plastic development inside beam cross-sections on the longitudinal shear force with full shear interaction is explained theoretically, and the different stages of the non-linear distribution of longitudinal shear force due to the shear connectors are investigated through FEM parametric studies. Based on the theoretical and numerical calculations, design suggestions for composite beams with a deep neutral axis position are given.
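To make the reduction-factor idea concrete, the following Python sketch applies a linear β interpolation to Mpl,Rd for high steel grades. The endpoint values assumed here (β = 1.0 at zpl/h = 0.15, falling to 0.85 at zpl/h = 0.40) are the ones commonly read off the EN 1994-1-1 design chart, but they are stated as an assumption — this is not a substitute for the code text or for the new β functions derived in the thesis.

```python
def beta_reduction(zpl_over_h: float) -> float:
    """Reduction factor beta on the plastic bending resistance Mpl,Rd
    for high steel grades (S420/S460): linear interpolation between
    assumed chart endpoints (1.0 at zpl/h = 0.15, 0.85 at zpl/h = 0.40)."""
    if zpl_over_h <= 0.15:
        return 1.0
    if zpl_over_h >= 0.40:
        return 0.85
    return 1.0 - 0.15 * (zpl_over_h - 0.15) / (0.40 - 0.15)

# Example: a compact slim-floor section with zpl/h = 0.30
M_pl_rd = 450.0                       # kNm, illustrative value only
M_rd = beta_reduction(0.30) * M_pl_rd # reduced design bending resistance
```

For zpl/h = 0.30 the sketch gives β = 0.91, i.e. a 9 % reduction of the plastic resistance.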

Thermal conductivity enhancement of graphene nanoplatelet/epoxy composites - Covalent functionalization with nitrene chemistry for reducing the interfacial thermal resistance
Depaifve, Sébastien Fabian L UL

Doctoral thesis (2020)


Polymer composites with high thermal conductivity are in strong demand for efficient thermal management in many modern applications such as electronics, batteries, aerospace structural materials and LED lighting. Nanocarbon fillers have recently attracted a lot of interest due to their extremely high intrinsic thermal conductivity. Nevertheless, the effective thermal conductivity achieved with nanocarbon-polymer composites falls below expectations, in particular at low filler loadings, due to the large interfacial thermal resistance at the nanocarbon-polymer interface. Covalent functionalization of nanocarbons has been suggested as a way to reduce the interfacial thermal resistance in nanocarbon-polymer composites. However, large-scale covalent functionalization of nanocarbons is usually achieved under harsh oxidizing conditions, causing a dramatic decrease in the intrinsic thermal conductivity of the nanocarbon fillers. In this thesis, we developed and optimized a non-disruptive covalent functionalization for graphene nanoplatelets (GNP) based on nitrene chemistry, achieving unprecedented functionalization yields. The fillers functionalized by nitrene chemistry produced a significant thermal conductivity enhancement (TCE) compared to pristine and oxidized fillers; however, increasing the chain length or introducing heteroatoms into the functional chain reduced performance. In parallel, we developed an innovative combination of SEM and µCT analyses to provide an unprecedented description of nanocarbon-polymer composites. This allowed us to resolve the contradictory results reported in the literature on the influence of the aggregation level and the geometrical parameters of the fillers on the TCE. In this thesis we propose a novel and detailed description of the parameters responsible for TCE in GNP-epoxy composites. Moreover, we demonstrate that covalent functionalization of GNP by nitrene chemistry reduces the interfacial thermal resistance in epoxy composites and improves the thermal conductivity.

Automated, Requirements-based Security Testing of Web-oriented Software Systems
Mai, Xuan Phu UL

Doctoral thesis (2020)


Motivation and Context. Modern Internet-based services (e.g., home-banking, personal-training, healthcare) are delivered through Web-oriented software systems which run on multiple and different devices including computers, mobile devices, wearable devices, and smart TVs. They manage and exchange users’ personal data such as credit reports, locations, and health status. Therefore, the security of the system and its data are of crucial importance. Unfortunately, from security requirements elicitation to security testing, there are a number of challenges to be addressed to ensure the security of Web-oriented software systems. First, existing practices for capturing security requirements do not rely on templates that ensure the specification of requirements in a precise, structured, and unambiguous manner. Second, security testing is usually performed either manually or is only partially automated. Most of existing security testing automation approaches focus only on specific vulnerabilities (e.g., buffer overflow, code injection). In addition, they suffer from the oracle problem, i.e., they cannot determine that the software does not meet its security requirements, except when it leads to denial of service or crashes. For this reason, security test automation is usually partial and only addresses the generation of inputs and not the verification of outputs. Though, in principle, solutions for the automated verification of functional requirements might be adopted to automatically verify security requirements, a number of concerns remain to be addressed. First, there is a lack of studies that demonstrate their applicability, in the context of security testing. Second, the oracle problem remains an open problem in many aspects of software testing research, not only security testing. 
In the context of functional testing, metamorphic testing has been shown to be a viable solution to the oracle problem; however, it had never been studied in the context of security testing. Contributions. In this dissertation, we propose a set of approaches to address the above-mentioned challenges. (1) To model security requirements in a structured and analyzable manner, we propose a use case modeling approach that relies on a restricted natural language and a template already validated in the context of functional testing. It introduces the concepts of security use case specifications (i.e., what the system is supposed to do) and misuse case specifications (i.e., malicious user behaviours that the system is supposed to prevent). Moreover, we propose a template for capturing guidelines for the mitigation of security threats. (2) To verify that systems meet their security requirements, we propose an approach to automatically generate security test cases from misuse case specifications. More precisely, we propose a natural language programming solution that automatically generates executable security test cases and test inputs from misuse case specifications in natural language. (3) To address the oracle problem, we propose a metamorphic testing solution for Web-oriented software systems. The solution relies on a predefined set of metamorphic relations that capture (a) how an attacker is likely to alter a valid input to exploit a vulnerable system and (b) how the output of the system should change as a result of the attack if the system meets its security requirements. Our solution relies on Web crawlers to automatically identify the valid inputs to be used for testing. (4) We identify a set of testability guidelines to facilitate the adoption of the proposed approaches in software projects. 
The identified guidelines indicate (a) which types of vulnerabilities can be addressed through the solutions proposed in this dissertation and (b) which design solutions should be integrated into the system to enable effective test automation.
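As a rough illustration of the metamorphic-relation idea described above — not the dissertation's actual relations or tooling — the sketch below mutates a valid request in an attacker-like way and checks the follow-up response against the baseline. The `fetch` callable, `Resp` class and stub client are all hypothetical stand-ins for a real HTTP client and system under test.

```python
def alter_with_path_traversal(url: str) -> str:
    # Follow-up input: an attacker-style mutation of the valid URL.
    return url.replace("file=report", "file=../../etc/passwd")

def relation_holds(fetch, valid_url: str) -> bool:
    """Metamorphic relation: the mutated request must NOT be answered
    with a normal 200 response carrying different content; a secure
    system rejects it (non-200) or serves the same sanitized resource."""
    baseline = fetch(valid_url)
    followup = fetch(alter_with_path_traversal(valid_url))
    if followup.status != 200:
        return True                        # rejected: relation holds
    return followup.body == baseline.body  # else content must match

# A stubbed client to demonstrate the oracle without a real server:
class Resp:
    def __init__(self, status, body):
        self.status, self.body = status, body

def secure_fetch(url):
    # A well-behaved system refuses traversal sequences outright.
    return Resp(403, "") if ".." in url else Resp(200, "report-data")

assert relation_holds(secure_fetch, "http://host/download?file=report")
```

Note how the relation sidesteps the oracle problem: no expected output is specified, only how the two responses must relate.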

Investigation of condensation process inside inclined tube
Zhang, Yu UL

Doctoral thesis (2020)


Generation III+ reactor designs partially rely on passive safety systems, which aim to increase plant safety standards and reduce investment costs. Passive decay heat removal systems, such as the Emergency Condenser (EC) of the KERENA reactor design, play an important role in the safety of nuclear power plants. As part of the emergency cooling chain, the EC removes decay heat from the reactor pressure vessel and transfers it to the flooding pool. For a successful EC design, reliable prediction of the condensation heat transfer inside inclined pipes is one of the important factors. One-dimensional (1D) codes, such as ATHLET, RELAP and TRACE, are widely used today by engineers to predict the thermal-hydraulic behaviour of nuclear power plant systems. However, state-of-the-art 1D codes are mainly validated for active components, and the qualification of passive systems remains an open problem. The goal of this thesis is therefore to investigate the condensation phenomena in the EC using the current advanced 1D code ATHLET (Analysis of Thermal-hydraulics of Leaks and Transients). The performance of ATHLET in predicting condensation in a slightly inclined tube was assessed, and the results showed that the standard models in the ATHLET code have significant deficiencies in predicting the condensation heat transfer coefficients. Thus, a new empirical model has been derived using experimental data from the COSMEA (COndenSation test rig for flow Morphology and hEAt transfer studies) tests, condensation experiments for flow morphology and heat transfer studies in a single slightly inclined tube conducted by HZDR (Helmholtz-Zentrum Dresden-Rossendorf), together with data sourced from the literature. The new model, which consists of upper liquid film condensation and bottom convective condensation, was developed using a machine learning regression analysis methodology in MATLAB. 
It was then implemented in ATHLET with the Python programming language, and the modified ATHLET code was used to recalculate the COSMEA experiments. The post-calculation results were compared to the experiments in three aspects: heat flux, condensation rate and void fraction along the whole pipe. The outcomes showed that the modified ATHLET code can be used to recalculate the relevant heat transfer values of experiments under different pressure and mass flow rate conditions.
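The abstract does not give the fitted correlation itself, so as a generic illustration of deriving a heat-transfer model by regression, the sketch below fits a Nusselt-type power law to synthetic data via linearised least squares (in Python/NumPy rather than MATLAB). The functional form, exponents and data are invented for demonstration and are not the thesis's COSMEA-based model.

```python
import numpy as np

# Synthetic "measurements" drawn from a known power law with 2 % noise:
# Nu = a * Re^b * Pr^c  (a, b, c here are illustrative, not from COSMEA).
rng = np.random.default_rng(0)
Re = rng.uniform(1e3, 5e4, 200)          # Reynolds numbers
Pr = rng.uniform(1.0, 5.0, 200)          # Prandtl numbers
Nu = 0.023 * Re**0.8 * Pr**0.4 * rng.normal(1.0, 0.02, 200)

# Linearize: log Nu = log a + b log Re + c log Pr, then solve by
# ordinary least squares, as a regression method might.
X = np.column_stack([np.ones_like(Re), np.log(Re), np.log(Pr)])
coef, *_ = np.linalg.lstsq(X, np.log(Nu), rcond=None)
a_fit, b_fit, c_fit = np.exp(coef[0]), coef[1], coef[2]
```

The fitted coefficients recover the generating values closely, which is the basic sanity check one would also apply before implementing such a correlation in a system code.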

ESSAYS ON ASSET PRICING AND MARKET AUCTIONS
Kaiser, Gabriel UL

Doctoral thesis (2020)

Design and Verification of Specialised Security Goals for Protocol Families
Smith, Zachary Daniel UL

Doctoral thesis (2020)


Communication protocols form a fundamental backbone of our modern information networks. These protocols provide a framework to describe how agents (computers, smartphones, RFID tags and more) should structure their communication. As a result, the security of these protocols is implicitly trusted to protect our personal data. In 1997, Lowe presented ‘A Hierarchy of Authentication Specifications’, formalising a set of security requirements that might be expected of communication protocols. The value of these requirements is that they can be formally tested and verified against a protocol specification. This allows a user to have confidence that their communications are protected in ways that are uniformly defined and universally agreed upon. Since that time, the range of objectives and applications of real-world protocols has grown. Novel requirements, such as checking the physical distance between participants or evolving trust assumptions about intermediate nodes on the network, mean that new attack vectors are found on a frequent basis. The challenge, then, is to define security goals which will guarantee security even when the nature of these attacks is not known. In this thesis, a methodology for the design of security goals is created. It is used to define a collection of specialised security goals for protocols in multiple different families, by considering tailor-made models for these specific scenarios. For complex requirements, theorems are proved that simplify analysis, allowing the verification of security goals to be efficiently modelled in automated prover tools.

Smart Electrical and Thermal Energy Supply for Nearly Zero Energy Buildings
Rafii-Tabrizi, Sasan UL

Doctoral thesis (2020)


The European Union (EU) intends to reduce greenhouse gas emissions to 80-95 % below 1990 levels by 2050. To achieve this goal, the EU focuses on higher energy efficiency, mainly within the building sector, and a share of renewable energy sources (RES) of around 30 % in gross final energy consumption by 2030. In this context, the concept of nearly zero-energy buildings (nZEB) is both an emerging and relevant research area. Balancing energy consumption with on-site renewable energy production in a cost-effective manner requires suitable energy management systems (EMS) using demand-side management strategies. This thesis develops an EMS using certainty equivalent (CE) economic model predictive control (EMPC) to optimally operate the building energy system with respect to varying electricity prices. The proposed framework is a comprehensive mixed integer linear programming model that uses suitable linearised grey-box models and purely data-driven model approaches to describe the system dynamics. For this purpose, a laboratory prototype is available which is capable of covering most building-relevant types of energy, namely thermal and electrical energy. Thermal energy for space heating, space cooling and domestic hot water is buffered in thermal energy storage systems. A dual-source heat pump provides thermal energy for space heating and domestic hot water, whereas an underground ice storage covers space cooling. The environmental energy sources of the heat pump are the ice storage or wind infrared sensitive collectors; the collectors are further used to regenerate the ice storage. Photovoltaic panels produce electrical energy which can be stored in a battery storage system, and the electrical energy system is capable of selling electricity to and buying it from the public power grid. The laboratory test bench interacts with a virtual building model integrated into the building simulation software TRNSYS Simulation Studio. 
The EMS prototype is tested and validated in various simulations and under close-to-real-life laboratory conditions. The different test scenarios are generated using the typical-day approach for each season.
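As a toy illustration of certainty-equivalent dispatch under known prices — far simpler than the thesis's MILP-based EMPC, and with invented numbers throughout — the sketch below enumerates charge/idle/discharge decisions for a small battery over a four-hour horizon and picks the cheapest feasible plan.

```python
from itertools import product

# Toy certainty-equivalent dispatch: per hour, the battery charges (+1),
# idles (0) or discharges (-1) one unit of energy, covering a fixed load
# at minimum cost under known hourly prices. All numbers are invented.
prices = [0.10, 0.08, 0.25, 0.30]   # EUR/kWh over a 4-hour horizon
load = [1.0, 1.0, 1.0, 1.0]         # kWh demand per hour
capacity = 2.0                      # kWh usable battery capacity

def cost(plan):
    soc, total = 0.0, 0.0
    for price, demand, action in zip(prices, load, plan):
        soc += action
        if not (0.0 <= soc <= capacity):
            return None             # infeasible: state of charge out of bounds
        grid_import = demand + action   # charge adds to, discharge offsets, load
        if grid_import < 0:
            return None             # no export in this toy model
        total += price * grid_import
    return total

best = min((plan for plan in product((-1, 0, 1), repeat=4)
            if cost(plan) is not None), key=cost)
```

The optimum charges during the two cheap hours and discharges during the two expensive ones; a real EMPC solves the same kind of trade-off with a MILP solver, a receding horizon and far richer system dynamics.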

Health, Well-Being and Health Behavior among Immigrant Adolescents in Social Context
Kern, Matthias Robert UL

Doctoral thesis (2020)


This dissertation is guided by an overarching interest in integrating social-environmental factors into models of immigrant adolescent health, well-being and health behavior. While such ecological models are enjoying increasing popularity within health research as a whole, research concerned with immigrant adolescents in particular has as yet paid little attention to social-environmental factors. To address this gap in the literature, and informed by an ecological perspective, the current dissertation focusses on the role of social-environmental factors in immigrant adolescent health, well-being and health behavior. All of the studies compiled in this dissertation seek to illustrate, by way of example, the relevance of one of the investigated social contexts (school class, receiving country, origin country) by assessing the role that particular factors pertaining to that context play for a particular health-related outcome.

Generalized Langevin equations and memory effects in non-equilibrium statistical physics
Meyer, Hugues UL

Doctoral thesis (2020)


The dynamics of many-body complex processes is a challenge that many scientists from various fields have to face. Reducing the complexity of systems involving a large number of bodies in order to reach a simple description for observables capturing the main features of the process is a difficult task, for which different approaches have been proposed over the past decades. In this thesis we introduce new tools to describe the coarse-grained dynamics of arbitrary observables in non-equilibrium processes. Following the projection operator formalisms introduced first by Mori and Zwanzig, and later on by Grabert, we first derive a non-stationary Generalized Langevin Equation that we prove to be valid in a wide spectrum of cases. This includes in particular driven processes as well as explicitly time-dependent observables. The equation exhibits a priori memory effects, controlled by a so-called non-stationary memory kernel. Because the formalism does not in general provide extensive information about the memory kernel, we introduce a set of numerical methods aimed at evaluating it from Molecular Dynamics simulation data. These procedures range from simple dimensionless estimations of the strength of the memory to the determination of the entire kernel. Again, the methods introduced are very general and require as input a small number of quantities directly computable from numerical or experimental time series. We finally conclude this thesis by using the projection operator formalisms to derive an equation of motion for work and heat in dissipative processes. This is done in two different ways, either by using well-known integral fluctuation theorems, or by explicitly splitting the dynamics into adiabatic and dissipative parts.
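A schematic of the kind of non-stationary generalized Langevin equation described here; the thesis's exact form and conventions may differ.

```latex
% Non-stationary GLE for a coarse-grained observable A(t):
% \omega(t) is a drift term, K(t,\tau) the non-stationary memory
% kernel, and \eta(t) the fluctuating force. Notation is schematic.
\frac{\mathrm{d}A(t)}{\mathrm{d}t}
  = \omega(t)\,A(t)
  + \int_{0}^{t} K(t,\tau)\,A(\tau)\,\mathrm{d}\tau
  + \eta(t)
```

In the stationary (Mori) case the kernel depends only on the time difference, K(t, τ) = K(t − τ); the non-stationary kernel K(t, τ) is what the thesis's numerical methods set out to estimate from simulation data.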

POST-COMMUNIST EUROPE AND THE THEORY OF RECOGNITION: MODERN STRUGGLES FOR RECOGNITION OF THE CROATIAN PEOPLE
Bebić, Džoen Dominique UL

Doctoral thesis (2020)


Political philosophy has been analysing various political scenarios and governments for centuries. It is therefore very surprising that, compared to other disciplines, contemporary political philosophy has contributed little to the analysis of the social experiences of the Croatian people during the war of 1990 to 1995 and the events leading up to it. This research therefore sets out to use the philosophical theory of recognition (and reification) for the extremely difficult task of analysing the social experiences and struggles for recognition of the Croatian people from the Habsburg monarchy until the end of the war in the former Yugoslavia. As recognition as a concept was elaborated by various philosophers, this research presents different concepts of recognition developed by Rousseau, Fichte, Hegel and Taylor, while focusing on the recognition theory conceptualised by Honneth. This allows for the reconstruction of the evolution of the theory of recognition and the presentation of the interconnectedness of these different interpretations. Through these presentations, and the subsequent arguments and illustrations demonstrating their inadequacy for grasping the complex social experiences of the Croatian people throughout the different time periods, only Honneth’s theory of recognition can ultimately capture and contextualise all the different forms of disrespect the Croatian people faced, as well as the struggles for social appreciation of Croatian culture and language and for legal recognition of their right of political participation that followed from the deeply felt psychological consequences of the long-term disrespect endured. The extremely violent war in the former Yugoslavia, however, falls outside what recognition theory can offer. This is where Honneth’s interpretation of Lukács’s reification theory comes to the fore. 
Using Honneth’s interpretation of reification, the social experiences of the Croatian people during the war on Croatian territory (1990-1992) and on the territory of Bosnia-Herzegovina (1992-1995) are analysed through the three forms of reification: intersubjective reification of people, objective reification of their environment, and self-reification of the perpetrators. As relations between the Serbian and Croatian populations of Croatia and Bosnia-Herzegovina remain rather tense to this day, this research also adopts Honneth’s conditions for peace and reconciliation between two states. While taking into account the valuable and important attempts at peace and reconciliation in the region, this research tries to offer an additional path of reconciliation between the Serbian and Croatian states and peoples in Croatia and in Bosnia-Herzegovina. Future joint research between representatives of the nations that were part of the former Yugoslavia would allow for an objectification of the existentially subjective experiences of all the different nations and would subsequently also offer a new path of reconciliation in the region.

Detailed reference viewed: 71 (20 UL)
Full Text
Constant curvature surfaces and volumes of convex co-compact hyperbolic manifolds
Mazzoli, Filippo UL

Doctoral thesis (2020)

We investigate the properties of various notions of volume for convex co-compact hyperbolic 3-manifolds and their relations with the geometry of Teichmüller space. We prove a first-order variation formula for the dual volume of the convex core, as a function over the space of quasi-isometric deformations of a convex co-compact hyperbolic 3-manifold. For quasi-Fuchsian manifolds, we show that the dual volume of the convex core is bounded from above by a linear function of the Weil-Petersson distance between the pair of hyperbolic structures on the boundary of the convex core. We prove that, as we vary the convex co-compact structure on a fixed hyperbolic 3-manifold with incompressible boundary, the infimum of the dual volume of the convex core coincides with the infimum of the Riemannian volume of the convex core. We study various properties of the foliation by constant Gaussian curvature surfaces (k-surfaces) of convex co-compact hyperbolic 3-manifolds. We present a description of the renormalized volume of a quasi-Fuchsian manifold in terms of its foliation by k-surfaces. We show the existence of a Hamiltonian flow over the cotangent space of Teichmüller space whose flow lines correspond to the immersion data of the k-surfaces sitting inside a fixed hyperbolic end, and we determine a generalization of McMullen’s Kleinian reciprocity, again by means of the foliation by constant Gaussian curvature surfaces.

Detailed reference viewed: 141 (11 UL)
Full Text
Blockchain-enabled Traceability and Immutability for Financial Applications
Khan, Nida UL

Doctoral thesis (2020)

The dissertation explores the efficacy of exploiting the transparency and immutability characteristics of blockchain platforms in a financial ecosystem. It elaborates on blockchain technology in a succinct manner, which serves as the foundation for comprehending the contributions of the present research work. The dissertation gives a verified mathematical model, derived using Nash equilibrium, to function as a framework for blockchain governance. The work elucidates the design, implementation and evaluation of a management plane to monitor and manage blockchain-based decentralized applications. The dissertation also addresses the problem of data privacy through the development and evaluation of a management plane for differential privacy preservation using smart contracts. Further, the research work discusses the compliance of the privacy management plane with GDPR using a permissioned blockchain platform. The dissertation pioneers an implementation-based, comparative, and exploratory analysis of the tokenization of ethical investment certificates. It also verifies the utility of blockchain for solving some prevalent issues in social finance, which it accomplishes through the development and testing of a blockchain-based donation application. A qualitative review of the economic impact of blockchain-based micropayments has also been conducted; this discussion includes a proposition for extending access to blockchain-based financial services to the underbanked and unbanked. The work concludes with a hypothetical model of a financial ecosystem, depicting the deployment of the major contributions of this dissertation.

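The abstract above mentions a governance framework derived using Nash equilibrium but does not reproduce the model. As a minimal, hedged illustration of the underlying reasoning, the sketch below checks which pure-strategy profiles of a hypothetical two-player "participate vs. defect" governance game are Nash equilibria (all payoffs and names are invented, not taken from the thesis):

```python
import itertools

def is_nash(payoffs_a, payoffs_b, strategy):
    """A pure profile (i, j) is a Nash equilibrium when neither player
    can improve their own payoff by a unilateral deviation."""
    i, j = strategy
    # Player A deviates over rows; player B deviates over columns.
    best_a = all(payoffs_a[i][j] >= payoffs_a[k][j] for k in range(len(payoffs_a)))
    best_b = all(payoffs_b[i][j] >= payoffs_b[i][k] for k in range(len(payoffs_b[0])))
    return best_a and best_b

# Hypothetical 2x2 game with prisoner's-dilemma-style payoffs.
A = [[3, 0], [5, 1]]   # payoffs for player A (rows)
B = [[3, 5], [0, 1]]   # payoffs for player B (columns)

equilibria = [s for s in itertools.product(range(2), range(2)) if is_nash(A, B, s)]
print(equilibria)  # -> [(1, 1)]: mutual defection is the only equilibrium
```

A governance model would replace the toy payoff matrices with utilities of blockchain participants; the equilibrium check itself is unchanged.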
Detailed reference viewed: 364 (5 UL)
Full Text
Blockchain Technology for Data Sharing in the Banking Sector
Norvill, Robert UL

Doctoral thesis (2020)

Detailed reference viewed: 47 (3 UL)
Full Text
Inkjet-printed piezoelectric films for transducers
Godard, Nicolas UL

Doctoral thesis (2020)

Lead zirconate titanate (PZT) thin films are a popular choice for piezoelectric devices such as microelectromechanical systems, micro-pumps, micro-mirrors or energy harvesters. Various fabrication techniques exist for the deposition of PZT in the form of thin films. Physical vapor deposition (PVD) methods are particularly cost-intensive, as they require vacuum conditions and expensive infrastructure. Fabrication costs can be decreased by the use of chemical solution deposition (CSD), in which the metal precursors are dispersed in a solvent medium and coated onto a substrate. Thermal treatments convert the liquid precursor into a functional solid film. Spin coating is a conventional coating technique allowing for the deposition of homogeneous layers over large-area substrates. However, it is inherently wasteful, as most of the precursor material is spun off the substrate during coating. In addition, as spin coating results in complete coverage of the substrate, layer patterning requires lithography, which adds extra steps and costs to the overall process. Inkjet printing is an additive manufacturing technique that has the potential to address both of these issues, thus further decreasing manufacturing costs and the associated ecological footprint. The working principle of inkjet printing can be described as the deposition of individual ink droplets at digitally determined locations on the substrate surface, which then merge into a continuous film. Inkjet printing is compatible with CSD processing of PZT thin films, as demonstrated by previous work in the field. However, adapting standard CSD processing to inkjet printing comes with several challenges, which have to be considered to obtain state-of-the-art functional PZT layers.
In the present work, we explore several issues related to the processing of PZT thin films by inkjet printing and provide possible solutions to address them, in ways not previously described in the state of the art. In particular, we describe a novel strategy that uses inkjet-printed alkanethiolate-based self-assembled monolayers for direct patterning of PZT thin films on platinized silicon. Then, we present a systematic study of the pyrolysis step of the process, which enabled us to print dense and textured layers with state-of-the-art electrical properties. We also developed a proof-of-concept piezoelectric energy harvesting device based on inkjet-printed PZT films. Finally, we present a comparative study in which we identified an alternative solvent for CSD processing of PZT thin films.

Detailed reference viewed: 62 (4 UL)
Full Text
The compatibility of ISDS provisions in bilateral investment treaties and the Energy Charter Treaty with EU law
de Boeck, Michael Karel Marc UL

Doctoral thesis (2020)

Since the 1960s, EU Member States have concluded a vast network of international investment agreements (IIAs). Such treaties typically offer substantive investment protection standards and investor-state dispute settlement (ISDS) provisions. It is disputed whether those treaties conflict with EU law, and whether and by whom they can still be relied on against EU Member States. It is submitted that the relationship between international investment law and EU law can only be understood by clearly separating the issues of the international validity and applicability of those treaties from their effects in the EU legal order. The first is determined by international conflict norms, while the second is determined only by reference to EU law itself. The thesis is therefore structured in three parts. The first approaches the interaction between EU law and the IIAs through the lens of the conflict norms of international law. After considering the framework of “harmonious interpretation” and “successive treaty” conflicts, the thesis concludes that, despite the existence of a certain overlap or conflict between the EU treaties and IIAs, the ISDS provisions of IIAs remain valid and applicable under international law. Investors can therefore continue to rely on them against EU Member States. In the second part, the thesis considers the status and legal value of IIAs in the EU legal order in light of the internal developments of EU law. The post-Lisbon transfer of competence on Foreign Direct Investment to the EU raised many questions with few answers. The second part therefore sets out the framework of EU law conflict norms and the legal effect of the ECT and BITs in the EU legal order. It is concluded that the BITs enjoy only limited recognition in the EU legal order through Article 351 TFEU, which is, however, bounded by the notion of the autonomy of EU law. In the third part, the thesis examines whether the ISDS provisions of the BITs and the ECT are compatible with the autonomy of EU law.
After constructing the role, meaning and requirements of autonomy in relation to international dispute resolution, the thesis concludes that the ISDS provisions in the ECT and BITs are incompatible with the autonomy of EU law. Thus, the ISDS provisions cannot be applied in the EU legal order, but remain valid internationally.

Detailed reference viewed: 61 (1 UL)
Full Text
A Real-World Flexible Job Shop Scheduling Problem With Sequencing Flexibility: Mathematical Programming, Constraint Programming, and Metaheuristics
Tessaro Lunardi, Willian UL

Doctoral thesis (2020)

In this work, the online printing shop scheduling problem is considered. This challenging real-world scheduling problem, which emerged in the present-day printing industry, corresponds to a flexible job shop scheduling problem with sequencing flexibility that includes several complicating specificities, such as resumable operations, periods of unavailability of the machines, sequence-dependent setup times, partial overlapping between operations with precedence constraints, and fixed operations, among others. In the present work, a mixed integer linear programming model, a constraint programming model, and heuristic methods such as local search and metaheuristics for the minimization of the makespan are presented. Modeling the problem serves a twofold purpose. On the one hand, the problem is precisely defined. On the other hand, the capabilities and limitations of commercial software for solving the models are analyzed. Numerical experiments show that the commercial solver is able to optimally solve only a fraction of the small-sized instances when considering the mixed integer linear programming formulation. With the constraint programming formulation of the problem, medium-sized instances are optimally solved, and feasible solutions for large-sized instances of the problem are found. Ad hoc heuristic methods, such as local search and metaheuristic approaches that fully exploit the structure of the problem, are proposed and evaluated. Based on a common representation scheme and neighborhood function, trajectory and population-based metaheuristics are considered. Extensive numerical experiments with large-sized instances show that the proposed metaheuristic methods are suitable for solving practical instances of the problem, and that they outperform the half-heuristic-half-exact off-the-shelf constraint programming solver.
Numerical experiments with classical instances of the flexible job shop scheduling problem show that the introduced methods are also competitive when applied to this particular case.
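The problem class described above can be illustrated with a deliberately stripped-down sketch: evaluating the makespan of a given dispatch order for a tiny job-shop instance. This is not the thesis's formulation (it ignores sequencing flexibility, setup times, overlapping and machine unavailability); all job, machine and duration values are invented:

```python
# Minimal makespan evaluation for a toy job-shop schedule. Each operation is
# (job, op_index, machine, duration); within a job, operation k must finish
# before operation k+1 starts, and each machine runs one operation at a time.

def makespan(schedule):
    """schedule: operations listed in global dispatch order."""
    job_ready = {}      # earliest start imposed by job precedence
    machine_ready = {}  # earliest start imposed by machine availability
    end = 0.0
    for job, op, machine, dur in schedule:
        start = max(job_ready.get(job, 0.0), machine_ready.get(machine, 0.0))
        finish = start + dur
        job_ready[job] = finish
        machine_ready[machine] = finish
        end = max(end, finish)
    return end

# Two jobs, two machines, two operations per job.
sched = [
    ("J1", 0, "M1", 3.0),
    ("J2", 0, "M2", 2.0),
    ("J1", 1, "M2", 2.0),
    ("J2", 1, "M1", 4.0),
]
print(makespan(sched))  # -> 7.0
```

A metaheuristic of the kind the abstract describes would search over dispatch orders (and machine assignments) using such an evaluation as its objective function.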

Detailed reference viewed: 156 (27 UL)
Full Text
LehrerInnenprofessionalisierung in einer digitalen, videogestützten Lernumgebung [Teacher professionalisation in a digital, video-supported learning environment]
Arimond, Ruth Annemarie UL

Doctoral thesis (2020)

The sustainable use of digital media and the conception of effective task designs are relevant aspects for the learning outcomes in the training of future teachers. To model professional action competence, video-supported learning opportunities are increasingly used today to train professional perception and the reflexive confrontation with new knowledge, convictions and experiences from teaching. So far, however, there is a lack of empirical studies that systematically take into account effective design elements of innovative learning environments for the promotion of reflexive practice, such as blended learning, social video learning and ePortfolios.

Detailed reference viewed: 113 (3 UL)
Fatigue and fracture of rubber: Accelerated and experimentally validated phase-field damage models
Loew, Pascal Juergen UL

Doctoral thesis (2020)

Rubber behaves in a very particular way. Anyone who has stretched a rubber band knows that large elastic deformations of over 400% can be attained with minimal force. In order to utilize the full potential of the material and to improve the performance of a product, it is imperative to accurately model the material's failure. This thesis focuses on the development, experimental validation and application of a fatigue damage model for rubber. Cohesive zone models or nodal enrichment strategies, which treat cracks as sharp discontinuities, require a priori knowledge of the crack path or are limited in their ability to handle complex crack phenomena such as branching and coalescence. On the other hand, the results of standard continuum damage models are affected by the mesh size. Phase-field damage models avoid sharp discontinuities by adding a smooth damage process zone to the crack, whose width is controlled by a length scale parameter. Because of this pure continuum description, the mentioned complex phenomena are simulated without additional effort. Furthermore, the introduction of the length scale ensures mesh independence during strain softening. Despite these advantages, phase-field models describing the failure of rubber parts are still limited. Firstly, most published works focus only on monotonic loading; fatigue damage of rubber has never been considered in a phase-field model. Secondly, the computational burden is so large that only examples with limited practical relevance can be simulated. Thirdly, there is insufficient experimental validation in the literature, and the process of parameter identification is not adequately addressed. For instance, the selection of the length scale parameter is often arbitrary. This thesis collects three works that have been presented to the scientific community in an effort to overcome the mentioned problems.
Because the fracture resistance of rubber is a function of the loading rate, the first work presents a rate-dependent phase-field damage model for rubber at finite strains. Rate dependency is considered in the constitutive description of the bulk as well as in the damage driving force. All the material parameters are identified from experiments. Particular attention is paid to the length scale parameter, which is calibrated by means of local strain measurements close to the crack tip obtained via digital image correlation. The second work extends the phase-field damage model so that fatigue failure can be predicted. For this purpose, an additional fatigue damage source depending on an accumulated load history variable is introduced. Thermodynamic consistency is demonstrated by measuring the energy storage and dissipation of the various model components. Dedicated fatigue experiments are conducted in order to identify the additional (fatigue) parameters. The extended model reproduces Wöhler curves and Paris-type fatigue crack growth. Using explicit and implicit cycle jump schemes, the third work focuses on the reduction of the computation time. A finite number of load cycles is simulated and the results for the next cycles are extrapolated. By alternating simulations and jumps until component failure is reached, the total number of simulated cycles is significantly reduced with respect to full cycle-by-cycle simulations. As the size of the cycle jump governs the acceleration of the simulations, but also their numerical stability, an adaptive cycle jump scheme for the implicit acceleration framework is proposed; consequently, no manual adjustment of the step size is necessary. Additional experiments validate both the numerical model and the identified material parameters. Finally, the fatigue phase-field damage model is used in two industry-relevant examples demonstrating how this technology creates immediate benefits in product development.
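The explicit cycle-jump idea described above (simulate a cycle to sample the damage growth rate, then linearly extrapolate the damage variable over a block of skipped cycles) can be sketched on a scalar toy damage law. The growth law, jump size and failure threshold below are invented stand-ins for the phase-field variable and its evolution, not the thesis's model:

```python
# Explicit cycle-jump acceleration, sketched on a scalar toy damage law.

def damage_rate(d):
    """Hypothetical per-cycle damage growth; accelerates as damage accumulates."""
    return 1e-4 * (1.0 + 5.0 * d)

def simulate_with_jumps(jump=200, d_fail=1.0):
    """Alternate simulated cycles and linear extrapolation jumps until failure.
    Returns (cycles to failure, cycles actually simulated)."""
    d, cycle, simulated = 0.0, 0, 0
    while d < d_fail:
        rate = damage_rate(d)   # "simulate" one cycle to sample dD/dN
        d += rate
        cycle += 1
        simulated += 1
        # Explicit jump: extrapolate damage linearly over the next `jump` cycles,
        # but only while the extrapolation stays safely below the failure threshold.
        if d + rate * jump < d_fail:
            d += rate * jump
            cycle += jump
    return cycle, simulated

cycles_to_failure, cycles_simulated = simulate_with_jumps()
print(cycles_to_failure, cycles_simulated)
```

An adaptive variant, as proposed in the third work, would shrink or grow `jump` based on an error estimate instead of keeping it fixed.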

Detailed reference viewed: 43 (4 UL)
WAVEFORM DESIGN FOR AUTOMOTIVE JOINT RADAR-COMMUNICATION SYSTEM
Dokhanchi, Sayed Hossein UL

Doctoral thesis (2020)

Detailed reference viewed: 124 (21 UL)
Full Text
Essays in risk and experimental finance
Xanalatou, Sotiria UL

Doctoral thesis (2020)

Detailed reference viewed: 39 (4 UL)
Full Text
Cultural Psychological Re-Formulation of Ego-Defence into Ego-Construction
Mihalits, Dominik Stefan UL

Doctoral thesis (2020)

The developing self is a complex concept that recurrently occupies a variety of academic disciplines, and that is yet to be clarified from a holistic, transdisciplinary standpoint. For instance, psychoanalytical theories offer detailed insight into the intraindividual psychodynamics of personal development. Cultural psychological theories, on the other hand, stress a culture’s influence on a person’s day-to-day development and advance a detailed account of semiotic, i.e., culturally mediated, sign construction that underlies psychological processes and results from them at the same time. Importantly, what a cultural psychological standpoint therefore offers is a view on culture that withdraws from conceiving it as an entity of its own (e.g., one that could be calculated as an external factor) and instead views it as deeply entangled with the formation of personality development. Both theory strands thus each complexly address ‘sides of the same coin’, namely the phenomenon of the developing self, but have not yet been systematically linked with each other from a holistic perspective. Therefore, this thesis addresses the question of how an integrative perspective on psychoanalytical psychodynamics can be synthesized with cultural psychological metatheory on development. More precisely, I theoretically explore how psychoanalytical theories of ego defence mechanisms can help further an analysis of ego construction. By using the concept of ego construction, I argue that the cultural psychological construction processes entangled with people’s engagement with their culturally laden environments can further elaborate psychoanalytical theories of ego defence. To approach ego defence, this project departs from Freudian psychoanalytic theory. It draws on the differentiation between needs and wishes, which leads to an inner tension, where defence mechanisms help in understanding the tension that arises upon delayed gratification.
Pushing beyond this traditional perspective and assuming a strong entanglement of needs and wishes, defence needs to be recognized as an ongoing, continuous and recurring process rather than as a set of mechanisms. It is a central conclusion of this Ph.D. project that concepts of defence must therefore leave their descriptive level to overcome the problem of cause and effect, allowing an understanding of development as an open psychodynamic and cultural system.

Detailed reference viewed: 155 (7 UL)
Cultural identity and values in intergenerational movement: The multicultural case of the Grand-Duchy of Luxembourg
Barros Coimbra, Stephanie UL

Doctoral thesis (2020)

Migration flows have led to an increase in questions about the multiple cultural influences on individuals. The resulting demographic changes raise, in many host societies, essential questions related to national belonging, and thus to cultural identity and value systems. While migrating to a new cultural environment, migrant individuals face several challenges and have to negotiate several developmental tasks using self-regulatory strategies, with correspondingly different psychological outcomes. These issues become even more important in a country such as Luxembourg, with its high migrant proportion (47%; Statec, 2019). Little is still known about how second-generation adults who have grown up in immigrant families negotiate a double cultural identity, or about their value profiles compared to the local populations of the country of origin and the receiving country. Four diverse subsamples were used out of the broader IRMA project pool, depending on the different objectives of the four studies. In total, N = 506 participants from three cultural subgroups (LU natives, PT migrants in Luxembourg, and PT natives in Portugal) participated in the quantitative part of the IRMA project (LuN: n = 179; PtM: n = 209; PtN: n = 118), and N = 20 took part in the qualitative part (n = 10 PtM dyads & n = LuN dyads). Study 1 highlighted the importance of the migration experience as a life-disruptive event that has impacts on individual and family cultures, as well as on value systems, during the life of migrant families. Study 2, looking specifically at PT migrant families, found a generational gap in terms of adult children’s higher attachment to the receiving culture as well as stronger tendencies towards a compatible identity orientation compared to their respective parents. However, the qualitative part of Study 2 revealed ambivalent feelings about double cultural belonging amongst the Portuguese second-generation adult children.
Study 3 therefore focused on the latter and identified four ways of dealing with double cultural frames - blended, alternated and separated (Phinney & Devich-Navarro, 1997) - expanding the model by identifying a fourth cluster of ambivalent cultural identity. In addition, the study analyzed how these cultural identity profiles enabled participants to achieve personally or socially meaningful goals and values. Blended biculturals mainly used primary regulatory control strategies, which were linked to the most positive psychological outcomes (higher self-esteem and well-being, and low acculturative stress). The ambivalent cluster was the least successful in terms of psychological outcomes (low self-esteem and well-being, and high acculturative stress), using both primary and secondary compensatory regulatory strategies. Study 4, an intercultural comparison between two family generations – one adult child and two elder parents – within three different cultural subgroups – LuN, PtM and PtN – aimed to better disentangle the effects of family, culture and immigration, and thus to investigate the different cultural influences and messages reflected in the processes of value transmission and in value profiles. Overall, the findings of Study 4 revealed the existence of an intergenerational gap between elder parents and their respective adult children; the presence of a cultural gap between the three cultural subgroups studied, which could be explained by both culture of origin and migration, with specifically an acculturation gap in the subsample of Portuguese migrants; and a moderate relative intergenerational transmission across cultures.
The latter thus allows for a certain cultural persistence and continuity of a society and its cultural system (Trommsdorff et al., 2004), while allowing for a cultural flexibility over generations that could be important for family identity and beneficial for well-being, far more so than a mere exact reproduction of values over generations (Barni & Donato, 2018).

Detailed reference viewed: 113 (14 UL)
Full Text
OPTICAL DEFECT SPECTROSCOPY IN CUINS2 THIN FILMS AND SOLAR CELLS
Lomuscio, Alberto UL

Doctoral thesis (2020)

Pure-sulphide Cu(In,Ga)S2 solar cells have reached a certified power conversion efficiency as high as 15.5%. While this record performance has been achieved by growing the semiconducting absorber at very high temperature with a copper-deficient composition, all previous records were based on chalcopyrite films deposited under Cu excess. Still, this world record is far from the theoretical power conversion efficiency achievable in a single-junction solar cell for this semiconductor (about 30%), which has a tunable band gap between 1.5 and 2.4 eV. This thesis aims to gain insight into the optoelectronic properties of this semiconductor, particularly CuInS2, looking at their variation as a function of the deposition temperature and of the absorber composition. The investigations are carried out mainly by photoluminescence (PL) spectroscopy, which allows measuring the quasi-Fermi-level splitting (QFLS), an upper limit of the maximum open-circuit voltage (VOC) an absorber is capable of. PL spectroscopy is also used to gain insight into the electronic defects, both the shallow ones, which contribute to the doping, and the deep ones, which enhance non-radiative recombination. With increasing Cu content in the as-grown composition, the morphology and microstructure of the thin films improve, as they show larger grains and fewer structural defects than films deposited with Cu deficiency. The composition affects the QFLS as well, which is significantly higher for samples deposited under Cu excess, in contrast to the observations in selenide chalcopyrites. Increasing the process temperature also improves the QFLS, although absorbers grown under Cu deficiency are less affected, likely because of a lower sodium content in the high-temperature glass used as substrate. The QFLS increase correlates with the lowering of a deep-defect-related band, which manifests itself as a peak maximum at around 0.8 eV in room-temperature PL spectra.
In the literature, the low efficiencies exhibited by Cu(In,Ga)S2-based solar cells are often attributed to interface problems at the p-n junction, i.e. at the absorber-buffer layer interface. In this work, the comparison of the QFLS and VOC of pure-sulphide CIGS with those measured on selenides clearly points out that the lower efficiencies exhibited by the former are also caused by the intrinsically lower optoelectronic quality of Cu(In,Ga)S2 films. To shed light on the electronic structure, high-quality CuInS2 films are investigated in depth by means of low-temperature PL. Four shallow defects are detected: one shallow donor at about 30 meV from the conduction band and three shallow acceptors at about 105, 145 and 170 meV from the valence band. The first of these acceptors dominates the band-edge luminescence of samples grown with a composition close to stoichiometry, whereas the second, deeper acceptor is characteristic of absorbers deposited in the Cu-rich regime. The deepest of these acceptors seems to be present over a wide range of compositions, although its luminescence is observable only for slightly Cu-poor samples with sodium incorporation during the deposition. The quality of the examined films allows the observation of phonon coupling of these shallow defects for the first time in this semiconductor. All these observations on shallow defects and their phonon-coupling behaviour made it possible to revise the defect model for this semiconductor. The findings of this thesis reveal the strong similarity of the shallow-defect structure with selenium-based compounds. On the other hand, the presence of deep defects in CuInS2 strongly limits the optoelectronic quality of the bulk material, causing the gap in power conversion efficiencies compared to low-band-gap Cu(In,Ga)Se2 solar cells, which show efficiencies above 23 %.

Investigating the potential of investing in fine stringed instruments as an alternative investment asset
Ortiz-Munoz, Angela UL

Doctoral thesis (2020)

Often seen as a passion project or part of a philanthropic venture, rare and fine stringed instruments offer an exciting option to diversify one’s investment portfolio while providing an opportunity for an exceptional long-term investment. Though historically rare violins have not been widely recognized as assets for investment, this category is gaining interest due to its steady increase in value, a lively international market, and a finite and diminishing supply. This study demonstrates that fine stringed instruments offer a steady return of approximately 3.7-6.9% per annum, with a dramatic percentage increase since the 1980s. In this thesis, the stringed-instrument public auction and private dealer markets are reviewed, the price dynamics are studied, and some fundamental intra-market-specific limitations are tackled in order to observe the true underlying returns of this asset. In order to build solid conclusions, the largest fine stringed instrument auction database has been developed, encompassing the period from 1850 until today, although the analysis focuses on the period from the 1980s until today, as this is when the demand and, consequently, the market for violins boomed.
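For context, compounding at the reported 3.7-6.9% per annum over the roughly four decades since the 1980s boom implies about a 4- to 14-fold appreciation. A quick illustration (the principal and holding period are arbitrary, not figures from the thesis):

```python
def compound_value(principal: float, annual_rate: float, years: int) -> float:
    """Future value of an asset appreciating at a fixed annual rate."""
    return principal * (1.0 + annual_rate) ** years

# A 100,000 (arbitrary currency) instrument bought in 1980, held 40 years:
low = compound_value(100_000, 0.037, 40)   # lower bound of the reported range
high = compound_value(100_000, 0.069, 40)  # upper bound of the reported range
print(f"3.7% p.a. -> {low:,.0f}")
print(f"6.9% p.a. -> {high:,.0f}")
```

The spread between the two bounds compounds into a very wide range of outcomes, which is why the thesis's correction for market-specific biases matters for estimating the true underlying return.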

On the practical security of white-box cryptography
Wang, Junwei UL

Doctoral thesis (2020)

Cryptography studies how to secure communications and information. The security of a cryptosystem depends on the secrecy of the underlying key. White-box cryptography explores methods to hide a cryptographic key in some software deployed in the real world. Classical cryptography only assumes that the adversary accesses the target cryptographic primitive in a black-box manner, in which she can only observe or manipulate the input and output of the primitive, but cannot know or tamper with its internal details. The gray-box model further allows an adversary to exploit key-dependent sensitive information leaked from the execution of physical implementations. All sorts of side-channel attacks exploit some physical information leakage, such as the power consumption of the device. The white-box model considers the worst-case scenario in which the adversary has complete control over the software and its execution environment. The goal of white-box cryptography is to securely implement a cryptographic primitive against such a powerful adversary. Although the scientific community has proposed some candidate solutions to build white-box cryptography, all have proven ineffective. Consequently, this problem has remained open for almost two decades since the concept was introduced. The continuous growth in market demand and the emerging potential applications have driven the industry to deploy secretly-designed proprietary solutions. Although this paradigm of achieving security through obscurity contradicts the widely accepted Kerckhoffs' principle in cryptography, this is currently the only option for white-box cryptography. Security experts have reported how gray-box attacks could be used to extract keys from several publicly available white-box implementations.
In a gray-box attack, the adversary adapts side-channel analysis techniques to the white-box context, i.e., to target computation traces made of noise-free runtime information instead of the noisy physical leakage. Gray-box attacks are generic since they do not require any a priori knowledge of the implementation and hence avoid costly reverse engineering. Some non-publicly scrutinized industrial white-box schemes on the market are believed to be under the threat of gray-box attacks. This thesis focuses on the analysis and improvement of gray-box attacks and the associated countermeasures for white-box cryptography. We first provide an in-depth analysis of why gray-box attacks are capable of breaking the classical white-box design based on table encodings. Next, we introduce a new gray-box attack named linear decoding analysis and show that linearly encoding sensitive information is insufficient to protect the cryptographic software. Afterward, we describe how to combine state-of-the-art countermeasures to resist gray-box attacks and comprehensively elaborate on the (in)effectiveness of these combined countermeasures in terms of computation complexity. Finally, we introduce a new attack technique that exploits the data dependency of the targeted implementation to substantially lower the complexity of the existing gray-box attacks on white-box cryptography. In addition to the theoretical analyses and new attack techniques introduced in this thesis, we report some attack experiments against practical white-box implementations. In particular, we could break the winning implementations of two consecutive editions of the well-known WhibOx white-box cryptography competition.
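The core idea behind linear decoding analysis can be sketched as follows: for each key guess, the attacker predicts a key-dependent sensitive bit per execution and tests whether that prediction lies in the GF(2) linear span of the recorded trace samples; only the correct guess yields a solvable system when the implementation protects values with linear encodings. A toy span-membership test (an illustrative helper, not the thesis's actual tooling):

```python
def gf2_solvable(columns, target):
    """Return True iff `target` lies in the GF(2) span of `columns`.

    columns: list of bit-vectors (lists of 0/1), one per trace sample;
    target:  predicted sensitive bit per execution (list of 0/1).
    """
    n = len(target)
    m = len(columns)
    # One augmented row per execution; last entry is the target bit.
    rows = [[col[i] for col in columns] + [target[i]] for i in range(n)]
    pivot_row = 0
    for col in range(m):
        # Find a row with a 1 in this column to use as pivot.
        for r in range(pivot_row, n):
            if rows[r][col]:
                rows[pivot_row], rows[r] = rows[r], rows[pivot_row]
                break
        else:
            continue
        # XOR-eliminate this column from every other row.
        for r in range(n):
            if r != pivot_row and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    # Inconsistent iff some row reduced to all-zero coefficients but target 1.
    return not any(not any(row[:-1]) and row[-1] for row in rows)
```

In a real attack the columns would be thousands of trace samples across many executions; the system stays solvable for the correct key guess precisely because linear encodings cannot hide linear combinations.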

Demountable composite beams: Analytical calculation approaches for shear connections with multilinear load-slip behaviour
Kozma, Andras UL

Doctoral thesis (2020)

The work carried out throughout the thesis focused on the behaviour of demountable composite beams in order to facilitate the integration of steel-concrete composite construction into the concept of the circular economy. There are several hindrances in the way of reuse when considering traditional composite structures. One of them is the method that current construction practice applies for connecting the concrete deck to the steel beam. The traditionally applied welded studs are advantageous in terms of structural performance; however, they do not provide the ability to dismount. In order to overcome this issue, different demountable shear connection types were investigated that use pretensioned bolted connections. The investigations included laboratory experiments in the form of push-out tests and full-scale beam tests. The experiments were complemented by numerical simulations and parametric studies. The experiments showed that the developed shear connections have a highly nonlinear load-slip behaviour. When these types of connections are applied in a composite beam, the nonlinearity of the shear connection causes a nonlinear load-deflection response already in the elastic phase. Analytical equations were derived for the description of the elastic properties of composite beams with nonlinear shear connection. For the calculation of the elastic deflections, an iterative procedure was developed. This method is capable of capturing the nonlinear load-deflection response. With the developed iterative method, the elastic deflections can be determined with a similar accuracy using spreadsheet calculations as using nonlinear finite element simulations. Due to the highly nonlinear behaviour of the tested shear connections, the basic assumptions of Eurocode 4 for the determination of the plastic moment resistance of composite beams with partial shear connection are no longer valid.
The code does not permit the use of equidistant shear connector spacing in this case, and the design needs to be conducted using fully elastic analysis. This would make the use of demountable shear connections complicated and uneconomic. In the face of these issues, the probability of the practical application of demountable and reusable composite structures would be very low. On the other hand, experiments and numerical simulations show that composite beams can develop plasticity even if a non-ductile shear connection is applied. In order to overcome these issues, a new calculation method was developed for the prediction of the plastic moment resistance of demountable composite beams. A simplified method was proposed based on the developed procedure by defining an effective shear resistance for the demountable shear connections. The effective shear resistance allows the current calculation method to be extended to demountable shear connections. In this way, the benefits of composite construction can be maintained while providing the possibility of reuse.
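An iterative elastic-deflection procedure of the kind described can be sketched as a fixed-point iteration on the connector secant stiffness: assume a stiffness, estimate the slip, read the connector force off the multilinear load-slip curve, update the secant stiffness, and repeat until convergence. Everything numeric below — the slip-demand estimate, the interaction-degree formula `eta = k/(k + k_ref)`, and the sample curve — is a hypothetical illustration, not the thesis's actual equations:

```python
def multilinear_force(slip, curve):
    """Connector shear force from a piecewise-linear load-slip curve.
    curve: ascending list of (slip, force) points after the origin;
    the force is held constant beyond the last point."""
    s0, f0 = 0.0, 0.0
    for s1, f1 in curve:
        if slip <= s1:
            return f0 + (f1 - f0) * (slip - s0) / (s1 - s0)
        s0, f0 = s1, f1
    return curve[-1][1]


def elastic_deflection(q, L, EI_steel, EI_full, curve, k_ref,
                       tol=1e-9, max_iter=100):
    """Fixed-point iteration on the connector secant stiffness k.
    Hypothetical model: slip demand ~ q*L^2/(100*k); interaction degree
    eta = k/(k + k_ref) interpolates between the bare-steel and the
    full-interaction flexural stiffness."""
    k = curve[0][1] / curve[0][0]          # initial elastic secant stiffness
    for _ in range(max_iter):
        slip = q * L**2 / (100.0 * k)      # assumed slip demand
        k_new = multilinear_force(slip, curve) / slip
        if abs(k_new - k) <= tol * k:      # secant stiffness converged
            k = k_new
            break
        k = k_new
    eta = k / (k + k_ref)                  # degree of shear interaction
    EI_eff = EI_steel + eta * (EI_full - EI_steel)
    return 5.0 * q * L**4 / (384.0 * EI_eff)   # midspan deflection, UDL
```

The same loop is easy to reproduce in a spreadsheet, which is the practical appeal of such a method over a nonlinear finite element run.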

Genetic regulators of ventral midbrain gene expression and nigrostriatal circuit integrity
Gui, Yujuan UL

Doctoral thesis (2020)

Complex traits are a fundamental feature of diverse organisms. Understanding the genetic architecture of a complex trait is arduous but paramount, because heterogeneity is prevalent in populations and often disease-related. Genome-wide association studies have identified many genetic variants associated with complex human traits, but they can only explain a small portion of the expected heritability. This is partially because human genomes are highly diverse, with large inter-individual differences. It has been estimated that every human differs from each other by at least 5 million variants. Moreover, many common variants with small effects can contribute to complex traits, but they cannot survive stringent statistical cutoffs given the currently available sample sizes. Mice are an ideal substitute. They are maintained under controlled conditions to minimize the variation introduced by the environment. Each mouse of an inbred strain is genetically identical, but different strains bear innate genetic heterogeneity between each other, mimicking human diversity. Hence, in this work we used inbred mouse strains to study the genetic variation of complex traits. We focused on the ventral midbrain, the brain region controlling motor functions and behaviors such as anxiety and fear learning that differ profoundly between inbred mouse strains. Such phenotypic diversity is directed by differences in gene expression that are controlled by cis- and trans-acting regulatory variants. A profound understanding of the genetic variation of the ventral midbrain and its related phenotypic differences could pave the way to apprehending the whole genetic makeup of its associated disease phenotypes, such as Parkinson’s disease and schizophrenia. Therefore, we set out to investigate the cis- and trans-acting variants affecting the mouse ventral midbrain by coupling tissue-level and cell type-specific transcriptomic and epigenomic data.
Transcriptomic comparison of the ventral midbrains of C57BL/6J, A/J and DBA/2J, three inbred strains segregated by ~6 million genetic variants, pinpointed PTTG1 as the only transcription factor significantly altered at the transcriptional level between the three strains. Pttg1 ablation on the C57BL/6J background led the midbrain transcriptome to shift closer to A/J and DBA/2J during aging, suggesting Pttg1 is a novel regulator of the ventral midbrain transcriptome. As the ventral midbrain is a mixture of cells, tissue-level transcriptomes cannot always reveal cell type-specific regulatory variation. Therefore, we set out to generate single-nuclei chromatin accessibility profiles of the ventral midbrains of C57BL/6J and A/J, providing a rich resource to study the transcriptional control of cellular identity and genetic diversity in this brain region. Data integration with existing single-cell transcriptomes predicted the key transcription factors controlling cell identity. Putative regulatory variants showed differential accessibility across cell types, indicating genetic variation can direct cell type-specific gene expression. Comparing chromatin accessibility between mice revealed potential trans-acting variation that can affect strain-specific gene expression in a given cell type. The diverse transcriptome profiles in the ventral midbrain can lead to phenotypic variation. The nigrostriatal circuit, bridging the ventral midbrain to the dorsal striatum via dopaminergic neurons, is an important pathway controlling motor activity. To search for phenotypes related to dopaminergic neurons, we measured the dopamine concentration in the dorsal striatum of eight inbred mouse strains. Interestingly, dopamine levels varied among strains, suggesting it is a complex trait linked to genetic variation in the ventral midbrain.
To understand the genetic variation contributing to dopamine level differences, we conducted quantitative trait locus (QTL) mapping with 32 Collaborative Cross (CC) strains and found a QTL significantly associated with the trait on chromosome X. As expression changes are likely to underlie the phenotypic variation, we leveraged our previous transcriptomic data from C57BL/6J and A/J to search for genes differentially expressed in the QTL locus. Col4a6 is the most likely QTL gene because of its 9-fold expression difference between C57BL/6J and A/J. Indeed, COL4A6 has been shown to regulate axogenesis during brain development. This coincides with our observation that A/J had less axon branching in the dorsal striatum than C57BL/6J, prompting us to propose that Col4a6 can regulate the axon formation of dopaminergic neurons in embryonic stages. Our study provides a comprehensive overview of cis- and trans-regulatory variants affecting expression phenotypes in the ventral midbrain, and of how they could introduce phenotypic differences associated with this brain region. In addition, our single-nuclei chromatin landscapes of the ventral midbrain are a rich resource for analyses of gene regulation and cell identity. Our work paves the way to apprehending the full genetic makeup of gene expression control in the ventral midbrain, which is important for understanding the genetic background of midbrain-associated phenotypes.

Microstructure-based multiscale modeling of mechanical response for materials with complex microstructures
Kabore, Brice Wendlassida UL

Doctoral thesis (2020)

Complex microstructures are found in several materials, especially in biological tissues, geotechnical materials and many manufactured materials, including composites. These materials are difficult to handle with classical numerical analysis tools, and the need to incorporate more details of the microstructure has become apparent. This thesis focuses on the microstructure-based multi-scale modeling of the mechanical response of materials with complex microstructures, whose mechanical properties are inherently dependent on their internal structure. The conditions of interest are large displacements and high-rate deformation. This work contributes to the understanding of the relevance of microstructure information to the macroscopic response. A primary application of this research is the investigation and modeling of snow behavior; it has been extended to modeling the impact response of concrete and composites. In the first part, a discrete approach for fine-scale modeling is applied to study the behavior of snow under the conditions mentioned above. Applications of this modeling approach to concrete and composites can be found in the appendices. The fine-scale approach presented herein is based on the coupling of the Discrete Element Method with aspects of beam theory. This fine-scale approach has proven successful in modeling micro-scale processes found in snow. The micro-scale processes are mainly intergranular friction, intergranular bond fracture, creep, sintering, cohesion, and grain rearrangement. These processes not only influence the overall response of the material but also induce permanent changes in its internal structure. Therefore, the initial geometry considered during numerical analysis should be updated after each time or loading increment before further loading.
Moreover, when the material matrix is partially granular and partially continuum, the fluctuating grain micro-inertia caused by debonding, cracking and contact has a significant effect on the macroscopic response, especially under dynamic loading. Consequently, the overall rate- and history-dependent behavior of the material is more easily captured by discrete models. Discrete modeling has proven to be an efficient approach for acquiring profound scientific insight into the deformation and failure processes of many materials. While important details can be obtained using discrete models, a high computational cost and an intensive calibration process are required for a good prediction of material behavior in real-case scenarios. Therefore, in order to extend the abovementioned fine-scale model to real engineering cases, a coarse-scale continuum model has been developed using an upscaling approach. This upscaled model is based on the macroscopic response of the material, with special regard to the microstructure information of the material. Different strategies are presented for incorporating the microstructure information in the model. Micro-scale-related dissipation mechanisms have been incorporated in the coarse-scale model through viscoplasticity and fracture in a finite strain formulation. The thesis is divided into nine chapters, each of which is an independent paper published or submitted as a refereed journal article.
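A common way to couple the Discrete Element Method with beam theory (a hedged sketch; the thesis's actual bond formulation may differ) is to link each pair of bonded grains with a cylindrical Euler-Bernoulli beam whose section properties set the bond's resistance to stretching, shearing, bending and twisting:

```python
import math

def beam_bond_stiffness(E, G, r, L):
    """Stiffness terms of a cylindrical beam bond linking two grains
    (Euler-Bernoulli beam of radius r and length L, modulus E, shear
    modulus G, clamped to both grain centres)."""
    A = math.pi * r**2          # cross-section area
    I = math.pi * r**4 / 4.0    # second moment of area
    J = 2.0 * I                 # polar moment (circular section)
    return {
        "axial":   E * A / L,            # resists bond stretching
        "shear":   12.0 * E * I / L**3,  # resists transverse offset
        "bending": E * I / L,            # resists relative rotation
        "torsion": G * J / L,            # resists relative twist
    }
```

At each DEM step, the relative grain displacements and rotations are multiplied by these stiffnesses to obtain bond forces and moments; exceeding a strength criterion deletes the bond, which is how intergranular bond fracture enters the model.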

A Formal Approach to Ontology Recommendation for Enhanced Interoperability in Open IoT Ecosystems
Kolbe, Niklas UL

Doctoral thesis (2020)

The vision of the Internet of Things (IoT) promises novel, intelligent applications to improve services across all industries and domains. Efficient data and service discovery are crucial to unlocking the potential value of cross-domain IoT applications. Today, the Web is the primary enabler for integrating data from distributed networks, with more and more sensors and IoT gateways connected to the Web. However, the semantic data models, standards and vocabularies used by IoT vendors and service providers are highly heterogeneous, which makes data discovery and integration a challenging task. Industrial and academic research initiatives increasingly rely on Semantic Web technologies to tackle this challenge. Ongoing research efforts emphasize the development of formal ontologies for the description of Things, sensor networks, IoT services and domain-dependent observations to annotate and link data on the Web. Within this context, there is a research gap in investigating and proposing ontology recommendation approaches that foster the reuse of the most suitable ontologies for semantically annotating IoT data sources. Improved ontology reuse in the IoT enhances semantic interoperability and thus facilitates the development of more intelligent and context-aware systems. In this dissertation, we show that ontology recommendation can form a key building block for achieving consensus on relevant ontologies in the IoT. In particular, we consider large-scale IoT systems, also referred to as IoT ecosystems, in which a wide range of stakeholders and service providers have to cooperate. In such ecosystems, semantic interoperability can only be efficiently achieved when a high degree of consensus on relevant ontologies exists among data providers and consumers. This dissertation includes the following contributions. First, we conceptualize the task of ontology recommendation and evaluate existing approaches with regard to IoT ecosystem requirements.
We identify several limitations in ontology recommendation, especially concerning the IoT, which motivates the main focus on ontology ranking in this dissertation. Second, we propose a novel approach to ontology ranking that offers a fairer scoring of ontologies when their popularity is unknown and thus helps provide better recommendations in the current state of the IoT. We employ a 'learning to rank' approach to show that qualitative ranking features can improve the ranking performance and potentially substitute for an explicit popularity feature. Third, we propose a novel ontology ranking evaluation benchmark to address the lack of comparison studies for ontology ranking approaches, a general issue in the Semantic Web. We develop a large, representative evaluation dataset derived from the collected user click logs of the Linked Open Vocabularies (LOV) platform. It is the first dataset of its kind capable of comparing learned ontology ranking models as proposed in the literature under real-world constraints. Fourth, we present an IoT ecosystem application to support data providers in semantically annotating IoT data streams with integrated ontology term recommendation, and perform an evaluation based on a smart parking use case. In summary, this dissertation presents advancements of the state of the art in the design of ontology recommendation and its role in establishing and maintaining semantic interoperability in highly heterogeneous and evolving ecosystems of inter-related IoT services. Our experiments show that ontology ranking features that are well designed with regard to the underlying ontology collection and respective user behavior can significantly improve the ranking quality and, thus, the overall recommendation capabilities of related tools.
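The 'learning to rank' idea can be illustrated with a minimal pairwise ranker: from click logs, every (clicked ontology, skipped ontology) pair becomes a preference, and a linear model is trained so the preferred item always scores higher. The feature vectors and training pairs below are hypothetical stand-ins (e.g., a coverage score and a documentation-quality score), not the dissertation's actual features or the LOV data:

```python
def train_pairwise_ranker(pairs, n_features, epochs=100, lr=0.1):
    """Pairwise perceptron: learn weights w such that w.x_pos > w.x_neg
    for every preference pair (x_pos preferred over x_neg)."""
    w = [0.0] * n_features
    for _ in range(epochs):
        updated = False
        for x_pos, x_neg in pairs:
            margin = sum(wi * (p - q) for wi, p, q in zip(w, x_pos, x_neg))
            if margin <= 0:                      # pair is mis-ordered
                for i in range(n_features):      # perceptron update
                    w[i] += lr * (x_pos[i] - x_neg[i])
                updated = True
        if not updated:                          # all pairs correctly ordered
            break
    return w

def score(w, x):
    """Ranking score of one candidate ontology's feature vector."""
    return sum(wi * xi for wi, xi in zip(w, x))
```

Ranking a query's candidates then reduces to sorting them by `score`; the point made in the dissertation is that with well-designed qualitative features, such a learned scorer needs no explicit popularity signal.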

Navigating the narrow circle: Rawls and Stout on justification, discourse and institutions
Burks, Deven UL

Doctoral thesis (2020)

Life in political society unfolds within the bounds of a narrow circle, epistemic and moral. A person has only finite faculties and restricted moral motivation. When formulating projects, the person ought to recognize these limits but also to check them. Accordingly, she seeks a deliberative ideal which is sensitive both to good epistemic practice and to respectful relations. How might the person best justify the shape of her society’s institutions, statutes and policies? What reflexive attitudes and dispositions ought she to adopt towards her justificatory resources? The person might work through the sequence of standpoints from John Rawls’s “political liberalism”: a first-person, action-guiding framework of deliberation and reflection. Alternatively, she might model the exploratory discourse and personal virtues characteristic of Jeffrey Stout’s “democratic traditionalism”. This work reconstructs Rawls’s and Stout’s approaches to justification, discourse and institutions and compares their differing methods in search of the most adequate deliberative ideal for democratic society.

L'accord exclusif d'élection de for à travers de la Convention de La Haye: une efficacité mitigée
Mchirgui, Zohra UL

Doctoral thesis (2020)

The exclusive choice-of-court agreement, as a means of conferring jurisdiction in international commercial disputes, forms part of the economy of the international contract. It is an indispensable component of party autonomy as the principle governing international commercial relations. In this respect, the promotion of international trade and investment requires the development of an international regime that provides legal certainty and ensures the effectiveness of exclusive choice-of-court agreements. Such is the objective proclaimed by the drafters of the Hague Convention on Choice of Court Agreements. Analysis of the Convention's provisions reveals that the effectiveness championed by this instrument is qualified. This finding holds both for the validity of the agreement and for its effects.

Surface Energy Modification of Filter Media to achieve optimal Performance Characteristics in select Applications
Staudt, Johannes UL

Doctoral thesis (2020)

The surface modification of modern filter media is examined from the perspective of energetic properties and how they influence select filtration applications. In contrast to the known mechanical filtration mechanisms, which are mainly applicable to solid-liquid separations, new findings strongly suggest that direct interaction forces between the filter and the functional fluids must be taken into account in order to achieve sufficient efficiencies. Separation processes of liquid phases such as liquid-liquid coalescence (LLC) or the treatment of process gases with liquid-gas coalescence (LGC) require special properties of filter media with regard to the degree of interaction with these phases. These include, but are not limited to, surface energy, wettability, and chemical resistance. The focus falls increasingly on eliminating the undesired interactions of modern filters with the fluid to be filtered. Filtration with modern fine filter media can result in undesired additive removal (ADDREM), particularly of those additives that are not fully dissolved in the carrier fluid. Specifically, this refers to the removal of antifoamants from gear oils, which leads to serious consequential damage to those systems. The interfacial interactions between the filter media and the functional fluids are also responsible for other effects, such as the highly undesirable phenomenon of electrostatic charging/discharging (ESC/ESD) during the filtration of low-conductivity oils. In this work, the effect of surface energy modification, in particular, is examined in greater detail. Ultimately, the surface energy of modern filter media is characterized and modified in order to optimize their performance in select applications. The work also presents some examples that illustrate the importance of surface energy in highly challenging filtration applications.

Routing Strategies and Content Dissemination Techniques for Software-Defined Vehicular Networks
di Maio, Antonio UL

Doctoral thesis (2020)

Over the past years, vehicular networking has enabled a wide range of new applications that improve vehicular safety, efficiency, comfort, and environmental impact. Vehicular networks, however, normally ... [more ▼]

Over the past years, vehicular networking has enabled a wide range of new applications that improve vehicular safety, efficiency, comfort, and environmental impact. Vehicular networks, however, normally operate in communication-hostile environments and are characterized by dynamic topologies and volatile links, making it challenging to guarantee Quality of Service (QoS) and reliability for vehicular applications. To this end, the present work explores how the centralized coordination offered by Software-Defined Networking can improve the Quality of Service in vehicular networks, particularly for Vehicle-to-Vehicle (V2V) unicast routing and content dissemination. With regard to V2V routing, this work motivates the case for centralized network coordination by studying the performance of traditional MANET routing protocols when applied to urban VANETs, showing that they cannot provide satisfactory performance for modern vehicular applications because of their limited global network awareness, slow convergence, and high signaling overhead. Hence, this work proposes and validates a centralized Multi-Flow Congestion-Aware Routing (MFCAR) algorithm to allocate multiple data flows on V2V routes. The first novelty of MFCAR is its SDN-based node-busyness estimation technique. The second is the formulation of the path cost as a linear combination of path length and path congestion, allowing the user application to fine-tune its QoS requirements between throughput and delay. Concerning content dissemination, this work proposes ROADNET (Fairness- and Throughput-Enhanced Scheduling for Content Dissemination in VANETs), a centralized strategy to improve the tradeoff between data throughput and user fairness in deterministic vehicular content dissemination. ROADNET’s main novelties are the design of a graph-based multi-channel transmission scheduler and the enforcement of a transmission-priority policy that prevents user starvation.
As additional contributions, the present work proposes a heuristic for the centralized selection of opportunistic content dissemination parameters and discusses the main security issues in Software-Defined Vehicular Networking (SDVN), along with possible countermeasures. The proposed techniques are evaluated in realistic scenarios (LuST), using discrete-event network simulators (OMNeT++) and microscopic vehicular-mobility simulators (SUMO). It is shown that MFCAR can improve the PDR, throughput and delay of unicast V2V routing, with up to a five-fold gain over traditional algorithms. ROADNET can increase content dissemination throughput by 36% and user fairness by 6% compared to state-of-the-art techniques, balancing the load over multiple channels with a variance below 1%.
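A path cost formed as a linear combination of path length and path congestion, in the spirit of MFCAR, can be sketched with a plain Dijkstra search. The graph, the busyness values and the per-edge cost formula below are illustrative assumptions, not the thesis's exact formulation:

```python
import heapq

def congestion_aware_route(graph, busy, src, dst, alpha):
    """Cheapest path under edge cost = alpha*1 + (1-alpha)*busy[next_hop].
    graph: {node: [neighbour, ...]}; busy: {node: busyness in [0, 1]}.
    alpha=1 favours short paths (low delay); alpha=0 avoids congested
    relays (high throughput)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                    # stale heap entry
        for v in graph.get(u, []):
            cost = alpha * 1.0 + (1.0 - alpha) * busy[v]
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:                  # walk predecessors back to src
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1], dist[dst]
```

Sweeping `alpha` from 1 to 0 moves the chosen route from the hop-shortest path towards the least-congested one, which is the tuning knob the abstract describes for trading delay against throughput.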

The D²Rwanda mixed-methods study including a cluster-randomised controlled clinical trial
Lygidakis, Charilaos UL

Doctoral thesis (2020)


Diabetes mellitus prevalence has been estimated at 5.1% in Rwanda. Several factors, including an increase in screening and diagnosis programmes, the urbanization of the population, and changes in lifestyle, are likely to contribute to a sharp increase in the prevalence of diabetes mellitus in the next decade. Patients with low health literacy levels are often unable to recognise the signs and symptoms of diabetes mellitus, and may access their health provider late, hence presenting with more complications. The Rwandan health care system is facing a severe shortage of human resources. In response to the need for better management of non-communicable diseases at the primary health care level, a new type of community health worker was introduced: the home-based care practitioners (HBCPs). Approximately 200 HBCPs were trained and deployed in selected areas (“cells”) in nine hospitals across the country. There is growing evidence for the efficacy of interventions using mobile devices in low- and middle-income countries. In Rwanda, there is an urgent call to use mobile health interventions for the prevention and management of non-communicable diseases. The D²Rwanda (Digital Diabetes in Rwanda) research project aims to respond to this call. The overall objectives of the D²Rwanda project are: a) to determine the efficacy of an integrated programme for the management of diabetes in Rwanda, which would include monthly patient assessments by HBCPs and an educational and self-management mobile health patient tool; and b) to qualitatively explore the ways these interventions would be enacted, their challenges and effects, and changes in the patients’ health behaviours and HBCPs’ work satisfaction. The project employed a mixed-methods sequential explanatory design consisting of a one-year cluster-randomised controlled trial with two interventions, followed by focus group discussions with patients and HBCPs.
The dissertation presents three studies from the D²Rwanda project. The first study aimed at describing the protocol of the research project, reporting the research questions, inclusion and exclusion criteria, primary and secondary outcomes, measurements, power calculation, randomisation methods, data collection, analysis plan, implementation fidelity and ethical considerations. The aim of the second study was to report on the translation and cultural adaptation of the Problem Areas in Diabetes (PAID) questionnaire and the evaluation of its psychometric properties. First, the questionnaire was translated following a standard protocol. Second, 29 participants were interviewed before producing a final version. Third, we examined a sample of 266 adult patients living with diabetes to determine the psychometric characteristics of the questionnaire. The full scale showed good internal reliability (Cronbach’s α = 0.88). A four-factor model with subdimensions of emotional, treatment, food-related and social-support problems was found to be an adequate approximate fit (RMSEA = 0.056; CFI = 0.951; TLI = 0.943). The mean total PAID score of the sample was high (48.21). Important cultural and contextual differences were noted, urging a more thorough examination of conceptual equivalence with other cultures. The third study aimed at reporting on the disease-related quality of life of patients living with diabetes mellitus in a non-representative sample in Rwanda and at identifying potential predictors. This cross-sectional study was part of the baseline assessment of the clinical controlled trial. Between January and August 2019, 206 adult patients living with diabetes were recruited. Disease-specific quality of life was measured using the Kinyarwanda version of the Diabetes-39 (D-39) questionnaire, which was translated and cross-culturally adapted beforehand by the same group of researchers. A haemoglobin A1c (HbA1c) test was performed on all patients.
Socio-demographic and clinical data were collected, including medical history, disease-related complications and comorbidities. “Anxiety and worry” and “sexual functioning” were the two most affected dimensions. Hypertension was the most frequent comorbidity (49.0% of participants). The duration of the disease and HbA1c values were not correlated with any of the D-39 dimensions. The five dimensions of quality of life were predicted differentially by gender, age, years of education, marital status, achieving an HbA1c target of 7%, hypertension, presence of complications and hypoglycaemic episodes. A moderating effect was identified between the use of insulin and achieving a target HbA1c of 7% in the “diabetes control” scale. Further prospective studies are needed to determine causal relationships.
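The internal-reliability figure reported above (Cronbach's α = 0.88) comes from the standard variance-based formula; a minimal sketch of that computation follows, with made-up item scores rather than the study's data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals).

    `items` is a list of k lists, one per questionnaire item,
    each holding the scores of the same respondents in the same order.
    """
    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return (k / (k - 1)) * (1 - sum(var(i) for i in items) / var(totals))

# Made-up scores: 3 items answered by 4 respondents.
scores = [[1, 2, 3, 4], [1, 2, 3, 3], [2, 2, 4, 4]]
alpha = cronbach_alpha(scores)
```

With these artificial, strongly correlated items, α comes out around 0.96; the 0.88 reported for the full PAID scale likewise indicates good internal consistency.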

ESSAYS ON AGGLOMERATION, RESILIENCE, AND REGIONAL INNOVATION
Kalash, Basheer UL

Doctoral thesis (2020)

On idempotent n-ary semigroups
Devillet, Jimmy UL

Doctoral thesis (2020)


This thesis, which consists of two parts, focuses on characterizations and descriptions of classes of idempotent n-ary semigroups where n >= 2 is an integer. Part I is devoted to the study of various classes of idempotent semigroups and their link with certain concepts stemming from social choice theory. In Part II, we provide constructive descriptions of various classes of idempotent n-ary semigroups. More precisely, after recalling and studying the concepts of single-peakedness and rectangular semigroups in Chapters 1 and 2, respectively, in Chapter 3 we provide characterizations of the classes of idempotent semigroups and totally ordered idempotent semigroups, in which the latter two concepts play a central role. Then in Chapter 4 we particularize the latter characterizations to the classes of quasitrivial semigroups and totally ordered quasitrivial semigroups. We then generalize these results to the class of quasitrivial n-ary semigroups in Chapter 5. Chapter 6 is devoted to characterizations of several classes of idempotent n-ary semigroups satisfying quasitriviality on certain subsets of the domain. Finally, Chapter 7 focuses on characterizations of the class of symmetric idempotent n-ary semigroups. Throughout this thesis, we also provide several enumeration results which led to new integer sequences that are now recorded in The On-Line Encyclopedia of Integer Sequences (OEIS). For instance, one of these enumeration results led to a new definition of the Catalan numbers.
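For reference, the Catalan numbers mentioned in the closing remark are classically given by the closed form C_n = binom(2n, n) / (n + 1); a quick sketch of that standard definition follows (the thesis's own alternative characterization via semigroup enumeration is not reproduced here).

```python
from math import comb

def catalan(n):
    """Classical closed form: C_n = binom(2n, n) / (n + 1), always an integer."""
    return comb(2 * n, n) // (n + 1)

# First few Catalan numbers (OEIS A000108).
print([catalan(n) for n in range(8)])  # → [1, 1, 2, 5, 14, 42, 132, 429]
```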

Formal Framework for Verifying Implementations of Byzantine Fault-Tolerant Protocols Under Various Models
Vukotic, Ivana UL

Doctoral thesis (2020)


The complexity of the critical systems our lives depend on (such as water supplies, power grids, blockchain systems, etc.) is constantly increasing. Although many different techniques can be used for proving the correctness of these systems, errors still exist, because these techniques are either not complete or can only be applied to some parts of these systems. This is why fault and intrusion tolerance (FIT) techniques, such as those along the well-known Byzantine Fault-Tolerance paradigm (BFT), should be used. BFT is a general FIT technique of the active replication class, which enables seamless correct functioning of a system even when some parts of that system are not working correctly or are compromised by successful attacks. Although powerful, since it systematically masks any errors, standard (i.e., "homogeneous") BFT protocols are expensive in terms of the messages exchanged, the required number of replicas, and the additional burden of ensuring that replicas are diverse enough to enforce failure independence. For example, standard BFT protocols usually require 3f+1 replicas to tolerate up to f faults. In contrast to these standard protocols based on homogeneous system models, the so-called hybrid BFT protocols are based on architectural hybridization: well-defined and self-contained subsystems of the architecture (hybrids) follow system model and fault assumptions differentiated from the rest of the architecture (the normal part). This way, they can host one or more components trusted to provide, in a trustworthy way, stronger properties than would be possible in the normal part. For example, it is typical that whilst the normal part is asynchronous and suffers arbitrary faults, the hybrids are synchronous and fail-silent. Under these favorable conditions, they can reliably provide simple but effective services such as perfect failure detection, counters, ordering, signatures, voting, global timestamping, random numbers, etc.
Thanks to the systematic assistance of these trusted-trustworthy components in protocol execution, hybrid BFT protocols dramatically reduce the cost of BFT. For example, hybrid BFT protocols require 2f+1 replicas instead of 3f+1 to tolerate up to f faults. Although hybrid BFT protocols significantly decrease message/time/space complexity compared with homogeneous ones, they also increase structural complexity, and as such the probability of finding errors in these protocols increases. One other fundamental correctness issue not formally addressed previously is ensuring that safety and liveness properties of trusted-trustworthy component services, besides being valid inside the hybrid subsystems, are made available, or lifted, to user components at the normal asynchronous and arbitrary-on-failure distributed system level. This thesis presents a theorem-prover-based, general, reusable and extensible framework for implementing and proving correctness of synchronous and asynchronous homogeneous FIT protocols, as well as hybrid ones. Our framework comes with: (1) a logic to reason about homogeneous/hybrid fault models; (2) a language to implement systems as collections of interacting homogeneous/hybrid components; and (3) a knowledge theory to reason about crash/Byzantine homogeneous and hybrid systems at a high level of abstraction, thereby allowing proofs to be reused and capturing the high-level logic of distributed systems. In addition, our framework supports the lifting of properties of trusted-trustworthy components, first to the level of the local subsystem the trusted component belongs to, and then to the level of the distributed system. As case studies and proofs of concept of our findings, we verified seminal protocols from each of the relevant categories: the asynchronous PBFT protocol, two variants of the synchronous SM protocol, as well as two versions of the hybrid MinBFT protocol.
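The replica-count difference quoted above (3f+1 replicas for homogeneous BFT versus 2f+1 with a trusted component) is simple arithmetic, but worth making explicit; a small sketch:

```python
def replicas_needed(f, hybrid=False):
    """Minimum replicas to tolerate up to f Byzantine faults:
    2f + 1 with a trusted (hybrid) component, 3f + 1 without."""
    return 2 * f + 1 if hybrid else 3 * f + 1

for f in (1, 2, 3):
    homogeneous = replicas_needed(f)
    hybrid = replicas_needed(f, hybrid=True)
    # Each tolerated fault costs one fewer replica in the hybrid model.
    print(f"f={f}: homogeneous {homogeneous}, hybrid {hybrid}, saved {homogeneous - hybrid}")
```

For f = 1 this is the familiar 4-versus-3 comparison (e.g., PBFT versus MinBFT), and the saving grows linearly with f.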

L'intersubjectivité de l'entretien annuel d'évaluation, levier de la reconnaissance psychologique au travail?
Tancredi, Ernestina UL

Doctoral thesis (2020)


This thesis aims to observe the impact of the Annual Appraisal Interview (AAI) on psychological recognition in the workplace through its intersubjectivity. Although performance reviews are implicitly discussed in the literature on recognition at work, the link between these two concepts is very poorly developed. The first part of the study details the theoretical foundations of recognition at work from a polysemic and transdisciplinary perspective. Two models of psychological recognition at work (Bourcier & Palobart, 1997; Deci & Ryan, 1985) are retained. By also addressing the dual character (objective/subjective) of the AAI, its importance for human relations and its purpose of recognition at work, this part highlights its main intersubjective elements: the participation of the appraisees, their individuality, the skills of the appraiser, the actual work of the appraisee and the fairness of the feedback. The second part presents the research methodology, based on semi-structured interviews and questionnaires with three target groups (HR managers, appraisers, appraisees) from around one hundred organizations located in the Grand Duchy of Luxembourg. Analysis of the results shows that the AAI confers six types of psychological recognition (existential recognition, recognition of investment, recognition of results, need for autonomy, need for competence and need for social belonging) through the intersubjective levers of the AAI, but to varying degrees. A classification on a "person-work" basis of the AAI levers and the six types of recognition will allow managers to activate the appropriate levers in order to achieve a balance between the types of psychological recognition.

IN PURSUIT OF OPENNESS An analysis of the legal framework of the European Union’s Copernicus’ open data policy
Cabrera Alvarado, Sandra UL

Doctoral thesis (2020)


The European Union (EU) civil Earth Observation (EO) programme Copernicus has positioned itself as one of the largest EO data providers worldwide by answering a strong demand for (environmental) data and information, thanks to the open data policy mandated by its regulatory framework. Nevertheless, it has been the target of criticism by some policymakers who argue that, rather than benefiting Europeans, Copernicus' open data policy impacts negatively on the Union’s competitiveness and furthers the economic interests of US tech giants such as Amazon and Google. Facing this criticism, the European Commission has evaluated possible modifications to the open data policy pillars “full, free and open” to address emerging economic and technological challenges. This dissertation contributes to this debate by answering the overarching question of whether alterations to the open data policy could be made without hampering Copernicus’ core goals, and whether such alterations would be in compliance with the EU legal framework. Specifically, this dissertation addresses the balancing of the right of access to public information against the protection of the economic public interest. To do so, firstly, this dissertation explains the legal meaning of the Copernicus open data policy pillars: 1) full, 2) free and 3) open, within the context of EU law. Secondly, it explains the substantive limits of the open data policy by examining non-contractual third-party liability for the Commission, as well as the lawful exceptions to access to Copernicus data and information. These lawful exceptions are formulated by the Copernicus Regulation 377/2014 and Delegated Regulation 1159/2013 as the “protection of public security” and “international relations interests,” the “protection of privacy” and the “integrity of the Copernicus system”.
However, this dissertation goes further by examining other EU law texts on the right of access to public information, such as Regulation 1049/2001 and its Article 4 on the protection of public economic and financial interests, and the overriding public interest in access to environmental information enshrined mainly in Regulation 1367/2006 and Directive 2003/4/EC. Finally, it presents a proposal on how to evaluate the performance of Copernicus’ open data policy in order to determine whether any substantial modification of this policy is indeed desirable.

Reconciling data privacy with sharing in next-generation genomic workflows
Fernandes, Maria UL

Doctoral thesis (2020)


Privacy attacks reported in the literature have alerted the research community to the serious privacy issues in current biomedical process workflows. Since sharing biomedical data is vital for the advancement of research and the improvement of medical healthcare, reconciling sharing with privacy assumes an overwhelming importance. In this thesis, we state the need for effective privacy-preserving measures for biomedical data processing, and study solutions for the problem in one of the harder contexts, genomics. The thesis focuses on the specific properties of the human genome that make critical parts of it privacy-sensitive, and tries to prevent the leakage of such critical information throughout the several steps of the sequenced genomic data analysis and processing workflow. In order to achieve this goal, it introduces efficient and effective privacy-preserving mechanisms, namely at the level of reads filtering right upon sequencing, and of alignment. Human individuals share the majority of their genome (99.5%), the remaining 0.5% being what distinguishes one individual from all others. However, that information is only revealed after two costly processing steps, alignment and variant calling, which today are typically run in clouds for performance efficiency, but with the corresponding privacy risks. To reap the benefits of cloud processing, we set out to neutralize the privacy risks by identifying the sensitive (i.e., discriminating) nucleotides in raw genomic data, and acting upon that. The first contribution is DNA-SeAl, a systematic classification of genomic data into different levels of sensitivity with regard to privacy, leveraging the output of a state-of-the-art automatic filter (SRF) isolating the critical sequences.
The second contribution is a novel filtering approach, LRF, which undertakes the early protection of sensitive information in the raw reads right after sequencing, for sequences of arbitrary length (long reads), improving on SRF, which only dealt with short reads. The last contribution proposed in this thesis is MaskAl, an SGX-based privacy-preserving alignment approach based on the filtering method developed. These contributions entailed several findings. The first finding of this thesis is the performance × privacy product improvement achieved by implementing multiple sensitivity levels. The proposed example of three sensitivity levels shows the benefits of mapping progressively sensitive levels to classes of alignment algorithms with progressively higher privacy protection (albeit at the cost of a performance tradeoff). In this thesis, we demonstrate the effectiveness of the proposed sensitivity-level classification, DNA-SeAl. Just by considering three levels of sensitivity and taking advantage of three existing classes of alignment algorithms, the performance of privacy-preserving alignment significantly improves when compared with state-of-the-art approaches. For reads of 100 nucleotides, 72% have low sensitivity, 23% have intermediate sensitivity, and the remaining 5% are highly sensitive. With this distribution, DNA-SeAl is 5.85× faster and requires 5.85× fewer data transfers than the binary classification with two sensitivity levels. The second finding is the improvement in sensitive genomic information filtering obtained by replacing the per-read classification with a per-nucleotide classification. With this change, the filtering approach proposed in this thesis (LRF) allows the filtering of sequences of arbitrary length (long reads), instead of the classification limited to short reads provided by the state-of-the-art filtering approach (SRF). This thesis shows that around 10% of an individual's genome is classified as sensitive by the developed LRF approach.
This improves on the 60% achieved by the previous state of the art, the SRF approach. The third finding is the possibility of building a privacy-preserving alignment approach based on reads filtering. The sensitivity-adapted alignment relying on hybrid environments, in particular composed of common (e.g., public cloud) and trustworthy execution environments (e.g., SGX enclave cloud) in clouds, gets the best of both worlds: it enjoys the resource and performance optimization of cloud environments, while providing a high degree of protection to genomic data. We demonstrate that MaskAl is 87% faster than existing privacy-preserving alignment algorithms (Balaur), with similar privacy guarantees. On the other hand, MaskAl is 58% slower compared to BWA, a highly efficient non-privacy-preserving alignment algorithm. In addition, MaskAl requires 95% less RAM and between 5.7 GB and 15 GB less data transfer in comparison with Balaur. This thesis breaks new ground on the simultaneous achievement of two important goals of genomics data processing: availability of data for sharing, and privacy preservation. We hope to have shown that our work, being generalisable, takes a significant step in the direction of, and opens new avenues for, wider-scale, secure, and cooperative efforts and projects within the biomedical information processing life cycle.
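The three-level classification described above — routing each sensitivity class to an alignment back-end with matching privacy protection — can be sketched as a dispatch-and-cost calculation. The per-level costs below are made up for illustration (only the 72/23/5 split is taken from the text), so the resulting speedup illustrates the effect but is not the reported 5.85× figure, which is measured against a binary classification.

```python
# Hypothetical relative per-read costs: higher = slower but more protected.
BACKENDS = {"low": 1.0, "intermediate": 4.0, "high": 40.0}
# Sensitivity distribution reported for 100-nucleotide reads.
DISTRIBUTION = {"low": 0.72, "intermediate": 0.23, "high": 0.05}

def expected_cost(distribution, costs):
    """Expected per-read cost when each sensitivity class goes to its own
    back-end, and the speedup versus sending everything to the most
    protected (most expensive) back-end."""
    mixed = sum(distribution[lvl] * costs[lvl] for lvl in distribution)
    worst = max(costs.values())
    return mixed, worst / mixed

mixed, speedup = expected_cost(DISTRIBUTION, BACKENDS)
```

With these invented costs, tiered dispatch is roughly an order of magnitude cheaper than aligning every read with the most protected algorithm — the same qualitative effect the thesis quantifies as 5.85× against a two-level split.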

From drug resistance mechanisms to microRNA function in melanoma
Kozar, Ines UL

Doctoral thesis (2020)


Cutaneous melanoma is an aggressive skin cancer that emerges from the unrestrained proliferation of melanocytes, the pigment-producing cells in the basal layer of the epidermis. Despite the fact that it only accounts for approximately 5% of all skin cancers, melanoma is responsible for the vast majority of skin cancer-related deaths. As more than half of the patients with sporadic melanoma harbour activating mutations in the protein kinase BRAF, the development of small kinase inhibitors targeting mutated BRAF led to an increased overall survival of patients with metastatic melanoma. Despite the initially promising results, the rapidly emerging resistance to these targeted therapies remains a serious clinical issue. To investigate the mechanisms underlying resistance to targeted therapies, we used in vitro BRAF-mutant drug-sensitive and drug-resistant melanoma cell models that were generated in our laboratory. First, we performed a kinase inhibitor library screen with the aim of identifying novel kinase inhibitor combinations to circumvent or delay BRAF inhibitor-induced resistance. We characterised synergistic kinase inhibitors targeting the MAPK pathway and the cell cycle showing promising effects in BRAF-mutant drug-sensitive and -resistant cells, which could be used as an effective sequential or alternative treatment option for late-stage melanoma patients. Additionally, we investigated the impact of BRAF inhibitors at the transcriptional level by comparing miRNome and transcriptome changes in drug-sensitive and -resistant melanoma cells. We identified miRNAs (e.g. miR-509, miR-708) and genes (e.g. PCSK2, AXL) that were distinctly differentially expressed in resistant compared to sensitive cells. Subsequent co-expression analyses revealed a low MITF/AXL ratio in a subset of resistant cell lines, suggesting that miRNAs might be involved in the switch from one molecular phenotype to another, thus conferring tolerance to targeted therapies.
Finally, we applied a method based on cross-linking, ligation and sequencing of hybrids (qCLASH) to provide a comprehensive snapshot of the miRNA targetome in our BRAF-mutant melanoma cells. To our knowledge, this is the first application of a CLASH-based method to cancer cells; we identified over 8,000 direct and distinct miRNA-target interactions in melanoma cells, including many with non-predicted and non-canonical binding characteristics, thus expanding the pool of known miRNA-target interactions. Taken together, these results provide new insights into complex and heterogeneous responses to BRAF inhibition, adding an additional level of complexity to drug-induced (post-)transcriptional network rewiring in melanoma.

Non-localized contact between beams with circular and elliptical cross-sections
Magliulo, Marco UL

Doctoral thesis (2020)


Numerous materials and structures are aggregates of slender bodies. We can, for example, refer to struts in metal foams, yarns in textiles, fibers in muscles or steel wires in wire ropes. To predict the mechanical performance of these materials and structures, it is important to understand how the mechanical load is distributed between the different bodies. If one can predict which slender body is the most likely to fail, changes in the design can be made to enhance its performance. As the aggregates of slender bodies are highly complex, simulations are required to numerically compute their mechanical behaviour. The most widely employed computational framework is the Finite Element Method, in which each slender body is modeled as a series of beam elements. On top of an accurate mechanical representation of the individual slender bodies, the contact between the slender bodies must often be accurately modeled. In the past couple of decades, contact between beam elements has received widespread attention. However, the focus was mainly directed towards beams with circular cross-sections, whereas elliptical cross-sections are also relevant for numerous applications. Only two works have considered contact between beams with elliptical cross-sections, but they are limited to point-wise contact, which restricts their applicability. In this Ph.D. thesis, different contact frameworks for beams with elliptical cross-sections are proposed for cases in which a point-wise contact treatment is insufficient. The thesis also reports a framework for contact scenarios in which a beam is embedded inside another beam, in contrast to conventional contact frameworks for beams, in which penetrating beams are actively repelled from each other. Finally, two of the three contact frameworks are enhanced with frictional sliding, where friction occurs not only due to sliding in the beams’ longitudinal directions but also in the transverse directions.

Condition assessment of bridge structures by damage localisation based on the DAD-method and close-range UAV photogrammetry
Erdenebat, Dolgion UL

Doctoral thesis (2020)


This dissertation presents the “Deformation Area Difference (DAD)” method for the condition assessment of existing bridges, especially for the detection of stiffness-reducing damage. The method is based on the one hand on conventional static load-deflection experiments and on the other hand on a high-precision measurement of the structural deflection. The experimental load on the bridge should be generated within the serviceability limit state in order to enable a non-destructive inspection. In the course of the laboratory tests, the most innovative measuring techniques were applied, of which photogrammetry delivered promising results. With the help of additional studies on the influence of camera quality and calibration, the measuring precision of photogrammetry could be pushed to its limits. Both the theoretical investigations and the laboratory tests showed the successful use of the DAD method for the identification of local damage. Subsequently, a first in-situ experiment was carried out on a single-span prestressed bridge in Luxembourg. The knowledge gained from this was combined with statistical investigations based on finite element calculations and artificially generated measurement noise in order to determine the application limits, such as the achievable measurement precision, the identifiable degree of damage, the required number of measurement repetitions, the influence of the damage position, the optimal size of the structural deformation, etc. The application-ready development of the DAD method usefully supplements the state of the art and contributes to the reliable assessment of bridge condition.
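The core comparison suggested by the method's name — differences between areas under a measured and a reference deflection line, with local stiffness loss showing up as a locally elevated difference — can be illustrated generically. The beam samples, spacing, and damage profile below are invented for illustration and do not reproduce the thesis's actual DAD formulation.

```python
def area_differences(reference, measured, dx):
    """Trapezoidal area under the difference of two sampled deflection
    lines, accumulated per interval: a local stiffness reduction shows
    up as a locally elevated area difference near the damaged zone."""
    diff = [m - r for r, m in zip(reference, measured)]
    return [0.5 * (diff[i] + diff[i + 1]) * dx for i in range(len(diff) - 1)]

# Invented deflection samples (mm) at 1 m spacing: the "measured" line
# deviates most around midspan, mimicking a local damage.
reference = [0.0, 1.0, 1.8, 2.2, 1.8, 1.0, 0.0]
measured  = [0.0, 1.0, 1.9, 2.6, 1.9, 1.0, 0.0]
areas = area_differences(reference, measured, dx=1.0)
```

The interval-wise area differences peak around midspan, which is where the invented "damage" was placed; in the real method this localisation must survive the measurement noise discussed above.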

Collective Effects in Stochastic Thermodynamics
Herpich, Tim UL

Doctoral thesis (2020)

Rapid Automatized Naming and Phonological Awareness: The predictive effect for learning to read and write and their relationship with developmental dyslexia
Botelho da Silva, Patrícia UL

Doctoral thesis (2020)


Rapid automatized naming (RAN) and phonological awareness (PA) are the best predictors of reading. The predictive effect of these abilities differs, and they predict different aspects of reading, depending on the orthographic regularity of the language as well as on the student’s level or grade in school. The double-deficit theory describes these two components as impaired in people with dyslexia and reading disabilities. Longitudinal studies that analyze the cognitive processes supporting the development of reading and literacy are important for understanding these processes in good readers and will help mitigate the effects of dyslexia and reading disability. The present thesis pursues two major aims. The first aim is to analyze the structure of RAN and the predictive effect of RAN and PA skills on reading and writing tasks in Brazilian Portuguese, in two studies. Study 1 sought to investigate the structure of RAN tests for Brazilian Portuguese throughout development according to age. The results were important in establishing the bidimensional model (alphanumeric and non-alphanumeric) across age and literacy development. In addition, the results showed that the period between kindergarten and elementary school may show the greatest development of RAN skills, in conjunction with literacy learning. In Study 2, we sought to investigate the predictive effect of PA and RAN on the development of reading and writing ability in Brazilian Portuguese. The results showed that RAN was a better predictor than PA of reading and writing skills in Brazilian Portuguese with respect to reading and writing speed. In addition, the type of RAN stimulus influenced the predictive effect: alphanumeric RAN better predicts reading, while non-alphanumeric stimuli better predict writing.
The second aim is to compare the performance of children and adolescents with and without developmental dyslexia on RAN, PA, and reading tests, and to verify the predictive effect of RAN in participants with dyslexia. Study 3 showed that the cognitive profile of dyslexic children was compatible with a single deficit in RAN according to double-deficit theory: impairment was found only in RAN ability and in underlying processes such as visual attention. Therefore, despite the importance of PA for the development of reading and writing, both in good readers and in those with reading impairments, RAN proved to be a good predictor for both groups in Brazilian Portuguese.
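The predictive comparison described in this abstract is, at heart, a multiple-regression question. A purely illustrative sketch (the synthetic scores and coefficients below are invented for demonstration and are not the thesis's data) regresses a reading-speed measure on standardized RAN and PA scores and compares the fitted weights:

```python
import numpy as np

# Hypothetical illustration of the kind of analysis behind "RAN predicts
# reading speed better than PA": ordinary least squares on standardized
# synthetic scores (the thesis's real data are not reproduced here).
rng = np.random.default_rng(0)
n = 200
ran = rng.standard_normal(n)   # rapid automatized naming score (synthetic)
pa = rng.standard_normal(n)    # phonological awareness score (synthetic)
# Simulate a reading-speed outcome that loads more heavily on RAN than PA.
reading = 0.6 * ran + 0.2 * pa + 0.3 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), ran, pa])   # intercept + two predictors
beta, *_ = np.linalg.lstsq(X, reading, rcond=None)
print(f"intercept={beta[0]:+.2f}  b_RAN={beta[1]:+.2f}  b_PA={beta[2]:+.2f}")
```

With data generated this way, the recovered RAN weight exceeds the PA weight, mirroring the qualitative pattern the abstract reports.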

Art Laundering: Protecting Cultural Heritage through Criminal Law
Mosna, Anna UL

Doctoral thesis (2020)

Untersuchung der metallurgischen Phasenbildung und deren Einfluss auf die Verbindungseigenschaften sowie auf die Versagensursachen von lasergeschweißten Hartmetall-Stahl-Verbunden [Investigation of metallurgical phase formation and its influence on the joint properties and failure causes of laser-welded hard metal-steel joints]
Schiry, Marc UL

Doctoral thesis (2020)

Laser beam welding of hard metal to steel offers multiple advantages in terms of resource saving, mechanical strength of the joint, and automation capability. The present work focuses on fundamental research and development of a laser-based process for welding tungsten carbide-cobalt hard metals to a tempering steel. Metallurgical analysis of the welding process showed that the formation of intermetallic and/or intermediate phases has a significant influence on the properties and mechanical strength of the dissimilar joint. The amount of molten hard metal in the steel melt bath plays a key role in the formation of the different phases. Therefore, a new parameter dy was defined, which correlates with the hard-metal content in the melt pool. It is shown that for hard metals with 12 wt.% cobalt binder, the phase transformation in the weld seam starts at a relative hard-metal content of 10 vol.%. This threshold depends on the relative cobalt concentration in the hard metal, while the tungsten carbide grain size has little influence on the phase transformation in the weld seam. Steel melt pools with a hard-metal content below 10 vol.% show a martensitic/bainitic microstructure under metallographic observation. Simulation of stress formation in the joint showed that, owing to the volume expansion of martensite during the transformation, tensile stress forms in the hard-metal part. Under shear load, these tensile stresses are compensated by the induced compressive stresses, resulting in an almost stress-free interface; high shear strengths of the dissimilar joints are therefore possible. A higher percentage of hard metal melted during the welding process increases the carbon and tungsten content in the melt bath; consequently, the martensite start temperature decreases significantly. When the temperature initiating the martensite transformation falls below room temperature, the weld seam solidifies into an austenitic microstructure.
Because the weld-seam volume then lacks this volume expansion during cooling, low stresses are generated in the hard metal. Under shear load of the joint area, however, high tensile stresses appear in the sintered part; this stress concentration decreases the shear strength of the weld and leads to premature failure. For industrial use, high mechanical strength and a robust manufacturing process are needed, so the laser welding process for hard metal to steel was optimized. The joint properties strongly depend on the weld-bead geometry. Weld seams with x- or v-shaped profiles enable locally concentrated metallurgical bonding of the sintered part to the steel sheet. Reducing the horizontal focal distance of the laser beam to the interface increases the bonding ratio, but also intensifies melting of the hard-metal part and leads to the metallurgical transformation. By tilting a v-shaped weld seam, it was possible to optimize the bonding behavior and to minimize the amount of liquefied hard metal in the melt bath. Hard metals with low binder contents showed high temperature sensitivity: after laser welding of these grades, hot cracks were found in the sintered material, formed by the high stresses generated during cooling of the dissimilar joint. Therefore, a laser-based heat-treatment process was developed and applied. With defined pre- and post-heating of the joint area, the cooling rate was reduced significantly and the stresses in the hard-metal part were minimized, resulting in high shear strengths.
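The 10 vol.% threshold reported in this abstract can be condensed into a toy decision rule. The sketch below is only an illustration of the stated finding for a 12 wt.% Co binder; the helper function and its hard-coded threshold are hypothetical, and the abstract notes the real threshold varies with cobalt content:

```python
# Back-of-envelope classifier based on the threshold reported in the
# abstract: for ~12 wt.% Co hard metal, a relative hard-metal content
# above ~10 vol.% in the steel melt pool drives the phase transformation
# (austenitic seam); below it, a martensitic/bainitic seam is expected.
# Illustrative only; not a substitute for metallurgical analysis.
def expected_seam_microstructure(hard_metal_vol_pct, co_wt_pct=12.0):
    threshold = 10.0  # vol.%, reported for 12 wt.% Co; binder-dependent
    if co_wt_pct != 12.0:
        raise ValueError("threshold reported only for 12 wt.% Co binder")
    if hard_metal_vol_pct < threshold:
        return "martensitic/bainitic"
    return "austenitic (transformed)"

print(expected_seam_microstructure(5.0))   # below the 10 vol.% threshold
print(expected_seam_microstructure(15.0))  # above the threshold
```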

Improving the understanding of binge-watching behavior: An exploration of its underlying psychological processes
Flayelle, Maèva UL

Doctoral thesis (2020)

The advent of the digital age, with its progress in on-demand viewing technology, has been associated in recent years with a dramatic increase in binge-watching (i.e., watching multiple episodes of a TV series in one session), to the point that this practice has become the new normative way to consume TV shows. Nevertheless, along with its massive rise have come concerns about the associated mental and physical health outcomes, with initial studies even assuming its addictive nature. At a time when the psychological investigation of this behavior was only in its infancy, the current PhD thesis therefore aimed at improving the understanding of binge-watching by clarifying the psychological processes involved in its development and maintenance. To this end, six empirical studies were conducted along two main research axes: 1) the conceptualization and assessment of binge-watching behaviors, and 2) the exploration of binge-watchers' psychological characteristics. Study 1 consisted of a preliminary qualitative exploration of the phenomenological characteristics of binge-watching. Capitalizing on these pilot findings, Study 2 reported on the development and psychometric validation of two assessment instruments measuring TV series watching motivations ("Watching TV Series Motives Questionnaire", WTSMQ) and binge-watching engagement and symptoms ("Binge-Watching Engagement and Symptoms Questionnaire", BWESQ). Study 3 then aimed at cross-culturally validating the WTSMQ and BWESQ in nine languages (English, French, Spanish, Italian, German, Hungarian, Persian, Arabic, and Chinese). Following this first line of investigation, Study 4 explored potential subtypes of binge-watchers by taking into consideration three key psychological factors: motivations for binge-watching, impulsivity traits, and emotional reactivity.
Study 5 consisted of a pre-registered experimental study aimed at ascertaining differences in behavioral and self-reported impulsivity between non-problematic and problematic binge-watchers. Finally, Study 6 carried out the first systematic review of the literature on binge-watching correlates. Beyond providing two theoretically and psychometrically sound binge-watching measures that may enable widespread expansion of international research on the topic, this doctoral research also yielded important insights into the heterogeneous and complex nature of binge-watching and into its underlying psychological mechanisms. Centrally, by revealing that high (but healthy) and problematic engagement in binge-watching are underpinned by distinct motivational and dispositional psychological processes, the overall findings of this PhD thesis offer an alternative etiological understanding of problematic binge-watching as a maladaptive coping or emotion-regulation strategy for dealing with negative affective states.
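Psychometric validation of questionnaires such as the WTSMQ and BWESQ typically includes internal-consistency checks. A minimal sketch with synthetic item scores (not the real scales or data from these studies) computes Cronbach's alpha, a common internal-consistency index:

```python
import numpy as np

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance
# of the total score). Item scores below are simulated, not the WTSMQ/BWESQ.
def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.standard_normal(500)                       # shared latent factor
items = trait[:, None] + 0.5 * rng.standard_normal((500, 6))  # 6 noisy items
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Because the six simulated items all load on one latent factor with modest noise, the computed alpha comes out high, as one would hope for a well-constructed scale.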

Public Hearings in Investor-State Treaty Arbitration: Revisiting the Principle
Harvey Geb. Koprivica, Ana UL

Doctoral thesis (2020)

This thesis examines the scope, role and contemporary application of the traditional principle of public hearings, with a particular focus on the specific dispute resolution system of investor-state arbitration. Whereas there have been extensive discussions in recent years surrounding developments which aim to increase the procedural openness and transparency of this traditionally private dispute resolution system, the emergence of a distinct requirement of holding public hearings in such contexts has not yet been given much attention. This thesis seeks to provide a better understanding of public hearings in investment treaty arbitration. By going beyond the usual narrative of the legitimacy and policy objectives of transparency, a more systematic and comprehensive approach to the issue of public hearings in investor-state arbitration is adopted. In addressing existing gaps in the literature, this thesis contends that current developments related to the principle of public hearings should not be analysed as a phenomenon specific to investor-state arbitration, but should instead be analysed within the broader context of the analogous developments at both the domestic and international level. In conducting such an investigation, this thesis situates the debate surrounding public hearings in investment treaty arbitration within a broader legal landscape, encompassing both national and international courts and tribunals. By examining the evolution of public hearings, it is argued that a steady shift in the understanding of the principle of public hearings over time may be detected. Public hearings have gone from serving merely as a means of protecting the individual from the secrecy and arbitrariness of the state, to becoming a democratic tool which the public is entitled to use not only in order to monitor and evaluate the administration of justice, but also as a platform for facilitating further public debate. 
In other words, the thesis demonstrates a shift in the understanding of the principle of public hearings, from a mere right of an individual to be heard in open court to the (additional) right of the general public to have insight into what goes on in the courtroom, together with an active duty of the courts to ensure that this right is respected. This latter aspect of the principle is subjected to a comparative analysis examining the normative and practical solutions adopted by national and international courts when applying the principle of public hearings. While detecting a divergent legal landscape when it comes to providing public access to hearings, this thesis reveals a general trend towards greater regulation of the ways in which the public may obtain such access. What is more, it is shown that, in an era of expanded media coverage of public hearings, the enlargement of the audiences for such hearings and the possibility of instantly disseminating information about proceedings through various technologies create new paths for procedural openness and new challenges for the courts. Based on the findings of this comparative analysis, the thesis argues that it is not only the principle of public hearings which has been renewed and transformed: in seeking to adapt to the principle, the procedures in which it operates have also started to change. This comparative analysis then forms the basis for a critical analysis and in-depth assessment of public hearings in investor-state arbitration. From a more "dispute-oriented" perspective, the thesis looks into the considerations and challenges that ought to be taken into account by arbitral tribunals and parties when organising a public hearing.
Without losing sight of the implications for the system as a whole, however, the thesis addresses the future impact that the introduction of public hearings into the system of investor-state arbitration may have on that system and, notably, on its procedures. The key finding here is that the increasing relevance of public hearings in investor-state arbitration constitutes merely one part of the overall evolution of the "public" dimension of the requirement of public hearings. Taking these developments together, the thesis concludes that the debate on what constitutes a truly public hearing has entered a new epoch, with new actors and new challenges.

Design and Optimization of Simultaneous Wireless Information and Power Transfer Systems
Gautam, Sumit UL

Doctoral thesis (2020)

Recent trends in the domain of wireless communications indicate severe upcoming challenges, both in terms of infrastructure and in the design of novel techniques. Meanwhile, the world population keeps witnessing new generations of mobile/wireless technologies every half decade to decade. It is certain that wireless communication systems have enabled the exchange of information without any physical cables; however, the dependence of mobile devices on power cables still persists. Each passing year unveils several critical challenges related to increasing capacity and performance needs, power optimization in complex hardware circuitry, user mobility, and the demand for ever better energy-efficiency algorithms at the wireless devices. Moreover, an additional issue arises in the form of continuous battery drainage at these power-limited devices as they try to meet such demands. In this regard, optimal performance at any device is heavily constrained by either wired or inductive wireless recharging of the equipment on a continuous basis. This process is very inconvenient, and the problem is foreseen to persist regardless of the wireless communication method used. A promising idea, the simultaneous wireless radio-frequency (RF) transmission of information and energy, came into the spotlight during the last decade. This technique not only promises a more flexible recharging alternative, but also ensures its co-existence with any existing (RF-based) or alternatively proposed methods of wireless communications, such as visible light communications (VLC) (e.g., Light Fidelity (Li-Fi)), optical communications (e.g., laser-equipped communication systems), and far-envisioned quantum-based communication systems.
In addition, this scheme is expected to cater to the needs of many current and future technologies, such as wearable devices, sensors used in hazardous areas, and 5G and beyond. This Thesis presents a detailed investigation of several interesting scenarios in this direction, specifically concerning the design and optimization of such RF-based power transfer systems. The first chapter provides a detailed overview of the topic, which serves as the foundation: it highlights the main contributions, discusses the adopted mathematical (optimization) tools, and details the organization of the Thesis. Following this, a detailed survey of wireless power transmission (WPT) techniques is provided, covering the historical development of WPT up to its present forms, the combination of WPT with wireless communications, and its compatibility with existing techniques. Moreover, a review of various types of RF energy harvesting (EH) modules is included, along with a brief overview of the system modeling, the modeling assumptions, and recent industrial considerations. The Thesis is then divided into three main research topics, as follows. Firstly, the notion of simultaneous wireless information and power transfer (SWIPT) is investigated within a cooperative-systems framework consisting of a single source, multiple relays, and multiple users. In this context, several aspects such as relay selection, multi-carrier operation, and resource allocation are considered, with problem formulations addressing the maximization of throughput, the maximization of harvested energy, or both. Secondly, the Thesis builds upon transmit precoder design for wireless multigroup multicasting systems in conjunction with SWIPT.
Herein, the advantages of adopting separate multicasting and energy precoder designs are illustrated: we investigate the benefits of multiple-antenna transmitters by exploiting the similarities between broadcasting information and wirelessly transferring power. The proposed design not only facilitates the SWIPT mechanism, but may also serve as a potential candidate to complement separate waveform design with exclusive RF signals meant for information and power transmission, respectively. Lastly, a novel mechanism is developed to establish a relationship between SWIPT and cache-enabled cooperative systems. In this direction, the benefits of adopting the SWIPT-caching framework are illustrated, with special emphasis on an enhanced rate-energy (R-E) trade-off compared with traditional SWIPT systems. The common notion in the context of SWIPT revolves around the transmission of information and the storage of power; the proposed work investigates a system wherein both information and power can be transmitted and stored. The Thesis concludes with insights on future directions and open research challenges associated with the considered framework.
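The rate-energy (R-E) trade-off mentioned in this abstract can be illustrated with the standard power-splitting SWIPT receiver model from the literature (not necessarily the Thesis's exact formulation); the numeric values below for channel gain, noise power, and conversion efficiency are arbitrary assumptions:

```python
import numpy as np

# Power-splitting SWIPT receiver: a fraction rho of the received RF power
# is harvested; the remaining (1 - rho) feeds the information decoder.
P = 1.0        # transmit power (W), assumed
h = 0.8        # channel power gain |h|^2, assumed
sigma2 = 1e-2  # receiver noise power (W), assumed
eta = 0.5      # RF-to-DC energy-conversion efficiency, assumed

def rate_energy(rho):
    """Return (achievable rate in bit/s/Hz, harvested power in W)."""
    rate = np.log2(1.0 + (1.0 - rho) * P * h / sigma2)
    energy = eta * rho * P * h
    return rate, energy

# Sweeping rho from 0 to 1 traces the rate-energy (R-E) trade-off curve:
# more harvested power necessarily means a lower decoding rate.
for rho in (0.0, 0.25, 0.5, 0.75, 1.0):
    r, e = rate_energy(rho)
    print(f"rho={rho:.2f}  rate={r:5.2f} bit/s/Hz  harvested={e:.3f} W")
```

At rho = 0 all received power is decoded (maximum rate, zero harvest); at rho = 1 everything is harvested and the rate collapses to zero, which is exactly the trade-off the enhanced SWIPT-caching design aims to improve.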

Urban Green Amenity and City Structure
Tran, Thi Thu Huyen UL

Doctoral thesis (2020)

One of the main components that make cities attractive to their residents is their system of public parks and gardens. Green urban areas, from a small community garden to famous parks such as the 'Jardin du Luxembourg' in Paris, not only shape the face of the city but are a quintessential aspect of the quality of life of local inhabitants. They offer places for local recreation, beautiful views, cleaner air, and many other advantages. Recent research has validated the connection between urban parks and the well-being of a city's inhabitants. Although green urban areas might seem meagre in comparison with other natural ecosystems such as wetlands or forests, the value of the environmental, recreational, and other services they offer is likely to be disproportionately high owing to their strategic locations. This dissertation studies the optimal provision of green urban areas and the welfare effects of a substantial change in green-provision policies in the presence of other types of land use and adverse shocks. It comprises four papers (chapters).
