References of "Doctoral thesis"
Designing Compliance Patterns: Integrating Value Modeling, Legal Interpretation and Argument Schemes for Legal Risk Management
Kiriinya, Robert Kevin Muthuri UL

Doctoral thesis (2017)

Companies must be able to demonstrate that their way of doing business is compliant with relevant rules and regulations. However, the law often has open texture; it is generic and needs to be interpreted before it can be applied in a specific case. Entrepreneurs generally lack the expertise to engage in the regulatory conversations that make up this interpretation process. In particular, for the application domain of technological startups, this leads to legal risks. This research seeks to develop a robust module for legal interpretation. We apply informal logic to bridge the gap between the principles of interpretation in legal theory and the legal rules that determine the compliance of business processes. Accordingly, interpretive arguments characterized by argument schemes are applied to business models represented by value modeling (VDML). The specific outcome of the argumentation process (if any) is then summarized into a compliance pattern, in a context-problem-solution format. Two case studies in the application area of startups show that the approach is able to express the legal arguments and is also understandable for the target audience. The project is presented in two parts: Part I, the background, contains an introduction, literature review, motivational case studies, a survey on legal risks, and a modeling of business and legal aspects. Part II builds on the interdisciplinary facets of the first part to develop the Compliance Patterns Framework, which is then validated with two case studies, followed by a conclusion.
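The context-problem-solution format of a compliance pattern can be sketched as a small data structure. This is a hypothetical illustration only: the field names and the example content are assumptions, not the schema or the cases used in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class CompliancePattern:
    """A compliance pattern in context-problem-solution form (illustrative only)."""
    context: str   # the business setting, e.g. a fragment of a value model
    problem: str   # the legal risk or open-textured rule at issue
    solution: str  # the interpretation the argumentation process settled on
    arguments: list = field(default_factory=list)  # interpretive argument schemes applied

pattern = CompliancePattern(
    context="Startup operates a ride-sharing platform",
    problem="Is the platform a 'transport service' under the regulation?",
    solution="Argued as an information-society service; rationale recorded",
    arguments=["argument from ordinary meaning", "argument from purpose"],
)
```

Keeping the applied argument schemes alongside the solution is what makes the pattern reusable: the outcome stays traceable to the interpretive arguments that produced it.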

Comprehensive Specification and Efficient Enforcement of Role-based Access Control Policies using a Model-driven Approach
Ben Fadhel, Ameni UL

Doctoral thesis (2017)

Prohibiting unauthorized access to critical resources and data has become a major requirement for enterprises. Access control (AC) mechanisms manage requests from users to access system resources; access is granted or denied based on the authorization policies defined within the enterprise. One of the most widely used AC paradigms is role-based access control (RBAC), in which access rights are determined based on the user's role. In this dissertation, we focus on the problems of modeling, specifying and enforcing complex RBAC policies, by making the following contributions:
1. the GemRBAC+CTX conceptual model, a UML extension of the RBAC model that includes all the entities required to express the various types of RBAC policies found in the literature, with a specific emphasis on contextual policies. For each type of policy, we provide the corresponding formalization using the Object Constraint Language (OCL) to operationalize the access decision for a user's request using model-driven technologies.
2. the GemRBAC-DSL language, a domain-specific language for RBAC policies designed on top of the GemRBAC+CTX model. The language is characterized by a syntax close to natural language, which does not require any mathematical background for expressing RBAC policies, and it supports all the authorization policies captured by the GemRBAC+CTX model.
3. MORRO, a model-driven framework for the run-time enforcement of RBAC policies expressed in GemRBAC-DSL, built on top of the GemRBAC+CTX model. MORRO provides policy enforcement for both access and usage control.
4. three tools (an editor for GemRBAC-DSL, a model transformation tool for GemRBAC-DSL, and a run-time enforcement framework), implemented and released as part of this work.
The GemRBAC+CTX model and the GemRBAC-DSL language have been adopted by our industrial partner for the specification of the access control policies of a Web application in the domain of disaster relief intervention.
We have extensively evaluated the applicability and the scalability of MORRO on this Web application. The experimental results show that an access decision can be made, on average, in less than 107 ms, and that the time for processing a notification of an AC-related event is less than 512 ms. Furthermore, both the access decision time and the execution time for processing a notification of an AC-related event scale linearly, in the majority of cases, with respect to the parameters characterizing AC configurations; in the remaining cases, the access decision time is constant.
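The kind of role-based access decision that such a framework operationalizes can be illustrated with a minimal sketch. This is not GemRBAC-DSL or OCL: the users, roles, permissions and the single time-window constraint below are invented, and the contextual policy is reduced to one temporal condition.

```python
from datetime import time

# Minimal RBAC sketch (illustrative only): users hold roles, roles hold
# permissions, and a contextual policy restricts when a role is usable.
user_roles = {"alice": {"nurse"}, "bob": {"admin"}}
role_permissions = {
    "nurse": {"read_record"},
    "admin": {"read_record", "delete_record"},
}
# Contextual constraint: the 'nurse' role is only active during the day shift.
role_time_window = {"nurse": (time(8, 0), time(20, 0))}

def access_decision(user, permission, now):
    """Grant iff some currently active role of the user carries the permission."""
    for role in user_roles.get(user, set()):
        window = role_time_window.get(role)
        if window and not (window[0] <= now <= window[1]):
            continue  # role not active in this context
        if permission in role_permissions.get(role, set()):
            return True
    return False
```

A real enforcement framework evaluates many more policy types (cardinality, separation of duty, location, and so on), but each access decision reduces to the same question: does an active role of the requester carry the requested permission under the current context?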

‘What Do You Mean You Lost the Past?’ Agency, Expression and Spectacle in Amateur Filmmaking
Wecker, Danièle UL

Doctoral thesis (2017)

The following thesis presents an examination of privately produced amateur films taken from the Amateur Film Archive at the Centre National de l'Audiovisuel in Luxembourg. It analyzes how amateur films present a filmic world and examines specific notions of meaning generation in the absence of meta-data and original context. Rather than taking amateur film as a homogenous genre or practice, this study concentrates on film language. The first part of this two-fold engagement with these filmic worlds thus identifies the highly differentiated filmic modes that can be read from the images. A filmic mode is understood here as a concomitance of style and choice of subject matter. Without their original context, these films lose their most important means of meaning generation, namely the recollective narratives constructed by the intended audience in the viewing situation. This work therefore takes these images as remnants of a visual narration rather than approaching them in terms of recollective narratives. It operates from the very simple premise that what was filmed had significance for these filmmakers, and that how the camera was used can serve as an illustration of underlying intentions and motivations, both intended and inadvertent. The first part of the study then focuses on the diversification within the images, reads the concomitant cultural codifications that structure representational productions in the private sphere, and analyzes film language as a means of self-inscription and self-narration. The second part explores filmic language in terms of a visualization of primordial signifying expression coming-into-being. It relates to amateur film and practice from a basis of primary Becoming rather than a fixed Being. This engagement extends to include the researcher and his/her own background as a co-constitutive part of this process of primordial meaning coming-into-being.
Film is thus understood as the opening of a filmic universe that presents its own structures and engagements, and not as a visualization of a profilmic world from the past.

Optical Characterization of Cu2ZnSnSe4 Thin Films
Sendler, Jan Michael UL

Doctoral thesis (2017)

Une étude acoustique et comparative sur les voyelles du luxembourgeois [An acoustic and comparative study of the vowels of Luxembourgish]
Thill, Tina UL

Doctoral thesis (2017)

This thesis is a descriptive work in acoustic phonetics that studies the production of Luxembourgish vowels in native and non-native speech. Its objective is to reconcile the variation of Luxembourgish, a mainly spoken language composed of many regional varieties and evolving in a multilingual context, with the learning of Luxembourgish as a foreign language in the Grand Duchy of Luxembourg. Assuming that language acquisition implies knowledge of the sound contrasts of a language, we investigate the productions of speakers whose mother tongue, such as French, has different features than Luxembourgish, to see whether the contrasts are reproduced in non-native speech. Productions of French speakers are compared to those of native speakers from the region around the capital city of the Grand Duchy, whose variety serves as a reference for the teaching of Luxembourgish as a foreign language. The purposes of the study are the following:
- to extend the descriptions of the acoustic properties of vowels produced in a regional variety of the Grand Duchy of Luxembourg,
- to highlight the specific production difficulties of French learners of Luxembourgish,
- to interpret the results with regard to the teaching of Luxembourgish as a foreign language.
Fieldwork and the creation of a corpus through recordings of 10 Luxembourgish speakers and 10 French speakers form an important part of the empirical work. We obtained a corpus of twelve and a half hours of read and spontaneous speech, including native and non-native Luxembourgish as well as native French. This is a first corpus containing both native and non-native Luxembourgish speech, and it enables various comparative studies. In this thesis, we carried out comparative analyses of the read-speech data. The methodology made it possible to compare native and non-native speech as well as the L1 and L2 data of the French speakers.
The results provide information about native and non-native vowel production. They show that, on the one hand, vowel productions vary among speakers, even when they speak the same regional variety, and, on the other hand, that French speakers who learn Luxembourgish at B1/B2 level have difficulties producing contrasts in Luxembourgish. This concerns:
- the quantity of the long vowels [iː], [eː], [aː], [oː], [uː] and the short vowels [i], [e], [ɑ], [ɔ], [u],
- the quality of the long vowel [aː] and the two short vowels [æ] and [ɑ],
- the quality of the onset of the diphthongs [æi], [æu], [ɑi], [ɑu].
These results, as well as thorough descriptions of the vowels in native speech, extend knowledge not only of Luxembourgish in general but also of the variety which serves as the reference for Luxembourgish as a foreign language. In addition, they open up prospects for studying Luxembourgish by problematizing the introduction of rules for this type of education, despite the absence of language instruction in schools and the evolution of regional varieties in a concentrated geographical area.

A Combined Unsupervised Technique for Automatic Classification in Electronic Discovery
Ayetiran, Eniafe Festus UL

Doctoral thesis (2017)

Electronic data discovery (EDD), e-discovery or eDiscovery is any process by which electronically stored information (ESI) is sought, identified, collected, preserved, secured, processed and searched in order to find items relevant to civil and/or criminal litigation or regulatory matters, with the intention of using them as evidence. Searching electronic document collections for relevant documents is the part of eDiscovery that poses serious problems for lawyers and their clients alike. Finding efficient and effective techniques for search in eDiscovery is an interesting and still open problem in the field of legal information systems. Researchers are shifting away from traditional keyword search to more intelligent approaches such as machine learning (ML) techniques. State-of-the-art algorithms for search in eDiscovery focus mainly on supervised approaches: supervised learning and interactive approaches. The former uses labelled examples for training systems, while the latter uses human assistance in the search process to help retrieve relevant documents; techniques in the latter approach include interactive query expansion, among others. Both are supervised forms of technology-assisted review (TAR), the use of technology to assist or completely automate the process of searching for and retrieving relevant documents from ESI. In text retrieval/classification, supervised systems are known for their superior performance over unsupervised systems. However, two serious issues limit their application in eDiscovery search and in information retrieval (IR) in general. First, they have a high associated cost in terms of finance and human effort; this is largely responsible for the huge amount of money spent on eDiscovery annually.
Secondly, their case/project-specific nature does not allow for reuse, adding further to organizations' expenses when they have two or more cases involving eDiscovery. Unsupervised systems, on the other hand, are cost-effective in terms of finance and human effort. A major challenge in unsupervised ad hoc information retrieval is the vocabulary problem, which causes term mismatch between queries and documents. While topic modelling techniques tackle this from the thematic point of view, in the sense that queries and documents are likely to match if they discuss the same topic, natural language processing (NLP) approaches view it from the semantic perspective. Scalable topic modelling algorithms, just like the traditional bag-of-words technique, suffer from polysemy and synonymy problems. NLP techniques, on the other hand, while able to considerably resolve polysemy and synonymy, are computationally expensive and not suitable for large collections, as is the case in eDiscovery. In this thesis, we exploit the peculiarity that eDiscovery collections are composed mainly of e-mail communications and their attachments; mining topics of discourse from the e-mails and disambiguating these topics and the queries for term matching proves effective for retrieving relevant documents when compared to traditional stem-based retrieval. We present an automated unsupervised approach for retrieval/classification in eDiscovery: an ad hoc retrieval method which creates a representative for each original document in the collection using a latent Dirichlet allocation (LDA) model with Gibbs sampling, and which explores word sense disambiguation (WSD) to give these representative documents and the queries deeper meanings for distributional semantic similarity.
The word sense disambiguation technique is itself a hybrid algorithm derived from a modified version of the original Lesk algorithm and the Jiang & Conrath similarity measure. The technique was evaluated on the TREC legal track; results and observations are discussed in chapter 8. We conclude that WSD can improve ad hoc retrieval effectiveness. Finally, we suggest further work focusing on efficient algorithms for word sense disambiguation, which could further improve retrieval effectiveness if applied to the original document collections rather than the representative collections.
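The gloss-overlap idea behind the Lesk family of algorithms can be sketched in a few lines. This is a simplified illustration, not the thesis's hybrid algorithm: the modified Lesk variant and the Jiang & Conrath measure it combines are omitted, and the two glosses below are invented stand-ins for a real sense inventory.

```python
# Simplified Lesk gloss-overlap disambiguation (illustrative only).
# A word's sense is chosen by counting overlapping words between its
# dictionary gloss and the surrounding context.
SENSES = {
    "bank": {
        "bank#finance": "an institution that accepts deposits and lends money",
        "bank#river": "sloping land beside a body of water such as a river",
    }
}

def lesk(word, context_words):
    """Pick the sense whose gloss shares the most words with the context."""
    context = set(context_words)
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES.get(word, {}).items():
        overlap = len(context & set(gloss.split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense
```

Applied to topic words mined from e-mails rather than raw text, this kind of disambiguation is what lets two documents match on meaning even when their surface vocabularies differ.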

Tax havens under international pressure: a game theoretical approach
Pulina, Giuseppe UL

Doctoral thesis (2017)

Images of Galois representations and p-adic models of Shimura curves
Amoros Carafi, Laia UL

Doctoral thesis (2016)

The thesis treats two questions situated in the Langlands program, one of the most active and important areas in current number theory and arithmetic geometry. The first question concerns the study of images of Galois representations into Hecke algebras coming from modular forms over finite fields, and the second deals with p-adic models of Shimura curves and their bad reduction. Consequently, the thesis is divided into two parts. The first part studies images of Galois representations that take values in Hecke algebras of modular forms over finite fields. The main result of this part is a complete classification of the possible images of 2-dimensional Galois representations with coefficients in local algebras over finite fields, under the hypotheses that (i) the square of the maximal ideal is zero, (ii) the residual image is big (in a precise sense), and (iii) the coefficient ring is generated by the traces. In odd characteristic, the image is completely determined by these conditions; in even characteristic the classification is much richer. In that case, the image is uniquely determined by the number of different traces of the representation, a number given by an easy formula. As an application of these results, the existence of certain p-elementary abelian extensions of big non-solvable number fields can be deduced. Whereas some aspects of class field theory are accessible through this approach, it can be applied to huge fields for which standard techniques totally fail. The second part of the thesis develops an approach to p-adic uniformisations of Shimura curves X(Dp,N) through a combination of techniques from rigid analytic geometry and the arithmetic of quaternion orders.
The results in this direction lean on two methods: one based on the information provided by certain Mumford curves covering Shimura curves, and the other on the study of Eichler orders of level N in the definite quaternion algebra of discriminant D. Combining these methods, an explicit description is given of the fundamental domains associated to the p-adic uniformisation of families of Shimura curves of discriminant Dp and level N ≥ 1 for which the one-sided ideal class number h(D,N) is 1. The method presented in this thesis enables one to find Mumford curves covering Shimura curves, together with a free system of generators for the associated Schottky groups, p-adic good fundamental domains and their stable reduction-graphs. As an application, general formulas for the reduction-graphs with lengths at p of the considered families of Shimura curves can be computed.

Household Nonemployment, Social Risks and Inequality in Europe
Hubl, Vanessa Julia UL

Doctoral thesis (2016)

The dissertation explores interactions between households, states and markets and their relation to socio-economic inequalities among working-age households. The focus lies on three aspects: the importance of the welfare state, economic risks and opportunities within households, and the link between these two aspects and broader patterns of inequality at the societal level. These are analysed in three empirical studies, using a range of statistical methods (multilevel analysis, event history models and counterfactual analyses of income distributions). In addition, an extensive framework paper provides a background to the analyses, clarifies their relation in theoretical terms, and discusses the results. The first empirical study explores the relation between the regulation of social benefits, social risks, and household nonemployment in 20 European countries using internationally comparative institutional and survey data. The study reveals that eligibility conditions and activation policy vary systematically with the effect of social risks on the probability of household nonemployment; the strength and direction of the influence depend on the specific policy area and risk factor. The second study analyses the duration of household nonemployment for British and German couples from the early 1990s to the mid-2000s. Spells of dual joblessness have become longer over time, which is related to changes in the household composition of nonemployed couples. The third analysis evaluates the consequences of welfare shifts between households on changing patterns of inequality between 2005 and 2010. Changes in the distribution of household employment, benefit transfers, and family types in Germany, the United Kingdom, Poland, and Spain are analysed in terms of their contribution to developments in income inequality between households.
The analysis of income distributions suggests that changes in the socio-demographic and economic household characteristics of a population can have a substantial impact on different income groups. The overarching conclusion of the dissertation is that certain aspects of household composition enhance the risk of lower economic activity and welfare, but that the impact of these factors varies strongly according to the broader context in which the households are situated. Social policies that have the potential to reduce inequalities between households need to consider possible adverse effects on economic risk structures and spill-over effects to other areas of social protection. Future research should continue studying the household's role in relation to the market, the state, and individual needs and resources; incorporate additional economic and welfare-regime aspects into the analyses; and explore further statistical tools to do so.

Cohomologies and derived brackets of Leibniz algebras
Cai, Xiongwei UL

Doctoral thesis (2016)

In this thesis, we work on the structure of Leibniz algebras and develop cohomology theories for them. The motivation comes from:
• Roytenberg, Stienon-Xu and Ginot-Grutzmann's work on standard and naive cohomology of Courant algebroids (Courant-Dorfman algebras);
• Kosmann-Schwarzbach, Roytenberg and Alekseev-Xu's constructions of derived brackets for Courant algebroids;
• classical equivariant cohomology theory and generalized geometry.
The thesis consists of three parts:
1. We introduce standard cohomology and naive cohomology for a Leibniz algebra, discuss their properties and show that they are isomorphic. By similar methods, we prove a generalization of Ginot-Grutzmann's theorem on transitive Courant algebroids, which was conjectured by Stienon-Xu. The relation between the standard complex of a Leibniz algebra and that of its corresponding crossed product is also discussed.
2. We observe a canonical 3-cochain in the standard complex of a Leibniz algebra. We construct a bracket on the subspace consisting of so-called representable cochains, and prove that this subspace becomes a graded Poisson algebra. Finally we show that for a fat Leibniz algebra, the Leibniz bracket can be represented as a derived bracket.
3. Inspired by the notion of a Lie algebra action and the idea of generalized geometry, we introduce the notion of a generalized action of a Lie algebra g on a smooth manifold M: a homomorphism of Leibniz algebras from g to the generalized tangent bundle TM+T*M. We define the interior product and Lie derivative so that the standard complex of TM+T*M becomes a g-differential algebra, and then discuss its equivariant cohomology. We also study the equivariant cohomology of a subcomplex of a Leibniz complex.
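For reference, the defining identity of a (left) Leibniz algebra, standard in the literature the thesis builds on, states that each left bracket operator acts as a derivation of the bracket itself:

```latex
% A (left) Leibniz algebra is a vector space V with a bilinear bracket [.,.] satisfying
[x,[y,z]] = [[x,y],z] + [y,[x,z]] \qquad \text{for all } x, y, z \in V.
% When the bracket is antisymmetric this identity reduces to the Jacobi identity,
% so every Lie algebra is in particular a Leibniz algebra.
```

The bracket is not required to be antisymmetric, which is precisely why the usual Lie-algebra cohomology machinery has to be adapted.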

Essays on Inequality, Public Policy, and Banking
Mavridis, Dimitrios UL

Doctoral thesis (2016)

Collaborative Rule-Based Proactive Systems: Model, Information Sharing Strategy and Case Studies
Dobrican, Remus-Alexandru UL

Doctoral thesis (2016)

The Proactive Computing paradigm provides a new way to make the multitude of computing systems, devices and sensors spread through our modern environment work for, and be active on behalf of, human beings. In this paradigm, users are put on top of the interactive loop, and the underlying IT systems are automated to perform even the most complex tasks in a more autonomous way. This dissertation provides further means, at both the theoretical and applied levels, to design and implement proactive systems. It shows how smart mobile, wearable and/or server applications can be developed with the proposed Rule-Based Middleware Model for computing proactively and for operating on multiple platforms. In order to represent, and reason about, the information that a proactive system needs to know about the environment in which it performs its computations, a new technique called the Proactive Scenario is proposed. As an extension of its scope and properties, and to achieve global reasoning over interconnected proactive systems, a new collaborative technique called the Global Proactive Scenario is then proposed. Furthermore, to show their potential, three real-world case studies of (collaborative) proactive systems are explored, validating the proposed development methodology and its technological framework in domains such as e-Learning, e-Business and e-Health.
Results from these experiments confirm that software applications designed along the lines of the proposed rule-based proactive system model, together with the concepts of local and global proactive scenarios, are capable of actively searching for the information they need, of automating tasks and procedures that do not require the user's input, of detecting various changes in their context and adapting to them, and of performing collaboration and global reasoning over multiple proactive engines spread across different networks.
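Rule-based proactive behaviour of the kind described above can be sketched as a tiny forward evaluation loop. This is an invented illustration, not the thesis's middleware model or its Proactive Scenario technique: rules are condition-action pairs checked against a context dictionary, and a "scenario" is just an ordered collection of such rules.

```python
# Minimal rule-based proactive sketch (illustrative only). A rule fires its
# action when its condition holds on the current context; the engine runs
# every rule of a scenario and collects the fired actions' results.
def make_rule(condition, action):
    return {"condition": condition, "action": action}

def run_scenario(rules, context):
    """Evaluate every rule against the context; return the fired actions' results."""
    fired = []
    for rule in rules:
        if rule["condition"](context):
            fired.append(rule["action"](context))
    return fired

scenario = [
    make_rule(lambda c: c["battery"] < 20, lambda c: "dim_screen"),
    make_rule(lambda c: c["unread_mail"] > 10, lambda c: "summarise_inbox"),
]
```

Because the engine acts whenever a condition becomes true, rather than waiting for a user command, the system works on the user's behalf; a global scenario would additionally share context and conclusions across several such engines.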

Synchronisation of Model Visualisation and Code Generation Based on Model Transformation
Gottmann, Susann UL

Doctoral thesis (2016)

The development, maintenance and documentation of complex systems is commonly supported by model-driven approaches, in which system properties are captured by visual models at different layers of abstraction and from different perspectives, as proposed by the Object Management Group (OMG) and its model-driven architecture. Generally, a model is a concrete view on the system from a specific perspective in a particular domain. We focus on visual models in the form of diagrams whose syntax is defined by domain-specific modelling languages (DSLs). Different models may represent different views on a system, i.e., they may be linked to each other by sharing a common set of information. Therefore, models expressed in one DSL may be transformed to interlinked models in other DSLs, and model updates may be synchronised between the different domains. Concretely, this thesis presents the transformation and synchronisation of source code (abstract syntax trees, ASTs) written in the Satellite-Procedure & Execution Language (SPELL) to flow charts (code visualisation) and vice versa (code generation), as the result of an industrial case study. The transformation and synchronisation are performed based on existing approaches for model transformation and synchronisation between two domains in the theoretical framework of graph transformation, where models are represented by graphs. Furthermore, extensions to existing approaches are presented for treating non-determinism in concurrent model synchronisations. Finally, the existing results for model transformations and synchronisations between two domains are lifted to the more general case of an arbitrary number of domains or of models containing views: a model in one domain may be transformed to models in several domains or to all other views, and model updates in one domain may be synchronised to several other domains or to all other views.
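The code-to-visualisation direction can be illustrated with a toy transformation that maps a linear AST, here just a list of statements, to a flow-chart model of nodes and sequential edges. This is a deliberately minimal sketch: it reflects nothing of SPELL's actual syntax or of the graph-transformation machinery used in the thesis.

```python
# Toy model transformation (illustrative only): a linear AST, given as a
# list of statements, is mapped to a flow-chart model with explicit start
# and end nodes and one edge per sequential step.
def ast_to_flowchart(statements):
    nodes = ["start"] + list(statements) + ["end"]
    edges = [(nodes[i], nodes[i + 1]) for i in range(len(nodes) - 1)]
    return {"nodes": nodes, "edges": edges}
```

Synchronisation is the harder direction: after an edit on either side, only the changed nodes and edges should be propagated, rather than regenerating the whole target model.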

Depression and ostracism: the role of attachment, self-esteem and rejection sensitivity for treatment success and depressive symptom deterioration
Borlinghaus, Jannika UL

Doctoral thesis (2016)

The current research programme is based on three studies investigating ramifications of ostracism in inpatients diagnosed with depression. It aims at understanding responses to ostracism in depressed patients, and the implications for psychotherapy and symptom deterioration, using experimental (study 1) and longitudinal (studies 2 and 3) research designs. Investigating psychological factors such as attachment, self-esteem and Rejection Sensitivity, we found that attachment affects the immediate physiological reactions to ostracism (study 1), that state self-esteem after an ostracism experience impacts therapy outcome (study 2), and that Rejection Sensitivity, the cognitive-affective disposition to anxiously expect and overreact to rejection, predicts deterioration of depressive symptoms 6 months after treatment (study 3). These results highlight the salience of attachment when investigating reactions to ostracism, and the importance of Rejection Sensitivity over the course of therapy as an indicator of therapy outcome and risk of relapse.

Analysis of the impact of ROS in networks describing neurodegenerative diseases
Ignatenko, Andrew UL

Doctoral thesis (2016)

In this thesis, a model of the ROS (reactive oxygen species) management network is built using the domino principle. The model offers insight into the design principles underlying the ROS management network and into its functioning in diseases such as cancer and Parkinson's disease (PD); it is validated using experimental data. The model is used for an in silico study of ROS management dynamics under stress conditions (oxidative stress). This highlights both adaptation to stress and the stress accumulation effect in the case of repeated stress. The study also helps to discover potential routes to a personalized treatment of insufficient ROS management. Different ways of controlling the ROS management network are shown using an optimal control approach. The results could be used to seek treatment strategies that fix ROS management failures caused by oxidative stress, neurodegenerative diseases, etc., or, vice versa, to develop ways of inducing controllable cell death that might be useful in cancer research.
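The stress-and-adaptation behaviour described above can be illustrated with a toy one-variable balance between ROS production and scavenging, integrated by forward Euler. The equation and all rate constants are invented for illustration; the thesis's domino-principle model is far more detailed.

```python
# Toy ROS balance (illustrative only; not the thesis's domino model).
# dR/dt = production - k_scav * R, with production raised during a stress pulse.
def simulate_ros(t_end=100.0, dt=0.01, base_prod=1.0, k_scav=0.5,
                 stress=(20.0, 40.0), stress_prod=3.0):
    r = base_prod / k_scav  # start at the unstressed steady state
    trace = []
    for i in range(int(t_end / dt)):
        t = i * dt
        prod = stress_prod if stress[0] <= t < stress[1] else base_prod
        r += dt * (prod - k_scav * r)  # forward Euler step
        trace.append(r)
    return trace

trace = simulate_ros()
```

During the pulse the ROS level climbs toward a higher steady state (stress_prod / k_scav); after the pulse ends, scavenging pulls it back to baseline, which is the simplest caricature of adaptation and recovery.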

Diagnostic Competencies of Teachers: Accuracy of Judgement, Sources of Biases, and Consequences of (Mis-)Judgement
Wollschläger, Rachel UL

Doctoral thesis (2016)

Educational assessment tends to rely on more or less standardized tests, teacher judgments, and observations. Although teachers spend approximately half of their professional time on assessment-related activities, most of them enter professional life unprepared, as classroom assessment is often not part of their educational training. Since teacher judgments matter for the educational development of students, these judgments should meet a high standard. The present dissertation comprises three studies focusing on the accuracy of teacher judgments (Study 1), the consequences of (mis-)judgment regarding teacher nominations for gifted programming (Study 2) and teacher recommendations for secondary school tracks (Study 3), and the individual student characteristics that influence and potentially bias teacher judgment (Studies 1 through 3). All studies were designed to contribute to a further understanding of teachers' classroom assessment skills. Overall, the results implied that teacher judgment of cognitive ability was an important factor in teacher nominations and recommendations but lacked accuracy. Furthermore, teacher judgments of various traits and of school achievement were substantially related to social background variables, especially the parents’ educational background. However, multivariate analyses showed social background variables to impact nomination and recommendation only marginally, if at all. All results indicated that differentiated but potentially biased teacher judgments impact far-reaching referral decisions directly, while the influence of social background on the referral decisions themselves appears to be mediated. Implications for further research practices and educational assessment strategies are discussed; of particular interest and importance is the need for teachers to be trained in judgment and educational assessment.

La conversion de CO2, NOx et/ou SO2 en utilisant la technologie du charbon actif
Chaouni, Wafaâ

Doctoral thesis (2016)

Το Ειδικό Καθεστώς οικονομικών δραστηριοτήτων του Ελληνικού Κράτους ως μέλους της Ευρωπαϊκής Ένωσης και Ελλήνων υπηκόων σε χώρες των Βαλκανίων: θεωρητικές προσεγγίσεις και εμπειρικοί προσανατολισμοί
Kavvadia, Helen UL

Doctoral thesis (2016)

During the two decades from 1990 to 2010, the Balkans was the only European region to suffer a cruel reversal. On the one hand, it welcomed the transition to a free economy, despite the difficulties it had to overcome. On the other hand, it experienced the merciless reality of war, when it was convulsed by successive waves of various armed confrontations extending over most of its territory. These left severe and far-reaching post-conflict symptoms: a shattered economic and social base; unemployment and poverty bordering on humanitarian crisis; morale broken, infrastructure in ruins. Yet at the same time this fragmented, conflict-ridden region became the theatre of international collaboration of unprecedented quality and scale. With more than 22 donor/sponsor countries and investors, as well as international and multilateral organisations, the total flow of official development assistance and foreign direct investment in the region is estimated at more than 150 billion euro during the period in question. The region is therefore a model of reorganisation, restructuring and economic development. As an integral part of this broader geopolitical and economic scene, Greece is participating economically, in common with other ‘players’, in the ‘rebirth’ and integration of the region into the international community, both directly, by providing official development assistance (ODA) under the Hellenic Plan for the Economic Reconstruction of the Balkans (HPERB), and indirectly, by promoting Greece’s foreign direct investment (FDI) in the region in a variety of ways, mainly through subsidies. Thus, Greece evolved from a long-term ‘net importer’ of investments into an ‘exporter’ focused on the Balkans, where it ranked among the top sources of ODA and FDI. This study approaches the subject holistically, examining the dual nature of Greece’s involvement and analysing the broader economic and geopolitical setting. 
The dual economic involvement of the Greek State in the Balkans is also examined from a dual perspective within the scope of this doctoral thesis, both from a theoretical and from an empirical point of view, using business strategy models such as PEST, SWOT and VRIO, as well as Porter’s models of ‘competitive forces’ and the ‘value chain’. This thesis examines the reasons why Greece’s involvement was inevitable; investigates the compatibility between Greece’s involvement and the country’s other commitments within the EU; studies the advisability and usefulness of the endeavour; compares Greece with other countries as a provider of ODA and FDI; analyses and evaluates the HPERB; assesses the role of the Greek State and any encouragement and guidance of FDI through Greek subsidies; and scrutinises the characteristics of Greek FDI, in terms of competitiveness and the links with the parent companies in Greece. This subject is novel in an academic context, because a survey of the literature shows that this combined dual line of inquiry has not attracted scientific attention, as existing studies focus on one or other of the two aspects, not both together. Furthermore, this thesis is original in terms of method, firstly because it examines the subject from a new viewpoint, that of business strategy, and secondly because it uses the relevant models in new ways, with new applications. The subject, through the study of what has been achieved, remains ‘topical’ and retains its significance as guidance to future policy makers and/or investors, especially during the current unfavourable economic situation in Greece, because, apart from any conclusions regarding the impact of Greek economic involvement in the Balkans, foreign markets represent a way out of the Greek recession.

Étude sociolinguistique sur les pratiques linguistiques au sein de familles plurilingues vivant au Grand-Duché de Luxembourg
Made Mbe, Annie Flore UL

Doctoral thesis (2016)

The importance of investigating the family language policies of multilingual families living in Luxembourg rests primarily on the trilingualism that characterizes Luxembourg, the heterogeneity of its population, the problems faced by immigrant children schooled in Luxembourg’s schools, and individuals’ personal experience with everyday language use. Hence, this thesis aims to investigate how parents from different linguistic backgrounds, or sharing the same language of origin, communicated with each other prior to the birth of their children, and how the birth of these children reshapes the family language environment. Specifically, we aim to understand the parents’ motivations with regard to their language choices and the communication strategies they implement in order to establish a family communication environment. In addition, considering the effects of language contact, we focus on the school languages and their influence on the children’s language use at home. To achieve this, from a methodological point of view, we combined ethnographic interviews with recordings of family conversations, gaining access to both the declared and the actual linguistic practices of ten families with highly diverse linguistic profiles. These families have resided in Luxembourg for between seven and forty-two years. Further, content analysis was used to examine the migratory experience of each parent. Among the major reasons why parents adopted a positive attitude towards multilingualism were (a) the language learning and use opportunities offered by Luxembourg and (b) the desire to develop the linguistic capital of their children. Our results further suggest that although children do not participate actively in the language-choice decision-making process, they actively influence the family language environment, because the languages they learn in school shape the ways in which they speak at home.
Moreover, we found that once these children come into contact with the officially recognised languages of Luxembourg, which might differ from those of the family, they tend to shift their preference towards these dominant languages. In addition, we found that there is no standard parental communication strategy for passing the family languages on to the children; rather, depending on their objectives, parents may adopt different strategies. Overall, this thesis opens new perspectives for research investigating the family language policies of multilingual families by highlighting the relevance of the educational dimension for children with immigrant backgrounds.

Boosting Static Security Analysis of Android Apps through Code Instrumentation
Li, Li UL

Doctoral thesis (2016)

Within a few years, Android has established itself as a leading platform in the mobile market, with over one billion monthly active Android users. To serve these users, the official market, Google Play, hosts around 2 million apps, which have penetrated a variety of user activities and play an essential role in daily life. However, this penetration has also opened doors for malicious apps, presenting threats that can lead to severe damage. To alleviate the security threats posed by Android apps, the literature offers a large body of work proposing static and dynamic approaches for identifying and managing security issues in the mobile ecosystem. Static analysis in particular, which does not require actually executing the code of Android apps, has been used extensively for market-scale analysis. In order to better understand how static analysis is applied, we conducted a systematic literature review (SLR) of related research for Android, studying influential research papers published in the last five years (from 2011 to 2015). Our in-depth examination of those papers reveals, among other findings, that static analysis is largely performed to uncover security and privacy issues. The SLR also highlights that no single work tackles all the challenges of static analysis of Android apps. Existing approaches indeed fail to yield sound results in various analysis cases, given the different specificities of Android programming. Our objective is thus to reduce the analysis complexity of Android apps in a way that lets existing approaches succeed on their previously failing cases. To this end, we propose to instrument the app code so as to transform a given hard problem into an easily resolvable one (e.g., reducing an inter-app analysis problem to an intra-app analysis problem). As a result, our code instrumentation boosts existing static analyzers in a non-invasive manner (i.e., without any need to modify those analyzers).
In this dissertation, we apply code instrumentation to solve three well-known challenges of static analysis of Android apps, allowing existing static security analyses to 1) be inter-component communication (ICC) aware; 2) be reflection aware; and 3) cut out common libraries. ICC is a challenge for static analysis: the ICC mechanism is driven at the framework level rather than the app level, leaving it invisible to app-targeted static analyzers. As a consequence, static analyzers can only build an incomplete control-flow graph (CFG), which prevents a sound analysis. To support ICC-aware analysis, we devise an approach called IccTA, which instruments app code by adding glue code that directly connects components using the traditional Java class access mechanism (e.g., explicit instantiation of target components). Reflection is a challenge for static analysis as well, because it obscures the analysis context. To support reflection-aware analysis, we provide DroidRA, a tool-based approach which instruments Android apps to explicitly replace reflective calls with their corresponding traditional Java calls. The mapping from reflective calls to traditional Java calls is inferred through a solver, where the resolution of reflective calls is reduced to a composite constant propagation problem. Libraries are pervasively used in Android apps. On the one hand, their presence increases the time/memory consumption of static analysis. On the other hand, they may lead to false positives and false negatives in static approaches (e.g., clone detection and machine-learning-based malware detection). To mitigate this, we propose to instrument Android apps to cut out a set of automatically identified common libraries from the app code, so as to improve static analyzers' performance in terms of time/memory as well as accuracy.
To sum up, in this dissertation we leverage code instrumentation to boost existing static analyzers, allowing them to yield sounder results and to perform quicker analyses. Thanks to the aforementioned approaches, we are now able to automatically identify malicious apps. However, it is still unknown how malicious payloads are introduced into those malicious apps. As a perspective for future research, we conclude this dissertation with a thorough dissection of piggybacked apps (whose malicious payloads are easily identifiable), in an attempt to understand how malicious apps are actually built.
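The reflection-resolution idea behind the DroidRA approach described above can be pictured as a source-to-source rewrite. The sketch below is a Python stand-in for the actual Jimple-level instrumentation of Java reflection (a single regex instead of a constant-propagation solver); the call names are invented:

```python
import re

def resolve_reflection(source: str) -> str:
    """Rewrite a reflective call whose target string has already been
    resolved (e.g. by constant propagation) into a direct call, so that
    a static call-graph builder can see the real target."""
    # getattr(receiver, "method")(args)  ->  receiver.method(args)
    return re.sub(r'getattr\((\w+),\s*"(\w+)"\)', r'\1.\2', source)

before = 'getattr(sink, "sendSms")(number, payload)'
after = resolve_reflection(before)
print(after)  # sink.sendSms(number, payload)
```

After the rewrite, the call target `sendSms` appears as an ordinary method call, so an analyzer that merely scans for direct calls no longer misses it — the same "make the hidden edge explicit" effect the instrumentation achieves for Java bytecode.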

Astrocyte phenotype during differentiation: implication of the NFkB pathway
Birck, Cindy UL

Doctoral thesis (2016)

ELECTRONIC AND STRUCTURAL PROPERTIES OF BISMUTH- AND RARE-EARTH-FERRITES
Weber, Mads Christof UL

Doctoral thesis (2016)

The work of this thesis is set in the framework of understanding multiferroics and light-induced effects in these materials, more specifically in rare-earth and bismuth ferrites. Iron-based materials offer the advantage of a high magnetic-ordering temperature, commonly well above room temperature. To understand the coupling between magnetism and the crystal lattice, and the interaction of a material with light, knowledge of the crystal structure and of the electronic band structure, respectively, is crucial. In the first part of this work, the structural properties of six rare-earth orthoferrites RFeO3 (R = La, Sm, Eu, Gd, Tb, Dy) are analyzed by Raman scattering (RS). Polarization-dependent RS of SmFeO3 and comparison with first-principles calculations enable the assignment of the measured phonon modes to vibrational symmetries and atomic displacements. This allows the phonon modes to be correlated with the orthorhombic structural distortions of RFeO3 perovskites. In particular, the positions of two specific Ag modes scale linearly with the two FeO6 octahedra tilt angles, allowing the distortion to be tracked throughout the series. At variance with the literature, we find that the two octahedra tilt angles scale differently with the vibration frequencies of their respective Ag modes. This behavior, as well as the general relations between the tilt angles, the frequencies of the associated modes, and the ionic radii, is rationalized in a simple Landau model. The precise knowledge of the lattice vibrations is used in the second part of the work to investigate the impact of magnetic transitions on the crystal lattice of SmFeO3. SmFeO3 stands out from the rest of the rare-earth orthoferrites for its comparably high magnetic transition temperatures. While tuning the temperature through the magnetic transitions, the structural properties are probed by RS, complemented by resonant ultrasound spectroscopy and linear birefringence measurements.
During the Fe3+ spin-reorientation phase, we find a significant softening of the elastic constants in the resonant-ultrasound spectra. Towards lower temperatures the Sm3+ spins order; this ordering is clearly reflected in the Raman spectra in the form of changes in the evolution of certain vibrational bands, and additional bands appear in the spectra. The knowledge of the vibrational displacements of the Raman bands allows an investigation of the anomalies related to the Sm3+-spin ordering. Bismuth ferrite can be seen as the archetypal multiferroic material, since it is one of the few room-temperature multiferroics with a strong electric polarization. Its ferroelectric and magnetic properties have been studied in great detail. In addition to the multiferroic properties, photo-induced phenomena have renewed interest in BiFeO3. However, understanding and tuning photo-induced effects requires profound knowledge of the electronic band structure. Despite the extensive study of BiFeO3, the understanding of its electronic transitions remains very limited. In the third part of the thesis, the electronic band structure of BiFeO3 is investigated using RS with twelve different excitation wavelengths ranging from the blue to the near infrared. Resonant Raman signatures (RRS) can be assigned to direct and indirect electronic transitions, as well as to in-gap electronic levels, most likely associated with oxygen vacancies. RRS makes it possible to distinguish between direct and indirect transitions even at higher temperatures. Thus, it is found that the remarkable and intriguing variation of the optical band gap with temperature can be related to the shrinking of an indirect electronic band gap, while the energies of direct electronic transitions remain nearly temperature independent.

THREE-DIMENSIONAL MICROFLUIDIC CELL CULTURE OF STEM CELL-DERIVED NEURONAL MODELS OF PARKINSON'S DISEASE
Lucumi-Moreno, Edinson UL

Doctoral thesis (2016)

Cell culture models in 3D have become an essential tool for the implementation of cellular models of neurodegenerative diseases. Parkinson’s disease (PD) is characterized by the loss of dopaminergic neurons from the substantia nigra. The study of PD at the cellular level requires a cellular model that recapitulates the complexity of the neurons affected in PD. Induced pluripotent stem cell (iPSC) technology is an efficient method for the derivation of dopaminergic (DA) neurons from human neuroepithelial stem cells (hNESC), and thus a suitable tool for developing cellular models of PD. To obtain DA neurons from hNESC in a 3D culture, a protocol based on the use of small molecules and growth factors was implemented in a microfluidic device (OrganoPlate). This non-PDMS device is based on phaseguide technology (capillary pressure barriers that guide the liquid-air interface) and on the hydrogel Matrigel as an extracellular matrix surrogate. To compare the morphological features and electrophysiological activity of neurons differentiated from wild-type hNESCs with those of differentiated neurons carrying the LRRK2 G2019S mutation, a calcium imaging assay based on a calcium-sensitive dye (Fluo-4) and image analysis methods was implemented. Additionally, several aspects of fluid flow dynamics, the rheological properties of Matrigel, and its use as a surrogate extracellular matrix were investigated. Final characterization of the differentiated neuronal population was done using an immunostaining assay and microscopy techniques. The yields of differentiated dopaminergic neurons in the 2-lane OrganoPlate were in the range of 13% to 27%. Morphological (length of processes) and electrophysiological (firing patterns) characteristics of wild-type differentiated neurons and of those carrying the LRRK2 G2019S mutation were determined by applying an image analysis pipeline.
Velocity profiles and the shear stress of fluorescent beads in Matrigel flowing in the culture lanes of the 2-lane OrganoPlate were estimated using particle image velocimetry techniques. In this thesis, we integrate two new technologies to establish a new in vitro 3D cell-based model for studying several aspects of PD at the cellular level, aiming to establish a microfluidic cell culture experimental platform for the study of PD using a systems biology approach.

To Share or not to Share: Access Control and Information Inference in Social Networks
Zhang, Yang UL

Doctoral thesis (2016)

Online social networks (OSNs) have been the most successful online applications of the past decade. Leading players in the business, including Facebook, Twitter and Instagram, attract huge numbers of users. Nowadays, OSNs have become a primary way for people to connect, communicate and share life moments. Although OSNs have brought a lot of convenience to our lives, users' privacy, on the other hand, has become a major concern due to the large amount of personal data shared online. In this thesis, we study users' privacy in social networks from two aspects, namely access control and information inference. Access control is a mechanism, provided by OSNs, through which users themselves regulate who can view their resources. Access control schemes in OSNs are relationship-based, i.e., a user can define access control policies that allow others who are in a certain relationship with him to access his resources. Current OSNs have deployed multiple access control schemes, but most of these schemes do not satisfy users' expectations with respect to expressiveness and usability. There are mainly two types of information that users share in OSNs, namely their activities and their social relations. This information has provided an unprecedented chance for academia to understand human society and for industry to build appealing applications, such as personalized recommendation. However, the large quantity of data can also be used to infer a user's personal information, even information not shared by the user in the OSN. This thesis concentrates on users' privacy in online social networks from these two aspects, access control and information inference, and is organized into two parts. The first part addresses access control in social networks from three perspectives. First, we propose a formal framework based on a hybrid logic to model users' access control policies.
This framework incorporates the notion of public information and provides users with a fine-grained way to control who can view their resources. Second, we design cryptographic protocols to enforce access control policies in OSNs. Under these protocols, a user can allow others to view his resources without leaking private information. Third, major OSN companies have deployed blacklists for users to enforce extra access control on top of the normal access control policies. We formally model blacklists with the help of a hybrid logic and propose efficient algorithms to implement them in OSNs. The second part of this thesis concentrates on the inference of users' information in OSNs using machine learning techniques. The targets of our inference are users' activities, represented by mobility, and their social relations. First, we propose a method which uses a user's social relations to predict his locations. This method adopts a user's social community information to construct the location predictor and performs the inference with machine learning techniques. Second, we focus on inferring the friendship between two users based on the common locations they have visited. We propose a notion, location sociality, that characterizes the extent to which a location is suitable for conducting social activities, and use this notion for friendship prediction. Experiments on real-life social network datasets demonstrate the effectiveness of both inference methods.
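The second inference described above can be pictured with a minimal co-location heuristic: two users who meet at unpopular places are more likely to be friends than two users who only share crowded venues. The weighting below is an Adamic-Adar-style stand-in, not the thesis's learned location-sociality feature, and all users and places are invented:

```python
from collections import defaultdict
from math import log

def friendship_scores(checkins):
    """checkins: dict user -> set of visited location ids.
    Scores each user pair by their shared locations, down-weighting
    locations that many users visit (popular places are weak evidence)."""
    visitors = defaultdict(set)           # location -> set of visitors
    for user, locs in checkins.items():
        for loc in locs:
            visitors[loc].add(user)
    users = sorted(checkins)
    scores = {}
    for i, u in enumerate(users):
        for v in users[i + 1:]:
            shared = checkins[u] & checkins[v]
            scores[(u, v)] = sum(1.0 / log(1 + len(visitors[l]))
                                 for l in shared)
    return scores

s = friendship_scores({
    "alice": {"cafe", "office", "gym"},
    "bob":   {"cafe", "office"},
    "carol": {"cafe"},
})
```

Here "alice" and "bob" share both a popular and a rarer location, so their score exceeds any pair that only shares the crowded cafe; a supervised predictor would then consume such scores as features.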

Mitarbeiterführung und Social-Media-Nutzung im Führungsalltag von Generation-Y-Führungskräften - Eine explorative Analyse mittels Mixed-Methods-Ansatz
Feltes, Florian UL

Doctoral thesis (2016)

The topic of this thesis is the qualitative and quantitative evaluation of the leadership behaviour, and thus the leadership style, of Generation Y (GenY), taking into account the use of social media in day-to-day management. It examines the question of how GenY leaders lead and how they use social media in this context, based on a sequential mixed-methods approach of qualitative interviews and a quantitative online questionnaire. Using qualitative content analysis, it examines 25 interviews concerning the following aspects: leadership behaviour of Generation Y; generation-based differences in leadership and the varying strength of leadership styles; the influence of contextual factors such as hierarchies, sector and company size on leadership style and social media use; the use of social media in day-to-day management; and, finally, connections between the applied leadership styles and the social media usage of GenY leaders. The findings and tendencies were then tested in an online questionnaire. The results of the online questionnaire [self-evaluation of leaders (N=406), bottom-up evaluation by employees (N=622)] show a significant discrepancy between the leaders’ statements and those of the employees. However, there are clear results and tendencies that confirm the findings of the qualitative study. It was established that GenY leaders show characteristics of task-oriented, person-oriented, transactional and transformational leadership. GenY leadership is characterised by clear outcome orientation, flat hierarchies and feedback. The use of social media varies considerably, depending for example on the context in which the leader works, e.g. the sector and the level of management. In summary, it can be stated that there is a connection between the strength of the leadership style and the use of social media in day-to-day management.

Dynamic Vehicular Routing in Urban Environments
Codeca, Lara UL

Doctoral thesis (2016)

Traffic congestion is a persistent issue that most people living in a city have to face every day. Traffic density is constantly increasing and, in many metropolitan areas, the road network has reached its limits and cannot easily be extended to meet the growing traffic demand. Intelligent Transportation Systems (ITS) are a worldwide trend in traffic monitoring that uses technology and infrastructure improvements in advanced communication and sensors to tackle transportation issues such as mobility efficiency, safety, and traffic congestion. The purpose of ITS is to take advantage of all available technologies to improve every aspect of mobility and traffic. Our focus in this thesis is to use these advancements in technology and infrastructure to mitigate traffic congestion. We discuss the state of the art in traffic flow optimization methods, their limitations, and the benefits of a new point of view. The traffic monitoring mechanism that we propose uses vehicular telecommunication to gather the traffic information that is fundamental to creating a consistent overview of the traffic situation, to provisioning real-time information to drivers, and to optimizing their routes. In order to study the impact of dynamic rerouting on the traffic congestion experienced in the urban environment, we need a reliable representation of the traffic situation. In this thesis, traffic flow theory, together with mobility models and propagation models, is the basis for a simulation environment capable of providing realistic and interactive urban mobility, which is used to test and validate our solution for mitigating traffic congestion. The topology of the urban environment plays a fundamental role in traffic optimization, not only in terms of mobility patterns, but also in terms of the connectivity and infrastructure available.
Given the complexity of the problem, we start by defining the main parameters we want to optimize, and the user interaction required, in order to achieve this goal. We aim to optimize the travel time from origin to destination with a selfish approach, focusing on each driver. We then evaluate the constraints and added value of the proposed optimization, providing a preliminary study of its impact on a simple scenario. Our evaluation is made first in a best-case scenario using complete information, then in a more realistic scenario with partial information on the global traffic situation, where connectivity and coverage play a major role. The lack of a general-purpose, freely available, realistic and dependable scenario for Vehicular Ad Hoc Networks (VANETs) makes it difficult for the research community to provide and compare realistic results. To address this issue, we implemented a synthetic traffic scenario, based on a real city, to evaluate dynamic routing in a realistic urban environment. The Luxembourg SUMO Traffic (LuST) Scenario is based on mobility derived from the City of Luxembourg. The scenario is built for the Simulation of Urban MObility (SUMO) and is compatible with Vehicles in Network Simulation (VEINS) and Objective Modular Network Testbed in C++ (OMNeT++), allowing it to be used in VANET simulations. In this thesis we present a selfish traffic optimization approach based on dynamic rerouting, able to mitigate the impact of traffic congestion in urban environments on a global scale. The general-purpose traffic scenario built to validate our results is already being used by the research community, is freely available under the MIT licence, and is hosted on GitHub.
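At its core, the selfish dynamic rerouting described above amounts to re-running a shortest-path query for each driver whenever fresh travel-time estimates arrive. A minimal sketch on a toy four-node network (the road names and travel times are invented, not taken from the LuST scenario):

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra on a directed graph: dict node -> {neighbour: seconds}.
    Returns (path, total_travel_time)."""
    dist, prev = {src: 0.0}, {}
    queue = [(0.0, src)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(queue, (nd, v))
    path, node = [dst], dst               # walk predecessors back to src
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Same topology, two snapshots of estimated travel times (seconds):
free_flow = {"A": {"B": 60, "C": 90}, "B": {"D": 60}, "C": {"D": 60}, "D": {}}
congested = {"A": {"B": 60, "C": 90}, "B": {"D": 300}, "C": {"D": 60}, "D": {}}
```

Under free flow the driver takes A-B-D; once congestion inflates the B-D estimate, the same query reroutes the driver via A-C-D. The thesis's contribution lies in gathering those estimates via vehicular communication and in evaluating the scheme at city scale, not in the shortest-path step itself.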

Novel Methods for Multi-Shape Analysis
Bernard, Florian UL

Doctoral thesis (2016)

Multi-shape analysis has the objective to recognise, classify, or quantify morphological patterns or regularities within a set of shapes of a particular object class in order to better understand the object class of interest. One important aspect of multi-shape analysis is Statistical Shape Models (SSMs), where a collection of shapes is analysed and modelled within a statistical framework. An SSM can be used as a (statistical) prior that describes which shapes are more likely and which shapes are less likely to be plausible instances of the object class of interest. Assuming that the object class of interest is known, such a prior can, for example, be used to reconstruct a three-dimensional surface from only a few known surface points. One relevant application of this surface reconstruction is 3D image segmentation in medical imaging, where the anatomical structure of interest is known a priori and the surface points are obtained (either automatically or manually) from images. Frequently, Point Distribution Models (PDMs) are used to represent the distribution of shapes, where each shape is discretised and represented as a labelled point set. With that, a shape can be interpreted as an element of a vector space, the so-called shape space, and the shape distribution in shape space can be estimated from a collection of given shape samples. One crucial aspect for the creation of PDMs that is tackled in this thesis is how to establish (bijective) correspondences across the collection of training shapes. Evaluated on brain shapes, the proposed method results in an improved model quality compared to existing approaches whilst at the same time being superior with respect to runtime. The second aspect considered in this work is how to learn a low-dimensional subspace of the shape space that is close to the training shapes, where all factors spanning this subspace have local support.
Compared to previous work, the proposed method models the local support regions implicitly, such that no initialisation of the size and location of these regions is necessary, which is advantageous in scenarios where this information is not available. The third topic covered in this thesis is how to use an SSM to reconstruct a surface from only a few surface points. By using a Gaussian Mixture Model (GMM) with anisotropic covariance matrices, which are oriented according to the surface normals, a more surface-oriented fitting is achieved compared to the purely point-based fitting of the common Iterative Closest Point (ICP) algorithm. In comparison to ICP, we find that the GMM-based approach gives superior accuracy and robustness on sparse data. Furthermore, this work covers the transformation synchronisation method, a procedure for removing the noise that accounts for transitive inconsistency in a set of pairwise linear transformations. One interesting application of this methodology in the context of multi-shape analysis is to solve the multi-alignment problem in an unbiased/reference-free manner. Moreover, by introducing an improvement of the numerical stability, the methodology can be used to solve the (affine) multi-image registration problem from pairwise registrations. Compared to reference-based multi-image registration, the proposed approach leads to an improved registration accuracy and is unbiased/reference-free, which makes it well suited for statistical analyses.
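For intuition, the point-based ICP baseline that the GMM approach is compared against alternates between nearest-neighbour matching and a closed-form rigid fit. The 2D sketch below is purely illustrative (the thesis works with 3D surfaces and an anisotropic GMM); the function names are our own:

```python
import math

def best_rigid_2d(src, dst):
    """Least-squares rotation+translation aligning paired 2D points src -> dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sdot = scross = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy      # centred source point
        bx, by = dx - cdx, dy - cdy      # centred target point
        sdot += ax * bx + ay * by
        scross += ax * by - ay * bx
    theta = math.atan2(scross, sdot)     # optimal rotation angle in 2D
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return lambda p: (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def icp(src, dst, iters=20):
    """Classic point-to-point ICP: nearest-neighbour matching + rigid fit."""
    cur = list(src)
    for _ in range(iters):
        pairs = [min(dst, key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2)
                 for p in cur]
        T = best_rigid_2d(cur, pairs)
        cur = [T(p) for p in cur]
    return cur

# Recover a known rigid motion (90-degree rotation plus translation) from
# exact correspondences.
T = best_rigid_2d([(0, 0), (1, 0), (0, 1)], [(1, 2), (1, 3), (0, 2)])
print(T((1, 0)))   # close to (1, 3)
```

The GMM variant replaces the hard nearest-neighbour assignment with soft, surface-normal-oriented correspondences, which is what gives it robustness on sparse data.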

Detailed reference viewed: 121 (16 UL)
Full Text
Emotion Regulation and Job Burnout: Investigating the relationship between emotion regulation knowledge, abilities and dispositions and their role in the prediction of Job Burnout
Seixas, Rita UL

Doctoral thesis (2016)

The present thesis has two goals: 1) to understand the relationship between three levels of emotion regulation - knowledge, abilities and dispositions - as proposed by the Three-level model of emotional competences (Mikolajczak, 2009), and 2) to investigate the role of these three levels in the prediction of job burnout, while accounting for the moderating role of the emotional labor of the job and distinguishing these effects in two professional sectors (the finance and health-care sectors). Methodologically, besides emotion regulation knowledge, specific emotion regulation strategies - reappraisal, suppression, enhancement and expressive flexibility - are considered and assessed both as abilities and as dispositions. Results for goal 1 indicate that: a) knowledge, abilities and dispositions are not hierarchically structured; b) different strategies are independent from each other (both in terms of ability and in terms of disposition); c) the dispositions to reappraise and to enhance do not depend on a priori knowledge or ability, while the disposition to suppress decreases as emotion regulation knowledge and the ability to enhance increase. Results for goal 2 indicate that emotion regulation knowledge, abilities and dispositions are incremental predictors of job burnout. Specifically: a) emotion regulation knowledge decreases emotional exhaustion, and reappraisal ability increases the sense of professional efficacy; b) expressive flexibility increases professional efficacy for workers in high emotional labor jobs, while its effect is detrimental for workers in low emotional labor jobs; c) suppression disposition protects individuals from professional inefficacy, while suppression ability is detrimental in this regard.
Finally, the results show that different strategies have different impacts in different professional sectors, notably suppression, which appears as a detrimental strategy for finance workers and as a protective strategy for health-care workers. Overall, these results indicate that several dimensions of emotion regulation are relevant in the prediction of job burnout. Specifically, knowledge, abilities and dispositions each seem to play an incremental role in explaining variability in job burnout symptoms. The effects of the specific strategies should not be analyzed in a simplistic way; instead, they are better understood when taking into account the specificities of the job and the professional context.

Detailed reference viewed: 331 (31 UL)
Full Text
Enabling Model-Driven Live Analytics For Cyber-Physical Systems: The Case of Smart Grids
Hartmann, Thomas UL

Doctoral thesis (2016)

Advances in software, embedded computing, sensors, and networking technologies will lead to a new generation of smart cyber-physical systems that will far exceed the capabilities of today’s embedded systems. They will be entrusted with increasingly complex tasks like controlling electric grids or autonomously driving cars. These systems have the potential to lay the foundations for tomorrow’s critical infrastructures, to form the basis of emerging and future smart services, and to improve the quality of our everyday lives in many areas. In order to solve their tasks, they have to continuously monitor and collect data from physical processes, analyse this data, and make decisions based on it. Making smart decisions requires a deep understanding of the environment, the internal state, and the impacts of actions. Such deep understanding relies on efficient data models to organise the sensed data and on advanced analytics. Considering that cyber-physical systems are controlling physical processes, decisions need to be taken very fast. This makes it necessary to analyse data live, as opposed to conventional batch analytics. However, the complex nature of such systems, combined with the massive amount of data they generate, imposes fundamental challenges. While data in the context of cyber-physical systems has some characteristics in common with big data, it holds a particular complexity. This complexity results from the complicated physical phenomena described by this data, which makes it difficult to extract a model able to explain such data and its various multi-layered relationships. Existing solutions fail to provide sustainable mechanisms to analyse such data live. This dissertation presents a novel approach, named model-driven live analytics. The main contribution of this thesis is a multi-dimensional graph data model that brings raw data, domain knowledge, and machine learning together in a single model, which can drive live analytic processes.
This model is continuously updated with the sensed data and can be leveraged by live analytic processes to support the decision-making of cyber-physical systems. The presented approach has been developed in collaboration with an industrial partner and, in the form of a prototype, applied to the domain of smart grids. The addressed challenges are derived from this collaboration as a response to shortcomings in the current state of the art. More specifically, this dissertation provides solutions to the following challenges: First, data handled by cyber-physical systems is usually dynamic—data in motion as opposed to traditional data at rest—and changes frequently and at different paces. Analysing such data is challenging, since data models usually can only represent a snapshot of a system at one specific point in time. A common approach consists in a discretisation, which regularly samples and stores such snapshots at specific timestamps to keep track of the history. Continuously changing data is then represented as a finite sequence of such snapshots. Such data representations are very inefficient to analyse, since doing so requires mining the snapshots, extracting a relevant dataset, and finally analysing it. For this problem, this thesis presents a temporal graph data model and storage system, which consider time as a first-class property. A time-relative navigation concept makes it possible to analyse frequently changing data very efficiently. Secondly, making sustainable decisions requires anticipating the impacts that certain actions would have. In complex cyber-physical systems, situations can arise where hundreds or thousands of such hypothetical actions must be explored before a solid decision can be made. Every action leads to an independent alternative, from which a set of other actions can be applied, and so forth.
Finding the sequence of actions that leads to the desired alternative requires efficiently creating, representing, and analysing many different alternatives. Given that every alternative has its own history, this creates a very high combinatorial complexity of alternatives and histories, which is hard to analyse. To tackle this problem, this dissertation introduces a multi-dimensional graph data model (as an extension of the temporal graph data model) that makes it possible to efficiently represent, store, and analyse many different alternatives live. Thirdly, complex cyber-physical systems are often distributed, but to fulfil their tasks these systems typically need to share context information between computational entities. This requires analytic algorithms to reason over distributed data, which is a complex task, since it relies on the aggregation and processing of various distributed and constantly changing data. To address this challenge, this dissertation proposes an approach to transparently distribute the presented multi-dimensional graph data model in a peer-to-peer manner and defines a stream processing concept to efficiently handle frequent changes. Fourthly, to meet future needs, cyber-physical systems need to become increasingly intelligent. To make smart decisions, these systems have to continuously refine behavioural models that are known at design time with what can only be learned from live data. Machine learning algorithms can help to capture this unknown behaviour by extracting commonalities over massive datasets. Nevertheless, a coarse-grained common behaviour model can be very inaccurate for cyber-physical systems, which are composed of completely different entities with very different behaviour. For these systems, fine-grained learning can be significantly more accurate. However, modelling, structuring, and synchronising many fine-grained learning units is challenging.
To tackle this, this thesis presents an approach to define reusable, chainable, and independently computable fine-grained learning units, which can be modelled together with, and on the same level as, domain data. This allows machine learning to be woven directly into the presented multi-dimensional graph data model. In summary, this thesis provides an efficient multi-dimensional graph data model to enable live analytics of the complex, frequently changing, and distributed data of cyber-physical systems. This model can significantly improve data analytics for such systems and empower cyber-physical systems to make smart decisions live. The presented solutions combine and extend methods from model-driven engineering, models@run.time, data analytics, database systems, and machine learning.
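The time-relative navigation idea can be illustrated with a toy node whose attribute values are indexed by timestamp, so that reading the state at an arbitrary time is a binary search over change points rather than a scan over full snapshots. This sketch is ours, not the thesis's actual storage system; the attribute name and values are invented:

```python
import bisect

class TemporalNode:
    """Toy graph node treating time as a first-class property: resolving an
    attribute at time t returns the latest value set at or before t."""
    def __init__(self):
        self._times = {}    # attr -> sorted list of timestamps
        self._values = {}   # attr -> values aligned with the timestamps

    def set(self, attr, t, value):
        ts = self._times.setdefault(attr, [])
        vs = self._values.setdefault(attr, [])
        i = bisect.bisect_left(ts, t)
        ts.insert(i, t)
        vs.insert(i, value)

    def resolve(self, attr, t):
        # Binary search for the value in force at time t; None before any write.
        i = bisect.bisect_right(self._times[attr], t)
        return self._values[attr][i - 1] if i else None

meter = TemporalNode()
meter.set("load_kw", 10, 3.2)
meter.set("load_kw", 20, 4.8)
print(meter.resolve("load_kw", 15))  # 3.2 (value in force at t=15)
print(meter.resolve("load_kw", 25))  # 4.8
```

Extending each timestamped value with an alternative identifier gives a rough picture of how the multi-dimensional model adds a hypothetical-action axis on top of time.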

Detailed reference viewed: 524 (66 UL)
Language Practices in Multilingual Legal Education: A Case Study at the University of Luxembourg
Uwera, Francine UL

Doctoral thesis (2016)

The effects of globalisation include the internationalisation of universities, carried out for the purposes of quality improvement, restructuring or upgrading. At the higher education level, internationalisation may be visible in the outline of the curriculum, the superdiversity of students’ and teachers’ origins and languages, and the establishment of student and staff exchange programmes. All these aspects lead to the need for communication in different languages and, in an increasing number of cases, to the presence of languages of teaching and learning which are not the first languages of the teachers, students and administrative staff involved. Therefore, the trend towards the internationalisation of universities around the world goes hand in hand with restructuring related to language use and language choice. In other words, language, being an essential component of education, has become a major point in governance. The topic of the present thesis is situated in the area of the role of languages and multilingualism in higher education. The study addresses the problem of using a specific instructional language (French, English or German in our case) to learn content subjects in contexts where some of these languages may be foreign languages to students and teachers, and in a field which in many countries is traditionally considered bound to the nation-state and hence to a national language. With regard to the field of law, the situation in multilingual Luxembourg is particular in that Luxembourgish law is formulated in French and, to some extent, in German, while the national language is Luxembourgish. The study sets out to describe the linguistic practices applied in learning and teaching law at the trilingual University of Luxembourg and to investigate how students and teachers view these practices.
Using a range of ethnographic methods, the study examines classroom practices in a specific multilingual context and the way participants view them, rather than assessing, in a normative sense, the appropriateness of the pedagogical means selected or applied by teachers. Matters of general and legal pedagogy and legal theory are touched upon, although not discussed in depth. We hope that the study will contribute to increasing knowledge on this subject in general and be useful both for this specific research terrain (Luxembourg) and for other similar contexts worldwide.

Detailed reference viewed: 84 (18 UL)
La sanction de l'obligation légale d'information en droit des contrats de consommation : Étude de droit français et luxembourgeois.
Pitzalis Épouse Welch, Cécile Elise UL

Doctoral thesis (2016)

Numerous legal duties to disclose information are promulgated in consumer contract law by the legislature of the European Union and are thus common to French and Luxembourgish law. In this context, the legal duty to disclose information has a double objective: to protect the consumer by informing their consent, and to regulate the market by favouring fair competition. A breach of an information disclosure obligation by a professional must be sanctioned to ensure the effectiveness of the obligation. The sanction for breaching the legal obligation to disclose information in consumer contract law must therefore be analysed from the angle of effectiveness, that is, the capacity of its effects to reach the assigned goals. Analysing French and Luxembourgish consumer contract law, similar yet each with its own specificities, offers a perspective on legislative choices in terms of sanctioning legal duties to disclose information, and informs proposals to improve the current systems of sanction.

Detailed reference viewed: 184 (10 UL)
Full Text
LOAD PREDICTION AND BALANCING FOR CLOUD-BASED VOICE-OVER-IP SOLUTIONS
Simionovici, Ana-Maria UL

Doctoral thesis (2016)

Detailed reference viewed: 143 (33 UL)
Full Text
Energy-efficient Communications in Cloud, Mobile Cloud and Fog Computing
Fiandrino, Claudio UL

Doctoral thesis (2016)

This thesis studies the problem of the energy efficiency of communications in distributed computing paradigms, including cloud computing, mobile cloud computing and fog/edge computing. Distributed computing paradigms have significantly changed the way of doing business. With cloud computing, companies and end users can access the vast majority of services online through a virtualized environment on a pay-as-you-go basis. Mobile cloud and fog/edge computing are the natural extension of the cloud computing paradigm to mobile and Internet of Things (IoT) devices. Based on offloading, the process of outsourcing computing tasks from mobile devices to the cloud, mobile cloud and fog/edge computing have become popular techniques to augment the capabilities of mobile devices and to reduce their battery drain. As mobile and IoT devices are equipped with a number of sensors, their proliferation has given rise to a new cloud-based paradigm for collecting data, called mobile crowdsensing, which requires a large number of participants for proper operation. A plethora of communication technologies is applicable to distributed computing paradigms. For example, cloud data centers typically implement wired technologies, while mobile cloud and fog/edge environments exploit wireless technologies such as 3G/4G, WiFi and Bluetooth. Communication technologies directly impact the performance and the energy drain of the system. This Ph.D. thesis analyzes, from a global perspective, the energy efficiency of communication systems in distributed computing paradigms. In particular, the following contributions are proposed: - A new framework of performance metrics for communication systems of cloud computing data centers.
The proposed framework allows a fine-grained analysis and comparison of communication systems, processes, and protocols, defining their influence on the performance of cloud applications. - A novel model for the problem of computation offloading, which describes the workflow of mobile applications through a new Directed Acyclic Graph (DAG) technique. This methodology is suitable for IoT devices working in fog computing environments and was used to design an Android application, called TreeGlass, which performs recognition of trees using Google Glass. TreeGlass is evaluated experimentally in different offloading scenarios by measuring battery drain and execution time as key performance indicators. - For mobile crowdsensing systems, novel performance metrics and a new framework for data acquisition, which exploits a new policy for user recruitment. The performance of the framework is validated through CrowdSenSim, a new simulator designed for mobile crowdsensing activities in large-scale urban scenarios.
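At its simplest, the offloading decision behind such evaluations weighs the battery energy of computing a task locally against the radio energy of shipping the task's data to the cloud. The sketch below is a textbook-style approximation with invented parameter values, not the actual TreeGlass model:

```python
def should_offload(cpu_cycles, data_mb, f_local_hz, p_compute_w,
                   p_radio_w, bandwidth_mbps):
    """Classic offloading energy test (sketch): offload when the radio energy
    needed to ship the data is lower than the battery energy needed to
    compute locally. Cloud execution is assumed free for the device."""
    e_local = p_compute_w * (cpu_cycles / f_local_hz)   # J to compute on device
    e_tx = p_radio_w * (data_mb * 8 / bandwidth_mbps)   # J to transmit the data
    return e_tx < e_local, e_local, e_tx

# Heavy computation (5 Gcycles) with a small 2 MB payload: offloading pays off.
print(should_offload(cpu_cycles=5e9, data_mb=2.0, f_local_hz=1e9,
                     p_compute_w=2.0, p_radio_w=1.0, bandwidth_mbps=10.0))
# (True, 10.0, 1.6)
```

A DAG-based model, as in the thesis, generalises this by applying such a cost comparison per task while respecting the dependencies between tasks.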

Detailed reference viewed: 412 (17 UL)
Full Text
A Model-Driven Approach to Offline Trace Checking of Temporal Properties
Dou, Wei UL

Doctoral thesis (2016)

Offline trace checking is a procedure for evaluating requirements over a log of events produced by a system. The goal of this thesis is to present a practical and scalable solution for the offline checking of the temporal requirements of a system, which can be used in contexts where model-driven engineering is already a practice, where temporal specifications should be written in a domain-specific language not requiring a strong mathematical background, and where relying on standards and industry-strength tools for property checking is a fundamental prerequisite. The main contributions of this thesis are: i) the TemPsy (Temporal Properties made easy) language, a pattern-based domain-specific language for the specification of temporal properties; ii) a model-driven trace checking procedure, which relies on an optimized mapping of temporal requirements written in TemPsy into Object Constraint Language (OCL) constraints on a conceptual model of execution traces; iii) a model-driven approach to violation information collection, which relies on the evaluation of OCL queries on an instance of the trace model; iv) three publicly available tools: 1) TemPsy-Check and 2) TemPsy-Report, implementing, respectively, the trace checking and violation information collection procedures, and 3) an interactive visualization tool for navigating and analyzing the violation information collected by TemPsy-Report; v) an evaluation of the scalability of TemPsy-Check and TemPsy-Report when applied to the verification of real properties. The proposed approaches have been applied to and evaluated on a case study developed in collaboration with a public service organization active in the domain of business process modeling for eGovernment.
The experimental results show that TemPsy-Check is able to analyze traces with one million events in about two seconds, and that TemPsy-Report can collect violation information from such large traces in less than ten seconds; both tools scale linearly with respect to the length of the trace.
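To give a flavour of the pattern semantics involved (TemPsy itself maps patterns to OCL constraints rather than executing Python), a Dwyer-style "precedence" pattern can be checked over a trace in a single linear pass, which is consistent with the linear scaling reported above; the event names below are invented:

```python
def check_precedence(trace, p, q):
    """Precedence pattern: event q may occur only after at least one p has
    occurred. Returns (holds, violations), where violations lists the
    0-based positions of offending q events -- the kind of violation
    information a reporting tool would collect."""
    seen_p, violations = False, []
    for i, e in enumerate(trace):
        if e == p:
            seen_p = True
        elif e == q and not seen_p:
            violations.append(i)
    return not violations, violations

print(check_precedence(["init", "start", "work", "stop"], "start", "stop"))
# (True, [])
print(check_precedence(["stop", "start", "stop"], "start", "stop"))
# (False, [0])  -- the first "stop" is not preceded by any "start"
```

Each pattern in a Dwyer-style catalogue (existence, absence, response, precedence, and their scoped variants) admits a similar single-pass check.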

Detailed reference viewed: 191 (61 UL)
Full Text
GAMES AND STRATEGIES IN ANALYSIS OF SECURITY PROPERTIES
Tabatabaei, Masoud UL

Doctoral thesis (2016)

Information security problems typically involve decision makers who choose and adjust their behaviors in interaction with each other in order to achieve their goals. Consequently, game-theoretic models can potentially be a suitable tool for better understanding the challenges that the interaction of participants in information security scenarios brings about. In this dissertation, we employ models and concepts of game theory to study a number of subjects in the field of information security. In the first part, we take a game-theoretic approach to the matter of preventing coercion in elections. Our game models for the election involve an honest election authority that chooses between various protection methods with different levels of resistance and different implementation costs. Analysing these games, it turns out that society is better off if the security policy is publicly announced and the authorities commit to it. Our focus in the second part is on the property of noninterference in information flow security. Noninterference is a property that captures the confidentiality of actions executed by a given process. However, the property is hard to guarantee in realistic scenarios. We show that the security of a system can be seen as an interplay between functionality requirements and the strategies adopted by users, and based on this we propose a weaker notion of noninterference, which we call strategic noninterference. We also give a characterisation of strategic noninterference through unwinding relations for specific subclasses of goals and for the simplified setting where a strategy is given as a parameter. In the third part, we study the security of information flow based on the consequences of information leakage to the adversary. Models of information flow security commonly prevent any information leakage, regardless of how grave or harmless its consequences may be.
Even in models where each piece of information is classified as either sensitive or insensitive, the classification is “hardwired” and given as a parameter of the analysis, rather than derived from more fundamental features of the system. We suggest that information security is not a goal in itself, but rather a means of preventing potential attackers from compromising the correct behavior of the system. To formalize this, we first show how two information flows can be compared by looking at the adversary’s ability to harm the system. Then, we propose that the information flow in a system is effectively secure if it is as good as its idealized variant based on the classical notion of noninterference. Finally, we shift our focus to the strategic aspect of information security in voting procedures. We argue that the notions of receipt-freeness and coercion resistance are underpinned by the existence (or nonexistence) of a suitable strategy for some participants of the voting process. In order to back the argument formally, we provide logical “transcriptions” of the informal intuitions behind coercion-related properties that can be found in the existing literature. The transcriptions are formulated in the modal game logic ATL*, well known in the area of multi-agent systems.

Detailed reference viewed: 117 (31 UL)
Full Text
Mining Software Artefact Variants for Product Line Migration and Analysis
Martinez, Jabier UL

Doctoral thesis (2016)

Software Product Lines (SPLs) enable the derivation of a family of products based on variability management techniques. Inspired by the manufacturing industry, SPLs use feature configurations to satisfy different customer needs, along with reusable assets associated with the features, to allow systematic and planned reuse. SPLs are reported to have numerous benefits, such as time-to-market reduction, productivity increases or product quality improvements. However, the barriers to adopting an SPL are equally numerous, requiring a high up-front investment in domain analysis and implementation. In this context, to create variants, companies more commonly rely on ad-hoc reuse techniques such as copy-paste-modify. Capitalizing on existing variants by extracting the common and varying elements is referred to as an extractive approach to SPL adoption. Extractive SPL adoption allows the migration from a single-system development mentality to SPL practices. Several activities are involved in achieving this goal. Due to the complexity of artefact variants, feature identification is needed to analyse domain variability. To identify the implementation elements associated with the features, their location is needed as well. In addition, feature constraints should be identified to guarantee that customers are not able to select invalid feature combinations (e.g., one feature requires or excludes another). Then, the reusable assets associated with the features should be constructed. Finally, to facilitate communication among stakeholders, a comprehensive feature model needs to be synthesized. While several approaches have been proposed for the above-mentioned activities, extractive SPL adoption remains challenging. A recurring barrier is the limitation of existing techniques to be used beyond the specific types of artefacts that they initially targeted, requiring inputs and providing outputs at different granularity levels and with different representations.
Seamlessly addressing these activities within the same environment is a challenge in itself. This dissertation presents a unified, generic and extensible framework for mining software artefact variants in the context of extractive SPL adoption. We describe both its principles and its realization in Bottom-Up Technologies for Reuse (BUT4Reuse). Special attention is paid to model-driven development scenarios. A unified process and representation enable practitioners and researchers to empirically analyse and compare different techniques. Therefore, we also focus on benchmarks and on the analysis of variants, in particular on benchmarking feature location techniques and on identifying families of variants in the wild for experimenting with feature identification techniques. We also present visualisation paradigms to support domain experts in feature naming during feature identification and in the discovery of feature constraints. Finally, we investigate and discuss the mining of artefact variants for SPL analysis once the SPL is already operational. Concretely, we present an approach to find relevant variants within the SPL configuration space, guided by end-user assessments.
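The commonality-extraction step at the heart of extractive SPL adoption can be pictured as grouping elements by the exact set of variants they occur in: elements shared by all variants form the common block, while the remaining groups are candidate feature blocks. BUT4Reuse's actual block identification also considers structural interdependencies between elements; the following co-occurrence sketch, with invented variant contents, is a simplification:

```python
from collections import defaultdict

def identify_blocks(variants):
    """variants: {variant_name: set_of_elements}. Returns the common block
    and all blocks, where a block groups the elements that occur in exactly
    the same subset of variants."""
    sig = defaultdict(set)              # element -> variants containing it
    for name, elems in variants.items():
        for e in elems:
            sig[e].add(name)
    groups = defaultdict(set)           # variant-set signature -> block
    for e, owners in sig.items():
        groups[frozenset(owners)].add(e)
    common = groups.get(frozenset(variants), set())
    return common, dict(groups)

variants = {
    "v1": {"core", "gui", "export_pdf"},
    "v2": {"core", "gui"},
    "v3": {"core", "export_pdf"},
}
common, groups = identify_blocks(variants)
print(sorted(common))                   # ['core']
print(sorted(groups[frozenset({"v1", "v2"})]))   # ['gui']
```

Each non-common block is then a candidate feature for the domain expert to name, which is where the visualisation support mentioned above comes in.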

Detailed reference viewed: 299 (42 UL)
Full Text
Essays on Financial Markets and Banking Regulation.
El Joueidi, Sarah UL

Doctoral thesis (2016)

Detailed reference viewed: 74 (19 UL)
Full Text
COMPLEX PROBLEM SOLVING IN UNIVERSITY SELECTION
Stadler, Matthias UL

Doctoral thesis (2016)

Detailed reference viewed: 52 (16 UL)
Full Text
AUTOMATED ANALYSIS OF NATURAL-LANGUAGE REQUIREMENTS USING NATURAL LANGUAGE PROCESSING
Arora, Chetan UL

Doctoral thesis (2016)

Natural Language (NL) is arguably the most common vehicle for specifying requirements. This dissertation devises automated assistance for some important tasks that requirements engineers need to perform in order to structure, manage, and elaborate NL requirements in a sound and effective manner. The key enabling technology underlying the work in this dissertation is Natural Language Processing (NLP). All the solutions presented herein have been developed and empirically evaluated in close collaboration with industrial partners. The dissertation addresses four different facets of requirements analysis: • Checking conformance to templates. Requirements templates are an effective tool for improving the structure and quality of NL requirements statements. When templates are used for specifying the requirements, an important quality assurance task is to ensure that the requirements conform to the intended templates. We develop an automated solution for checking the conformance of requirements to templates. • Extraction of glossary terms. Requirements glossaries (dictionaries) improve the understandability of requirements and mitigate vagueness and ambiguity. We develop an automated solution for supporting requirements analysts in the selection of glossary terms and their related terms. • Extraction of domain models. By providing a precise representation of the main concepts in a software project and the relationships between these concepts, a domain model serves as an important artifact for systematic requirements elaboration. We propose an automated approach for domain model extraction from requirements. The extraction rules in our approach encompass both the rules already described in the literature and a number of important extensions developed in this dissertation. • Identifying the impact of requirements changes. Uncontrolled change in requirements presents a major risk to the success of software projects.
We address two different dimen- sions of requirements change analysis in this dissertation: First, we develop an automated approach for predicting how a change to one requirement impacts other requirements. Next, we consider the propagation of change from requirements to design. To this end, we develop an automated approach for predicting how the design of a system is impacted by changes made to the requirements. [less ▲]
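The first facet, template conformance checking, can be conveyed with a deliberately minimal sketch. The dissertation's solution is NLP-based; the toy below merely checks requirements against a single Rupp-style sentence pattern, and both the pattern and the sample requirements are invented for illustration:

```python
import re

# Toy Rupp-style template: "The <system> shall <verb> <object>." -- a crude
# stand-in for the richer NLP-based conformance checking in the dissertation.
TEMPLATE = re.compile(r"^The \w+(?: \w+)* shall \w+ .+\.$")

def conforms(requirement: str) -> bool:
    """Return True if the requirement matches the (toy) template."""
    return TEMPLATE.match(requirement.strip()) is not None

reqs = [
    "The system shall log every failed login attempt.",   # conforms
    "Failed logins are logged somewhere.",                # does not conform
]
results = [conforms(r) for r in reqs]
print(results)  # [True, False]
```

A real checker must of course tolerate the grammatical variability of NL, which is why the dissertation relies on NLP rather than fixed patterns.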

HIGHER MOMENT ASSET PRICING: RISK PREMIUMS, METHODOLOGY AND ANOMALIES
Lin, Yuehao UL

Doctoral thesis (2016)

Spatial modelling of feedback effects between urban structure and traffic-induced air pollution - Insights from quantitative geography and urban economics
Schindler, Mirjam UL

Doctoral thesis (2016)

Urban air pollution is among the largest environmental health risks, and its major source is traffic, which is also the main cause of the spatial variation of pollution concerns within cities. Spatial responses by residents to such a risk factor have important consequences for urban structures and, in turn, for the spatial distribution of air pollution and population exposure. These spatial interactions and feedbacks need to be understood comprehensively in order to design spatial planning policies that mitigate local health effects. This dissertation focuses on how residents take their location decisions when they are concerned about health effects associated with traffic-induced air pollution, and on how these decisions shape future cities. Theoretical analytical and simulation models integrating urban economics and quantitative geography are developed to analyse and simulate the feedback effect between urban structure and population exposure to traffic-induced air pollution. On this basis, the spatial impacts of policy, socio-economic and technological frameworks are analysed. Building upon an empirical exploratory analysis, a chain of theoretical models simulates in 2D how the preference of households for green amenities, as an indirect appraisal of local air quality, and local neighbourhood design impact the environment and residents' health and well-being. In order to study the feedback effect of households' aversion to traffic-induced pollution exposure on urban structure, a 1D theoretical urban economics model is developed. Feedback effects on pollution and exposure distributions and on intra-urban equity are analysed. Equilibrium, first- and second-best outcomes are compared and discussed as to their population distributions, spatial extents and environmental and health implications. Finally, a dynamic agent-based simulation model in 2D further integrates geographical elements into the urban economics framework. Thus, it enhances the representation of the spatial interactions between the location of households and traffic-induced air pollution within cities. Simulations contrast neighbourhood and distance effects of the pollution externality and emphasise the role of local urban characteristics in mitigating population exposure and consolidating health and environmental effects. The dissertation argues that the consideration of local health concerns due to traffic-induced air pollution in policy design challenges the concept of high urban densification, both locally and with respect to distance, and advises spatial differentiation.

New approaches to understand conductive and polar domain walls by Raman spectroscopy and low energy electron microscopy
Nataf, Guillaume UL

Doctoral thesis (2016)

We investigate the structural and electronic properties of domain walls to achieve a better understanding of the conduction mechanisms in domain walls of lithium niobate and of the polarity of domain walls in calcium titanate. In a first part, we discuss the interaction between defects and domain walls in lithium niobate. A dielectric resonance with a low activation energy is observed, which vanishes under thermal annealing in monodomain samples while it remains stable in periodically poled samples. We therefore propose that domain walls stabilize polaronic states. We also report the evolution of Raman modes with increasing amounts of magnesium in congruent lithium niobate and identify specific frequency shifts of the modes at the domain walls. The domain walls thus appear as regions where polar defects are stabilized. In a second step, we use mirror electron microscopy (MEM) and low energy electron microscopy (LEEM) to characterize the domains and domain walls at the surface of magnesium-doped lithium niobate. We demonstrate that out-of-focus settings can be used to determine the domain polarization. At domain walls, a local stray, lateral electric field arising from different surface charge states is observed. In a second part, we investigate the polarity of domain walls in calcium titanate. We use resonant piezoelectric spectroscopy to detect elastic resonances induced by an electric field, which are interpreted as a piezoelectric response of the walls. A direct image of the domain walls in calcium titanate is also obtained by LEEM, showing a clear contrast in surface potential between domains and walls. This contrast is observed to change reversibly upon electron irradiation, due to the screening of polarization charges at domain walls.

Regulating Hedge Funds in the EU
Seretakis, Alexandros UL

Doctoral thesis (2016)

Praised for enhancing the liquidity of the markets in which they trade and for improving the corporate governance of the companies they target, and criticized for contributing to the instability of the financial system, hedge funds remain the most controversial vehicles of the modern financial system. Unconstrained until recently by regulation, operating under the radar of securities laws and with highly incentivized managers, hedge funds have managed to attract ever-increasing amounts of capital from sophisticated investors and have drawn the attention of the public, regulators and politicians. The financial crisis of 2007-2008, the most severe financial crisis since the Great Depression, prompted politicians and regulators both in the U.S. and in Europe to redesign the financial system. The unregulated hedge fund industry, heavily criticized for contributing to or even causing the financial crisis, was one of the first to come under the regulator's ambit. The result was the adoption of the Dodd-Frank Act in the U.S. and the Alternative Investment Fund Managers (AIFM) Directive in the European Union. These two pieces of legislation are the first-ever attempt to directly regulate the hedge fund industry. Taking into account the exponential growth of the hedge fund industry, its beneficial effects and its importance for certain countries such as the U.S. and Luxembourg, one can easily understand the considerable impact of these regulations. A comparative and critical examination of these major pieces of regulation and their potential impact on the hedge fund industry in Europe and the U.S. is absent from the academic literature, which is understandable considering that the Dodd-Frank Act was adopted in 2010 and the AIFM Directive in 2009. Our PhD thesis will attempt to fill this gap and offer a critical assessment of both the Dodd-Frank Act and the AIFM Directive and their impact on the hedge fund industry across the Atlantic. Furthermore, our thesis will seek to offer concrete proposals for the amelioration of the current EU regime with respect to hedge funds, building upon US regulations.

Metalorganic chemical vapour deposition of p-type delafossite CuCrO2 semiconductor thin films: characterization and application to transparent p-n junction
Crêpellière, Jonathan Charles UL

Doctoral thesis (2016)

Transparent conducting oxides such as ITO, FTO or AZO are currently used in a number of commercial applications, such as transparent electrodes for flat panel displays, light-emitting diodes and solar cells. These applications rely essentially on n-type conductive materials. The development of electronic devices based on transparent p-n junctions has triggered intense research into the synthesis of p-type transparent conductors of sufficiently high quality. Copper-based delafossite materials are thought to hold some of the highest potential, and among them CuCrO2 has exhibited strong potential in terms of the trade-off between electrical conductivity and optical transparency. In this work, we report for the first time on CuCrO2 thin films grown using pulsed-injection MOCVD. We particularly highlight the influence of the growth temperature, the precursor volume ratio and the oxygen partial pressure on the chemical, morphological, structural, electrical and optical properties of the films. Delafossite CuCrO2 thin films are synthesized at temperatures as low as 310°C on glass substrates, which is the lowest growth temperature reported to our knowledge. The films exhibit a carbon contamination below 1%, an excess of chromium and p-type conductivity. The room-temperature electrical conductivity is measured as high as 17 S.cm-1, with a moderate visible transparency of 50%. We report the best trade-off between electrical conductivity and visible transparency for CuCrO2 thin films. We investigate the transport mechanism with simultaneous electrical and thermoelectrical measurements, and band conduction and small polaron models are critically discussed. A functional transparent p-n junction, CuCrO2/ZnO, based on only two layers, is synthesized with a visible transparency of 45-50%. The junction shows the typical current-voltage characteristic of a diode, with high series resistance. The device acts efficiently as a UV detector.

The C*-algebras of certain Lie groups
Günther, Janne-Kathrin UL

Doctoral thesis (2016)

In this doctoral thesis, the C*-algebras of the connected real two-step nilpotent Lie groups and of the Lie group SL(2,R) are characterized. Furthermore, as a preparation for an analysis of its C*-algebra, the topology of the spectrum of the semidirect product U(n) x H_n is described, where H_n denotes the Heisenberg Lie group and U(n) the unitary group acting by automorphisms on H_n. For the determination of the group C*-algebras, the operator valued Fourier transform is used in order to map the respective C*-algebra into the algebra of all bounded operator fields over its spectrum. One has to find the conditions that are satisfied by the image of this C*-algebra under the Fourier transform, and the aim is to characterize it through these conditions. In the present thesis, it is proved that both the C*-algebras of the connected real two-step nilpotent Lie groups and the C*-algebra of SL(2,R) fulfill the same conditions, namely the “norm controlled dual limit” conditions. Thereby, these C*-algebras are described in this work and the “norm controlled dual limit” conditions are explicitly computed in both cases. The methods used for the two-step nilpotent Lie groups and for the group SL(2,R) are completely different from each other. For the two-step nilpotent Lie groups, one regards their coadjoint orbits and uses the Kirillov theory, while for the group SL(2,R) one can accomplish the calculations more directly.

Torsion and purity on non-integral schemes and singular sheaves in the fine Simpson moduli spaces of one-dimensional sheaves on the projective plane
Leytem, Alain UL

Doctoral thesis (2016)

This thesis consists of two individual parts, each of interest in itself, but which are also related to each other. In Part I we analyze the general notions of the torsion of a module over a non-integral ring and the torsion of a sheaf on a non-integral scheme. We give an explicit definition of the torsion subsheaf of a quasi-coherent O_X-module and prove a condition under which it is also quasi-coherent. Using the associated primes of a module and the primary decomposition of ideals in Noetherian rings, we review the main criteria for torsion-freeness and purity of a sheaf that have been established by Grothendieck and Huybrechts-Lehn. These allow us to study the relations between both concepts. It turns out that they are equivalent in "nice" situations, but they can be quite different as soon as the scheme does not have equidimensional components. We illustrate the main differences with various examples. We also discuss some properties of the restriction of a coherent sheaf to its annihilator and its Fitting support, and finally prove that sheaves of pure dimension are torsion-free on their support, no matter which closed subscheme structure it is given. Part II deals with the problem of determining "how many" sheaves in the fine Simpson moduli spaces M = M_{dm-1}(P2) of stable sheaves on the projective plane P2 with linear Hilbert polynomial dm-1, for d ≥ 4, are not locally free on their support. Such sheaves are called singular and form a closed subvariety M' in M. Using results of Maican and Drézet, the open subset M0 of sheaves in M without global sections may be identified with an open subvariety of a projective bundle over a variety of Kronecker modules N. By the Theorem of Hilbert-Burch we can describe sheaves in an open subvariety of M0 as twisted ideal sheaves of curves of degree d. In order to determine the singular ones, we look at ideals of points on planar curves. In the case of simple and fat curvilinear points, we characterize free ideals in terms of the absence of two coefficients in the polynomial defining the curve. This allows us to show that a generic fiber of M0 ∩ M' over N is a union of projective subspaces of codimension 2, and finally that M' is singular of codimension 2.

Entre régions : Le Maroc et le Mexique face aux migrations, dans les contextes d'intégration régionale
Nanga, Emeline Modeste UL

Doctoral thesis (2016)

The purpose of this thesis is to analyze, from a comparative perspective, the close links between the phenomenon of (clandestine) immigration, the regional integration processes currently under way in the EuroMed space and the Americas, and (national/human) security. In parallel, it examines the impact of these considerations on the fundamental rights of migrants in transit or in an irregular situation in these spaces, as well as on the role and responsibilities traditionally recognized as belonging to the State. The focus here is on the EU and the United States as host countries, and on Mexico and Morocco as simultaneously countries of emigration, immigration and transit.

Berezin-Toeplitz Quantization on K3 Surfaces and Hyperkähler Berezin-Toeplitz Quantization
Castejon-Diaz, Hector UL

Doctoral thesis (2016)

Given a quantizable Kähler manifold, the Berezin-Toeplitz quantization scheme constructs a quantization in a canonical way. In their seminal paper, Martin Bordemann, Eckhard Meinrenken and Martin Schlichenmaier proved that for a compact Kähler manifold this scheme is a well defined quantization with the correct semiclassical limit. However, some manifolds admit more than one (non-equivalent) Kähler structure. The question then arises whether the choice of a different Kähler structure gives rise to completely different quantizations, or whether the resulting quantizations are related. An example of such objects are the so-called K3 surfaces, which have some extra relations between the different Kähler structures. In this work, we consider the family of K3 surfaces which admit more than one quantizable Kähler structure, and we use the relations between the different Kähler structures to study whether the corresponding quantizations are related or not. In particular, we prove that such K3 surfaces always have Picard number 20, which implies that their moduli space is discrete, and that the resulting quantum Hilbert spaces are always isomorphic, although not always in a canonical way. However, there exists an infinite subfamily of K3 surfaces for which the isomorphism is canonical. We also define new quantization operators on the product of the different quantum Hilbert spaces, and we call this process hyperkähler quantization. We prove that these new operators have the correct semiclassical limit, as well as new properties inherited from the quaternions.
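As background, the standard Berezin-Toeplitz setup (textbook material, not a result of this thesis) reads as follows: for a compact quantizable Kähler manifold M with quantum line bundle L, the Toeplitz operator of a smooth function f at level m compresses multiplication by f to the holomorphic sections,

```latex
% \Pi^{(m)} denotes the orthogonal projection from L^2-sections
% onto the holomorphic sections H^0(M, L^{\otimes m}).
T_f^{(m)} := \Pi^{(m)} \circ M_f :\; H^0(M, L^{\otimes m}) \to H^0(M, L^{\otimes m}),
\qquad f \in C^\infty(M).
```

The "correct semiclassical limit" of Bordemann, Meinrenken and Schlichenmaier then means in particular that $\lim_{m\to\infty} \| T_f^{(m)} \| = \| f \|_\infty$ and that $\| \mathrm{i}\, m\, [T_f^{(m)}, T_g^{(m)}] - T_{\{f,g\}}^{(m)} \| \to 0$ as $m \to \infty$.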

Understanding the internationalization of higher education as a policy process. The case of Romania
Deca, Ligia UL

Doctoral thesis (2016)

This doctoral thesis analyzes the internationalization of higher education in Romania both as an international norm diffusion process and as a discrete policy process, in the wider context of post-communist transition. It is conceived as a study of policy for policy, with the explicit aim of contributing to better decision-making at the national and institutional levels. As such, it is intended to facilitate a strategic pursuit of internationalization strategies in Romania, which may further inform our understanding of other similar (post-communist transition) national cases. The research objective is to understand the internationalization of higher education as a distinct policy process at the national and university level, by using a five-point star model of the policy field, which highlights the multiplicity of actors involved and acts as a 'cat's cradle'. A multi-theory approach to higher education governance is used to unpack the complexity of this policy field. Stakeholder and resource dependency theories are employed for understanding the articulation of the interests, capacities and interactions between the actors, while discursive institutionalism is used to look at the role of ideas (norms) mobilized by actors to influence policy change and to construct policy frames. In terms of scope, the thesis addresses the rationales, drivers and impacts of internationalization of higher education, as well as its strategic use by relevant actors. The thesis concludes that internationalization in Romania, especially at the national level, is more a fruit of the existing context – the overall globalization trends, the Bologna Process and the EU pre- and post-accession policy processes – than a deliberate strategic pursuit based on either foresight or long-term planning. Political and economic rationales are predominant, to the detriment of those linked to social and cultural considerations, given the competing pressures linked to the demographic downturn, reduced public funding for universities, the perceived need to 'catch up with Europe' and the global competitiveness imperative. Another finding is that internationalization of higher education has never reached the stage of policy formulation at the national level and in most Romanian universities; it was used as a legitimating discourse within higher education reform, but a genuine commitment to comprehensive internationalization policies was lacking, leading to an over-reliance on European programs and a narrow focus on mobility and research partnerships. Looking at the agents of change, it can be inferred that success in pursuing internationalization activities was mostly influenced by policy entrepreneurs and by leadership commitment and continuity, regardless of the institutional profile. At the same time, Romania has proven to be an exceptional laboratory for understanding internationalization as a distinctive public policy process within the higher education sector. This is due to the double centralization legacy of the higher education system (caused by its Napoleonic model of higher education and the communist influence) and the over-sized influence of international actors in policy reform (e.g. UNESCO CEPES and the World Bank). A number of the overall conclusions, mainly aimed at improving decision-making at the national level, are also potentially relevant for a wider regional audience: the need to minimize the over-reliance on international funds and on the technical assistance of international organizations; limiting over-regulation based on international norms; and improving the national role in the global discussions on internationalization while fighting double discourse. This latter aspect points to the difficulties of replicating policy concepts across borders in a non-contextualized form, especially when domestic contexts differ significantly from the pioneering setting of a given policy.

FULL 3D RECONSTRUCTION OF DYNAMIC NON-RIGID SCENES: ACQUISITION AND ENHANCEMENT
Afzal, Hassan UL

Doctoral thesis (2016)

Recent advances in commodity depth or 3D sensing technologies have enabled us to move closer to the goal of accurately sensing and modeling the 3D representations of complex dynamic scenes. Indeed, in domains such as virtual reality, security, surveillance and e-health, there is now a greater demand for affordable and flexible vision systems which are capable of acquiring high quality 3D reconstructions. Available commodity RGB-D cameras, though easily accessible, have a limited field-of-view and acquire noisy, low-resolution measurements, which restricts their direct usage in building such vision systems. This thesis targets these limitations and builds approaches around commodity 3D sensing technologies to acquire noise-free and feature-preserving full 3D reconstructions of dynamic scenes containing static or moving, rigid or non-rigid objects. A mono-view system based on a single RGB-D camera is incapable of acquiring a full 360-degree 3D reconstruction of a dynamic scene instantaneously. For this purpose, a multi-view system composed of several RGB-D cameras covering the whole scene is used. In the first part of this thesis, the problem of correctly aligning the information acquired from RGB-D cameras in a multi-view system, to provide full and textured 3D reconstructions of dynamic scenes instantaneously, is explored. This is achieved by solving the extrinsic calibration problem. The thesis proposes an extrinsic calibration framework which uses the 2D photometric and 3D geometric information acquired with RGB-D cameras, weighted according to their relative (in)accuracies as affected by the presence of noise, in a single weighted bi-objective optimization. An iterative scheme is also proposed, which estimates the parameters of the noise model affecting both 2D and 3D measurements and solves the extrinsic calibration problem simultaneously. Results show improvement in calibration accuracy compared to state-of-the-art methods. In the second part of this thesis, the enhancement of noisy and low-resolution 3D data acquired with commodity RGB-D cameras, in both mono-view and multi-view systems, is explored. The thesis extends the state of the art in mono-view, template-free, recursive 3D data enhancement, which targets dynamic scenes containing rigid objects and thus requires tracking only the global motions of those objects for view-dependent surface representation and filtering. This thesis proposes to target dynamic scenes containing non-rigid objects, which introduces the complex requirements of tracking relatively large local motions and maintaining data organization for view-dependent surface representation. The proposed method is shown to be effective in handling non-rigid objects of changing topologies. Building upon the previous work, this thesis overcomes the requirement of data organization by proposing an approach based on view-independent surface representation. View-independence decreases the complexity of the proposed algorithm and gives it the flexibility to process and enhance noisy data, acquired with multiple cameras in a multi-view system, simultaneously. Moreover, qualitative and quantitative experimental analysis shows this method to be more accurate in removing noise to produce enhanced 3D reconstructions of non-rigid objects. Although extending this method to a multi-view system allows for obtaining instantaneous enhanced full 360-degree 3D reconstructions of non-rigid objects, it still lacks the ability to explicitly handle low-resolution data. Therefore, this thesis proposes a novel recursive dynamic multi-frame 3D super-resolution algorithm, together with a novel 3D bilateral total variation regularization, to filter out the noise, recover details and enhance the resolution of data acquired from commodity cameras in a multi-view system. Results show that this method is able to build accurate, smooth and feature-preserving full 360-degree 3D reconstructions of dynamic scenes containing non-rigid objects.
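The bilateral weighting at the heart of such edge-preserving filters can be illustrated in its classic 1D form. This is the textbook bilateral filter, not the thesis's 3D multi-frame super-resolution algorithm; the signal and parameter values below are invented for illustration:

```python
import numpy as np

def bilateral_filter_1d(signal, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Classic bilateral filter on a 1D signal: each sample is replaced by an
    average of its neighbours, weighted both by spatial distance and by value
    similarity, so sharp edges (e.g. depth discontinuities) are preserved."""
    n = len(signal)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))          # spatial term
             * np.exp(-((signal[idx] - signal[i]) ** 2) / (2 * sigma_r ** 2)))  # range term
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

rng = np.random.default_rng(0)
step = np.where(np.arange(100) < 50, 0.0, 1.0)   # a depth edge
noisy = step + rng.normal(0, 0.05, 100)          # simulated sensor noise
smooth = bilateral_filter_1d(noisy)
# The edge at index 50 survives while the flat regions are denoised.
```

The total variation regularizer mentioned in the abstract plays an analogous edge-preserving role inside an optimization problem rather than as a direct averaging filter.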

Fast reconstruction of compact context-specific network models
Pacheco, Maria UL

Doctoral thesis (2016)

Recent progress in high-throughput data acquisition has shifted the focus from data generation to the processing and understanding of now easily collected patient-specific information. Metabolic models, which have already proven to be very powerful for the integration and analysis of such data sets, might be successfully applied in precision medicine in the near future. Context-specific reconstructions extracted from generic genome-scale models like Reconstruction X (ReconX) (Duarte et al., 2007; Thiele et al., 2013) or the Human Metabolic Reconstruction (HMR) (Agren et al., 2012; Mardinoglu et al., 2014a) thereby have the potential to become a diagnostic and treatment tool tailored to the analysis of specific groups of individuals. The use of computational algorithms as a tool for the routine diagnosis and analysis of metabolic diseases requires a high level of predictive power, robustness and sensitivity. Although multiple context-specific reconstruction algorithms were published in the last ten years, only a fraction of them is suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or from arbitrary thresholding. The aim of this thesis was to create a family of robust and fast algorithms for the building of context-specific models that could be used for the integration of different types of omics data and that should be sensitive enough to be used in the framework of precision medicine. FASTCORE (Vlassis et al., 2014), which was developed in the frame of this thesis, is among the first context-specific building algorithms that do not optimize for a biological function and that have a computational time on the order of seconds. Furthermore, FASTCORE is devoid of heuristic parameter settings. FASTCORE requires as input a set of reactions that are known to be active in the context of interest (core reactions) and a genome-scale reconstruction. FASTCORE uses an approximation of the cardinality function to force the core set of reactions to carry a flux above a threshold. An L1-minimization is then applied to penalize the activation of reactions with a low confidence level while still constraining the set of core reactions to carry a flux. The rationale behind FASTCORE is to reconstruct a compact, consistent output model (one in which all reactions have the potential to carry non-zero flux) that contains all the core reactions and a small number of non-core reactions. Then, in order to cope with the non-negligible amount of noise that impedes direct comparison within genes, FASTCORE was extended to the FASTCORMICS workflow (Pires Pacheco and Sauter, 2014; Pires Pacheco et al., 2015a) for the building of models via the integration of microarray data. FASTCORMICS was applied to reveal control points regulated by genes under high regulatory load in the metabolic network of monocyte-derived macrophages (Pires Pacheco et al., 2015a) and to investigate the effect of the TRIM32 mutation on the metabolism of brain cells of mice (Hillje et al., 2013). The use of metabolic modelling in the frame of personalized medicine, high-throughput data analysis and integration of omics data calls for a significant improvement in the quality of existing algorithms and of the generic metabolic reconstructions used as their input. To this aim, and to initiate a discussion in the community on how to improve the quality of context-specific reconstruction, benchmarking procedures were proposed and applied to seven recent context-specific algorithms, including FASTCORE and FASTCORMICS (Pires Pacheco et al., 2015a). Further, the problems arising from a lack of standardization of building and annotation pipelines and from the use of non-specific identifiers were discussed in the frame of a review. In this review, we also advocated a switch from gene-centred protein rules (GPR rules) to transcript-centred protein rules (Pfau et al., 2015).
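The core-forcing idea can be conveyed with a deliberately reduced sketch. The published FASTCORE algorithm iterates two LPs over an approximated cardinality function; the single LP below, on an invented toy network, shows only the underlying principle of forcing flux through the core reactions while penalizing the L1 norm of the non-core flux (with irreversible reactions, the flux sum is the L1 norm):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (invented for illustration): metabolites A, B, C and reactions
# R1: -> A,  R2: A -> B,  R3: B -> ,  R4: A -> C,  R5: C ->
S = np.array([
    [1, -1,  0, -1,  0],   # A
    [0,  1, -1,  0,  0],   # B
    [0,  0,  0,  1, -1],   # C
])
core = {1}    # index of R2: the reaction known to be active in this context
eps = 1.0     # minimum flux forced through the core reactions

n = S.shape[1]
c = np.array([0.0 if j in core else 1.0 for j in range(n)])          # L1 penalty on non-core flux
bounds = [(eps, 100.0) if j in core else (0.0, 100.0) for j in range(n)]
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")

active = [f"R{j+1}" for j, v in enumerate(res.x) if v > 1e-6]
print(active)  # the compact sub-network supporting R2: ['R1', 'R2', 'R3']
```

The optimum activates only the minimal chain R1, R2, R3 needed to sustain flux through the core reaction R2, leaving the R4/R5 branch inactive, which is the compactness property the abstract describes.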

Essays on the macro-analysis of international migration
Delogu, Marco UL

Doctoral thesis (2016)

This dissertation consists of three chapters, each of which is a self-contained work. The first chapter, “Globalizing labor and the world economy: the role of human capital”, is joint work with Prof. Dr. Frédéric Docquier and Dr. Joël Machado. We develop a microfounded model of the world economy aiming to compare the short- and long-run effects of migration restrictions on the world distribution of income. We find that a complete removal of migration barriers would increase the world average level of GDP per worker by 13% in the short run and by about 54% after one century. These results are very robust to our identification strategy and technological assumptions. The second chapter, titled “Infrastructure Policy: the role of informality and brain drain”, analyses the effectiveness of infrastructure policy in developing countries. I show that, at low levels of development, the possibility to work informally has a detrimental impact on infrastructure accumulation. I find that increasing the tax rate or enlarging the tax base can reduce macroeconomic performance in the short run, while inducing long-run gains. These effects are amplified when brain drain is endogenous. The last chapter, titled “The role of fees in foreign education: evidence from Italy and the UK”, is mainly empirical. Relying upon a discrete choice model, I assess, together with Prof. Dr. Michel Beine and Prof. Dr. Lionel Ragot, the determinants of international student mobility, exploiting, for the first time in the literature, data at the university level. We focus on student inflows to Italy and the UK, countries in which tuition fees vary across universities. We obtain evidence for a clear negative impact of tuition fees on international student inflows and confirm the positive impact of the quality of education. The estimations also find support for an important role of additional destination-specific variables such as host capacity, the expected return of education and the cost of living in the vicinity of the university.

AUTOMATED TESTING OF SIMULINK/STATEFLOW MODELS IN THE AUTOMOTIVE DOMAIN
Matinnejad, Reza UL

Doctoral thesis (2016)

Context. Simulink/Stateflow is an advanced system modeling platform which is prevalently used in the Cyber-Physical Systems domain, e.g., the automotive industry, to implement software controllers. Testing Simulink models is complex and poses several challenges to research and practice. Simulink models often have mixed discrete-continuous behaviors, and their correct behavior crucially depends on time. Inputs and outputs of Simulink models are signals, i.e., values evolving over time, rather than discrete values. Further, Simulink models are required to operate satisfactorily for a large variety of hardware configurations. Finally, developing test oracles for Simulink models is challenging, particularly for requirements capturing their continuous aspects. In this dissertation, we focus on testing mixed discrete-continuous aspects of Simulink models, an important, yet not well-studied, problem. The existing Simulink testing techniques are more amenable to testing and verification of logical and state-based properties. Further, they are mostly incompatible with Simulink models containing time-continuous blocks, and floating-point and non-linear computations. In addition, they often rely on the presence of formal specifications, which are expensive and rare in practice, to automate test oracles.
Approach. In this dissertation, we propose a set of approaches based on meta-heuristic search and machine learning techniques to automate testing of software controllers implemented in Simulink. The work presented in this dissertation is motivated by Simulink testing needs at Delphi Automotive Systems, a world-leading parts supplier to the automotive industry. To address the above-mentioned challenges, we rely on discrete-continuous output signals of Simulink models and provide output-based black-box test generation techniques to produce test cases with high fault-revealing ability. Our algorithms are black-box and hence compatible with Simulink/Stateflow models in their entirety. Further, we do not rely on the presence of formal specifications to automate test oracles. Specifically, we propose two sets of test generation algorithms for closed-loop and open-loop controllers implemented in Simulink: (1) For closed-loop controllers, test oracles can be formalized and automated relying on the feedback received from the controlled system. We characterize the desired behavior of closed-loop controllers in a set of common requirements, and then use search to identify the worst-case test scenarios of the controller with respect to each requirement. (2) For open-loop controllers, we cannot automate test oracles since the feedback is not available, and test oracles are manual. Hence, we focus on providing test generation algorithms that develop small, effective test suites with high fault-revealing ability. We further provide a test case prioritization algorithm to rank the generated test cases based on their fault-revealing ability and lower the manual oracle cost. Our test generation and prioritization algorithms are evaluated with several industrial and publicly available Simulink models. Specifically, we showed that the fault-revealing ability of our approach outperforms that of Simulink Design Verifier (SLDV), the only test generation toolbox of Simulink and a well-known commercial Simulink testing tool. In addition, using our approach, we were able to detect several real faults in Simulink models from our industry partner, Delphi, which had not been previously found by manual testing based on domain expertise and existing Simulink testing tools.
Contributions. The main research contributions in this dissertation are:
1. An automated approach for testing closed-loop controllers that characterizes the desired behavior of such controllers in a set of common requirements, and combines random exploration and search to effectively identify the worst-case test scenarios of the controller with respect to each requirement.
2. An automated approach for testing highly configurable closed-loop controllers by accounting for all their feasible configurations and providing strategies to scale the search to large multi-dimensional spaces, relying on dimensionality reduction and surrogate modelling.
3. A black-box output-based test generation algorithm for open-loop Simulink models which uses search to maximize the likelihood of presence of specific failure patterns (i.e., anti-patterns) in Simulink output signals.
4. A black-box output-based test generation algorithm for open-loop Simulink models that maximizes output diversity to develop small test suites with diverse output signal shapes and, hence, high fault-revealing ability.
5. A test case prioritization algorithm which relies on output diversity of the generated test suites, in addition to the dynamic structural coverage achieved by individual tests, to rank test cases and help engineers identify faults faster by inspecting a few test cases.
6. Two test generation tools, namely CoCoTest and SimCoTest, that respectively implement our test generation approaches for closed-loop and open-loop controllers.
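The output-diversity idea summarised in this abstract can be illustrated with a minimal sketch: given candidate test cases whose outputs are signals (arrays of samples over time), greedily select the test whose output signal is farthest, in Euclidean distance, from the signals already chosen. This is an illustrative reconstruction under stated assumptions, not the dissertation's actual algorithm; the signal data and suite size below are hypothetical.

```python
import numpy as np

def select_diverse_tests(signals, k):
    """Greedily pick k test cases whose output signals are mutually diverse.

    signals: dict mapping test-case id -> 1-D numpy array (output signal).
    Returns the ids of the selected tests, most diverse first.
    """
    ids = list(signals)
    # Seed with the signal farthest from the mean output shape.
    mean = np.mean([signals[i] for i in ids], axis=0)
    selected = [max(ids, key=lambda i: np.linalg.norm(signals[i] - mean))]
    while len(selected) < k:
        # Farthest-point heuristic: maximize distance to the nearest chosen signal.
        best = max((i for i in ids if i not in selected),
                   key=lambda i: min(np.linalg.norm(signals[i] - signals[s])
                                     for s in selected))
        selected.append(best)
    return selected

t = np.linspace(0.0, 1.0, 100)
candidates = {
    "flat": np.zeros_like(t),
    "flat_dup": np.zeros_like(t) + 0.01,  # near-duplicate of "flat"
    "ramp": t,
    "sine": np.sin(2 * np.pi * t),
}
print(select_diverse_tests(candidates, 3))
```

Note how the near-duplicate signal is skipped: a test whose output shape repeats an already-selected one adds little diversity and, under the dissertation's hypothesis, little extra fault-revealing power.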

Domain Completeness of Model Transformations and Synchronisations
Nachtigall, Nico UL

Doctoral thesis (2016)

The intrinsic question of most activities in information science, in practice or science, is “Does a given system satisfy the requirements regarding its application?” Commonly, requirements are expressed and accessible by means of models, mostly in a diagrammatic representation by visual models. The requirements may change over time and are often defined from different perspectives and within different domains. This implies that models may be transformed either within the same domain-specific visual modelling language or into models in another language. Furthermore, model updates may be synchronised between different models. Most types of visual models can be represented by graphs, where model transformations and synchronisations are performed by graph transformations. The theory of graph transformations emerged from its origins in the late 1960s and early 1970s as a generalisation of term and tree rewriting systems to an important field in (theoretical) computer science with applications particularly in visual modelling techniques, model transformations, synchronisations and behavioural specifications of models. Its formal foundations, combined with its visual notation, enable both precise definitions and proofs of important properties of model transformations and synchronisations from a theoretical point of view, and an intuitive approach for specifying transformations and model updates from an engineer’s point of view. The recent results were presented in the EATCS monographs “Fundamentals of Algebraic Graph Transformation” (FAGT) in 2006 and its sequel “Graph and Model Transformation: General Framework and Applications” (GraMoT) in 2015. This thesis concentrates on one important property of model transformations and synchronisations, i.e., syntactical completeness.
Syntactical completeness of model transformations means that, given a specification for transforming models from a source modelling language into models in a target language, all source models can be completely transformed into corresponding target models. In the same given context, syntactical completeness of model synchronisations means that all source model updates can be completely synchronised, resulting in corresponding target model updates. This work is essentially based on the GraMoT book and mainly extends its results for model transformations and synchronisations based on triple graph grammars by a new, more general notion of syntactical completeness, namely domain completeness, together with corresponding verification techniques. Furthermore, the results are instantiated to the verification of the syntactical completeness of software transformations and synchronisations. The well-known transformation of UML class diagrams into relational database models and the transformation of programs of a small object-oriented programming language into class diagrams serve as running examples. The existing AGG tool is used to support the verification of the given examples in practice.

Populating Legal Ontologies using Information Extraction based on Semantic Role Labeling and Text Similarity
Humphreys, Llio UL

Doctoral thesis (2016)

This thesis seeks to address the problem of the 'resource consumption bottleneck' of creating (legal) semantic technologies manually. It builds on research in legal theory, ontologies and natural language processing in order to semi-automatically normalise legislative text, extract definitions and structured norms, and link normative provisions to recitals. The output is intended to help make laws more accessible, understandable, and searchable in a legal document management system. Key contributions are:
- an analysis of legislation and structured norms in legal ontologies and compliance systems in order to determine the kind of information that individuals and organisations require from legislation to understand their rights and duties;
- an analysis of the semantic and structural challenges of legislative text for machine understanding;
- a rule-based normalisation module to transform legislative text into regular sentences to facilitate natural language processing;
- a Semantic Role Labeling-based information extraction module to extract definitions and norms from legislation and represent them as structured norms in legal ontologies;
- an analysis of the impact of recitals on the interpretation of legislative norms;
- a Cosine Similarity-based text similarity module to link recitals to relevant normative provisions;
- a description of important challenges that have emerged from this research which may prove useful for future work in the extraction and linking of information from legislative text.
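The recital-to-provision linking step described above can be sketched minimally: represent each recital and provision as a bag-of-words vector and link the recital to the provision with the highest cosine similarity. This is an illustrative sketch, not the thesis's actual pipeline; the example texts are hypothetical, and a real system would tokenise properly and apply TF-IDF weighting rather than raw term counts.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def bow(text: str) -> Counter:
    """Naive lower-cased bag-of-words; real systems would use TF-IDF weights."""
    return Counter(text.lower().split())

def link_recital(recital: str, provisions: dict) -> str:
    """Return the id of the provision most similar to the recital."""
    r = bow(recital)
    return max(provisions, key=lambda pid: cosine(r, bow(provisions[pid])))

# Hypothetical legislative texts, for illustration only.
provisions = {
    "art2": "the controller shall notify the supervisory authority of a data breach",
    "art5": "personal data shall be processed lawfully fairly and transparently",
}
recital = "processing of personal data should be lawful and fair"
print(link_recital(recital, provisions))  # → art5
```

The design choice here mirrors the abstract's rationale: cosine similarity needs no training data, which suits a setting where annotated recital-provision pairs are scarce.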

Development of biospecimen quality control tools and disease diagnostic markers by metabolic profiling
Trezzi, Jean-Pierre UL

Doctoral thesis (2016)

In metabolomics-based biomarker studies, the monitoring of pre-analytical variations is crucial and requires quality control tools to enable proper sample quality evaluation. In this dissertation work, biospecimen research and machine learning algorithms are applied (1) to develop sample quality assessment tools and (2) to develop disease-specific diagnostic models. In this regard, a novel plasma sample quality assessment tool, the LacaScore, is presented. The LacaScore plasma quality assessment is based on the plasma levels of ascorbic acid and lactic acid. The biggest challenge in metabolomics analyses is that the sample quality is often not known. The presented tool underlines the importance of monitoring pre-analytical variations, such as pre-centrifugation time and temperature, prior to sample analysis in the emerging field of metabolomics. Based on the LacaScore, decisions on the suitability/fit-for-purpose of a given sample or sample cohort can be made. In this dissertation work, the knowledge on sample quality was applied in a biomarker discovery study based on cerebrospinal fluid (CSF) from early-stage Parkinson’s disease (PD) patients. To date, no markers for the diagnosis of Parkinson’s disease are available. In this work, a non-targeted GC-MS approach is presented and shows significant changes in the metabolic profile in CSF from early-stage PD patients compared to matched healthy control subjects. Based on these findings, a biomarker signature for the prediction of early-stage PD has been developed by the application of sophisticated machine learning algorithms. This disease-specific signature is composed of metabolites involved in inflammation, glycosylation/glycation and oxidative stress response. In summary, this dissertation illustrates the importance of sample quality monitoring in biomarker studies that are often limited by small amounts of human body fluids.
The monitoring of sample quality enhances the robustness and reproducibility of biomarker discovery studies. In addition, proper data analysis and powerful machine learning algorithms enable the generation of potential disease diagnosis biomarker signatures.

In vitro Metabolic Studies of Dopamine Synthesis and the Toxicity of L-DOPA in Human Cells
Delcambre, Sylvie UL

Doctoral thesis (2016)

This work is divided into two parts. In the first, I investigated the effects of 3,4-dihydroxy-L-phenylalanine (L-DOPA) on the metabolism of human tyrosine hydroxylase (TH)-positive neuronal LUHMES cells. L-DOPA is the gold standard treatment for Parkinson’s disease (PD) and its effects on cellular metabolism are controversial. It induced a re-routing of intracellular carbon supplies. While glutamine contribution to tricarboxylic acid (TCA) cycle intermediates increased, glucose contribution to the same metabolites decreased. Carbon contribution from glucose was decreased in lactate and was compensated by an increased pyruvate contribution. Pyruvate reacted with hydrogen peroxide generated during the auto-oxidation of L-DOPA and led to an increase of acetate in the medium. In the presence of L-DOPA, this acetate was taken up by the cells. In combination with an increased glutamate secretion, all these results seem to point towards a mitochondrial complex II inhibition. In the second part of this work, I studied and compared dopamine (DA)-producing in vitro systems. First, I compared gene and protein expression of catecholamine (CA)-related genes. Then, I performed molecular engineering to increase TH expression in LUHMES and SH-SY5Y cells. This was sufficient to induce DA production in SH-SY5Y, but not in LUHMES cells, indicating that TH expression is not sufficient to characterize dopaminergic neurons. Therefore, I used SH-SY5Y cells overexpressing TH to study substrates for DA production. Upon overexpression of aromatic amino acid decarboxylase (AADC), LUHMES cells produced DA after L-DOPA supplementation. This model was useful to study L-DOPA uptake in LUHMES cells and I showed that L-DOPA is imported via the large amino acid transporter (LAT). In conclusion, the expression of TH is not sufficient to obtain a DA-producing cell system, and this work answered some questions about DA metabolism and opened many more.

Big Galois image for p-adic families of positive slope automorphic forms
Conti, Andrea UL

Doctoral thesis (2016)

IP Box Regime im Europäischen Steuerrecht
Schwarz, Paloma Natascha UL

Doctoral thesis (2016)

Doping, Defects And Solar Cell Performance Of Cu-rich Grown CuInSe2
Bertram, Tobias UL

Doctoral thesis (2016)

Cu-rich grown CuInSe2 thin-film solar cells can be as efficient as Cu-poor ones. However, record lab cells and commercial modules are grown exclusively under Cu-poor conditions. While the Cu-rich material’s bulk properties show advantages, e.g. higher minority carrier mobilities and quasi-Fermi level splitting - both indicating a superior performance - it also features some inherent problems that led to its widespread dismissal for solar cell use. Two major challenges can be identified that negatively impact the Cu-rich material’s performance: a too high doping density and recombination close to the interface. In this work, electrical characterisation techniques were employed to investigate the mechanisms that cause the low performance. Capacitance measurements are especially well suited to probe the electrically active defects within the space-charge region. Under a variation of the applied DC bias they give insights into the shallow doping density, while frequency- and temperature-dependent measurements are powerful in revealing deep levels within the bandgap. CuInSe2 samples were produced via a thermal co-evaporation process and subsequently characterized utilizing the aforementioned techniques. The results have been grouped into two partial studies. First, the influence of the Se overpressure during growth on the shallow doping and deep defects is investigated, along with how this impacts solar cell performance. The second study revolves around samples that feature a surface treatment to produce a bilayer structure - a Cu-rich bulk and a Cu-poor interface. It is shown that via a reduction of the Se flux during absorber preparation the doping density can be reduced, and while this certainly benefits solar cell efficiency, a high deficit in open-circuit voltage still results in lower performance compared to the Cu-poor devices. Supplementary measurements trace this back to recombination close to the interface.
Furthermore, a defect signature is identified that is not present in Cu-poor material. These two results are tied together via the investigation of the surface-treated samples, which do not show interface recombination and reach the same high voltage as the Cu-poor samples. The defect signature, normally native to the Cu-rich material, however, is not found in the surface-treated samples. It is concluded that this deep trap acts as a recombination centre close to the interface. Shifting it towards the bulk via the treatment is then related to the observed increase in voltage. Within this thesis a conclusive picture is derived to unite all measurement results and show the mechanisms that work together and made it possible to produce a highly efficient Cu-rich thin-film solar cell.
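The capacitance-voltage route to the shallow doping density mentioned above is commonly evaluated via Mott-Schottky analysis: for a one-sided junction, 1/C² varies linearly with the DC bias V, and the doping density follows from the slope as N = -2 / (q ε0 εr A² · d(1/C²)/dV). The sketch below is illustrative, not the thesis's actual analysis; the permittivity, device area, built-in voltage, and C-V data are hypothetical values chosen for the example.

```python
import numpy as np

Q = 1.602e-19      # elementary charge [C]
EPS0 = 8.854e-12   # vacuum permittivity [F/m]

def doping_from_cv(v, c, area, eps_r):
    """Estimate the shallow doping density [m^-3] from C-V data (Mott-Schottky).

    v: applied DC bias [V]; c: junction capacitance [F]; area: device area [m^2].
    """
    slope = np.polyfit(v, 1.0 / c**2, 1)[0]  # d(1/C^2)/dV, negative for reverse bias
    return -2.0 / (Q * EPS0 * eps_r * area**2 * slope)

# Synthetic C-V curve generated from a known doping density (illustrative values).
eps_r, area = 13.6, 0.5e-4        # assumed relative permittivity; 0.5 cm^2 in m^2
n_true, v_bi = 1e22, 0.7          # 1e16 cm^-3 in m^-3; assumed built-in voltage [V]
v = np.linspace(-1.0, 0.3, 50)
c = np.sqrt(Q * EPS0 * eps_r * n_true * area**2 / (2.0 * (v_bi - v)))

n_est = doping_from_cv(v, c, area, eps_r)
print(f"{n_est:.3e}")  # recovers ~1e22 m^-3
```

On real devices the 1/C² plot deviates from a straight line when deep defects respond to the AC probe, which is why the thesis complements bias-dependent with frequency- and temperature-dependent measurements.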

Accession Treaties in the EU legal order
Prek, Miro UL

Doctoral thesis (2016)

In the present thesis, it is argued that (1) the Accession Treaties have been used in accordance with their nature and proclaimed objective: they only brought about limited changes to primary law, proper to the needs of accession and have not introduced any fundamental changes. The numerous and still growing arrangements that depart from the principle of the application of the acquis in toto on accession do not alter this conclusion. (2) The evolution, especially from the 2004 Accession Treaty onwards and predictable for the future Accession Treaties (e.g. with Turkey), shows a tendency of diversification of that legal instrument by (a) adding new and/or reinforced elements of conditionality, protracted from the pre-accession phase to the membership phase, devising new mechanisms of conditionality and control (general and specific safeguard clauses, monitoring and verification mechanisms, membership postponement clause) and thus (b) contributing to a further differentiation in two respects - as among the Member States and with regard to the core acquis. Such differentiation exists already on the basis of the constitutive treaties (“in-built constitutive treaties induced differentiation”) and is accentuated by the Accession Treaties and their transitional arrangements (“Accession Treaties induced differentiation”). Questions of differentiation acquired another dimension with the introduction of the citizenship of the EU. 
(3) Finally, negotiations with certain candidate countries will show whether additional innovations are to be expected: a) whether future instruments of accession would be used in order to increase the existing level of differentiation (and protract the pre-accession phase logic well into the membership phase) with the conditionality becoming the most important element of the relations within an enlarged EU and thus paradoxically negating the nature of the integration itself, b) whether they will perhaps be used to bring about more important modifications to the treaties, or c) whether they will go as far as to provide a legal basis for permanent derogations with regard to certain new Member States (as explicitly envisaged, for instance, in the negotiating framework for Turkey).

Towards harmonization of proteomics methods. Implication for high resolution/accurate mass targeted methods.
Bourmaud, Adèle Gaëlle Annabelle UL

Doctoral thesis (2016)

Mass spectrometry plays a central role in proteomics studies which has allowed its expansion to biomedical research. In an effort to accelerate the understanding of the various aspects of protein biology, the comparison and integration of results across laboratories have gained importance. However, the variety of laboratory-specific protocols, instruments, and data processing methods limits the reliability and reproducibility of the proteomics datasets. The harmonization of LC-MS based proteomics experiments is thus urgently needed to ensure that the workflows used are suitable for the intended purpose of the experiments and that they generate consistent and reproducible results. In a first step towards this harmonization, the critical components of each step of the workflow must be identified. Consolidated sample preparation methods with defined recovery and qualified platforms along with systematic assessment of their performance have to be established. They should ultimately rely on well-defined recommendations and reference materials. Towards these goals, the present project aimed to define, based on current proteomics practices and recent technologies, experimental protocols that will constitute reference methods for the community. The associated results will represent a baseline that can be used to benchmark workflows and platforms, and to conduct routine experiments. A quality control procedure was developed to routinely assess the uniformity of proteomics analyses. The combination of a simple protocol and the addition of reference materials at different stages of the workflow allowed a straightforward monitoring of both sample preparation and LC-MS performance. 
In addition, as high resolution/accurate mass instruments with fast scanning capabilities turned out to be particularly suited to targeted quantitative experiments, a significant part of the work consisted in the evaluation of the capabilities offered by such mass spectrometers to promote good practice upon their inception. The methods developed based on these emerging technologies were compared to the existing workflows, allowing recommendations to be made for their implementation in fit-for-purpose experiments.

Soil Fatigue Due To Cyclically Loaded Foundations
Pytlik, Robert Stanislaw UL

Doctoral thesis (2016)

Cyclic loading on civil structures can lead to a reduction of strength of the used materials. A literature study showed that, in contrast to steel structures and material engineering, there are no design codes or standards for fatigue of foundations and the surrounding ground masses in terms of shear strength reduction. Scientific efforts to study the fatigue behaviour of geomaterials have mainly focused on strain accumulation, while the reduction of shear strength of geomaterials has not been fully investigated. It has to be mentioned that a number of laboratory investigations have been done and some models have already been proposed for strain accumulation and pore pressure increase, which can lead to liquefaction. Laboratory triaxial tests have been performed in order to evaluate the fatigue of soils and rocks by comparing the shear strength parameters obtained in cyclic triaxial tests with the static ones. Correlations of fatigue with both the number of cycles and the cyclic stress ratio have been given. In order to apply cyclic movements in a triaxial apparatus, a machine setup and configuration was made. A special program was written in LabVIEW to control the applied stresses and the speed of loading, which allowed simulating the natural loading frequencies. Matlab scripts were also written to reduce the time required for data processing. Both cohesive and cohesionless geomaterials were tested: artificial gypsum and mortar as cohesive geomaterials, and sedimentary limestone and different sands as cohesionless and low-cohesive natural materials. The artificial gypsum, mortar and natural limestone exhibit mostly brittle behaviour, whereas the crumbled limestone and the other sands exhibit typical ductile behaviour. All the sands, as well as the crumbled limestone, were slightly densified before testing; therefore, they can be treated as dense sands.
The UCS of the crumbled limestone is 0.17 MPa with a standard error of estimate σest = 0.021 MPa, while for mortar UCS = 9.11 MPa with σest = 0.18 MPa and for gypsum UCS = 6.02 MPa with a standard deviation of 0.53. All triaxial tests were conducted on dry samples in the natural state, without the presence of water (no pore pressure). The range of the confining pressure was between 0 MPa and 0.5 MPa. The cyclic tests carried out were typical multiple-loading tests with a constant displacement ratio up to a certain stress level. The frequency was kept low to allow for precise application of the cyclic load and accurate readings. What is more, the frequency of the cyclic loading corresponds to the natural loading of waves and winds. The number of applied cycles ranged from a few cycles up to a few hundred thousand (the maximum number of applied cycles was 370 000). Due to the complex behaviour of the materials and the high scatter of the results, many tests were required. Two different strategies were used to investigate the fatigue of geomaterials: 1) the remaining shear strength curve: after a given number of cycles, a final single load test was performed until failure in order to measure the remaining shear strength of the sample; 2) the typical S-N curve (Wöhler curve): the sample is simply loaded cyclically at a constant level until failure. The remaining shear strength (or strength reduction) curve has been compared with the standard S-N curve and is found to be very similar, because the cyclic stress ratio has little influence. Cyclic loading on geomaterials, being assemblages of grains of different sizes and shapes with voids etc., showed different types of effects. Cohesionless materials show a shear strength increase during cyclic loading, while cohesive ones show a shear strength decrease. For the cohesive materials the assumption was made that the friction angle remains constant; so, the fatigue of geomaterials can be seen as a reduction of the cohesion.
In this way, the fatigue of a cohesive geomaterial can be described by a remaining cohesion. The imperfections in the artificial gypsum have a significant impact on the results of the (especially cyclic) strength tests. Therefore, another man-made material was used - a mixture of sand and cement (mortar). As the first static test results were very promising, mortar was used in further tests. The cyclic tests, however, presented a similarly high scatter of results as for artificial gypsum. An unexpected observation for both materials was the lack of dependency of the remaining shear strength on the cyclic stress ratio. The strain-stress relationship in cyclic loading shows that the fatigue life of the geomaterials can be divided into three stages, just as for creep. The last phase, with a fast increase in plastic strains, could be an indicator of an incoming failure. The accumulation of strains and the increase of internal energy could be good indicators too, but no strong correlation has been found. Similar to the shear strength, the stiffness changes during cyclic loading; for cohesive materials the stiffness increases, while for cohesionless ones it decreases. This could help to predict the remaining shear strength of a geomaterial using a non-destructive method.
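The S-N (Wöhler) curve mentioned above is commonly summarised by a power law, the Basquin-type relation S = a·N^b, fitted as a straight line in log-log space. The sketch below is illustrative only and is not the thesis's fitting procedure; the fatigue data points and the choice of the Basquin form are assumptions made for the example.

```python
import numpy as np

def fit_sn_curve(stress, cycles):
    """Fit a Basquin-type S-N curve S = a * N**b by linear regression in log-log space.

    Returns (a, b); b is normally negative (sustainable stress drops as cycles grow).
    """
    b, log_a = np.polyfit(np.log10(cycles), np.log10(stress), 1)
    return 10.0**log_a, b

def cycles_to_failure(s, a, b):
    """Invert the fitted curve: predicted number of cycles sustained at stress s."""
    return (s / a) ** (1.0 / b)

# Hypothetical fatigue data: stress level [MPa] vs. observed cycles to failure.
cycles = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
stress = np.array([8.0, 6.5, 5.3, 4.3, 3.5])

a, b = fit_sn_curve(stress, cycles)
print(a, b)
```

For the remaining-shear-strength strategy described in the abstract, the same log-log regression could be applied with the post-cycling strength in place of the applied stress amplitude, which is consistent with the observation that the two curves look very similar.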

Socio-Technical Aspects of Security Analysis
Huynen, Jean-Louis UL

Doctoral thesis (2016)

This thesis seeks to establish a semi-automatic methodology for security analysis when users are considered part of the system. The thesis explores this challenge, which we refer to as ‘socio-technical security analysis’. We consider that a socio-technical vulnerability is the conjunction of a human behaviour, the factors that foster the occurrence of this behaviour, and a system. Therefore, the aim of the thesis is to investigate which human-related factors should be considered in system security, and how to incorporate these identified factors into an analysis framework. Finding a way to systematically detect, in a system, the socio-technical vulnerabilities that can stem from insecure human behaviours, along with the factors that influence users into engaging in these behaviours is a long journey that we can summarise in three research questions: 1. How can we detect a socio-technical vulnerability in a system? 2. How can we identify in the interactions between a system and its users, the human behaviours that can harm this system’s security? 3. How can we identify the factors that foster human behaviours that are harmful to a system’s security? A review of works that aim at bringing social sciences findings into security analysis reveals that there is no unified way to do it. Identifying the points where users can harm a system’s security, and clarifying what factors can foster an insecure behaviour is a complex matter. Hypotheses can arise about the usability of the system, aspects pertaining to the user or the organisational context but there is no way to find and test them all. Further, there is currently no way to systematically integrate the results regarding hypotheses we tested in a security analysis. Thus, we identify two objectives related to these methodological challenges that this thesis aims at fulfilling in its contributions: 1. 
What form should a framework take if it intends to identify behaviours harmful to security and to investigate the factors that foster their occurrence? 2. What form should a semi-automatic, or tool-assisted, methodology for the security analysis of socio-technical systems take? The thesis provides partial answers to these questions. First, it defines a methodological framework called STEAL that provides a common ground for an interdisciplinary approach to security analysis. STEAL supports the interaction between computer scientists and social scientists by providing a common reference model to describe a system with its human and non-human components, potential attacks and defences, and the surrounding context. We validate STEAL in two experimental studies, showing the role of context and graphical cues in Wi-Fi network security. The thesis then complements STEAL with a Root Cause Analysis (RCA) methodology for security, inspired by those used in safety. This methodology, called S·CREAM, aims to be more systematic than the research methods that can be used with STEAL (surveys, for instance) and to provide reusable findings for analysing security. To do so, S·CREAM provides a retrospective analysis to identify the factors that can explain the success of past attacks, and a methodology to compile these factors in a form that allows their potential effects on a system’s security to be considered, given an attacker Threat Model. The thesis also illustrates how we developed a tool, the S·CREAM assistant, that supports the methodology with an extensible knowledge base and computer-supported reasoning. [less ▲]

Detailed reference viewed: 97 (21 UL)
Full Text
See detailLogic and Games of Norms: a Computational Perspective
Sun, Xin UL

Doctoral thesis (2016)

Detailed reference viewed: 75 (15 UL)
Full Text
See detailInfluence of interface conditioning and dopants on Cd-free buffers for Cu(In,Ga)(S,Se)2 solar cells
Hönes, Christian UL

Doctoral thesis (2016)

In the search for a non-toxic replacement of the commonly employed CdS buffer layer for Cu(In,Ga)(S,Se)2 (CIGSSe) based solar cells, indium sulfide thin films, deposited via thermal evaporation, and ... [more ▼]

In the search for a non-toxic replacement of the commonly employed CdS buffer layer for Cu(In,Ga)(S,Se)2 (CIGSSe) based solar cells, indium sulfide thin films, deposited via thermal evaporation, and chemical bath deposited (CBD) Zn(O,S) thin films are promising materials. However, while both materials have already been successfully utilized in highly efficient cells, solar cells with both materials usually need an ill-defined post-treatment step in order to reach maximum efficiencies, putting them at a disadvantage for mass production. In this thesis the influence of interface conditioning and dopants on the need for post-treatments is investigated for both materials, giving new insights into the underlying mechanisms and paving the way for solar cells with higher initial efficiencies. First, CIGSSe solar cells with In2S3 thin film buffer layers, deposited by thermal evaporation, are presented in chapter 3. The distinctive improvement of these buffer layers upon annealing of the completed solar cell, and the change of this annealing behavior when the CIGSSe surface is treated with wet-chemical means prior to buffer layer deposition, are investigated. Additional model simulations lead to a two-part explanation for the observed effects, involving a reduction of interface recombination and the removal of a highly p-doped CIGSSe surface layer. Chapter 4 introduces a novel, fast process for the deposition of Zn(O,S) buffer layers on submodule sized substrates. The resulting solar cell characteristics and the effects of annealing and prolonged illumination are discussed within the framework of theoretical considerations involving an electronic barrier for generated charge carriers. The most important influences on such an electronic barrier are investigated by model simulations and an experimental approach to each parameter.
This leads to an improved window layer deposition process, absorber optimization, and intentional buffer layer doping, all reducing the electronic barrier and therefore the necessity for post-treatments to some extent. The energetic barrier discussed above may be avoided altogether by effective interface engineering. Therefore, the controlled incorporation of indium as an additional cation into CBD-Zn(O,S) buffer layers by means of a newly developed alkaline chemical bath deposition process is presented in chapter 5. With an increasing amount of incorporated indium, the energetic barrier in the conduction band can be reduced. This is quantitatively assessed by a combination of photoelectron spectroscopy measurements and the determination of the buffer layer's optical band gap. This barrier lowering leads to less distorted current–voltage characteristics and efficiencies above 14 %, comparable to CdS reference cells, without extensive light-soaking. [less ▲]

Detailed reference viewed: 97 (17 UL)
See detailRepräsentationen der Kaiserin Elisabeth von Österreich nach dem Ende des Habsburgerreiches. Eine struktur-funktionale Untersuchung mythisierender Filmdarstellungen
Karczmarzyk, Nicole UL

Doctoral thesis (2016)

By tracing the process of the evolving myth surrounding the empress Elisabeth of Austria since her death one would very likely find an almost continuous sinusoidal curve as a result. The aim of the thesis ... [more ▼]

By tracing the process of the evolving myth surrounding the Empress Elisabeth of Austria since her death, one would very likely find an almost continuous sinusoidal curve as a result. The aim of the thesis is to explore which mediating sociopolitical functions the myth fulfils and where its value lies within national and historical myth systems (those of Austria and also Prussia/Germany) by analysing audio-visual media, especially films. Since the representations of the Empress during the 20th century have mainly been manifested within popular culture, printed media such as newspapers, magazines and biographies will also be considered for the analysis, as well as theatrical works such as operas and musical comedies. The thesis stresses that the myth of Empress Elisabeth of Austria can be seen as a set of essential kernels, called ‘mythemes’ in structuralist theory, that can constantly be reassembled and re-told. The mythemes and the actualizations of the myth itself are elaborated, as well as its different cultural functionalisations at different times, through diachronic and synchronic readings. The thesis seeks to fill the frequently noted absence of female figures within the field of myth research and points out the functions of a contemporary myth and its different appearances. Beyond that, the thesis addresses those procedures and strategies of media that lead a long-established myth into serving as a collective character or figure which can continuously be applied to new thematic contexts. The main proposition of the thesis is that the representations of Empress Elisabeth of Austria during the 20th century have been adjusted to the sociocultural contexts of the particular time in which they appeared. The character of the Empress has therefore been utilized as a carrier for different ideologies, e.g. the idea of the multiethnic state of the Habsburg Empire, as well as the National Socialist idea of the ethnic community, the ‘Volksgemeinschaft’. The representations also adjust to changing female stereotypes and even role models, e.g. the ideal wife or, at the end of the 20th century, the progressive feminist. [less ▲]

Detailed reference viewed: 134 (6 UL)
Full Text
See detailAutomated Security Testing of Web-Based Systems Against SQL Injection Attacks
Appelt, Dennis UL

Doctoral thesis (2016)

Injection vulnerabilities, such as SQL injection (SQLi), are ranked amongst the most dangerous types of vulnerabilities. Despite having received much attention from academia and practitioners, the ... [more ▼]

Injection vulnerabilities, such as SQL injection (SQLi), are ranked amongst the most dangerous types of vulnerabilities. Despite having received much attention from academia and practitioners, SQLi vulnerabilities remain prevalent and the impact of their successful exploitation is severe. In this dissertation, we propose several security testing approaches that evaluate web applications and services for vulnerabilities, and assess common IT infrastructure components, such as Web Application Firewalls (WAFs), for their resilience against attacks. Each of the presented approaches covers a different aspect of security testing, e.g. the generation of test cases or the definition of test oracles, and in combination they provide a holistic approach. The work presented in this dissertation was conducted in collaboration with SIX Payment Services (formerly CETREL S.A.). SIX Payment Services is a leading provider of financial services in the area of payment processing, e.g. issuing of credit and debit cards, settlement of card transactions, online payments, and point-of-sale payment terminals. We analyse the challenges SIX is facing in security testing and base our testing approaches on assumptions inferred from our findings. Specifically, the devised testing approaches are automated, applicable in black-box testing scenarios, able to assess and bypass Web Application Firewalls, and use an accurate test oracle. The devised testing approaches are evaluated on SIX's IT platform, which consists of various web services that process several thousand financial transactions daily. The main research contributions in this dissertation are:
- An assessment of the impact of Web Application Firewalls and Database Intrusion Detection Systems on the accuracy of SQLi testing.
- An input mutation technique that can generate a diverse set of test cases. We propose a set of mutation operators that are specifically designed to increase the likelihood of generating successful attacks.
- A testing technique that assesses the attack detection capabilities of a Web Application Firewall (WAF) by systematically generating attacks that try to bypass it.
- An approach that increases the attack detection capabilities of a WAF by inferring a filter rule from a set of bypassing attacks. The inferred filter rule can be added to the WAF's rule set to prevent attacks from bypassing it.
- An automated test oracle that is designed to meet the specific requirements of testing in an industrial context and that is independent of any specific test case generation technique. [less ▲]
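A minimal sketch of the input mutation idea, for illustration only: the operators below (inline-comment separators, percent-encoding, case variation) are common SQLi obfuscation transformations and stand in for, rather than reproduce, the dissertation's actual operator set.

```python
import random

# Illustrative mutation operators: each rewrites an attack string into an
# equivalent variant that a signature-based WAF might no longer recognize.

def insert_comments(attack: str) -> str:
    """Replace spaces with inline comments, an equivalent SQL token separator."""
    return attack.replace(" ", "/**/")

def encode_quotes(attack: str) -> str:
    """Percent-encode single quotes."""
    return attack.replace("'", "%27")

def randomize_case(attack: str, rng: random.Random) -> str:
    """Randomly toggle letter case; SQL keywords are case-insensitive."""
    return "".join(c.upper() if rng.random() < 0.5 else c.lower() for c in attack)

def generate_variants(seed_attack: str, n: int, seed: int = 0) -> list:
    """Chain randomly chosen operators to produce n candidate test inputs."""
    rng = random.Random(seed)
    ops = [insert_comments, encode_quotes, lambda a: randomize_case(a, rng)]
    variants = []
    for _ in range(n):
        candidate = seed_attack
        for op in rng.sample(ops, k=rng.randint(1, len(ops))):
            candidate = op(candidate)
        variants.append(candidate)
    return variants
```

In a black-box setting, each variant would be sent against the WAF-protected endpoint; variants that reach the database layer expose gaps in the filter rules.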

Detailed reference viewed: 544 (60 UL)
See detailAspekte der Mehrsprachigkeit in Luxemburg. Positionen, Funktionen und Bewertungen der deutschen Sprache. Eine diskursanalytische Untersuchung (1983-2015).
Scheer, Fabienne UL

Doctoral thesis (2016)

The thesis provides a broad insight into the current position of the German language in Luxembourg. It describes the linguistic knowledge and behavior of the different speech groups acting in ... [more ▼]

The thesis provides a broad insight into the current position of the German language in Luxembourg. It describes the linguistic knowledge and behavior of the different speech groups acting in the domains "education", "mass media", "immigration and integration", "xenophobic discourse", "language policy", "language and literature", "PR" and "languages for publicity", and shows how the dominant, Luxembourgish, speech group is adapting linguistically to the evolution of society. The conclusions are based on a corpus of 835 press articles, on interviews with experts from the different fields of society, and on further material (statistics, parliamentary debates, administrative writings, examples of German exercises written by pupils, ...). [less ▲]

Detailed reference viewed: 127 (24 UL)
Full Text
See detailTopology and Parameter Estimation in Power Systems Through Inverter-Based Broadband Stimulations
Neshvad, Surena UL

Doctoral thesis (2016)

During the last decade, a substantial growth in renewable, distributed energy production has been observed in industrial countries. This phenomenon, coupled with the adoption of open energy markets has ... [more ▼]

During the last decade, a substantial growth in renewable, distributed energy production has been observed in industrial countries. This phenomenon, coupled with the adoption of open energy markets, has significantly complicated the power flows on the distribution network, requiring advanced and intelligent system monitoring in order to optimize the efficiency, quality and reliability of the system. This thesis proposes a solution to several power network challenges encountered with increasing Distributed Generation (DG) penetration. The three problems that are addressed are islanding detection, online transmission line parameter identification and system topology identification. These tasks are performed by requesting the DGs to provide ancillary services to the network operator. A novel and intelligent method has been proposed for reprogramming the DGs' Pulse Width Modulators, requesting each DG to inject a uniquely coded Pseudo-Random Binary Sequence along with the fundamental. Islanding detection is obtained by measuring the equivalent Thevenin impedance at the inverter's Point of Common Coupling, while system characterization is obtained by measuring the induced current frequencies at various locations in the grid. To process and evaluate the measured signals, a novel Weighted Least-Squares aggregation method is developed, through which measurements are combined and correlated in order to obtain an accurate snapshot of the power network parameters. [less ▲]

Detailed reference viewed: 88 (9 UL)
Full Text
See detailDynamics of viscoelastic colloidal suspensions
Dannert, Rick UL

Doctoral thesis (2016)

The influence of different types of nanoparticles on the dynamics of glass forming matrices has been studied by small oscillatory shear rheology. Experimental measurements reveal that besides the glass ... [more ▼]

The influence of different types of nanoparticles on the dynamics of glass forming matrices has been studied by small oscillatory shear rheology. Experimental measurements reveal that, besides the glass transition process of the matrix, an additional relaxation process occurs in the presence of nanoparticles. The latter is identified as the macroscopic signature of the microscopic temporal fluctuations of the intrinsic stress and is called Brownian relaxation. Besides the fact that Brownian relaxation has so far not been observed in colloidal suspensions with a matrix exhibiting viscoelastic behaviour in the frequency range of the experimental probe, the study reveals another important feature: the evolution of the Brownian relaxation times depends non-monotonously on the filler concentration. This finding challenges the use of the classical Peclet time as a characteristic timescale for Brownian relaxation. The literature defines the Peclet time as the specific time needed by a particle to cover, via self-diffusion, a distance comparable to its own size. As a main result it will be shown that, after replacing the particle size relevant for the Peclet time by the mean interparticle distance, which depends on the filler content, the non-monotonic evolution of the relaxation times can be fully described. Moreover, the introduction of the new characteristic length scale allows data from the literature to be included in the phenomenological description. [less ▲]
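In symbols, and assuming Stokes–Einstein self-diffusion, the classical Peclet time and the modified timescale read as follows; these expressions are a generic rendering of the argument sketched here, not formulas quoted from the thesis.

```latex
\tau_{Pe} \sim \frac{R^2}{D_0},
\qquad
D_0 = \frac{k_B T}{6\pi\eta R},
\qquad
\tau_B \sim \frac{d(\phi)^2}{D_0},
```

where $R$ is the particle radius, $D_0$ the self-diffusion coefficient, and $d(\phi)$ the mean interparticle distance, which decreases with the filler volume fraction $\phi$ and thus makes $\tau_B$ concentration-dependent in a way that $\tau_{Pe}$ is not.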

Detailed reference viewed: 188 (19 UL)
See detailSingularités orbifoldes de la variété des caractères
Guerin, Clément UL

Doctoral thesis (2016)

In this thesis, we want to understand some singularities in the character variety. In a first chapter, we justify that the characters of irreducible representations from a Fuchsian group to a complex semi ... [more ▼]

In this thesis, we want to understand some singularities in the character variety. In the first chapter, we justify that the set of characters of irreducible representations from a Fuchsian group to a complex semi-simple Lie group is an orbifold. The orbifold locus then consists of the characters of bad representations. In the second chapter, we focus on the case where the Lie group is PSL(p,C) with p a prime number. In particular, we give an explicit description of this locus. In the third and fourth chapters, we describe the isotropy groups (i.e. the centralizers of bad subgroups) arising when the Lie group is a quotient of SL(n,C) (third chapter) and when the Lie group is a quotient of Spin(n,C) (fourth chapter). [less ▲]

Detailed reference viewed: 146 (7 UL)
Full Text
See detailTopic Identification Considering Word Order by Using Markov Chains
Kampas, Dimitrios UL

Doctoral thesis (2016)

Automated topic identification of text has gained significant attention since a vast amount of documents in digital form is widespread and continuously increasing. Probabilistic topic models are a ... [more ▼]

Automated topic identification of text has gained significant attention since a vast amount of documents in digital form is widespread and continuously increasing. Probabilistic topic models are a family of statistical methods that unveil the latent structure of documents by defining the model that generates the text a priori. They infer the topic(s) of a document under the bag-of-words assumption, which is unrealistic given the sophisticated structure of language. The result of such a simplification is the extraction of topics that are vague in terms of their interpretability, since they disregard any relations among the words that may settle word ambiguity. Topic models thus miss significant structural information inherent in the word order of a document. In this thesis we introduce a novel stochastic topic identifier for text data that addresses the above shortcomings. The primary motivation of this work is the assertion that word order reveals text semantics in a human-like way. Our approach recognizes an on-topic document after training solely on an on-class corpus. It incorporates word order in terms of word groups to deal with the data sparsity of conventional n-gram language models, which usually require a large volume of training data. Markov chains hereby provide a reliable potential to capture short- and long-range language dependencies for topic identification. Words are deterministically associated with classes to improve the probability estimates of the infrequent ones. We demonstrate our approach and motivate its eligibility on several datasets of different domains and languages. Moreover, we present pioneering work by introducing a hypothesis testing experiment that strengthens the claim that word order is a significant factor for topic identification. Stochastic topic identifiers are a promising initiative for building more sophisticated topic identification systems in the future. [less ▲]
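The scoring principle (how likely a document's word transitions are under a chain trained on an on-topic corpus) can be sketched as follows; the first-order chain, additive smoothing and toy data are illustrative simplifications, not the class-based estimator developed in the thesis.

```python
from collections import defaultdict
import math

class MarkovTopicScorer:
    """First-order Markov chain over words, trained on an on-topic corpus."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha                       # additive smoothing constant
        self.bigrams = defaultdict(lambda: defaultdict(int))
        self.vocab = set()

    def train(self, sentences):
        """Count word-to-word transitions in the on-class corpus."""
        for tokens in sentences:
            self.vocab.update(tokens)
            for prev, cur in zip(tokens, tokens[1:]):
                self.bigrams[prev][cur] += 1

    def log_likelihood(self, tokens):
        """Average per-transition log-probability; higher = more on-topic."""
        V = len(self.vocab) + 1                  # +1 slot for unseen words
        total, n = 0.0, 0
        for prev, cur in zip(tokens, tokens[1:]):
            counts = self.bigrams.get(prev, {})
            denom = sum(counts.values()) + self.alpha * V
            total += math.log((counts.get(cur, 0) + self.alpha) / denom)
            n += 1
        return total / n if n else float("-inf")

scorer = MarkovTopicScorer()
scorer.train([["the", "power", "grid", "needs", "monitoring"],
              ["the", "grid", "carries", "power"]])
```

A document is then classified on-topic when its average transition log-likelihood exceeds a threshold calibrated on held-out on-class text.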

Detailed reference viewed: 125 (15 UL)
Full Text
See detailEnergy minimising multi-crack growth in linear-elastic materials using the extended finite element method with application to Smart-Cut™ silicon wafer splitting
Sutula, Danas UL

Doctoral thesis (2016)

We investigate multiple crack evolution under quasi-static conditions in an isotropic linear-elastic solid based on the principle of minimum total energy, i.e. the sum of the potential and fracture ... [more ▼]

We investigate multiple crack evolution under quasi-static conditions in an isotropic linear-elastic solid based on the principle of minimum total energy, i.e. the sum of the potential and fracture energies, which stems directly from Griffith's theory of cracks. The technique, which has been implemented within the extended finite element method, enables minimisation of the total energy of the mechanical system with respect to the crack extension directions. This is achieved by finding the orientations of the discrete crack-tip extensions that yield vanishing rotational energy release rates about their roots. In addition, the proposed energy minimisation technique can be used to resolve competing crack growth problems. Comparisons of the fracture paths obtained by the maximum tension (hoop-stress) criterion and the energy minimisation approach via a multitude of numerical case studies show that both criteria converge to virtually the same fracture solutions, albeit from opposite directions. In other words, it is found that the converged fracture path lies in between those obtained by each criterion on coarser numerical discretisations. Upon further investigation of the energy minimisation approach within the discrete framework, a modified crack growth direction criterion is proposed that takes the average of the directions obtained by the maximum hoop stress and the minimum energy criteria. The numerical results show significant improvements in accuracy (especially on coarse discretisations) and convergence rates of the fracture paths. The XFEM implementation is subsequently applied to model an industry-relevant problem of silicon wafer cutting based on the physical process of Smart-Cut™ technology, where wafer splitting is the result of the coalescence of multiple pressure-driven micro-cracks growing within a narrow layer of the prevailing micro-crack distribution.
A parametric study is carried out to assess the influence of some of the Smart-Cut™ process parameters on the post-split fracture surface roughness. The parameters that have been investigated include: the mean depth of the micro-crack distribution, the distribution of micro-cracks about the mean depth, damage (isotropic) in the region of the micro-crack distribution, and the depth of the buried-oxide layer (a layer of reduced stiffness) beneath the micro-crack distribution. Numerical results agree acceptably well with experimental observations. [less ▲]
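The minimisation principle can be stated compactly; the notation below is a generic form of the discrete Griffith energy balance, chosen here for illustration rather than quoted from the thesis.

```latex
\min_{\theta_1,\dots,\theta_m}\; E(\boldsymbol{\theta})
  = \Pi\big(\mathbf{u}(\boldsymbol{\theta})\big)
  + G_c \sum_{i=1}^{m} \Delta a_i,
\qquad
\frac{\partial E}{\partial \theta_i} = 0 \quad (i = 1,\dots,m),
```

where $\theta_i$ is the extension direction of crack tip $i$, $\Delta a_i$ the discrete extension length, $\Pi$ the potential energy of the equilibrium solution $\mathbf{u}$, and $G_c$ the fracture energy; the stationarity conditions correspond to the vanishing rotational energy release rates about the extension roots described above.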

Detailed reference viewed: 108 (15 UL)
Full Text
See detailInstitutionalisierung der Naturwissenschaften in Preußen als Investition in die Zukunft. Die Friedrich-Wilhelms-Universität in Bonn und die Leopoldina (1818-1830)
Röther, Bastian UL

Doctoral thesis (2016)

Analysing the genesis of institutions of scientific research and information in Prussia using the university location of Bonn as an example 1818 – 1830 // The 19th century represented a profound turning ... [more ▼]

Analysing the genesis of institutions of scientific research and information in Prussia using the university location of Bonn as an example 1818 – 1830 // The 19th century represented a profound turning point in the development of the sciences that was characterized by an enormous surge in development and the emergence of modern scientific disciplines. In the natural sciences, this development was evident by their gradual emancipation from the medical faculty and an increasing differentiation and scientification of the curriculum. During this period of reform in the early 19th century, Prussia faced the task of reforming its traditional education system and transferring this to its provinces in the Rhineland and in Westphalia. These efforts to create an educational state centred around the Central University in Berlin, founded in 1810, the University of Breslau and, above all, Friedrich Wilhelm University, which was founded on the Rhine in 1818 and was one of Prussia’s most important provincial universities. The Imperial Leopoldina Carolina German Academy of Natural Scientists was simultaneously moved, creating conditions that were on par with Prussia’s capital. By 1818, Bonn had two important scientific institutions at its disposal, one of which had an explicit medical-scientific connotation. The institutional collaboration that arose between the academy and the university therefore promised to give the natural sciences an opportunity to receive special support as explicitly stipulated in the development concept drawn up for the university by the head of the department, Altenstein. The fact that the scientific academies fell within the jurisdiction of the Ministry of Culture appeared particularly auspicious. Altenstein’s concept of emancipation and promotion of the natural sciences included a high degree of integration of the applied sciences. 
One question that had received little attention was how this concentration of institutions impacted the development of the service and research institutions of the natural sciences and to what extent Berlin was able to model this tighter relationship between the institutions. Older papers have placed the development of the individual scientific subjects during this phase of the university’s foundation in a much more negative light, referring time and again to the way in which Bonn’s representatives of natural philosophy hampered this development. In contrast, more recent research findings recognize the tremendously important role the natural sciences played in the ministerial concept of 1818 and the resulting extensive and excellent conditions for these subjects which were modelled after Berlin. As a result, they represent a much more differentiated picture of the foundation years. A source-based and cross-subject study that also scrutinized the role of the Academy of Sciences Leopoldina remained a desideratum. Moving the academy has generally been regarded as extremely necessary for the further development of Bonn as a location of science; its collections and facilities signifying an important foundation for good institutional development. The basis of this paper was the official correspondence between Bonn’s scientists and Altenstein, the Prussian department head, as well as Prussia’s State Chancellor Hardenberg. It focuses on the key aspects of the analysis of appointment policy, organisation of the institutes, reform efforts and the relationship to extramural institutions. Of great importance is the exchange of letters, edited as part of a Leopoldina project, between Altenstein and the director of the botanical garden in Bonn, Christian Gottfried Nees von Esenbeck who, as president of the Leopoldina, was critically important. The analysis includes the facilities of these natural sciences at the university, e.g. 
the chemical laboratory and the observatory, as well as the collections and institutes for natural history subjects, like botany, zoology and mineralogy, and the Natural History Museum and the Botanical Gardens. Furthermore, the paper looks at the university’s relationship to societies organised outside the university, in particular the Leopoldina, as well as the Niederrheinische Gesellschaft für Natur- und Heilkunde (Lower Rhine Society for Natural History and Medical Studies), which was founded in 1818, and the Verein zur Beförderung der Naturstudien (the Society for Promoting the Study of Nature). Investigations have revealed that the culture minister’s ideals to broadly support the sciences and their practical application during the university’s phase of establishment extended far beyond the realm of financial possibilities. Nevertheless, it was possible to establish some excellently equipped institutes in Bonn. The already widely developed plans to incorporate practice-oriented education, however, could not be achieved in these initial years. These locational conditions, established on the basis of political and financial necessity, are reflected in the statistics on the frequency and attendance of lectures, analysed for the first time for the natural sciences in Bonn. Despite the formal separation of the natural sciences from the medical faculty, the physicians were integral to the success of the introductory lectures. On the other hand, from a statistical perspective, these special events are characterised by a high failure rate across all subjects due to a lack of participants. This particularly affected the lectures on natural philosophy, which were accepted to a lesser degree by the students during the period under investigation. The general reproach, based on Justus Liebig’s philippic on “The State of Chemistry in Prussia” from 1840, that natural philosophy had hampered development, cannot be concretely substantiated for Bonn.
Unlike in Berlin, which had better locational conditions, during these foundation years, Bonn lacked grammar school leavers and university freshmen who were prepared for studying the sciences and who could adequately take advantage of the good basic conditions established in Bonn. The complaints from the instructors in Bonn about the students’ low level of education quickly led to various reform projects that targeted grammar school education and which were almost entirely unknown to research. The Seminar für die gesammten Naturwissenschaften (Seminar of General Sciences), established in 1825 should be mentioned first and foremost. It was the first of its kind in Germany to teach natural sciences as part of a cross-discipline education. Surprisingly, moving the German Academy of Sciences Leopoldina to Bonn played an insignificant role in the development of the scientific location. Conditions can hardly be compared to those in the capital. Ultimately the academy’s institutions proved to be insufficient in supporting the establishment of modern service institutes for the natural sciences. The locational advantage was not exploited in the scientific practice of the natural science and medical disciplines. Thanks to Prussian subsidies, the Leopoldina was able to weather an existential crisis at the end of the 19th century in Bonn. Profound structural reforms were postponed by the academy’s leadership indicating the society’s need to consolidate. With its only main task being the publishing of the journal Nova acta, the academy was punching far below its weight. Using Prussia’s provincial university in Bonn as an example, this paper reveals the extensive efforts made by the education reformers to provide an excellent institutional basis for the young scientific disciplines. 
The opportunities created in the years when the university was being established could only be utilised by a handful of students due to a lack of scientific schooling, particularly since a link to practical and scientific educational concepts could not be financed. The service institutions therefore remained a promise for a future based on scientific research and teaching that was not to begin in Prussia's Rhineland until the second half of the 19th century. [less ▲]

Detailed reference viewed: 138 (11 UL)
See detailFunctional characterization of novel RhoT1 variants, which are associated with Parkinson's disease.
Grossmann, Dajana UL

Doctoral thesis (2016)

Parkinson’s disease (PD) is a common neurodegenerative disease affecting up to 2 % of the population older than 65 years. Most PD cases are sporadic with unknown cause, and about 10 % are familial ... [more ▼]

Parkinson’s disease (PD) is a common neurodegenerative disease affecting up to 2 % of the population older than 65 years. Most PD cases are sporadic with unknown cause, and about 10 % are familially inherited. PD is a progressive neurodegenerative disease characterized by the loss of predominantly dopaminergic neurons, leading to typical symptoms like rigidity and tremor. Commonly involved pathogenic pathways are linked to mitochondrial dysfunction, e.g. increased oxidative stress, disruption of calcium homeostasis, decreased energy supply and mitochondrially controlled apoptosis. The mitochondrial outer membrane protein Miro1 is important for mitochondrial distribution, quality control and maintenance. To date, Miro1 is not established as a risk factor for PD. Using a comprehensive mutation screening of RhoT1 in German PD patients, we dissected the role of the first PD-associated mutations in RhoT1, the gene encoding Miro1. Three mutations in RhoT1 were identified in three PD patients with a positive family history of PD. For the analysis of mitochondrial phenotypes, patient-derived fibroblasts from two of the three patients were available. The neuroblastoma cell line M17, with stable knockdown of endogenous RhoT1 and transient overexpression of the RhoT1 mutant variants, served as an independent cell model. Investigation of yeast with knockout of endogenous Gem1 (the yeast orthologue of Miro1) and overexpression of mutant Gem1 revealed that growth on a non-fermentable carbon source was impaired. These findings suggest that Miro1-mutant1 is a loss-of-function mutation. Interestingly, the Miro1 protein amount was significantly reduced in Miro1-mutant1 and Miro1-mutant2 fibroblast lines compared to controls. Functional analysis revealed that mitochondrial mass was decreased in Miro1-mutant2, but not in Miro1-mutant1 fibroblasts, whereas mitochondrial biogenesis was increased in Miro1-mutant2 fibroblasts, as indicated by elevation of PGC1α. A similar phenotype with reduction of mitochondrial mass was also observed in M17 cells overexpressing Miro1-mutant1 or Miro1-mutant2. Additionally, spare respiratory capacity was reduced in Miro1-mutant1 fibroblasts compared to Ctrl 1 fibroblasts. In contrast, Miro1-mutant2 fibroblasts showed increased respiratory activity compared to Ctrl 1, even though citrate synthase activity was significantly reduced. Both alterations of respiratory activity led to mitochondrial membrane hyperpolarization in Miro1-mutant1 and Miro1-mutant2 fibroblasts, a phenotype that was also found in M17 cells with knockdown of RhoT1. Both Miro1 mutant fibroblast lines displayed different problems with cytosolic calcium buffering: in Miro1-mutant1 fibroblasts, histamine treatment increased the cytosolic calcium concentration significantly compared to Ctrl 1 fibroblasts, indicating that calcium homeostasis was impaired, whereas in Miro1-mutant2 fibroblasts the buffering capacity for cytosolic calcium was impaired. The results indicate that mutations in Miro1 cause significant mitochondrial dysfunction, which likely contributes to neurodegeneration in PD, and underline the importance of Miro1 for mitochondrial maintenance. [less ▲]

TRANSMISSION OPTIMIZATION FOR HIGH THROUGHPUT SATELLITE SYSTEMS
Gharanjik, Ahmad UL

Doctoral thesis (2016)


Demands on broadband data services are increasing dramatically each year. Following terrestrial trends, satellite communication systems have moved from traditional TV broadcasting to providing interactive broadband services, even to urban users. While cellular and land-line networks are mainly designed to deliver broadband services to metropolitan and large urban centers, satellite-based solutions have the advantage of covering these demands over a wide geography, including rural and remote users. However, to stay competitive with economical terrestrial solutions, it is necessary to reduce the cost per transmitted bit by increasing the capacity of satellite systems. The objective of this thesis is to design and develop techniques capable of enhancing the capacity of next-generation high throughput satellite systems. Specifically, the thesis focuses on three main topics: 1) Q/V band feeder link design, 2) robust precoding design for multibeam satellite systems, and 3) techniques for tackling the related optimization problems. The design of high-bandwidth, reliable feeder links is central to provisioning new services on the user link of a multibeam SatCom system. Towards this, the utilization of the Q/V band and the exploitation of multiple gateways, as a transmit diversity measure for overcoming severe propagation effects, are considered. In this context, the thesis deals with the design of a feeder link comprising N + P gateways (N active and P redundant gateways). Towards satisfying the desired availability, a novel switching scheme is analyzed, and practical aspects such as prediction-based switching and the switching rate are discussed. Building on this result, an analysis of the N + P scenario leading to a quantification of the end-to-end performance is provided. On the other hand, frequency reuse in multibeam satellite systems, along with precoding techniques, can increase the capacity of the user link.
Similar to terrestrial communication channels, satellite-based communication channels are time-varying, and for typical precoding applications the transmitter needs to know the channel state information (CSI) of the downlink channel. Due to fluctuations of the phase components, the channel is time-varying, resulting in outdated CSI at the transmitter because of the long round-trip delay. This thesis studies a robust precoder design framework considering requirements on availability and average signal-to-interference-plus-noise ratio (SINR). Probabilistic and expectation-based approaches are used to formulate the design criteria, which are solved using convex optimization tools. The performance of the resulting precoder is evaluated through extensive simulations. Although a satellite channel is considered, the presented analysis is valid for any vector channel with phase uncertainty. In general, the precoder design problem can be cast as a power minimization problem or a max-min fairness problem, depending on the objectives and requirements of the design. The power minimization problem can typically be formulated as a non-convex quadratically constrained quadratic programming (QCQP) problem, and the max-min fairness problem as a fractional quadratic program. These problems are known to be NP-hard in general. In this thesis, the original design problem is transformed into an unconstrained optimization problem using specialized penalty terms. Efficient iterative optimization frameworks are proposed, based on separately optimizing the penalized objective function over partitions of its variables at each iteration. Various aspects of the proposed approach, including the performance of the algorithm and its implementation complexity, are studied. This thesis was prepared under a joint supervision agreement between the KTH Royal Institute of Technology, School of Electrical Engineering, Stockholm, Sweden, and the University of Luxembourg, Luxembourg.
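The penalty-based reformulation described above can be pictured with a toy example. The sketch below (pure Python, with a hypothetical real-valued channel, noise power and penalty weight, none of them taken from the thesis) minimizes transmit power for a single-user beamformer subject to an SNR constraint by running plain gradient descent on a penalized unconstrained objective; it illustrates the general idea only, not the thesis's actual algorithm.

```python
# Penalty reformulation of a constrained power-minimisation problem:
#   minimise ||w||^2  subject to  (h.w)^2 / sigma2 >= gamma
# recast as the unconstrained objective
#   F(w) = ||w||^2 + lam * max(0, gamma - SNR(w))^2
# and minimised by gradient descent. All values are hypothetical.

h = [1.0, 0.5]        # real-valued channel vector (assumed)
sigma2 = 1.0          # noise power
gamma = 4.0           # required SNR
lam = 10.0            # penalty weight
step = 0.002          # gradient-descent step size

def snr(w):
    hw = h[0] * w[0] + h[1] * w[1]
    return hw * hw / sigma2

def grad(w):
    """Gradient of the penalised objective F(w)."""
    hw = h[0] * w[0] + h[1] * w[1]
    viol = max(0.0, gamma - snr(w))
    # d||w||^2/dw = 2w ;  d(viol^2)/dw = -4*viol*hw*h/sigma2
    return [2.0 * w[i] - 4.0 * lam * viol * hw * h[i] / sigma2
            for i in range(2)]

w = [1.0, 1.0]                      # arbitrary starting point
for _ in range(5000):
    g = grad(w)
    w = [w[i] - step * g[i] for i in range(2)]

power = w[0] ** 2 + w[1] ** 2
# Closed-form optimum (maximum ratio transmission): gamma*sigma2/||h||^2 = 3.2;
# a finite penalty weight lands slightly inside the constraint boundary.
print(round(power, 2), round(snr(w), 2))
```

Increasing `lam` drives the penalized solution arbitrarily close to the constrained optimum; the thesis's separate optimization over partitions of variables follows the same penalize-then-iterate pattern on much larger problems.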

SIGNAL PROCESSING FOR PHYSICAL LAYER SECURITY WITH APPLICATION IN SATELLITE COMMUNICATIONS
Kalantari, Ashkan UL

Doctoral thesis (2016)


Wireless broadcast allows widespread and easy information transfer. However, it may expose the information to unintended receivers, which could include eavesdroppers. As a solution, cryptography at the higher network layers has been used to encrypt and protect data. Cryptography relies on the fact that the computational power of the adversary is not enough to break the encryption. However, as computing power increases, so does the power of the adversary. To further strengthen security and complement encryption, the concept of physical layer security has been introduced and has spurred an enormous amount of research. Broadly speaking, research in physical layer security can be divided into two directions: the information-theoretic and signal processing paradigms. This thesis starts with an overview of the physical layer security literature and continues with the contributions, which are divided into the following two parts. In the first part, we investigate the information-theoretic secrecy rate. In the first scenario, we study the confidentiality of a bidirectional satellite network consisting of two mobile users who exchange two messages via a multibeam satellite using the XOR network coding protocol. We maximize the sum secrecy rate by designing the optimal beamforming vector along with optimizing the return and forward link time allocation. In the second scenario, we study the effect of interference on the secrecy rate. We investigate the secrecy rate in a two-user interference network where one of the users, namely user 1, needs to establish a confidential connection. User 1 wants to prevent an unintended user of the network from decoding its transmission. User 1 has to adjust its transmission power such that its secrecy rate is maximized while the quality of service at the destination of the other user, user 2, is satisfied. We obtain closed-form solutions for optimal joint power control.
In the third scenario, we study the ratio of secrecy rate to power, namely "secrecy energy efficiency". We design the optimal beamformer for a multiple-input single-output system with and without considering the minimum required secrecy rate at the destination. In the second part, we follow the signal processing paradigm to improve security. We employ the directional modulation concept to enhance the security of a multi-user multiple-input multiple-output communication system in the presence of a multi-antenna eavesdropper. Security is enhanced by increasing the symbol error rate at the eavesdropper without requiring the eavesdropper's channel state information (CSI). We show that when the eavesdropper has fewer antennas than the users, regardless of the received signal SNR, it cannot recover any useful information; moreover, when it has more antennas than the users, it has to go through additional noise-enhancing processing to estimate the symbols. Finally, we summarize the conclusions and discuss promising research directions in physical layer security. [less ▲]
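The secrecy rate underlying the first part has a compact form for a Gaussian wiretap channel: the positive part of the capacity difference between the legitimate link and the eavesdropper link. A minimal sketch (pure Python; the channel gains and noise power are hypothetical values, and this is the textbook formula rather than anything specific to the satellite scenarios above):

```python
import math

def secrecy_rate(p, g_d, g_e, noise=1.0):
    """Secrecy rate [bits/channel use] of a Gaussian wiretap channel:
    positive part of the capacity difference between the legitimate
    link (gain g_d) and the eavesdropper link (gain g_e), at power p."""
    c_dest = math.log2(1.0 + p * g_d / noise)
    c_eve = math.log2(1.0 + p * g_e / noise)
    return max(0.0, c_dest - c_eve)

# When the legitimate channel is stronger, a positive secrecy rate is
# achievable and grows with transmit power p ...
assert secrecy_rate(10.0, g_d=2.0, g_e=0.5) > secrecy_rate(1.0, g_d=2.0, g_e=0.5) > 0
# ... but when the eavesdropper's channel is at least as strong,
# no power level yields a positive secrecy rate.
assert secrecy_rate(10.0, g_d=0.5, g_e=2.0) == 0.0
```

This monotonic dependence on power is what makes the joint power control in the interference scenario a meaningful trade-off: raising user 1's power raises its secrecy rate but also its interference budget.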

Model-Based Test Automation Strategies for Data Processing Systems
Di Nardo, Daniel UL

Doctoral thesis (2016)


Data processing software is an essential component of systems that aggregate and analyse real-world data, thereby enabling automated interaction between such systems and the real world. In data processing systems, inputs are often big, complex files that have a well-defined structure and frequently contain dependencies between several of their fields. Testing data processing systems is complex. Software engineers in charge of testing these systems have to handcraft complex data files of nontrivial size, while ensuring compliance with multiple constraints to prevent the generation of trivially invalid inputs. In addition, assessing test results often means analysing complex output and log data. Complex inputs pose a challenge for the adoption of automated test data generation techniques; the adopted techniques should be able to generate a nontrivial number of data items with complex nested structures while preserving the constraints between data fields. A further challenge concerns the automated validation of execution results. To address the challenges of testing data processing systems, this dissertation presents a set of approaches based on data modelling and data mutation to automate testing. We propose a modelling methodology that captures the input and output data, and the dependencies between them, by using Unified Modeling Language (UML) class diagrams and constraints expressed in the Object Constraint Language (OCL). The UML class diagram captures the structure of the data, while the OCL constraints formally describe the interactions and associations between the data fields within the different subcomponents. The work of this dissertation was motivated by the testing needs of an industrial satellite Data Acquisition (DAQ) system; this system is the subject of the empirical studies used within this dissertation to demonstrate the application and suitability of the approaches that we propose.
We present four model-driven approaches that address the challenges of automatically testing data processing systems. These approaches are supported by the data models generated according to our modelling methodology. The results of an empirical evaluation show that the application of the modelling methodology is scalable, as the size of the model and constraints was manageable for the subject system. The first approach is a technique for the automated validation of test inputs and oracles; an empirical evaluation shows that the approach is scalable, as the input and oracle validation process executed within reasonable times on real input files. The second approach is a model-based technique that automatically generates faulty test inputs for the purpose of robustness testing, relying upon generic mutation operators that alter data collected in the field; an empirical evaluation shows that our automated approach achieves slightly better instruction coverage than the manual testing taking place in practice. The third approach is an evolutionary algorithm that automates the robustness testing of data processing systems through optimised test suites; the empirical results obtained by applying our search-based testing approach show that it outperforms approaches based on fault coverage and random generation: higher coverage is achieved with smaller test suites. Finally, the fourth approach is an automated, model-based approach that reuses field data to generate test inputs that fit new data requirements for the purpose of testing data processing systems; the empirical evaluation shows that the input generation algorithm, based on model slicing and constraint solving, scales in the presence of complex data structures.
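The mutation-based generation of faulty inputs can be pictured with a small sketch (Python; the record layout, field names and mutation operator are invented for illustration and are not taken from the DAQ system): a generic operator takes a valid record collected in the field and corrupts one field so that a declared inter-field constraint is violated, yielding a robustness-test input.

```python
import random

# A toy "field data" record with a dependency between fields, in the
# spirit of the model-based mutation approach: packet_length must
# equal len(payload). (Field names are illustrative only.)
def make_record():
    payload = b"\x01\x02\x03\x04"
    return {"packet_length": len(payload), "payload": payload}

def is_valid(rec):
    """OCL-like constraint: declared length matches actual payload size."""
    return rec["packet_length"] == len(rec["payload"])

def mutate_length(rec, rng):
    """Generic mutation operator: corrupt the declared length field so
    the inter-field constraint is violated (a robustness-test input)."""
    bad = dict(rec)
    delta = rng.choice([-2, -1, 1, 2])  # never zero, so always invalid
    bad["packet_length"] = rec["packet_length"] + delta
    return bad

rng = random.Random(0)
valid = make_record()
faulty = mutate_length(valid, rng)
print(is_valid(valid), is_valid(faulty))  # prints: True False
```

In the dissertation's setting the operators are driven by the UML/OCL data model, so the same pattern scales to deeply nested structures: the model tells the generator which fields exist and which constraints a mutation may deliberately break.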

Regional Continental Water Storage Variations Inferred from Three-dimensional GPS Coordinates Time Series
Wang, Lin UL

Doctoral thesis (2016)


Recent advances in space geodetic techniques (including, but not limited to, the Global Navigation Satellite System (GNSS) and very long baseline interferometry (VLBI)) allow us to observe changes in continental water storage (CWS), depending on the extent and the amplitude of the load. Among the geodetic techniques, GPS is the most common observational tool because of its global distribution. GPS observations are used in many fields of study, including seismology and tectonics. This thesis presents a method to obtain regional changes in continental water storage by inverting three-dimensional GPS time series. Error sources in a regional study are examined first. In theory, the surface motions at each GPS station are caused by loads acting over the entire surface of the Earth. As we are only interested in the changing water storage in a particular basin, the loading signal from the far field, outside the region of interest, must be accounted for. From our simulation studies, we conclude that mass changes located outside the study region cannot be neglected. We find that the coverage of the area needs to extend to about 20 degrees (about 2 200 km) from the basin center for a regional study. The second concern is the GPS time series themselves. We find discrepancies across the globe between GPS-observed displacements and displacements forward-modelled using models of water storage. At annual periods, the thermal expansion of the GPS monuments and underlying bedrock, atmospheric loading, and the draconitic signal will, if not accounted for, introduce errors into the inversion. These errors may contribute to the disagreement between our forward-modelled and observed ground motions. For 88% of the stations analyzed, we are able to reduce the weighted root-mean-square (WRMS) of the GPS vertical time series by removing the displacements modelled using estimates of CWS loading obtained from WaterGAP. We conclude that the most likely cause of the discrepancies lies in the GPS observations themselves.
Given the observed discrepancy, we find that the uncertainties of the GPS time series should be re-estimated in any inversion study. Finally, we determine monthly CWS variations from GPS three-dimensional coordinate time series for the major river basins in Europe and North America. The results at the basin scale are validated against GRACE and hydrological models: the correlations between the inferred CWS and GRACE or the models are close to 0.9, and the WRMS reduction reaches 50% for some basins. We also demonstrate that the relative contributions of the GPS horizontal coordinates are about one third those of the vertical signals. We show that including the horizontal coordinates in the inversion improves the inversion results.
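At its core, the inversion of displacements for loads is a linear least-squares problem: the displacement vector d relates to the loads m through a matrix G of loading Green's functions, and the loads are estimated as m_hat = (GᵀG)⁻¹Gᵀd. A minimal sketch (pure Python, with an invented 3×2 Green's-function matrix standing in for values a real study would compute from an Earth model):

```python
# Toy linear inversion d = G m: three displacement observations
# (e.g. one vertical and two horizontal components) from two load
# cells. G entries are invented numbers, not real elastic-loading
# Green's functions.

G = [[1.0, 0.3],
     [0.2, 0.8],
     [0.5, 0.5]]
m_true = [2.0, -1.0]                      # synthetic loads

# Forward model: d = G m
d = [sum(G[i][j] * m_true[j] for j in range(2)) for i in range(3)]

# Normal equations: (G^T G) m_hat = G^T d
GtG = [[sum(G[k][i] * G[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Gtd = [sum(G[k][i] * d[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 system by Cramer's rule
det = GtG[0][0] * GtG[1][1] - GtG[0][1] * GtG[1][0]
m_hat = [(Gtd[0] * GtG[1][1] - Gtd[1] * GtG[0][1]) / det,
         (GtG[0][0] * Gtd[1] - GtG[1][0] * Gtd[0]) / det]

print([round(x, 6) for x in m_hat])      # recovers the synthetic loads
```

In the thesis's setting the observations carry noise and re-estimated uncertainties, so the normal equations are weighted accordingly; adding the horizontal components simply appends rows to G and d, which is why they can sharpen the solution even though their signal is roughly a third of the vertical's.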

Conscience du temps, sentiment de passage du temps: une approche métacognitive de la perception du temps [Time awareness, the feeling of time passing: a metacognitive approach to time perception]
Lamotte, Mathilde UL

Doctoral thesis (2016)


Metacognition concerns both individuals' knowledge about their cognitive functioning and the processes that regulate it (Koriat, 2007). The study of the perception of time has shown that many factors cause temporal distortions, including, for example, attention and feedback. The purpose of this work is thus to propose an integrative model of the metacognition of time perception; i.e. to integrate findings from conventional research on the perception of time into a metacognitive model (Nelson and Narens, 1990). Our first question was to verify the existence of knowledge about the perception of time, especially about the factors responsible for temporal distortions. The three experiments of our first study led us to create and validate the Metacognitive Questionnaire on Time (MQT). The latter consists of 24 items and highlights the existence of knowledge, more reliable for oneself (subscale Self, 12 items) than for others (subscale Others, 12 items), about two factors known to affect time perception: an Emotion factor (4 items) and an Attention factor (8 items). Secondly, we studied the influence of metacognitive processes on temporal judgments. In particular, we examined the influence of the metacognitive Control process on performance in two temporal tasks. Our hypothesis was that knowledge about time allows the regulation of temporal judgments. The results of our studies (Studies 2 and 3, composed of one and two experiments respectively) confirmed the importance of the Control process for temporal judgments. Thus, mere awareness of the role of attention in the perception of time reduces the attentional effect generally observed (Study 2). Moreover, explicit erroneous knowledge given to participants reduces, or even eliminates, the automatic emotional effect of anger on temporal judgments (Study 3). Finally, we explored the link between the Monitoring process and temporal judgments.
Our fourth study demonstrated the ability of individuals to estimate the accuracy of their temporal judgments under certain conditions. Indeed, it appears that individuals are sensitive to task difficulty and duration range; these two dimensions affect both temporal judgments and confidence estimates. Overall, the results of this work emphasize the importance of taking metacognitive processes into account in the study of the perception of time.

Changing Commuter Behaviour through Gamification
Kracheel, Martin UL

Doctoral thesis (2016)


This thesis explores how the dynamic context of mobility, more specifically the commute to and from work in the region of Luxembourg, can be changed through gamified mobile applications. The goal is to gain a better understanding of the innovative application area of gamified mobility and its potential, as well as to describe its implications for research and practice. This applied research is inspired by a participatory design approach, where information is gained by adopting a user perspective and through the process of conceptualising and applying methods in empirical studies. The four empirical studies described in this thesis employed a mixed-methodology approach consisting of focus group interviews, questionnaires and mobile applications. Within these studies, four prototypes were developed and tested, namely Coffee Games, Driver Diaries, Commutastic and Leave Now. The studies show concrete possibilities and difficulties in the interdisciplinary field of gamifying mobility behaviour. This dissertation is composed of seven chapters: Chapter I introduces the topics of mobility, games and behaviour; Chapter II presents a proof-of-concept study (Using Gamification and Metaphor to Design a Mobility Platform for Commuters); Chapter III explains the development and validation of a mobility research tool (Driver Diaries: a Multimodal Mobility Behaviour Logging Methodology); Chapter IV describes the development of a new gamified mobility application and its evaluation (Studying Commuter Behaviour for Gamifying Mobility); Chapter V provides an empirical assessment of the relevance of gamification and incentives for the evaluation of a mobile application (Changing Mobility Behaviour through Recommendations); and Chapter VI is a summary of how to change mobility behaviour through a multilevel design approach (Using Gamification to change Mobility Behaviour: Lessons Learned from two Approaches).
The four prototypes help to address the primary goal of this thesis, which is to contribute to new approaches to urban mobility by exploring gamified mobility applications. Coffee Games is a proof-of-concept, low-fidelity implementation of a real-life game that tests gamification elements and incentives for changing indoor-mobility behaviour. The findings of two iterations with a total of 19 participants show the adaptability of the concept to different contexts; the approach to changing indoor-mobility behaviour with this mock-up game was successful. Driver Diaries is a methodology to assess mobility behaviour in Luxembourg. The aim of this mobile, digital travel diary is to study features of cross-border commuter mobility and activities in Luxembourg in order to identify suitable elements (activities etc.) for a gamified mobility application such as Commutastic. After two rounds of data collection (Android and iOS), the records of 187 participants were analysed, and the results illustrate the mobility habits of the target audience. Commutastic is a mobility game application that motivates users to avoid peak-hour traffic by proposing alternative after-work activities. Analysing the data of 90 participants, we find that the timely offer of a nearby activity, along with gamification elements, engages users and motivates a third of them to adopt alternative mobility behaviours. Leave Now is a gamified recommendation application which rewards users for leaving their workplace outside of their usual schedule and explores the role of specific gamification elements in user motivation. The study, conducted with 19 participants, shows differences between an individual-play and a group-play condition regarding changes in leaving times.
The contributions of this thesis to gamification and mobility research and practice span from treating mobility participation as a game and an integral part of everyday life to methodologies for its successful implementation in the Luxembourgish context. The results show the advantages, disadvantages and restrictions of gamification in urban mobility contexts. This is an important step towards gamifying mobility behaviour change, and therefore towards research aiming at wellbeing in a better urban life.
