References of "Dissertations and theses"
Entwicklung eines EDV-basierten Frühwarnsystems für die Blankaalabwanderung an der Mosel (Development of a computer-based early warning system for silver eel downstream migration on the Moselle)
Wendling, David UL

Doctoral thesis (2017)

The eel (Anguilla anguilla L.) is a fish that is mainly found in European waters. The River Moselle is among the bodies of water inhabited by this species. During the downstream migration to their Atlantic spawning ground, silver eels often suffer severe to fatal injuries while passing the turbines at the barrages. This annual migration takes place within a relatively narrow timeframe. If the trigger or onset of this migration were known, eel mortality could therefore be reduced by fish-adapted turbine control or comparable protective measures. This thesis introduces an early warning system that predicts the periods of silver eel emigration by means of certain abiotic factors. On the basis of the information gleaned from different studies and the experience gained from many years of professional fishing, the environmental factors connected with silver eel migration were identified. Extensive data analyses were used to substantiate these findings. The water flow, the flow differences and the lunar phase were particularly relevant. Furthermore, the season and the water temperature were taken into account. In view of the different sources of information (experience and expert knowledge, data sets and the findings derived from them), a hybrid structure was chosen for the early warning system. After examining different methods from the fields of soft computing, mathematics and statistics, fuzzy logic (knowledge-based), case-based reasoning (case-based) and artificial neural networks (data-based) were selected. With each of these methods, an independent prediction model was designed, tested and optimized. Special characteristics found during the data analysis were taken into account through adequate modifiers. The models were tested on the basis of the available data sets for the Moselle.
It was shown that it is possible to correctly predict most of the situations with increased catches (suggesting a migration). Threshold values for a migration were defined based on the catches; the same was done for the forecast values. Thus, for the 1963 to 1973 data record, a total of 63% (artificial neural networks), 74% (fuzzy logic), and 83% (case-based reasoning) of the events with increased catches could be detected. Since not every situation with a favorable constellation of abiotic factors also led to a migration or higher catches, many "false" forecasts (up to 50%) were made as well. Good results were also achieved with data from recent years, where most events were identified. A stand-alone program was developed for the practical application of the prognosis models. This early warning system is a software application with a user interface for reading data and displaying prognosis values, into which the developed prognosis models are implemented. In addition, recommendations for use were compiled and presented.
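The thesis combines abiotic factors (flow, flow differences, lunar phase, season, water temperature) into migration forecasts via fuzzy logic, case-based reasoning and neural networks. As a purely illustrative sketch of the underlying threshold idea only (all factor names, weights and thresholds below are invented, not taken from the thesis):

```python
# Hypothetical sketch of a threshold-based migration warning from abiotic
# factors. Weights and cut-offs are invented for illustration.

def migration_warning(discharge_rise, new_moon, month, water_temp_c,
                      threshold=0.6):
    """Combine abiotic factors into a crude migration-likelihood score."""
    score = 0.0
    if discharge_rise > 0.2:          # rising flow (relative increase)
        score += 0.4
    if new_moon:                      # dark nights favour migration
        score += 0.3
    if month in (9, 10, 11):          # autumn migration season
        score += 0.2
    if 8.0 <= water_temp_c <= 16.0:   # assumed temperature window
        score += 0.1
    return score >= threshold

# A favourable autumn night with rising discharge triggers a warning:
print(migration_warning(0.3, True, 10, 12.0))   # True
print(migration_warning(0.0, False, 6, 20.0))   # False
```

The thesis replaces such crisp cut-offs with fuzzy membership functions and learned models, which is what allows it to grade borderline situations rather than switch abruptly at a threshold.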

MicroRNA regulation of hypoxia-induced tumorigenicity and metastatic potential of colon tumor-initiating cells
Ullmann, Pit UL

Doctoral thesis (2017)

The initiation and progression of colorectal cancer (CRC), which is the second most common cause of cancer mortality in Western countries, are driven by a subpopulation of highly tumorigenic cells, known as cancer stem cells or tumor-initiating cells (TICs). These self-renewing TICs are, to a large extent, responsible for therapy resistance, cancer recurrence, and metastasis formation. TICs are known to extensively interact with their microenvironment and can be influenced by various extrinsic factors, such as inflammatory signaling or tumor hypoxia. Previous expression profiling studies have shown that microRNAs (miRNAs) are involved in the regulation of CRC initiation and metastatic progression. Moreover, specific miRNAs have been identified as potential mediators of the cellular response to hypoxia. However, the molecular mechanisms that link hypoxia, miRNA expression, colon TIC regulation, and CRC progression remain poorly understood. Thus, the main objectives of this work were to analyze the effects of hypoxia on the miRNA expression of colon TICs and to identify miRNAs that regulate metastasis initiation. In a first phase, we generated and thoroughly characterized different stable TIC-enriched spheroid cultures (SCs), both from CRC cell lines and from primary patient material. Each established SC was thereby shown to display key TIC properties, including substantial plasticity, in vitro and in vivo self-renewal capacity and, most importantly, extensive tumorigenic potential. Moreover, the individual SCs displayed increased chemoresistance compared to adherent counterpart cultures. Taken together, we could demonstrate that the spheroid system is a suitable model to study colon TICs, thereby laying the methodological foundation for the following subparts of this project. In a second step, we studied the influence of hypoxia on the miRNA expression profile of our established SCs.
MiR-210-3p was thereby identified as the miRNA with the strongest response to hypoxia. Importantly, both hypoxic culture conditions and stable overexpression of miR-210 were shown to promote in vitro and in vivo self-renewal capacity of our colon TIC-enriched cultures. Moreover, by promoting lactate production and by repressing mitochondrial respiration, miR-210 was found to trigger the metabolic reprogramming of colon TICs towards a glycolytic and aggressive phenotype. Finally, we studied the role of miRNAs in the context of TIC-driven metastasis formation. By comparing primary tumor- and lymph node metastasis-derived SCs, we were able to identify the miR-371~373 cluster as an important regulator of tumorigenic and metastatic potential. Stable overexpression of the entire miR-371~373 cluster, followed by gene and protein expression analysis, enabled us to uncover the transforming growth factor beta receptor II (TGF-βRII) and the inhibitor of DNA binding 1 (Id1) as miR-371~373 cluster-responsive proteins. Most importantly, different sphere, tumor, and metastasis formation assays revealed that the miR-371~373/TGF-βRII/Id1 signaling axis regulates the self-renewal capacity and metastatic colonization potential of colon TICs. Taken together, our findings emphasize the strong plasticity of colon TICs and clearly illustrate that miRNAs can act as potent modulators of essential TIC properties. Accordingly, we could show that miR-210 and the miR-371~373 cluster are involved in metabolic reprogramming of TICs and in the regulation of metastasis formation, respectively. Altogether, our study contributes to a better understanding of the molecular mechanisms that drive TIC-induced tumor progression and may provide indications for interesting miRNA biomarker candidates and target molecules for future TIC-specific therapies.

Scarring effects across the life course and the transition to retirement
Ponomarenko, Valentina UL

Doctoral thesis (2017)

This thesis investigates the long-term negative effects of unemployment, labour market inactivity and atypical employment. Within the theoretical framework of cumulative advantages and disadvantages, it outlines how life-course differentiation creates gaps between age peers and cohorts, and how this leads to social inequality in old age. In three separate but linked studies, disadvantages across the career and their associations with retirement are analysed. The analyses focus on the outcomes of career disadvantages in the form of subjective and financial well-being. All three studies use the Survey of Health, Ageing and Retirement in Europe. This large and multidimensional panel study provides not only prospective but also retrospective data on European countries. The database is employed in different combinations across the studies. In the first and second study, the retrospective wave SHARELIFE provides information on employment biography and is related to well-being indicators from the regular waves. In the third study, the persistence of disadvantages upon retirement is examined with a causal model. The first study investigates how disadvantages affect the careers and subjective well-being of older Europeans. In two complementary analyses, the employment history of older Europeans is first studied with sequence analysis methods to show how non-employment and part-time work shape careers and to illustrate gender differences. In a second step, indicators of timing and duration, exemplifying the accumulation mechanisms, are related to subjective well-being in old age. The results indicate that women experience more turbulent careers with more periods of non-employment and part-time employment. However, this is not reflected in lower subjective well-being in old age. Accumulation of non-employment disadvantages is far more consequential for men than for women. Part-time employment has an ambiguous effect for women, but is not relevant for men.
In the second study, the household level is added and it is analysed how an adverse employment history relates to wealth accumulation. The results show that cumulative non-employment and employment in lower occupations carry significant disadvantages for wealth accumulation in old age. However, large differences between men and women remain. In particular, the household composition and household factors are decisive for the effectuality of these disadvantages. The third study addresses the scarring question, that is, whether career disadvantages persist beyond working life. The study examines whether non-employment disadvantages are still found in retirement and the extent to which well-being levels change in the transition to retirement. Well-being scores before and after retirement are obtained and unbiased effects of the retirement transition are identified. Results indicate that being unemployed before retirement is associated with an increase in life satisfaction, but this mainly represents a catching-up effect compared to employed persons transitioning to retirement. Findings are robust to selection into unemployment and country differences.

ESSAYS IN PRICE DISCOVERY
Wells, René Joseph Guy UL

Doctoral thesis (2017)

I claim that uninformed traders prefer ending the size of their orders with a zero (e.g. 110 shares) while informed traders do not, creating an information channel and providing a signal. I propose the Last Digit Hypothesis (LDH): i) some traders exhibit a last digit preference for the digit 0 and other traders do not, while ii) the latter are better able to trade on information than the former. The LDH predicts that a trade arising from a marketable order with a size ending in 0 on average contributes less to price discovery than other trades. My empirical findings support the LDH. However, the LDH is not an equilibrium, since informed traders have an incentive to mimic the preferences of uninformed traders to avoid detection and face few constraints or costs in doing so. It is puzzling that I find no evidence of such mimicking. I offer plausible explanations for this finding. I carefully test the Stealth Trading Hypothesis (STH) using comprehensive datasets for the three largest European equity markets over 2002 to 2015, a period that saw trading move into a new era. I find little support for the STH; in fact, the commonality between these three distinct markets is the convergence over time of price discovery by trade size. This could be explained by informed traders, once facing fewer frictions, being better able to mimic the trade size choices of uninformed traders, and/or by more price discovery now going through resting limit orders.
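The signal the LDH exploits is mechanical to extract: classify each trade by whether its order size ends in the digit 0. A minimal sketch (the trade sizes below are invented examples, not data from the thesis):

```python
# Flag trades whose order size ends in zero, the pattern the Last Digit
# Hypothesis associates with uninformed traders.

def last_digit_zero(order_size):
    """True if the order size ends in the digit 0 (e.g. 110, 500)."""
    return order_size % 10 == 0

trades = [110, 237, 500, 1043, 60]
flagged = [size for size in trades if last_digit_zero(size)]
print(flagged)  # [110, 500, 60]
```

In the empirical tests, trades in the two groups are then compared on their average contribution to price discovery.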

SLA Violation Detection Model and SLA Assured Service Brokering (SLaB) in Multi-Cloud Architecture
Wagle, Shyam Sharan UL

Doctoral thesis (2017)

Cloud brokering facilitates Cloud Service Users (CSUs) in finding cloud services according to their requirements. In current practice, CSUs or Cloud Service Brokers (CSBs) select cloud services according to the Service Level Agreement (SLA) published by Cloud Service Providers (CSPs) on their websites. We observe that most CSPs do not fulfill the service commitments stated in their SLAs. Verifying delivered cloud service performance against the SLA commitments of CSPs gives CSBs additional grounds for trust when recommending services to CSUs. In this thesis, we propose an SLA-assured service-brokering framework that considers both the committed and the delivered SLA of CSPs when recommending cloud services to users. For the evaluation of CSP performance, two techniques are proposed: Heat Map and Intuitionistic Fuzzy Logic (IFL), which include both directly measurable and non-measurable parameters in the performance evaluation of CSPs. These two techniques are implemented using real data measured from CSPs. Both performance evaluation techniques rank/sort CSPs according to their service performance. The results show that the Heat Map technique is more transparent and consistent in CSP performance evaluation than the IFL technique. As cloud computing is a location-independent technology, CSPs should respect the current regulatory framework when delivering services to users. In this work, the regulatory compliance status of CSPs is also analyzed and visualized in the performance heat map table to reflect the legal status of CSPs. Moreover, missing points in their terms of service and SLA documents are analyzed, with recommendations to add them to the contract documents. Under the revised European data protection regulation (GDPR), a data protection impact assessment (DPIA) is going to be mandatory for all organizations/tools.
The decision recommendation tool developed using the above-mentioned evaluation techniques may pose potential harm to individuals, as it assesses data from multiple CSPs. A DPIA is therefore carried out to assess the potential harm/risks to individuals posed by the decision recommendation tool and the precautions to be taken in it to minimize possible data privacy risks. To help CSUs select cloud services from a multi-cloud environment, service pattern analysis techniques and prediction of the future performance behavior of CSPs are also proposed in this thesis. Prediction patterns and error measurements show that automatic prediction methods can be implemented for short as well as longer time periods.
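The core of comparing committed with delivered SLA can be sketched as a gap score per provider: delivered minus committed, with providers ranked by that gap. Purely illustrative; provider names, the uptime metric and all numbers below are invented, and the thesis's Heat Map additionally covers non-measurable parameters:

```python
# Hypothetical sketch: score each provider by how its measured uptime
# compares with the uptime committed in its SLA, then rank by the gap.

committed = {"CSP-A": 99.95, "CSP-B": 99.90, "CSP-C": 99.99}
measured  = {"CSP-A": 99.97, "CSP-B": 99.50, "CSP-C": 99.99}

# Positive gap: the provider delivered at least what it promised.
gap = {csp: round(measured[csp] - committed[csp], 2) for csp in committed}
ranking = sorted(gap, key=gap.get, reverse=True)
print(ranking)  # ['CSP-A', 'CSP-C', 'CSP-B']
```

In a heat map presentation, each gap cell would simply be colour-coded, which is what makes that technique more transparent than an aggregated fuzzy score.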

Development of an integrated omics in silico workflow and its application for studying bacteria-phage interactions in a model microbial community
Narayanasamy, Shaman UL

Doctoral thesis (2017)

Microbial communities are ubiquitous and dynamic systems that inhabit a multitude of environments. They underpin natural as well as biotechnological processes, and are also implicated in human health. The elucidation and understanding of these structurally and functionally complex microbial systems using a broad spectrum of toolkits ranging from in situ sampling, high-throughput data generation ("omics"), bioinformatic analyses, computational modelling and laboratory experiments is the aim of the emerging discipline of Eco-Systems Biology. Integrated workflows which allow the systematic investigation of microbial consortia are being developed. However, in silico methods for analysing multi-omic data sets are so far typically lab-specific, applied ad hoc, limited in terms of their reproducibility by different research groups and suboptimal in the amount of data actually being exploited. To address these limitations, the present work initially focused on the development of the Integrated Meta-omic Pipeline (IMP), a large-scale, reference-independent bioinformatic analysis pipeline for the integrated analysis of coupled metagenomic and metatranscriptomic data. IMP is an elaborate pipeline that incorporates robust read preprocessing, iterative co-assembly, analyses of microbial community structure and function, automated binning as well as genomic signature-based visualizations. The IMP-based data integration strategy greatly enhances overall data usage, output volume and quality, as demonstrated using relevant use-cases. Finally, IMP is encapsulated within a user-friendly implementation using Python while relying on Docker for reproducibility. The IMP pipeline was then applied to a longitudinal multi-omic dataset derived from a model microbial community from an activated sludge biological wastewater treatment plant with the explicit aim of following bacteria-phage interaction dynamics using information from the CRISPR-Cas system.
This work provides a multi-omic perspective of community-level CRISPR dynamics, namely changes in CRISPR repeat and spacer complements over time, demonstrating that these are heterogeneous, dynamic and transcribed genomic regions. Population-level analysis of two lipid-accumulating bacterial species associated with 158 putative bacteriophage sequences enabled the observation of phage-host population dynamics. Several putatively identified bacteriophages were found to occur at much higher abundances than other phages, and these abundance peaks usually do not overlap with those of other putative phages. In addition, several RNA-based CRISPR targets were found to occur at high abundance. In summary, the present work describes the development of a new bioinformatic pipeline for the analysis of coupled metagenomic and metatranscriptomic datasets derived from microbial communities and its application to a study focused on the dynamics of bacteria-virus interactions. Finally, this work demonstrates the power of integrated multi-omic investigation of microbial consortia in converting high-throughput next-generation sequencing data into new insights.

Financial Intermediation and Macroeconomic Fluctuations
Chevallier, Claire Océane UL

Doctoral thesis (2017)

Private Functional Encryption – Hiding What Cannot Be Learned Through Function Evaluation
Delerue Arriaga, Afonso UL

Doctoral thesis (2017)

Functional encryption (FE) is a generalization of many commonly employed cryptographic primitives, such as keyword search encryption (KS), identity-based encryption (IBE), inner-product encryption (IPE) and attribute-based encryption (ABE). In an FE scheme, the holder of a master secret key can issue tokens associated with functions of its choice. Possessing a token for f allows one to recover f(m), given an encryption of m. As it is important that ciphertexts preserve data privacy, in various scenarios it is also important that tokens do not expose their associated function. A notable example is the use of FE to search over encrypted data without revealing the search query. Function privacy is an emerging notion that aims to address this problem. The difficulty of formalizing it lies in the verification functionality, as the holder of a token for function f may encrypt arbitrary messages using the public key and obtain a large number of evaluations of f. Prior privacy models in the literature were fine-tuned for specific functionalities, did not model correlations between ciphertexts and decryption tokens, or fell under strong uninstantiability results. Our first contribution is a new indistinguishability-based privacy notion that overcomes these limitations and is flexible enough to capture all previously proposed indistinguishability-based definitions as particular cases.
The second contribution of this thesis is five constructions of private functional encryption supporting different classes of functions and meeting varying degrees of security: (1) a white-box construction of an Anonymous IBE scheme based on composite-order groups, shown to be secure in the absence of correlated messages; (2) a simple and functionality-agnostic black-box construction from obfuscation, also shown to be secure in the absence of correlated messages; (3) a more evolved and still functionality-agnostic construction that achieves a form of function privacy that tolerates limited correlations between messages and functions; (4) a KS scheme achieving privacy in the presence of correlated messages beyond all previously proposed indistinguishability-based security definitions; (5) a KS construction that achieves our strongest notion of privacy (but relies on a more expressive form of obfuscation than the previous construction). The standard approach in FE is to model complex functions as circuits, which yields inefficient evaluations over large inputs. As our third contribution, we propose a new primitive that we call "updatable functional encryption" (UFE), where instead of circuits we deal with RAM programs, which are closer to how programs are expressed in von Neumann architecture. We impose strict efficiency constraints and we envision tokens that are capable of updating the ciphertext, over which other tokens can subsequently be executed. We define a security notion for our primitive and propose a candidate construction from obfuscation, which serves as a starting point towards the realization of other schemes and contributes to the study of how to compute RAM programs over public-key encrypted data.

MULTI-OBJECTIVE CLOUD BROKERING OPTIMIZATION TAKING INTO ACCOUNT UNCERTAINTY AND LOAD PREDICTION
Nguyen, Anh Quan UL

Doctoral thesis (2017)

Energy-aware cloud broker optimization in a multi-cloud system uses a metaheuristic method for a multi-objective optimization problem that focuses on reducing cost as well as improving energy efficiency. This broad topic is motivated by the energy-awareness challenge at the level of the cloud brokerage service. Cloud brokering based on multi-objective optimization is characterized by tightly coupled constraints, a dynamic environment, and changing objectives and priorities. This leads to investigating a specific aspect of the cloud brokerage service: the virtual machine placement problem.
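One common way to make a multi-objective placement problem tractable for a metaheuristic is weighted-sum scalarization of the competing objectives. The sketch below illustrates that general idea only; it is not the thesis's method, and the candidate placements, costs and weights are invented:

```python
# Illustrative weighted-sum scalarization of the two broker objectives
# (monetary cost vs. energy use) for candidate VM placements.

candidates = {
    "placement-1": {"cost": 120.0, "energy": 80.0},
    "placement-2": {"cost": 100.0, "energy": 110.0},
    "placement-3": {"cost": 150.0, "energy": 70.0},
}

def scalarize(placement, w_cost=0.5, w_energy=0.5):
    """Collapse the two objectives into a single score to minimize."""
    return w_cost * placement["cost"] + w_energy * placement["energy"]

best = min(candidates, key=lambda name: scalarize(candidates[name]))
print(best)  # placement-1
```

In practice a metaheuristic (e.g. an evolutionary algorithm) explores the placement space instead of enumerating it, and varying the weights traces out different trade-offs between cost and energy.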

Tax havens under international pressure: a game theoretical approach
Pulina, Giuseppe UL

Doctoral thesis (2017)

Optical Characterization of Cu2ZnSnSe4 Thin Films
Sendler, Jan Michael UL

Doctoral thesis (2017)

Images of Galois representations and p-adic models of Shimura curves
Amoros Carafi, Laia UL

Doctoral thesis (2016)

The thesis treats two questions situated in the Langlands program, which is one of the most active and important areas in current number theory and arithmetic geometry. The first question concerns the study of images of Galois representations into Hecke algebras coming from modular forms over finite fields, and the second one deals with p-adic models of Shimura curves and their bad reduction. Consequently, the thesis is divided into two parts. The first part is concerned with the study of images of Galois representations that take values in Hecke algebras of modular forms over finite fields. The main result of this part is a complete classification of the possible images of 2-dimensional Galois representations with coefficients in local algebras over finite fields under the hypotheses that: (i) the square of the maximal ideal is zero, (ii) that the residual image is big (in a precise sense), and (iii) that the coefficient ring is generated by the traces. In odd characteristic, the image is completely determined by these conditions; in even characteristic the classification is much richer. In this case, the image is uniquely determined by the number of different traces of the representation, a number which is given by an easy formula. As an application of these results, the existence of certain p-elementary abelian extensions of big non-solvable number fields can be deduced. Whereas some aspects of class field theory are accessible through this approach, it can be applied to huge fields for which standard techniques totally fail. The second part of the thesis consists of an approach to p-adic uniformisations of Shimura curves X(Dp,N) through a combination of different techniques concerning rigid analytic geometry and arithmetic of quaternion orders.
The results in this direction lean on two methods: one is based on the information provided by certain Mumford curves covering Shimura curves and the second one on the study of Eichler orders of level N in the definite quaternion algebra of discriminant D. Combining these methods, an explicit description of fundamental domains associated to p-adic uniformisation of families of Shimura curves of discriminant Dp and level N ≥ 1, for which the one-sided ideal class number h(D,N) is 1, is given. The method presented in this thesis enables one to find Mumford curves covering Shimura curves, together with a free system of generators for the associated Schottky groups, p-adic good fundamental domains and their stable reduction-graphs. As an application, general formulas for the reduction-graphs with lengths at p of the considered families of Shimura curves can be computed.

Cohomologies and derived brackets of Leibniz algebras
Cai, Xiongwei UL

Doctoral thesis (2016)

In this thesis, we work on the structure of Leibniz algebras and develop cohomology theories for them. The motivation comes from: • Roytenberg, Stienon-Xu and Ginot-Grutzmann's work on standard and naive cohomology of Courant algebroids (Courant-Dorfman algebras). • Kosmann-Schwarzbach, Roytenberg and Alekseev-Xu's constructions of derived brackets for Courant algebroids. • The classical equivariant cohomology theory and the generalized geometry theory. This thesis consists of three parts: 1. We introduce standard cohomology and naive cohomology for a Leibniz algebra. We discuss their properties and show that they are isomorphic. By similar methods, we prove a generalization of Ginot-Grutzmann's theorem on transitive Courant algebroids, which was conjectured by Stienon-Xu. The relation between standard complexes of a Leibniz algebra and its corresponding crossed product is also discussed. 2. We observe a canonical 3-cochain in the standard complex of a Leibniz algebra. We construct a bracket on the subspace consisting of so-called representable cochains, and prove that the subspace becomes a graded Poisson algebra. Finally we show that for a fat Leibniz algebra, the Leibniz bracket can be represented as a derived bracket. 3. Inspired by the notion of a Lie algebra action and the idea of generalized geometry, we introduce the notion of a generalized action of a Lie algebra g on a smooth manifold M, to be a homomorphism of Leibniz algebras from g to the generalized tangent bundle TM+T*M. We define the interior product and Lie derivative so that the standard complex of TM+T*M becomes a g-differential algebra, then we discuss its equivariant cohomology. We also study the equivariant cohomology for a subcomplex of a Leibniz complex.
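For context, a (left) Leibniz algebra is a vector space with a bilinear bracket that need not be antisymmetric; in place of antisymmetry plus the Jacobi identity, the bracket satisfies the Leibniz identity, i.e. each left adjoint action is a derivation of the bracket:

```latex
[x,[y,z]] \;=\; [[x,y],z] \;+\; [y,[x,z]]
```

When the bracket happens to be antisymmetric, this identity reduces to the Jacobi identity, so Lie algebras are exactly the antisymmetric Leibniz algebras.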

Essays on Inequality, Public Policy, and Banking
Mavridis, Dimitrios UL

Doctoral thesis (2016)

COLLABORATIVE RULE-BASED PROACTIVE SYSTEMS: MODEL, INFORMATION SHARING STRATEGY AND CASE STUDIES
Dobrican, Remus-Alexandru UL

Doctoral thesis (2016)

The Proactive Computing paradigm provides a new way to make the multitude of computing systems, devices and sensors spread through our modern environment work proactively for human beings and be active on our behalf. In this paradigm, users are placed at the top of the interactive loop and the underlying IT systems are automated to perform even the most complex tasks more autonomously. This dissertation focuses on providing further means, at both theoretical and applied levels, to design and implement proactive systems. It is shown how smart mobile, wearable and/or server applications can be developed with the proposed Rule-Based Middleware Model for computing proactively and for operating on multiple platforms. To represent and reason about the information that a proactive system needs to know about the environment in which it performs its computations, a new technique called Proactive Scenario is proposed. As an extension of its scope and properties, and to achieve global reasoning over inter-connected proactive systems, a new collaborative technique called Global Proactive Scenario is then proposed. Furthermore, to show their potential, three real-world case studies of (collaborative) proactive systems were explored to validate the proposed development methodology and its related technological framework in domains such as e-Learning, e-Business and e-Health.
Results from these experiments confirm that software applications designed along the lines of the proposed rule-based proactive system model, together with the concepts of local and global proactive scenarios, are capable of actively searching for the information they need, of automating tasks and procedures that do not require the user's input, of detecting various changes in their context and taking measures to adapt to them in order to address the needs of the people who use these systems, and of performing collaboration and global reasoning over multiple proactive engines spread across different networks.
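The rule-based idea at the heart of such a system can be reduced to condition-action pairs evaluated against the current context. A minimal sketch of that general pattern, not the dissertation's middleware; the rule names, context keys and actions are invented:

```python
# Minimal rule engine: each rule pairs a condition over the current
# context with an action, and the engine fires every matching rule.

def run_rules(context, rules):
    """Evaluate all rules against the context; return names of fired rules."""
    fired = []
    for name, condition, action in rules:
        if condition(context):
            fired.append(name)
            action(context)
    return fired

log = []
rules = [
    ("low-battery", lambda c: c["battery"] < 20,
     lambda c: log.append("dim screen")),
    ("new-message", lambda c: c["unread"] > 0,
     lambda c: log.append("notify user")),
]
print(run_rules({"battery": 15, "unread": 3}, rules))  # ['low-battery', 'new-message']
```

A proactive engine would run such an evaluation loop continuously as the context changes, and a global scenario would additionally share context and conclusions across several engines.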

Depression and ostracism: the role of attachment, self-esteem and rejection sensitivity for treatment success and depressive symptom deterioration
Borlinghaus, Jannika UL

Doctoral thesis (2016)

The current research programme is based on three studies investigating the ramifications of ostracism in inpatients diagnosed with depression. It aims at understanding responses to ostracism in depressed patients, and the implications for psychotherapy and symptom deterioration, using experimental (study 1) and longitudinal (studies 2 and 3) research designs. Investigating psychological factors such as attachment, self-esteem and Rejection Sensitivity, we found that attachment affects the immediate physiological reactions to ostracism (study 1), that state self-esteem after an ostracism experience impacts therapy outcome (study 2), and that Rejection Sensitivity, the cognitive-affective disposition to anxiously expect and overreact to rejection, predicts deterioration of depressive symptoms six months after treatment (study 3). These results highlight the salience of attachment when investigating reactions to ostracism, and the importance of Rejection Sensitivity over the course of therapy as an indicator of therapy outcome and risk of relapse.

Analysis of the impact of ROS in networks describing neurodegenerative diseases
Ignatenko, Andrew UL

Doctoral thesis (2016)

In this thesis, a model of the ROS management network is built using the domino principle. The model offers insight into the design principles underlying the ROS management network and elucidates its role in diseases such as cancer and Parkinson's disease (PD). It is validated against experimental data. The model is used for an in silico study of ROS management dynamics under oxidative stress, which highlights both adaptation to stress and the accumulation of damage under repeated stress. This study also points to potential routes towards personalized treatment of insufficient ROS management. Different ways of controlling the ROS management network are demonstrated using an optimal control approach. The results obtained could be used to seek treatment strategies that correct ROS management failures caused by oxidative stress, neurodegenerative diseases, and related conditions, or, conversely, to develop means of controlled cell death that might be useful in cancer research.
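The kind of in silico stress experiment described above can be illustrated with a deliberately toy two-variable ODE, in which antioxidant capacity is induced by the ROS level and thereby produces adaptation to sustained stress. The equations, parameter names, and values below are invented for illustration and are not the thesis's actual model:

```python
# Toy illustration (not the thesis model) of exploring ROS management
# dynamics in silico: ROS level `r` is produced by an external stress and
# cleared in proportion to antioxidant capacity `a`, which is itself
# induced by ROS. Simple forward-Euler integration.

def simulate(stress, steps=2000, dt=0.01):
    r, a = 0.0, 0.1                      # initial ROS level, antioxidant capacity
    k_clear, k_ind, k_deg = 1.0, 0.5, 0.1  # invented rate constants
    trace = []
    for i in range(steps):
        dr = stress(i * dt) - k_clear * a * r  # production minus clearance
        da = k_ind * r - k_deg * a             # ROS-induced antioxidant response
        r += dr * dt
        a += da * dt
        trace.append((r, a))
    return trace

# Constant oxidative stress: ROS first rises, then the induced antioxidant
# response pulls it back down (adaptation).
trace = simulate(lambda t: 1.0)
peak = max(r for r, _ in trace)
final_r, final_a = trace[-1]
print(round(peak, 2), round(final_r, 2), round(final_a, 2))
```

Repeating the stress function with pauses would, in the same framework, expose the stress accumulation effect the abstract mentions.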

Conversion of CO2, NOx and/or SO2 using activated carbon technology
Chaouni, Wafaâ

Doctoral thesis (2016)

Boosting Static Security Analysis of Android Apps through Code Instrumentation
Li, Li UL

Doctoral thesis (2016)

Within a few years, Android has established itself as a leading platform in the mobile market, with over one billion monthly active Android users. To serve these users, the official market, Google Play, hosts around 2 million apps, which have penetrated a variety of user activities and play an essential role in daily life. However, this penetration has also opened doors for malicious apps, presenting big threats that can lead to severe damage. To alleviate the security threats posed by Android apps, the literature offers a large body of work proposing static and dynamic approaches for identifying and managing security issues in the mobile ecosystem. Static analysis in particular, which does not require actually executing the code of Android apps, has been used extensively for market-scale analysis. In order to better understand how static analysis is applied, we conducted a systematic literature review (SLR) of related research for Android, studying influential research papers published in the last five years (from 2011 to 2015). Our in-depth examination of those papers reveals, among other findings, that static analysis is largely performed to uncover security and privacy issues. The SLR also highlights that no single work has been proposed to tackle all the challenges of static analysis of Android apps. Existing approaches indeed fail to yield sound results in various analysis cases, given the different specificities of Android programming. Our objective is thus to reduce the analysis complexity of Android apps in a way that lets existing approaches also succeed on their previously failed cases. To this end, we propose to instrument the app code to transform a given hard problem into an easily resolvable one (e.g., reducing an inter-app analysis problem to an intra-app analysis problem). As a result, our code instrumentation boosts existing static analyzers in a non-invasive manner (i.e., without modifying those analyzers).
In this dissertation, we apply code instrumentation to solve three well-known challenges of static analysis of Android apps, allowing existing static security analyses to 1) be inter-component communication (ICC) aware; 2) be reflection aware; and 3) cut out common libraries. ICC is a challenge for static analysis. Indeed, the ICC mechanism is driven at the framework level rather than the app level, leaving it invisible to app-targeted static analyzers. As a consequence, static analyzers can only build an incomplete control-flow graph (CFG), which prevents a sound analysis. To support ICC-aware analysis, we devise an approach called IccTA, which instruments app code by adding glue code that directly connects components using traditional Java class access mechanisms (e.g., explicit new instantiation of target components). Reflection is a challenge for static analysis as well, because it also confuses the analysis context. To support reflection-aware analysis, we provide DroidRA, a tool-based approach, which instruments Android apps to explicitly replace reflective calls with their corresponding traditional Java calls. The mapping from reflective calls to traditional Java calls is inferred through a solver, where the resolution of reflective calls is reduced to a composite constant propagation problem. Libraries are pervasively used in Android apps. On the one hand, their presence increases the time/memory consumption of static analysis. On the other hand, they may lead to false positives and false negatives for static approaches (e.g., clone detection and machine learning-based malware detection). To mitigate this, we propose to instrument Android apps to cut out a set of automatically identified common libraries from the app code, so as to improve static analyzers' performance in terms of time/memory as well as accuracy.
To sum up, in this dissertation we leverage code instrumentation to boost existing static analyzers, allowing them to yield sounder results and to perform quicker analyses. Thanks to the aforementioned approaches, we are now able to automatically identify malicious apps. However, it is still unknown how malicious payloads are introduced into those malicious apps. As a perspective for future research, we conclude this dissertation with a thorough dissection of piggybacked apps (whose malicious payloads are easily identifiable), in an attempt to understand how malicious apps are actually built.
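The core idea behind reflection-aware instrumentation can be illustrated compactly using Python's own reflection (DroidRA itself rewrites Java/Android code; the snippet below is only an analogy, and all names in it are invented). Once a solver resolves the reflective target to a constant, the indirect call can be replaced with a direct one that static analysis can follow:

```python
# Analogy for reflection-aware instrumentation, using Python reflection
# for brevity. DroidRA performs the equivalent transformation on Java
# reflective calls in Android apps.

class Api:
    def send(self, data):
        return f"sent:{data}"

# Before instrumentation: the call target is a runtime string, so a static
# analyzer cannot tell which method is invoked.
def reflective_call(obj, method_name, arg):
    return getattr(obj, method_name)(arg)

# After instrumentation: the resolved target is called directly, making the
# control flow explicit in the code itself.
def direct_call(obj, arg):
    return obj.send(arg)

api = Api()
assert reflective_call(api, "send", "x") == direct_call(api, "x")
print(direct_call(api, "x"))  # → sent:x
```

In the Java setting, the string `"send"` would typically be recovered by the constant propagation solver mentioned above before the rewrite is applied.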

Astrocyte phenotype during differentiation: implication of the NFkB pathway
Birck, Cindy UL

Doctoral thesis (2016)

THREE-DIMENSIONAL MICROFLUIDIC CELL CULTURE OF STEM CELL-DERIVED NEURONAL MODELS OF PARKINSON'S DISEASE
Lucumi-Moreno, Edinson UL

Doctoral thesis (2016)

Cell culture models in 3D have become an essential tool for the implementation of cellular models of neurodegenerative diseases. Parkinson's disease (PD) is characterized by the loss of dopaminergic neurons from the substantia nigra. The study of PD at the cellular level requires a cellular model that recapitulates the complexity of the neurons affected in PD. Induced pluripotent stem cell (iPSC) technology is an efficient method for the derivation of dopaminergic (DA) neurons from human neuroepithelial stem cells (hNESCs), and has proven to be a suitable tool for developing cellular models of PD. To obtain DA neurons from hNESCs in a 3D culture, a protocol based on the use of small molecules and growth factors was implemented in a microfluidic device (OrganoPlate). This non-PDMS device is based on phaseguide technology (capillary pressure barriers that guide the liquid-air interface) and the hydrogel Matrigel as an extracellular matrix surrogate. To compare the morphological features and electrophysiological activity of neurons differentiated from wild-type hNESCs with those of differentiated neurons carrying the LRRK2 G2019S mutation, a calcium imaging assay based on the calcium-sensitive dye Fluo-4, together with image analysis methods, was implemented. Additionally, several aspects of fluid flow dynamics, the rheological properties of Matrigel and its use as a surrogate extracellular matrix were investigated. Final characterization of the differentiated neuronal population was done using an immunostaining assay and microscopy techniques. The yields of differentiated dopaminergic neurons in the 2-lane OrganoPlate were in the range of 13% to 27%. Morphological (length of processes) and electrophysiological (firing patterns) characteristics of wild-type differentiated neurons and of those carrying the LRRK2 G2019S mutation were determined by applying an image analysis pipeline. Velocity profiles and shear stress of fluorescent beads in Matrigel flowing in the culture lanes of the 2-lane OrganoPlate were estimated using particle image velocimetry techniques. In this thesis, we integrate two new technologies to establish a new in vitro 3D cell-based model for studying several aspects of PD at the cellular level, aiming to establish a microfluidic cell culture experimental platform to study PD using a systems biology approach.
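One step common to calcium imaging analyses of the kind described above is converting a raw fluorescence trace to ΔF/F0 and detecting firing events. The sketch below shows this textbook computation with invented numbers; it is not the thesis's actual pipeline, and the threshold-crossing event rule is a simplifying assumption:

```python
# Generic sketch of one standard calcium-imaging step: normalise a raw
# Fluo-4 intensity trace to dF/F0, then count candidate firing events as
# upward threshold crossings. Data and parameters are illustrative.

def delta_f_over_f(trace, baseline_frames=5):
    # F0 is estimated from the first few (quiescent) frames.
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

def count_events(dff, threshold=0.5):
    """Count upward crossings of the threshold (candidate firing events)."""
    events = 0
    above = False
    for v in dff:
        if v > threshold and not above:
            events += 1
        above = v > threshold
    return events

# A toy trace with two fluorescence transients on a baseline of ~100.
raw = [100, 101, 99, 100, 100, 180, 210, 150, 102, 100, 190, 120, 100]
dff = delta_f_over_f(raw)
print(count_events(dff))  # → 2
```

Per-neuron firing patterns extracted this way are what allow wild-type and LRRK2-G2019S populations to be compared quantitatively.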

To Share or not to Share: Access Control and Information Inference in Social Networks
Zhang, Yang UL

Doctoral thesis (2016)

Online social networks (OSNs) have been the most successful online applications of the past decade. Leading players in the business, including Facebook, Twitter and Instagram, attract a huge number of users. Nowadays, OSNs have become a primary way for people to connect, communicate and share life moments. Although OSNs have brought a lot of convenience to our lives, users' privacy, on the other hand, has become a major concern due to the large amount of personal data shared online. In this thesis, we study users' privacy in social networks from two aspects, namely access control and information inference. Access control is a mechanism, provided by OSNs, for users themselves to regulate who can view their resources. Access control schemes in OSNs are relationship-based, i.e., a user can define access control policies that allow others who are in a certain relationship with him to access his resources. Current OSNs have deployed multiple access control schemes, but most of these schemes do not satisfy users' expectations, due to limitations in expressiveness and usability. There are mainly two types of information that users share in OSNs, namely their activities and their social relations. This information has provided an unprecedented chance for academia to understand human society and for industry to build appealing applications, such as personalized recommendation. However, the large quantity of data can also be used to infer a user's personal information, even when it is not shared by the user in OSNs. Since this thesis concentrates on users' privacy in online social networks from the two aspects of access control and information inference, it is organized into two parts. The first part of this thesis addresses access control in social networks from three perspectives. First, we propose a formal framework based on a hybrid logic to model users' access control policies.
This framework incorporates the notion of public information and provides users with a fine-grained way to control who can view their resources. Second, we design cryptographic protocols to enforce access control policies in OSNs. Under these protocols, a user can allow others to view his resources without leaking private information. Third, major OSN companies have deployed blacklists for users to enforce extra access control on top of the normal access control policies. We formally model blacklists with the help of a hybrid logic and propose efficient algorithms to implement them in OSNs. The second part of this thesis concentrates on the inference of users' information in OSNs using machine learning techniques. The targets of our inference are users' activities, represented by mobility, and their social relations. First, we propose a method that uses a user's social relations to predict his locations. This method adopts a user's social community information to construct the location predictor and performs the inference with machine learning techniques. Second, we focus on inferring the friendship between two users based on the common locations they have been to. We propose a notion, location sociality, that characterizes the extent to which a location is suitable for conducting social activities, and use this notion for friendship prediction. Experiments on real-life social network datasets have demonstrated the effectiveness of both inferences.
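The intuition behind inferring friendship from common locations can be sketched with a much simpler proxy than the thesis's location-sociality model: users whose sets of visited locations overlap strongly are more likely to be friends. Plain Jaccard similarity, the threshold, and the toy data below are all illustrative assumptions:

```python
# Minimal sketch of friendship inference from co-visited locations, using
# Jaccard similarity of location sets. The thesis's approach additionally
# weights locations by their "sociality"; this is only the bare intuition.

def location_similarity(visits_a, visits_b):
    a, b = set(visits_a), set(visits_b)
    if not a | b:
        return 0.0
    return len(a & b) / len(a | b)  # Jaccard index of visited locations

def predict_friends(visits_a, visits_b, threshold=0.4):
    # Predict friendship when location overlap exceeds a chosen threshold.
    return location_similarity(visits_a, visits_b) >= threshold

alice = ["cafe", "gym", "office", "park"]
bob = ["cafe", "gym", "office", "cinema"]
carol = ["airport", "mall"]
print(predict_friends(alice, bob))    # → True
print(predict_friends(alice, carol))  # → False
```

Weighting each shared location by how social it is (a café counts for more than an airport) is precisely the refinement the location-sociality notion provides.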

Employee leadership and social media use in the day-to-day management of Generation Y leaders - An exploratory analysis using a mixed-methods approach
Feltes, Florian UL

Doctoral thesis (2016)

The topic of this thesis is the qualitative and quantitative evaluation of the leadership behaviour, and therefore the leadership style, of Generation Y (GenY), considering the use of social media in day-to-day management. It examines the question of how GenY leaders lead and how they use social media in this context, based on a sequential mixed-methods approach of qualitative interviews and a quantitative online questionnaire. Using qualitative content analysis, it examines 25 qualitative interviews concerning the following aspects: the leadership behaviour of Generation Y, generation-based differences in leadership and the varying strength of leadership styles, the influence of contextual factors such as hierarchies, sector and company size on leadership style and social media use, the use of social media in day-to-day management, and, finally, connections between the applied leadership styles and the social media usage of GenY leaders. The findings and tendencies were then verified in an online questionnaire. The results of the online questionnaire [self-evaluation of leaders (N=406), bottom-up evaluation by employees (N=622)] show a significant discrepancy between the leaders' statements and those of the employees. However, there are clear results and tendencies that confirm the findings of the qualitative study. It was established that GenY leaders show characteristics of task-oriented, person-oriented, transactional and transformational leadership. GenY leadership is characterised by clear outcome orientation, flat hierarchies and feedback. The use of social media varies considerably, depending for example on the context in which the leader works, e.g. sector and level of management. In summary, it can be stated that there is a connection between the strength of the leadership style and the usage of social media in day-to-day management.

Dynamic Vehicular Routing in Urban Environments
Codeca, Lara UL

Doctoral thesis (2016)

Traffic congestion is a persistent issue that most people living in a city have to face every day. Traffic density is constantly increasing and, in many metropolitan areas, the road network has reached its limits and cannot easily be extended to meet the growing traffic demand. Intelligent Transportation Systems (ITS) are a worldwide trend in traffic monitoring that uses technology and infrastructure improvements in advanced communication and sensors to tackle transportation issues such as mobility efficiency, safety, and traffic congestion. The purpose of ITS is to take advantage of all available technologies to improve every aspect of mobility and traffic. Our focus in this thesis is to use these advancements in technology and infrastructure to mitigate traffic congestion. We discuss the state of the art in traffic flow optimization methods, their limitations, and the benefits of a new point of view. The traffic monitoring mechanism that we propose uses vehicular telecommunication to gather the traffic information that is fundamental to creating a consistent overview of the traffic situation, to provisioning real-time information to drivers, and to optimizing their routes. In order to study the impact of dynamic rerouting on the traffic congestion experienced in the urban environment, we need a reliable representation of the traffic situation. In this thesis, traffic flow theory, together with mobility models and propagation models, is the basis for a simulation environment capable of providing realistic and interactive urban mobility, which is used to test and validate our solution for mitigating traffic congestion. The topology of the urban environment plays a fundamental role in traffic optimization, not only in terms of mobility patterns, but also in terms of the connectivity and infrastructure available.
Given the complexity of the problem, we start by defining the main parameters we want to optimize, and the user interaction required, in order to achieve the goal. We aim to optimize the travel time from origin to destination with a selfish approach, focusing on each driver. We then evaluate the constraints and added value of the proposed optimization, providing a preliminary study of its impact on a simple scenario. Our evaluation is made first in a best-case scenario using complete information, then in a more realistic scenario with partial information on the global traffic situation, where connectivity and coverage play a major role. The lack of a general-purpose, freely available, realistic and dependable scenario for Vehicular Ad Hoc Networks (VANETs) creates many problems in the research community when it comes to providing and comparing realistic results. To address these issues, we implemented a synthetic traffic scenario, based on a real city, to evaluate dynamic routing in a realistic urban environment. The Luxembourg SUMO Traffic (LuST) Scenario is based on mobility derived from the City of Luxembourg. The scenario is built for the Simulation of Urban MObility (SUMO) and is compatible with Vehicles in Network Simulation (VEINS) and the Objective Modular Network Testbed in C++ (OMNeT++), allowing it to be used in VANET simulations. In this thesis we present a selfish traffic optimization approach based on dynamic rerouting, able to mitigate the impact of traffic congestion in urban environments on a global scale. The general-purpose traffic scenario built to validate our results is already being used by the research community; it is freely available under the MIT licence and hosted on GitHub.
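The selfish dynamic-rerouting idea above boils down to each vehicle re-running a shortest-path query whenever updated travel times arrive. The toy graph, weights, and plain Dijkstra implementation below are illustrative assumptions; the thesis operates on the SUMO road network, not on this hand-built example:

```python
# Sketch of selfish dynamic rerouting: recompute the fastest route when
# fresh travel-time estimates come in. Graph is node -> {neighbor: seconds}.
import heapq

def shortest_path(graph, src, dst):
    """Plain Dijkstra over edge travel times; returns (path, total time)."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

graph = {"A": {"B": 60, "C": 90}, "B": {"D": 60}, "C": {"D": 30}, "D": {}}
print(shortest_path(graph, "A", "D"))  # → (['A', 'B', 'D'], 120)
graph["B"]["D"] = 300                  # congestion reported on B→D
print(shortest_path(graph, "A", "D"))  # → (['A', 'C', 'D'], 120)
```

The "selfish" aspect is that each driver optimizes only their own travel time on the current estimates; the thesis studies how this behaves at city scale under partial information.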

Novel Methods for Multi-Shape Analysis
Bernard, Florian UL

Doctoral thesis (2016)

Multi-shape analysis has the objective of recognising, classifying, or quantifying morphological patterns or regularities within a set of shapes of a particular object class in order to better understand that object class. One important aspect of multi-shape analysis is Statistical Shape Models (SSMs), where a collection of shapes is analysed and modelled within a statistical framework. SSMs can be used as a (statistical) prior that describes which shapes are more likely and which are less likely to be plausible instances of the object class of interest. Assuming that the object class of interest is known, such a prior can, for example, be used to reconstruct a three-dimensional surface from only a few known surface points. One relevant application of this surface reconstruction is 3D image segmentation in medical imaging, where the anatomical structure of interest is known a priori and the surface points are obtained (either automatically or manually) from images. Frequently, Point Distribution Models (PDMs) are used to represent the distribution of shapes, where each shape is discretised and represented as a labelled point set. With that, a shape can be interpreted as an element of a vector space, the so-called shape space, and the shape distribution in shape space can be estimated from a collection of given shape samples. One crucial aspect for the creation of PDMs that is tackled in this thesis is how to establish (bijective) correspondences across the collection of training shapes. Evaluated on brain shapes, the proposed method results in improved model quality compared to existing approaches, whilst at the same time being superior with respect to runtime. The second aspect considered in this work is how to learn a low-dimensional subspace of the shape space that is close to the training shapes, where all factors spanning this subspace have local support.
Compared to previous work, the proposed method models the local support regions implicitly, such that no initialisation of the size and location of these regions is necessary, which is advantageous in scenarios where this information is not available. The third topic covered in this thesis is how to use an SSM to reconstruct a surface from only a few surface points. By using a Gaussian Mixture Model (GMM) with anisotropic covariance matrices, which are oriented according to the surface normals, a more surface-oriented fitting is achieved compared to the purely point-based fitting of the common Iterative Closest Point (ICP) algorithm. In comparison to ICP, we find that the GMM-based approach gives superior accuracy and robustness on sparse data. Furthermore, this work covers the transformation synchronisation method, a procedure for removing noise that accounts for transitive inconsistency in a set of pairwise linear transformations. One interesting application of this methodology that is relevant in the context of multi-shape analysis is to solve the multi-alignment problem in an unbiased, reference-free manner. Moreover, by introducing an improvement in numerical stability, the methodology can be used to solve the (affine) multi-image registration problem from pairwise registrations. Compared to reference-based multi-image registration, the proposed approach leads to improved registration accuracy and is unbiased/reference-free, which makes it ideal for statistical analyses.
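The purely point-based fitting that the anisotropic-GMM approach improves upon can be illustrated with one drastically simplified ICP iteration: match each source point to its closest target point, then apply the optimal translation for those matches. The 2D shapes and the translation-only restriction below are illustrative assumptions, not the thesis's method:

```python
# Highly simplified sketch of point-based ICP fitting: one translation-only
# iteration in 2D. Real ICP also estimates rotation and iterates to
# convergence; the GMM approach in the thesis replaces the hard
# closest-point assignment with soft, surface-oriented correspondences.

def closest(p, points):
    # Hard nearest-neighbour assignment (the step GMM fitting softens).
    return min(points, key=lambda q: (p[0] - q[0])**2 + (p[1] - q[1])**2)

def icp_translation_step(source, target):
    """Match each source point to its closest target point, then move the
    source by the mean residual (the optimal translation for these matches)."""
    matches = [(p, closest(p, target)) for p in source]
    tx = sum(q[0] - p[0] for p, q in matches) / len(matches)
    ty = sum(q[1] - p[1] for p, q in matches) / len(matches)
    return [(p[0] + tx, p[1] + ty) for p in source], (tx, ty)

target = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
source = [(2.0, 0.0), (3.0, 0.0), (2.0, 1.0)]  # target shifted by (2, 0)
moved, (tx, ty) = icp_translation_step(source, target)
print(tx, ty)
```

Note that with sparse data the hard assignments collapse onto a few target points, so a single step does not recover the true shift of (-2, 0); this brittleness is one motivation for the surface-oriented GMM fitting.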

Emotion Regulation and Job Burnout: Investigating the relationship between emotion regulation knowledge, abilities and dispositions and their role in the prediction of Job Burnout
Seixas, Rita UL

Doctoral thesis (2016)

The present thesis has two goals: 1) to understand the relationship between three levels of emotion regulation (knowledge, abilities and dispositions), as proposed by the three-level model of emotional competences (Mikolajczak, 2009), and 2) to investigate the role of these three levels in the prediction of job burnout, while accounting for the moderating role of the emotional labor of the job and distinguishing these effects in two professional sectors (the finance and health-care sectors). Methodologically, besides emotion regulation knowledge, specific emotion regulation strategies (reappraisal, suppression, enhancement and expressive flexibility) are considered and assessed both as abilities and as dispositions. Results for goal 1 indicate that: a) knowledge, abilities and dispositions are not hierarchically structured; b) different strategies are independent from each other (both in terms of ability and in terms of disposition); c) the dispositions to reappraise and to enhance do not depend on a priori knowledge or ability, while the disposition to suppress decreases as emotion regulation knowledge and the ability to enhance increase. Results for goal 2 indicate that emotion regulation knowledge, abilities and dispositions are incremental predictors of job burnout. Specifically: a) emotion regulation knowledge decreases emotional exhaustion, and reappraisal ability increases the sense of professional efficacy; b) expressive flexibility increases professional efficacy for workers in high emotional labor jobs, while its effect is detrimental for workers in low emotional labor jobs; c) suppression disposition protects individuals from professional inefficacy, while suppression ability is detrimental in this regard.
Finally, the results show that different strategies have different impacts in different professional sectors, notably suppression, which appears as a detrimental strategy for finance workers and as a protective strategy for health-care workers. Overall, these results indicate that several dimensions of emotion regulation are relevant in the prediction of job burnout. Specifically, knowledge, as well as abilities and dispositions, seems to play an incremental role in explaining variability in job burnout symptoms. The effects of the specific strategies should not be analyzed in a simplistic way; instead, they are better understood when taking into account the specificities of the job and the professional context.

Enabling Model-Driven Live Analytics For Cyber-Physical Systems: The Case of Smart Grids
Hartmann, Thomas UL

Doctoral thesis (2016)

Advances in software, embedded computing, sensors, and networking technologies will lead to a new generation of smart cyber-physical systems that will far exceed the capabilities of today's embedded systems. They will be entrusted with increasingly complex tasks like controlling electric grids or autonomously driving cars. These systems have the potential to lay the foundations for tomorrow's critical infrastructures, to form the basis of emerging and future smart services, and to improve the quality of our everyday lives in many areas. In order to solve their tasks, they have to continuously monitor and collect data from physical processes, analyse this data, and make decisions based on it. Making smart decisions requires a deep understanding of the environment, the internal state, and the impacts of actions. Such deep understanding relies on efficient data models to organise the sensed data and on advanced analytics. Considering that cyber-physical systems control physical processes, decisions need to be taken very fast. This makes it necessary to analyse data live, as opposed to conventional batch analytics. However, the complex nature of such systems, combined with the massive amount of data they generate, imposes fundamental challenges. While data in the context of cyber-physical systems has some characteristics in common with big data, it holds a particular complexity. This complexity results from the complicated physical phenomena described by this data, which makes it difficult to extract a model able to explain such data and its various multi-layered relationships. Existing solutions fail to provide sustainable mechanisms to analyse such data live. This dissertation presents a novel approach, named model-driven live analytics. The main contribution of this thesis is a multi-dimensional graph data model that brings raw data, domain knowledge, and machine learning together in a single model, which can drive live analytic processes.
This model is continuously updated with the sensed data and can be leveraged by live analytic processes to support the decision-making of cyber-physical systems. The presented approach has been developed in collaboration with an industrial partner and, in the form of a prototype, applied to the domain of smart grids. The addressed challenges are derived from this collaboration as a response to shortcomings in the current state of the art. More specifically, this dissertation provides solutions for the following challenges: First, data handled by cyber-physical systems is usually dynamic (data in motion as opposed to traditional data at rest) and changes frequently and at different paces. Analysing such data is challenging, since data models usually can only represent a snapshot of a system at one specific point in time. A common approach consists in a discretisation, which regularly samples and stores such snapshots at specific timestamps to keep track of the history. Continuously changing data is then represented as a finite sequence of such snapshots. Such data representations would be very inefficient to analyse, since they would require mining the snapshots, extracting a relevant dataset, and finally analysing it. For this problem, this thesis presents a temporal graph data model and storage system, which considers time as a first-class property. A time-relative navigation concept enables frequently changing data to be analysed very efficiently. Secondly, making sustainable decisions requires anticipating what impacts certain actions would have. In complex cyber-physical systems, situations can arise where hundreds or thousands of such hypothetical actions must be explored before a solid decision can be made. Every action leads to an independent alternative from which a set of other actions can be applied, and so forth.
Finding the sequence of actions that leads to the desired alternative requires efficiently creating, representing, and analysing many different alternatives. Given that every alternative has its own history, this creates a very high combinatorial complexity of alternatives and histories, which is hard to analyse. To tackle this problem, this dissertation introduces a multi-dimensional graph data model (as an extension of the temporal graph data model) that makes it possible to efficiently represent, store, and analyse many different alternatives live. Thirdly, complex cyber-physical systems are often distributed, but to fulfil their tasks these systems typically need to share context information between computational entities. This requires analytic algorithms to reason over distributed data, which is a complex task, since it relies on the aggregation and processing of various distributed and constantly changing data. To address this challenge, this dissertation proposes an approach to transparently distribute the presented multi-dimensional graph data model in a peer-to-peer manner, and defines a stream processing concept to efficiently handle frequent changes. Fourthly, to meet future needs, cyber-physical systems need to become increasingly intelligent. To make smart decisions, these systems have to continuously refine behavioural models that are known at design time with what can only be learned from live data. Machine learning algorithms can help to capture this unknown behaviour by extracting commonalities over massive datasets. Nevertheless, learning a single coarse-grained common behaviour model can be very inaccurate for cyber-physical systems, which are composed of completely different entities with very different behaviour. For these systems, fine-grained learning can be significantly more accurate. However, modelling, structuring, and synchronising many fine-grained learning units is challenging.
To tackle this, this thesis presents an approach to define reusable, chainable, and independently computable fine-grained learning units, which can be modelled together with, and on the same level as, domain data. This makes it possible to weave machine learning directly into the presented multi-dimensional graph data model. In summary, this thesis provides an efficient multi-dimensional graph data model to enable live analytics of complex, frequently changing, and distributed data of cyber-physical systems. This model can significantly improve data analytics for such systems and empower cyber-physical systems to make smart decisions live. The presented solutions combine and extend methods from model-driven engineering, models@run.time, data analytics, database systems, and machine learning.
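The "time as a first-class property" idea can be illustrated with a minimal sketch (class and attribute names are hypothetical; the dissertation's model is a full graph database, not this toy): instead of storing whole-graph snapshots, each node keeps a per-attribute timeline and resolves reads relative to a timestamp.

```python
import bisect

class TemporalNode:
    """A graph node whose attribute values are indexed by time.

    A minimal sketch of time-relative navigation: reads resolve to the
    value that was valid at the requested timestamp, with no snapshots."""

    def __init__(self):
        self._timelines = {}   # attribute -> sorted list of (timestamp, value)

    def set(self, attr, timestamp, value):
        bisect.insort(self._timelines.setdefault(attr, []), (timestamp, value))

    def get(self, attr, timestamp):
        """Time-relative read: the last value set at or before `timestamp`."""
        timeline = self._timelines.get(attr, [])
        i = bisect.bisect_right(timeline, (timestamp, float("inf")))
        return timeline[i - 1][1] if i else None

# e.g. a smart-grid meter whose load changes over time
meter = TemporalNode()
meter.set("load_kw", 10, 3.2)
meter.set("load_kw", 20, 4.8)
assert meter.get("load_kw", 15) == 3.2   # navigate to the state at t=15
assert meter.get("load_kw", 25) == 4.8
```

A query at any timestamp is a binary search over the timeline, so analysing frequently changing data does not require reconstructing snapshot sequences.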

La sanction de l'obligation légale d'information en droit des contrats de consommation : Étude de droit français et luxembourgeois.
Pitzalis Épouse Welch, Cécile Elise UL

Doctoral thesis (2016)

Numerous legal duties to disclose information are imposed in consumer contract law by the legislature of the European Union and are thus common to French and Luxembourgish law. In this context, the legal duty to disclose information pursues a double objective: protecting the consumer by informing their consent, and regulating the market by favouring fair competition. A breach of an information-disclosure obligation by a professional must be sanctioned to ensure the effectiveness of the obligation. The sanction of the legal obligation to disclose information in consumer contract law must therefore be analysed from the angle of efficiency, that is, the capacity of its effects to reach the assigned goals. Analysing French and Luxembourgish consumer contract law, which are similar but each have their specificities, offers a perspective on the legislative choices made in sanctioning the legal duty to disclose information, and informs proposals to improve the current systems of sanction.

Energy-efficient Communications in Cloud, Mobile Cloud and Fog Computing
Fiandrino, Claudio UL

Doctoral thesis (2016)

This thesis studies the problem of energy efficiency of communications in distributed computing paradigms, including cloud computing, mobile cloud computing and fog/edge computing. Distributed computing paradigms have significantly changed the way of doing business. With cloud computing, companies and end users can access the vast majority of services online through a virtualized environment on a pay-as-you-go basis. Mobile cloud and fog/edge computing are the natural extension of the cloud computing paradigm to mobile and Internet of Things (IoT) devices. Based on offloading, the process of outsourcing computing tasks from mobile devices to the cloud, mobile cloud and fog/edge computing have become popular techniques to augment the capabilities of mobile devices and to reduce their battery drain. As these devices are equipped with numerous sensors, their proliferation has given rise to a new cloud-based paradigm for collecting data, called mobile crowdsensing, as its proper operation requires a large number of participants. A plethora of communication technologies is applicable to distributed computing paradigms. For example, cloud data centers typically implement wired technologies, while mobile cloud and fog/edge environments exploit wireless technologies such as 3G/4G, WiFi and Bluetooth. Communication technologies directly impact the performance and the energy drain of the system. This Ph.D. thesis analyzes from a global perspective the energy efficiency of communication systems in distributed computing paradigms. In particular, the following contributions are proposed: - A new framework of performance metrics for the communication systems of cloud computing data centers.
The proposed framework allows a fine-grained analysis and comparison of communication systems, processes, and protocols, defining their influence on the performance of cloud applications. - A novel model for the problem of computation offloading, which describes the workflow of mobile applications through a new Directed Acyclic Graph (DAG) technique. This methodology is suitable for IoT devices working in fog computing environments and was used to design an Android application, called TreeGlass, which performs recognition of trees using Google Glass. TreeGlass is evaluated experimentally in different offloading scenarios by measuring battery drain and execution time as key performance indicators. - For mobile crowdsensing systems, novel performance metrics and a new framework for data acquisition, which exploits a new policy for user recruitment. The performance of the framework is validated through CrowdSenSim, a new simulator designed for mobile crowdsensing activities in large-scale urban scenarios.
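The offloading trade-off that a DAG workflow model captures can be sketched as follows (task names, energy numbers and the brute-force search are illustrative assumptions, not TreeGlass's actual workflow): each task runs on the device or in the cloud, and every DAG edge that crosses the device/cloud boundary pays a data-transfer energy cost.

```python
from itertools import product

# Hypothetical mobile-app workflow:
# task -> (energy if run on the device, energy if offloaded to the cloud)
tasks = {"capture": (2.0, 5.0), "detect": (8.0, 1.0), "render": (3.0, 6.0)}
# DAG edges, with the radio energy paid when the edge crosses the boundary
edges = {("capture", "detect"): 1.5, ("detect", "render"): 1.5}

def partition_energy(placement):
    """Device-side energy of one placement (task -> 'device' | 'cloud')."""
    total = sum(tasks[t][0] if placement[t] == "device" else tasks[t][1]
                for t in tasks)
    # crossing edges pay data-transfer energy in either direction
    total += sum(cost for (a, b), cost in edges.items()
                 if placement[a] != placement[b])
    return total

# Brute force is fine for a tiny DAG; real systems use smarter search.
names = list(tasks)
best = min((dict(zip(names, choice))
            for choice in product(["device", "cloud"], repeat=len(names))),
           key=partition_energy)
# best keeps the compute-heavy 'detect' in the cloud and the I/O-bound
# capture/render tasks on the device, despite the transfer costs
```

The sketch shows why offloading decisions depend jointly on computation and communication energy: offloading only pays off when the saved computation outweighs the radio cost of the crossing edges.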

GAMES AND STRATEGIES IN ANALYSIS OF SECURITY PROPERTIES
Tabatabaei, Masoud UL

Doctoral thesis (2016)

Information security problems typically involve decision makers who choose and adjust their behaviors in interaction with each other in order to achieve their goals. Consequently, game-theoretic models can potentially be a suitable tool for better understanding the challenges that the interaction of participants in information security scenarios brings about. In this dissertation, we employ models and concepts of game theory to study a number of subjects in the field of information security. In the first part, we take a game-theoretic approach to the matter of preventing coercion in elections. Our game models for the election involve an honest election authority that chooses between various protection methods with different levels of resistance and different implementation costs. The analysis of these games shows that society is better off if the security policy is publicly announced and the authorities commit to it. Our focus in the second part is on the property of noninterference in information flow security. Noninterference is a property that captures the confidentiality of actions executed by a given process. However, the property is hard to guarantee in realistic scenarios. We show that the security of a system can be seen as an interplay between functionality requirements and the strategies adopted by users, and based on this we propose a weaker notion of noninterference, which we call strategic noninterference. We also give a characterisation of strategic noninterference through unwinding relations for specific subclasses of goals and for the simplified setting where a strategy is given as a parameter. In the third part, we study the security of information flow based on the consequences of information leakage to the adversary. Models of information flow security commonly prevent any information leakage, regardless of how grave or harmless its consequences may be.
Even in models where each piece of information is classified as either sensitive or insensitive, the classification is “hardwired” and given as a parameter of the analysis, rather than derived from more fundamental features of the system. We suggest that information security is not a goal in itself, but rather a means of preventing potential attackers from compromising the correct behavior of the system. To formalize this, we first show how two information flows can be compared by looking at the adversary’s ability to harm the system. Then, we propose that the information flow in a system is effectively secure if it is as good as its idealized variant based on the classical notion of noninterference. Finally, we shift our focus to the strategic aspect of information security in voting procedures. We argue that the notions of receipt-freeness and coercion resistance are underpinned by the existence (or nonexistence) of a suitable strategy for some participants of the voting process. In order to back the argument formally, we provide logical “transcriptions” of the informal intuitions behind coercion-related properties that can be found in the existing literature. The transcriptions are formulated in the modal game logic ATL*, well known in the area of multi-agent systems.
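As an illustration of the kind of transcription meant here (a hypothetical formula for exposition, not one taken from the dissertation), coercion resistance can be phrased in ATL* as the absence of a winning coercer strategy:

```latex
% <<A>> phi : coalition A has a strategy enforcing phi;  F : "eventually".
% Illustrative reading: a protocol is coercion-resistant for voter v if the
% coercer c has no strategy guaranteeing that v eventually casts the coerced
% vote or is punished for refusing.
\neg \langle\langle c \rangle\rangle \,
  \mathbf{F}\, \bigl( \mathit{cast}_{v,\mathit{coerced}} \lor \mathit{punished}_v \bigr)
```

Here the strategic modality ⟨⟨c⟩⟩ is exactly what classical temporal logics lack, which is why ATL* is a natural vehicle for coercion-related properties.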

A Model-Driven Approach to Offline Trace Checking of Temporal Properties
Dou, Wei UL

Doctoral thesis (2016)

Offline trace checking is a procedure for evaluating requirements over a log of events produced by a system. The goal of this thesis is to present a practical and scalable solution for the offline checking of the temporal requirements of a system, which can be used in contexts where model-driven engineering is already a practice, where temporal specifications should be written in a domain-specific language not requiring a strong mathematical background, and where relying on standards and industry-strength tools for property checking is a fundamental prerequisite. The main contributions of this thesis are: i) the TemPsy (Temporal Properties made easy) language, a pattern-based domain-specific language for the specification of temporal properties; ii) a model-driven trace checking procedure, which relies on an optimized mapping of temporal requirements written in TemPsy into Object Constraint Language (OCL) constraints on a conceptual model of execution traces; iii) a model-driven approach to violation information collection, which relies on the evaluation of OCL queries on an instance of the trace model; iv) three publicly-available tools: 1) TemPsy-Check and 2) TemPsy-Report, implementing, respectively, the trace checking and violation information collection procedures; 3) an interactive visualization tool for navigating and analyzing the violation information collected by TemPsy-Report; v) an evaluation of the scalability of TemPsy-Check and TemPsy-Report, when applied to the verification of real properties. The proposed approaches have been applied to and evaluated on a case study developed in collaboration with a public service organization, active in the domain of business process modeling for eGovernment. 
The experimental results show that TemPsy-Check is able to analyze traces with one million events in about two seconds, and TemPsy-Report can collect violation information from such large traces in less than ten seconds; both tools scale linearly with respect to the length of the trace.
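The essence of pattern-based offline trace checking can be sketched in a few lines (a toy checker, not TemPsy's actual OCL mapping): the classic "response" property pattern, checked in a single pass over the log, which is what makes linear scaling in the trace length plausible.

```python
def check_response(trace, cause, effect):
    """Report indices of 'cause' events never followed by an 'effect'
    (the 'response' property pattern), in one linear scan of the log."""
    pending = []                 # cause positions still awaiting an effect
    for i, event in enumerate(trace):
        if event == cause:
            pending.append(i)
        elif event == effect:
            pending.clear()      # this effect answers all pending causes
    return pending               # non-empty => violations, with locations

assert check_response(["a", "req", "b", "ack"], "req", "ack") == []
assert check_response(["req", "ack", "req"], "req", "ack") == [2]
```

Returning the positions of unanswered causes, rather than a bare yes/no verdict, mirrors the idea of collecting violation information for later inspection.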

Mining Software Artefact Variants for Product Line Migration and Analysis
Martinez, Jabier UL

Doctoral thesis (2016)

Software Product Lines (SPLs) enable the derivation of a family of products based on variability management techniques. Inspired by the manufacturing industry, SPLs use feature configurations to satisfy different customer needs, along with reusable assets associated with the features, to allow systematic and planned reuse. SPLs are reported to have numerous benefits such as time-to-market reduction, productivity increase or product quality improvement. However, the barriers to adopting an SPL are equally numerous, requiring a high up-front investment in domain analysis and implementation. In this context, to create variants, companies more commonly rely on ad-hoc reuse techniques such as copy-paste-modify. Capitalizing on existing variants by extracting the common and varying elements is referred to as an extractive approach to SPL adoption. Extractive SPL adoption allows the migration from a single-system development mentality to SPL practices. Several activities are involved in achieving this goal. Due to the complexity of artefact variants, feature identification is needed to analyse the domain variability. Also, to identify the implementation elements associated with the features, their location is needed as well. In addition, feature constraints should be identified to guarantee that customers are not able to select invalid feature combinations (e.g., one feature requires or excludes another). Then, the reusable assets associated with the features should be constructed. And finally, to facilitate the communication among stakeholders, a comprehensive feature model needs to be synthesized. While several approaches have been proposed for the above-mentioned activities, extractive SPL adoption remains challenging. A recurring barrier is the limitation of existing techniques to be used beyond the specific types of artefacts that they initially targeted, requiring inputs and providing outputs at different granularity levels and with different representations.
Seamlessly addressing these activities within the same environment is a challenge in itself. This dissertation presents a unified, generic and extensible framework for mining software artefact variants in the context of extractive SPL adoption. We describe both its principles and its realization in Bottom-Up Technologies for Reuse (BUT4Reuse). Special attention is paid to model-driven development scenarios. A unified process and representation would enable practitioners and researchers to empirically analyse and compare different techniques. Therefore, we also focus on benchmarks and on the analysis of variants, in particular on benchmarking feature location techniques and on identifying families of variants in the wild for experimenting with feature identification techniques. We also present visualisation paradigms to support domain experts in naming features during feature identification and in discovering feature constraints. Finally, we investigate and discuss the mining of artefact variants for SPL analysis once the SPL is already operational. Concretely, we present an approach to find relevant variants within the SPL configuration space, guided by end-user assessments.
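The core step of identifying common and varying elements across variants can be sketched as follows (variant contents are made up, and plain strings stand in for the artefact elements that BUT4Reuse abstracts behind adapters): elements are grouped by the exact set of variants in which they occur, separating the commonality from variability candidates.

```python
# Hypothetical artefact variants; elements could be files, classes, model
# elements -- plain strings are a toy stand-in.
variants = {
    "app_free": {"core", "ads", "login"},
    "app_pro":  {"core", "login", "export"},
    "app_lite": {"core", "ads"},
}

def identify_blocks(variants):
    """Group elements by the exact set of variants containing them.

    Each group ('block') is a candidate feature: the block present in all
    variants is the commonality, the others are variability candidates."""
    occurrence = {}
    for name, elements in variants.items():
        for element in elements:
            occurrence.setdefault(element, set()).add(name)
    blocks = {}
    for element, where in occurrence.items():
        blocks.setdefault(frozenset(where), set()).add(element)
    return blocks

blocks = identify_blocks(variants)
assert blocks[frozenset(variants)] == {"core"}                  # in every variant
assert blocks[frozenset({"app_free", "app_lite"})] == {"ads"}   # candidate feature
```

Working on such blocks, rather than raw elements, is what lets the subsequent activities (feature naming, location, constraint discovery) operate at a useful granularity.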

Essays on Financial Market and Banking Regulation.
El Joueidi, Sarah UL

Doctoral thesis (2016)

AUTOMATED ANALYSIS OF NATURAL-LANGUAGE REQUIREMENTS USING NATURAL LANGUAGE PROCESSING
Arora, Chetan UL

Doctoral thesis (2016)

Natural Language (NL) is arguably the most common vehicle for specifying requirements. This dissertation devises automated assistance for some important tasks that requirements engineers need to perform in order to structure, manage, and elaborate NL requirements in a sound and effective manner. The key enabling technology underlying the work in this dissertation is Natural Language Processing (NLP). All the solutions presented herein have been developed and empirically evaluated in close collaboration with industrial partners. The dissertation addresses four different facets of requirements analysis:
• Checking conformance to templates. Requirements templates are an effective tool for improving the structure and quality of NL requirements statements. When templates are used for specifying the requirements, an important quality assurance task is to ensure that the requirements conform to the intended templates. We develop an automated solution for checking the conformance of requirements to templates.
• Extraction of glossary terms. Requirements glossaries (dictionaries) improve the understandability of requirements, and mitigate vagueness and ambiguity. We develop an automated solution for supporting requirements analysts in the selection of glossary terms and their related terms.
• Extraction of domain models. By providing a precise representation of the main concepts in a software project and the relationships between these concepts, a domain model serves as an important artifact for systematic requirements elaboration. We propose an automated approach for domain model extraction from requirements. The extraction rules in our approach encompass both the rules already described in the literature as well as a number of important extensions developed in this dissertation.
• Identifying the impact of requirements changes. Uncontrolled change in requirements presents a major risk to the success of software projects.
We address two different dimensions of requirements change analysis in this dissertation: First, we develop an automated approach for predicting how a change to one requirement impacts other requirements. Next, we consider the propagation of change from requirements to design. To this end, we develop an automated approach for predicting how the design of a system is impacted by changes made to the requirements.
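A much simplified version of glossary-term candidate extraction might look like this (a pure frequency heuristic over invented requirements statements; the dissertation's actual solution relies on proper NLP, such as POS tagging and noun-phrase chunking):

```python
import re
from collections import Counter

requirements = [
    "The monitoring module shall log every sensor reading.",
    "If a sensor reading exceeds the threshold, the monitoring module shall raise an alarm.",
    "The operator shall be able to acknowledge an alarm.",
]

def candidate_terms(statements, min_count=2):
    """Toy glossary-term extractor: recurring word bigrams, minus stopwords.

    Only illustrates the idea of surfacing recurring domain terms for the
    analyst to review; it has none of the linguistic accuracy of real
    noun-phrase chunking."""
    stop = {"the", "a", "an", "shall", "be", "to", "if", "every"}
    counts = Counter()
    for s in statements:
        words = [w for w in re.findall(r"[a-z]+", s.lower()) if w not in stop]
        counts.update(zip(words, words[1:]))
    return {" ".join(bigram) for bigram, n in counts.items() if n >= min_count}

terms = candidate_terms(requirements)
assert "monitoring module" in terms
assert "sensor reading" in terms
```

Even this crude heuristic shows the shape of the task: the tool proposes candidates, and the analyst decides what enters the glossary.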

COMPLEX PROBLEM SOLVING IN UNIVERSITY SELECTION
Stadler, Matthias Johannes UL

Doctoral thesis (2016)

HIGHER MOMENT ASSET PRICING: RISK PREMIUMS, METHODOLOGY AND ANOMALIES
Lin, Yuehao UL

Doctoral thesis (2016)

New approaches to understand conductive and polar domain walls by Raman spectroscopy and low energy electron microscopy
Nataf, Guillaume UL

Doctoral thesis (2016)

We investigate the structural and electronic properties of domain walls to achieve a better understanding of the conduction mechanisms in domain walls of lithium niobate and of the polarity of domain walls in calcium titanate. In a first part, we discuss the interaction between defects and domain walls in lithium niobate. A dielectric resonance with a low activation energy is observed, which vanishes under thermal annealing in monodomain samples while it remains stable in periodically poled samples. We therefore propose that domain walls stabilize polaronic states. We also report the evolution of Raman modes with an increasing amount of magnesium in congruent lithium niobate, and identify specific frequency shifts of the modes at the domain walls. The domain walls thus appear as regions where polar defects are stabilized. In a second step, we use mirror electron microscopy (MEM) and low energy electron microscopy (LEEM) to characterize the domains and domain walls at the surface of magnesium-doped lithium niobate. We demonstrate that out-of-focus settings can be used to determine the domain polarization. At domain walls, a local stray, lateral electric field arising from different surface charge states is observed. In a second part, we investigate the polarity of domain walls in calcium titanate. We use resonant piezoelectric spectroscopy to detect elastic resonances induced by an electric field, which is interpreted as a piezoelectric response of the walls. A direct image of the domain walls in calcium titanate is also obtained by LEEM, showing a clear contrast in surface potential between domains and walls. This contrast is observed to change reversibly upon electron irradiation due to the screening of polarization charges at domain walls.

Spatial modelling of feedback effects between urban structure and traffic-induced air pollution - Insights from quantitative geography and urban economics
Schindler, Mirjam UL

Doctoral thesis (2016)

Urban air pollution is among the largest environmental health risks, and its major source is traffic, which is also the main cause of the spatial variation of pollution concerns within cities. Spatial responses by residents to such a risk factor have important consequences for urban structures and, in turn, for the spatial distribution of air pollution and population exposure. These spatial interactions and feedbacks need to be understood comprehensively in order to design spatial planning policies that mitigate local health effects. This dissertation focusses on how residents take their location decisions when they are concerned about the health effects associated with traffic-induced air pollution, and how these decisions shape future cities. Theoretical analytical and simulation models integrating urban economics and quantitative geography are developed to analyse and simulate the feedback effect between urban structure and population exposure to traffic-induced air pollution. Based on these, the spatial impacts of policy, socio-economic and technological frameworks are analysed. Building upon an empirical exploratory analysis, a chain of theoretical models simulates in 2D how households' preference for green amenities, as an indirect appraisal of local air quality, and local neighbourhood design impact the environment, residents' health and well-being. In order to study the feedback effect of households' aversion to traffic-induced pollution exposure on urban structure, a 1D theoretical urban economics model is developed. Feedback effects on pollution and exposure distributions and on intra-urban equity are analysed. Equilibrium, first- and second-best outcomes are compared and discussed as to their population distributions, spatial extents and environmental and health implications. Finally, a dynamic agent-based simulation model in 2D further integrates geographical elements into the urban economics framework.
It thus enhances the representation of the spatial interactions between the location of households and traffic-induced air pollution within cities. Simulations contrast neighbourhood and distance effects of the pollution externality and emphasise the role of local urban characteristics in mitigating population exposure and consolidating health and environmental effects. The dissertation argues that considering local health concerns due to traffic-induced air pollution in policy design challenges the concept of high urban densification, both locally and with respect to distance, and advises spatial differentiation.
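The feedback loop at the heart of these models can be caricatured in a few lines (all numbers and functional forms are invented for illustration, far simpler than the dissertation's models): households spread over a 1D city according to location utility, pollution at a location is driven by the traffic of everyone commuting through it from further out, and the two are iterated towards a fixed point.

```python
import math

N, POP = 10, 1000.0            # locations (distance from the CBD) and households
T_COST = 0.3                   # commuting cost per unit distance

def pollution(density):
    """Traffic-induced pollution at x: everyone living at x or further out
    commutes through x towards the centre."""
    return [sum(density[x:]) for x in range(N)]

def step(density, aversion, damping=0.5):
    """One adjustment round: households partially relocate towards a
    logit distribution over location utilities (commuting vs. exposure)."""
    expo = pollution(density)
    util = [-T_COST * x - aversion * expo[x] for x in range(N)]
    weights = [math.exp(u) for u in util]
    total = sum(weights)
    return [damping * d + (1 - damping) * POP * w / total
            for d, w in zip(density, weights)]

density = [POP / N] * N
for _ in range(500):
    density = step(density, aversion=0.002)
```

With `aversion` set to zero the classic monocentric density gradient reappears; with aversion switched on, the heavy through-traffic near the centre pushes part of the population outward, which is precisely the feedback between residential choice and exposure that the dissertation analyses.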

Regulating Hedge Funds in the EU
Seretakis, Alexandros UL

Doctoral thesis (2016)

Praised for enhancing the liquidity of the markets in which they trade and for improving the corporate governance of the companies they target, and criticized for contributing to the instability of the financial system, hedge funds remain the most controversial vehicles of the modern financial system. Unconstrained until recently by regulation, operating under the radar of securities laws and with highly incentivized managers, hedge funds have managed to attract ever-increasing amounts of capital from sophisticated investors and have drawn the attention of the public, regulators and politicians. The financial crisis of 2007-2008, the most severe financial crisis since the Great Depression, prompted politicians and regulators both in the U.S. and in Europe to redesign the financial system. The unregulated hedge fund industry, heavily criticized for contributing to or even causing the financial crisis, was one of the first to come under the regulators' ambit. The result was the adoption of the Dodd-Frank Act in the U.S. and of the Alternative Investment Fund Managers (AIFM) Directive in the European Union. These two pieces of legislation are the first attempt ever to directly regulate the hedge fund industry. Taking into account the exponential growth of the hedge fund industry, its beneficial effects and its importance for certain countries such as the U.S. and Luxembourg, one can easily understand the considerable impact of these regulations. A comparative and critical examination of these major pieces of regulation and of their potential impact on the hedge fund industry in Europe and the U.S. is absent from the academic literature, which is understandable considering that the Dodd-Frank Act was adopted in 2010 and the AIFM Directive in 2011. Our PhD thesis will attempt to fill this gap and offer a critical assessment of both the Dodd-Frank Act and the AIFM Directive and their impact on the hedge fund industry across the Atlantic.
Furthermore, our thesis will seek to offer concrete proposals for the amelioration of the current EU regime with respect to hedge funds, building upon US regulations.

The C*-algebras of certain Lie groups
Günther, Janne-Kathrin UL

Doctoral thesis (2016)

In this doctoral thesis, the C*-algebras of the connected real two-step nilpotent Lie groups and of the Lie group SL(2,R) are characterized. Furthermore, as a preparation for an analysis of its C*-algebra, the topology of the spectrum of the semidirect product U(n) x H_n is described, where H_n denotes the Heisenberg Lie group and U(n) the unitary group acting by automorphisms on H_n. For the determination of the group C*-algebras, the operator-valued Fourier transform is used in order to map the respective C*-algebra into the algebra of all bounded operator fields over its spectrum. One has to find the conditions that are satisfied by the image of this C*-algebra under the Fourier transform, and the aim is to characterize it through these conditions. In the present thesis, it is proved that both the C*-algebras of the connected real two-step nilpotent Lie groups and the C*-algebra of SL(2,R) fulfill the same conditions, namely the “norm controlled dual limit” conditions. Thereby, these C*-algebras are described in this work and the “norm controlled dual limit” conditions are explicitly computed in both cases. The methods used for the two-step nilpotent Lie groups and for the group SL(2,R) are completely different from each other. For the two-step nilpotent Lie groups, one regards their coadjoint orbits and uses Kirillov theory, while for the group SL(2,R) one can accomplish the calculations more directly.

Torsion and purity on non-integral schemes and singular sheaves in the fine Simpson moduli spaces of one-dimensional sheaves on the projective plane
Leytem, Alain UL

Doctoral thesis (2016)

This thesis consists of two individual parts, each one having an interest in itself, but which are also related to each other. In Part I we analyze the general notions of the torsion of a module over a non-integral ring and the torsion of a sheaf on a non-integral scheme. We give an explicit definition of the torsion subsheaf of a quasi-coherent O_X-module and prove a condition under which it is also quasi-coherent. Using the associated primes of a module and the primary decomposition of ideals in Noetherian rings, we review the main criteria for torsion-freeness and purity of a sheaf that have been established by Grothendieck and Huybrechts-Lehn. These allow us to study the relations between both concepts. It turns out that they are equivalent in "nice" situations, but they can be quite different as soon as the scheme does not have equidimensional components. We illustrate the main differences on various examples. We also discuss some properties of the restriction of a coherent sheaf to its annihilator and its Fitting support, and finally prove that sheaves of pure dimension are torsion-free on their support, no matter which closed subscheme structure it is given. Part II deals with the problem of determining "how many" sheaves in the fine Simpson moduli spaces M = M_{dm-1}(P2) of stable sheaves on the projective plane P2 with linear Hilbert polynomial dm-1, for d ≥ 4, are not locally free on their support. Such sheaves are called singular and form a closed subvariety M' in M. Using results of Maican and Drézet, the open subset M0 of sheaves in M without global sections may be identified with an open subvariety of a projective bundle over a variety of Kronecker modules N. By the Theorem of Hilbert-Burch, we can describe sheaves in an open subvariety of M0 as twisted ideal sheaves of curves of degree d. In order to determine the singular ones, we look at ideals of points on planar curves.
In the case of simple and fat curvilinear points, we characterize free ideals in terms of the absence of two coeffcients in the polynomial defining the curve. This allows to show that a generic fiber of M0\cap M' over N is a union of projective subspaces of codimension 2 and finally that M' is singular of codimension 2. [less ▲]
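For orientation, the classical notion of torsion that Part I generalizes can be recalled as follows (a standard definition over a commutative ring, not a statement of the thesis's refined sheaf-theoretic version):

```latex
% Torsion submodule of an R-module M over a (possibly non-integral) ring R,
% taken with respect to the regular elements (non-zerodivisors) of R:
T(M) \;=\; \bigl\{\, m \in M \;\bigm|\; \exists\, r \in R \text{ regular such that } r\,m = 0 \,\bigr\}.
% M is torsion-free if T(M) = 0; sheafifying U \mapsto T(\mathcal{F}(U)) gives
% the torsion subsheaf of a quasi-coherent \mathcal{O}_X-module \mathcal{F}.
```

Over a non-integral ring the regular elements still form a multiplicative set, so T(M) is a submodule; the subtleties studied in Part I arise when this construction is carried over to sheaves on non-integral schemes.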

Berezin-Toeplitz Quantization on K3 Surfaces and Hyperkähler Berezin-Toeplitz Quantization
Castejon-Diaz, Hector UL

Doctoral thesis (2016)

Given a quantizable Kähler manifold, the Berezin-Toeplitz quantization scheme constructs a quantization in a canonical way. In their seminal paper, Martin Bordemann, Eckhard Meinrenken and Martin Schlichenmaier proved that for a compact Kähler manifold this scheme is a well-defined quantization which has the correct semiclassical limit. However, some manifolds admit more than one (non-equivalent) Kähler structure. The question then arises whether the choice of a different Kähler structure gives rise to completely different quantizations or whether the resulting quantizations are related. An example of such objects are the so-called K3 surfaces, which exhibit additional relations between their different Kähler structures. In this work, we consider the family of K3 surfaces which admit more than one quantizable Kähler structure and use the relations between the different Kähler structures to study whether the corresponding quantizations are related or not. In particular, we prove that such K3 surfaces always have Picard number 20, which implies that their moduli space is discrete, and that the resulting quantum Hilbert spaces are always isomorphic, although not always in a canonical way. However, there exists an infinite subfamily of K3 surfaces for which the isomorphism is canonical. We also define new quantization operators on the product of the different quantum Hilbert spaces and call this process Hyperkähler quantization. We prove that these new operators have the correct semiclassical limit, as well as new properties inherited from the quaternionic numbers.
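For context, the Berezin-Toeplitz scheme referred to above can be summarized in its standard form (after Bordemann, Meinrenken and Schlichenmaier; constants and sign conventions vary between references, and this is not quoted from the thesis): for a quantizable compact Kähler manifold X with quantum line bundle L, the level-m Toeplitz operator of a smooth function f compresses multiplication by f onto the holomorphic sections of L^{⊗m}:

```latex
T^{(m)}_f \;=\; \Pi^{(m)} \circ M_f \,:\; \Gamma_{\mathrm{hol}}\bigl(X, L^{\otimes m}\bigr)
  \longrightarrow \Gamma_{\mathrm{hol}}\bigl(X, L^{\otimes m}\bigr),
\qquad
\lim_{m \to \infty} \bigl\| T^{(m)}_f \bigr\| \;=\; \lvert f \rvert_\infty ,
\qquad
\Bigl\| \, m\,\bigl[ T^{(m)}_f , T^{(m)}_g \bigr] - \mathrm{i}\, T^{(m)}_{\{f,g\}} \Bigr\| \;=\; O\!\left(\tfrac{1}{m}\right),
```

where Π^{(m)} is the orthogonal projection onto holomorphic sections. The second and third relations are what "correct semiclassical limit" refers to.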

Entre régions : Le Maroc et le Mexique face aux migrations, dans les contextes d'intégration régionale
Nanga, Emeline Modeste UL

Doctoral thesis (2016)

This thesis analyses, from a comparative perspective, the close links between the phenomenon of (clandestine) immigration, the regional integration processes currently under way in the Euro-Mediterranean area and the Americas, and (national/human) security. In parallel, it examines the impact of these considerations on the fundamental rights of migrants in transit or in an irregular situation within these areas, as well as on the role and responsibilities traditionally assigned to the State. The focus is placed on the EU and the United States as host territories, and on Mexico and Morocco, which are simultaneously countries of emigration, immigration and transit.

FULL 3D RECONSTRUCTION OF DYNAMIC NON-RIGID SCENES: ACQUISITION AND ENHANCEMENT
Afzal, Hassan UL

Doctoral thesis (2016)

Recent advances in commodity depth or 3D sensing technologies have enabled us to move closer to the goal of accurately sensing and modeling the 3D representations of complex dynamic scenes. Indeed, in domains such as virtual reality, security, surveillance and e-health, there is now a greater demand for affordable and flexible vision systems which are capable of acquiring high-quality 3D reconstructions. Available commodity RGB-D cameras, though easily accessible, have a limited field-of-view and acquire noisy, low-resolution measurements, which restricts their direct usage in building such vision systems. This thesis targets these limitations and builds approaches around commodity 3D sensing technologies to acquire noise-free and feature-preserving full 3D reconstructions of dynamic scenes containing static or moving, rigid or non-rigid objects. A mono-view system based on a single RGB-D camera is incapable of acquiring a full 360-degree 3D reconstruction of a dynamic scene instantaneously. For this purpose, a multi-view system composed of several RGB-D cameras covering the whole scene is used. In the first part of this thesis, the problem of correctly aligning the information acquired from RGB-D cameras in a multi-view system, so as to provide full and textured 3D reconstructions of dynamic scenes instantaneously, is explored. This is achieved by solving the extrinsic calibration problem. This thesis proposes an extrinsic calibration framework which uses the 2D photometric and 3D geometric information acquired with RGB-D cameras, according to their relative (in)accuracies as affected by the presence of noise, in a single weighted bi-objective optimization. An iterative scheme is also proposed, which estimates the parameters of the noise model affecting both 2D and 3D measurements and solves the extrinsic calibration problem simultaneously. Results show an improvement in calibration accuracy as compared to state-of-the-art methods.
In the second part of this thesis, the enhancement of noisy and low-resolution 3D data acquired with commodity RGB-D cameras in both mono-view and multi-view systems is explored. This thesis extends the state of the art in mono-view template-free recursive 3D data enhancement, which targets dynamic scenes containing rigid objects and thus requires tracking only the global motions of those objects for view-dependent surface representation and filtering. This thesis proposes to target dynamic scenes containing non-rigid objects, which introduces the complex requirements of tracking relatively large local motions and maintaining data organization for view-dependent surface representation. The proposed method is shown to be effective in handling non-rigid objects of changing topologies. Building upon the previous work, this thesis overcomes the requirement of data organization by proposing an approach based on view-independent surface representation. View-independence decreases the complexity of the proposed algorithm and allows it the flexibility to process and enhance noisy data, acquired with multiple cameras in a multi-view system, simultaneously. Moreover, qualitative and quantitative experimental analysis shows this method to be more accurate in removing noise to produce enhanced 3D reconstructions of non-rigid objects. Although extending this method to a multi-view system would allow for obtaining instantaneous enhanced full 360-degree 3D reconstructions of non-rigid objects, it still lacks the ability to explicitly handle low-resolution data. Therefore, this thesis proposes a novel recursive dynamic multi-frame 3D super-resolution algorithm together with a novel 3D bilateral total variation regularization to filter out the noise, recover details and enhance the resolution of data acquired from commodity cameras in a multi-view system. Results show that this method is able to build accurate, smooth and feature-preserving full 360-degree 3D reconstructions of dynamic scenes containing non-rigid objects.
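The thesis presents its 3D bilateral total variation as a novel extension; the classical 2D bilateral total variation regularizer it builds on (Farsiu et al.'s multi-frame super-resolution formulation; the exact 3D variant is not spelled out in the abstract) has the form

```latex
\mathrm{BTV}(X) \;=\; \sum_{l=-P}^{P} \sum_{m=-P}^{P}
\alpha^{\lvert l\rvert + \lvert m\rvert}\,
\bigl\| X - S_x^{\,l}\, S_y^{\,m}\, X \bigr\|_1 ,
```

where S_x^l and S_y^m shift the image X by l and m pixels, α ∈ (0,1) down-weights larger shifts, and minimizing BTV alongside a data-fidelity term suppresses noise while preserving edges.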

Fast reconstruction of compact context-specific network models
Pacheco, Maria Irene UL

Doctoral thesis (2016)

Recent progress in high-throughput data acquisition has shifted the focus from data generation to the processing and understanding of now easily collected patient-specific information. Metabolic models, which have already proven to be very powerful for the integration and analysis of such data sets, might be successfully applied in precision medicine in the near future. Context-specific reconstructions extracted from generic genome-scale models like Reconstruction X (ReconX) (Duarte et al., 2007; Thiele et al., 2013) or the Human Metabolic Reconstruction (HMR) (Agren et al., 2012; Mardinoglu et al., 2014a) thereby have the potential to become a diagnostic and treatment tool tailored to the analysis of specific groups of individuals. The use of computational algorithms as a tool for the routine diagnosis and analysis of metabolic diseases requires a high level of predictive power, robustness and sensitivity. Although multiple context-specific reconstruction algorithms were published in the last ten years, only a fraction of them is suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or from arbitrary thresholding. The aim of this thesis was to create a family of robust and fast algorithms for the building of context-specific models that could be used for the integration of different types of omics data and that should be sensitive enough to be used in the framework of precision medicine.

FASTCORE (Vlassis et al., 2014), which was developed in the frame of this thesis, is among the first context-specific model-building algorithms that do not optimize for a biological function, and it runs in a matter of seconds. Furthermore, FASTCORE is devoid of heuristic parameter settings. FASTCORE requires as input a set of reactions that are known to be active in the context of interest (core reactions) and a genome-scale reconstruction. FASTCORE uses an approximation of the cardinality function to force the core set of reactions to carry a flux above a threshold. Then an L1-minimization is applied to penalize the activation of reactions with a low confidence level while still constraining the set of core reactions to carry a flux. The rationale behind FASTCORE is to reconstruct a compact consistent output model (one in which all reactions have the potential to carry non-zero flux) that contains all the core reactions and a small number of non-core reactions. Then, in order to cope with the non-negligible amount of noise that impedes direct comparison between genes, FASTCORE was extended to the FASTCORMICS workflow (Pires Pacheco and Sauter, 2014; Pires Pacheco et al., 2015a) for the building of models via the integration of microarray data. FASTCORMICS was applied to reveal control points regulated by genes under high regulatory load in the metabolic network of monocyte-derived macrophages (Pires Pacheco et al., 2015a) and to investigate the effect of the TRIM32 mutation on the metabolism of brain cells of mice (Hillje et al., 2013).

The use of metabolic modelling in the frame of personalized medicine, high-throughput data analysis and the integration of omics data calls for a significant improvement in the quality of existing algorithms and of the generic metabolic reconstructions used as their input. To this aim, and to initiate a discussion in the community on how to improve the quality of context-specific reconstruction, benchmarking procedures were proposed and applied to seven recent context-specific algorithms including FASTCORE and FASTCORMICS (Pires Pacheco et al., 2015a). Further, the problems arising from a lack of standardization of building and annotation pipelines and from the use of non-specific identifiers were discussed in the frame of a review. In this review, we also advocated a switch from gene-centred protein rules (GPR rules) to transcript-centred protein rules (Pfau et al., 2015).
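The L1-minimization step described above can be illustrated on a toy network (a minimal sketch of the idea, not the published FASTCORE implementation; the network, the threshold eps and the bounds are invented for illustration):

```python
# Sketch of the L1 step used by FASTCORE-style algorithms on a toy network:
# keep a set of "core" reactions active at steady state while penalizing
# flux through all other reactions, yielding a compact consistent sub-network.
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (rows: metabolites A, B, C; columns: reactions).
# R1: -> A,  R2: A -> B (core),  R3: B ->,  R4: A -> C,  R5: C ->
S = np.array([
    [1, -1,  0, -1,  0],   # A
    [0,  1, -1,  0,  0],   # B
    [0,  0,  0,  1, -1],   # C
])
core = {1}                 # index of R2, known to be active in this context
eps, vmax = 1.0, 100.0

# Minimize the L1 norm of non-core fluxes (all reactions irreversible here,
# so the L1 norm is just the sum), subject to steady state S v = 0 and a
# minimum flux eps through every core reaction.
n = S.shape[1]
cost = np.array([0.0 if j in core else 1.0 for j in range(n)])
bounds = [(eps, vmax) if j in core else (0.0, vmax) for j in range(n)]
res = linprog(cost, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)

active = [j for j, v in enumerate(res.x) if v > 1e-6]
print(active)  # R1, R2, R3 support the core; the R4/R5 branch stays inactive
```

Here the compact consistent model consists of reactions R1-R3 only: the alternative branch through C is never forced to carry flux, which mirrors how FASTCORE keeps the number of non-core reactions small.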

Essays on the macro-analysis of international migration
Delogu, Marco UL

Doctoral thesis (2016)

This dissertation consists of three chapters, each of them a self-contained work. The first chapter, “Globalizing labor and the world economy: the role of human capital”, is joint work with Prof. Dr. Frédéric Docquier and Dr. Joël Machado. We develop a micro-founded model of the world economy aiming to compare the short- and long-run effects of migration restrictions on the world distribution of income. We find that a complete removal of migration barriers would increase the world average level of GDP per worker by 13% in the short run and by about 54% after one century. These results are very robust to our identification strategy and technological assumptions. The second chapter, titled “Infrastructure Policy: the role of informality and brain drain”, analyses the effectiveness of infrastructure policy in developing countries. I show that, at low levels of development, the possibility to work informally has a detrimental impact on infrastructure accumulation. I find that increasing the tax rate or enlarging the tax base can reduce macroeconomic performance in the short run, while inducing long-run gains. These effects are amplified when brain drain is endogenous. The last chapter, titled “The role of fees in foreign education: evidence from Italy and the UK”, is mainly empirical. Relying upon a discrete choice model, together with Prof. Dr. Michel Beine and Prof. Dr. Lionel Ragot, I assess the determinants of international student mobility exploiting, for the first time in the literature, data at the university level. We focus on student inflows to Italy and the UK, countries in which tuition fees vary across universities. We obtain evidence for a clear and negative impact of tuition fees on international student inflows and confirm the positive impact of the quality of education. The estimations also find support for an important role of additional destination-specific variables such as host capacity, the expected return to education and the cost of living in the vicinity of the university.
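The abstract does not spell out the discrete choice specification; a standard conditional logit of the kind commonly used for such destination-choice problems (the variables shown are illustrative assumptions, not the chapter's exact specification) would model the probability that student i chooses university j as

```latex
P_{ij} \;=\; \frac{\exp(V_{ij})}{\sum_{k} \exp(V_{ik})},
\qquad
V_{ij} \;=\; -\beta\,\mathrm{Fees}_j \;+\; \gamma\,\mathrm{Quality}_j \;+\; \delta' Z_j ,
```

where Z_j collects destination-specific controls (host capacity, cost of living, expected return to education); a negative and significant β then corresponds to the reported deterrent effect of tuition fees.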

AUTOMATED TESTING OF SIMULINK/STATEFLOW MODELS IN THE AUTOMOTIVE DOMAIN
Matinnejad, Reza UL

Doctoral thesis (2016)

Context. Simulink/Stateflow is an advanced system modeling platform which is prevalently used in the Cyber-Physical Systems domain, e.g., the automotive industry, to implement software controllers. Testing Simulink models is complex and poses several challenges to research and practice. Simulink models often have mixed discrete-continuous behaviors, and their correct behavior crucially depends on time. Inputs and outputs of Simulink models are signals, i.e., values evolving over time, rather than discrete values. Further, Simulink models are required to operate satisfactorily for a large variety of hardware configurations. Finally, developing test oracles for Simulink models is challenging, particularly for requirements capturing their continuous aspects. In this dissertation, we focus on testing mixed discrete-continuous aspects of Simulink models, an important, yet not well-studied, problem. The existing Simulink testing techniques are more amenable to testing and verification of logical and state-based properties. Further, they are mostly incompatible with Simulink models containing time-continuous blocks, and floating-point and non-linear computations. In addition, they often rely on the presence of formal specifications, which are expensive and rare in practice, to automate test oracles.

Approach. In this dissertation, we propose a set of approaches based on meta-heuristic search and machine learning techniques to automate the testing of software controllers implemented in Simulink. The work presented in this dissertation is motivated by Simulink testing needs at Delphi Automotive Systems, a world-leading parts supplier to the automotive industry. To address the above-mentioned challenges, we rely on the discrete-continuous output signals of Simulink models and provide output-based black-box test generation techniques to produce test cases with high fault-revealing ability. Our algorithms are black-box and hence compatible with Simulink/Stateflow models in their entirety. Further, we do not rely on the presence of formal specifications to automate test oracles. Specifically, we propose two sets of test generation algorithms for closed-loop and open-loop controllers implemented in Simulink: (1) For closed-loop controllers, test oracles can be formalized and automated relying on the feedback received from the controlled system. We characterize the desired behavior of closed-loop controllers in a set of common requirements, and then use search to identify the worst-case test scenarios of the controller with respect to each requirement. (2) For open-loop controllers, we cannot automate test oracles since the feedback is not available, so test oracles are manual. Hence, we focus on providing test generation algorithms that develop small, effective test suites with high fault-revealing ability. We further provide a test case prioritization algorithm to rank the generated test cases based on their fault-revealing ability and lower the manual oracle cost. Our test generation and prioritization algorithms are evaluated with several industrial and publicly available Simulink models. Specifically, we showed that the fault-revealing ability of our approach outperforms that of Simulink Design Verifier (SLDV), the only test generation toolbox of Simulink and a well-known commercial Simulink testing tool. In addition, using our approach, we were able to detect several real faults in Simulink models from our industry partner, Delphi, which had not been previously found by manual testing based on domain expertise and existing Simulink testing tools.

Contributions. The main research contributions in this dissertation are:
1. An automated approach for testing closed-loop controllers that characterizes the desired behavior of such controllers in a set of common requirements, and combines random exploration and search to effectively identify the worst-case test scenarios of the controller with respect to each requirement.
2. An automated approach for testing highly configurable closed-loop controllers by accounting for all their feasible configurations and providing strategies to scale the search to large multi-dimensional spaces, relying on dimensionality reduction and surrogate modelling.
3. A black-box output-based test generation algorithm for open-loop Simulink models which uses search to maximize the likelihood of the presence of specific failure patterns (i.e., anti-patterns) in Simulink output signals.
4. A black-box output-based test generation algorithm for open-loop Simulink models that maximizes output diversity to develop small test suites with diverse output signal shapes and, hence, high fault-revealing ability.
5. A test case prioritization algorithm which relies on the output diversity of the generated test suites, in addition to the dynamic structural coverage achieved by individual tests, to rank test cases and help engineers identify faults faster by inspecting a few test cases.
6. Two test generation tools, namely CoCoTest and SimCoTest, that respectively implement our test generation approaches for closed-loop and open-loop controllers.
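The output-diversity idea behind the open-loop test generation can be sketched as follows (a toy illustration, not the dissertation's algorithm: a simple first-order step response stands in for a Simulink model under test, and a greedy farthest-point heuristic stands in for the meta-heuristic search):

```python
# Toy sketch of output-diversity-driven test selection: pick a small test
# suite whose output signals are mutually as different as possible, on the
# premise that diverse output shapes are more likely to reveal faults.
import math

def simulate(step_input, n=50, tau=5.0):
    """Toy discrete first-order step response standing in for a model under test."""
    y, out = 0.0, []
    for _ in range(n):
        y += (step_input - y) / tau   # simple lag dynamics
        out.append(y)
    return out

def distance(a, b):
    """Euclidean distance between two output signals of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_diverse(candidates, k):
    """Greedily pick k test inputs whose output signals are mutually distant."""
    outputs = {c: simulate(c) for c in candidates}
    chosen = [candidates[0]]
    while len(chosen) < k:
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: min(distance(outputs[c], outputs[s])
                                     for s in chosen))
        chosen.append(best)
    return chosen

suite = select_diverse([0.0, 0.5, 1.0, 1.5, 2.0], k=3)
print(suite)  # the extremes and the midpoint give the most diverse responses
```

For this linear toy model the greedy heuristic picks the two extreme step inputs and then the midpoint, i.e. the suite with maximally spread output shapes; the real algorithm searches over full input signal spaces rather than scalar step amplitudes.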

Domain Completeness of Model Transformations and Synchronisations
Nachtigall, Nico UL

Doctoral thesis (2016)

The intrinsic question of most activities in information science, in practice or science, is “Does a given system satisfy the requirements regarding its application?” Commonly, requirements are expressed and made accessible by means of models, mostly in a diagrammatic representation as visual models. The requirements may change over time and are often defined from different perspectives and within different domains. This implies that models may be transformed either within the same domain-specific visual modelling language or into models in another language. Furthermore, model updates may be synchronised between different models. Most types of visual models can be represented by graphs, where model transformations and synchronisations are performed by graph transformations. The theory of graph transformations emerged from its origins in the late 1960s and early 1970s, as a generalisation of term and tree rewriting systems, into an important field in (theoretical) computer science with applications particularly in visual modelling techniques, model transformations, synchronisations and behavioural specifications of models. Its formal foundations, combined with its visual notation, enable both precise definitions and proofs of important properties of model transformations and synchronisations from a theoretical point of view, and an intuitive approach for specifying transformations and model updates from an engineer’s point of view. The recent results were presented in the EATCS monographs “Fundamentals of Algebraic Graph Transformation” (FAGT) in 2006 and its sequel “Graph and Model Transformation: General Framework and Applications” (GraMoT) in 2015. This thesis concentrates on one important property of model transformations and synchronisations, namely syntactical completeness.

Syntactical completeness of model transformations means that, given a specification for transforming models from a source modelling language into models in a target language, all source models can be completely transformed into corresponding target models. In the same context, syntactical completeness of model synchronisations means that all source model updates can be completely synchronised, resulting in corresponding target model updates. This work is essentially based on the GraMoT book and mainly extends its results for model transformations and synchronisations based on triple graph grammars by a new, more general notion of syntactical completeness, namely domain completeness, together with corresponding verification techniques. Furthermore, the results are instantiated to the verification of the syntactical completeness of software transformations and synchronisations. The well-known transformation of UML class diagrams into relational database models and the transformation of programs of a small object-oriented programming language into class diagrams serve as running examples. The existing AGG tool is used to support the verification of the given examples in practice.

Development of biospecimen quality control tools and disease diagnostic markers by metabolic profiling
Trezzi, Jean-Pierre UL

Doctoral thesis (2016)

In metabolomics-based biomarker studies, the monitoring of pre-analytical variations is crucial and requires quality control tools to enable proper sample quality evaluation. In this dissertation work, biospecimen research and machine learning algorithms are applied (1) to develop sample quality assessment tools and (2) to develop disease-specific diagnostic models. In this regard, a novel plasma sample quality assessment tool, the LacaScore, is presented. The LacaScore plasma quality assessment is based on the plasma levels of ascorbic acid and lactic acid. The biggest challenge in metabolomics analyses is that the sample quality is often not known. The presented tool highlights the importance of monitoring pre-analytical variations, such as pre-centrifugation time and temperature, prior to sample analysis in the emerging field of metabolomics. Based on the LacaScore, decisions on the suitability/fitness-for-purpose of a given sample or sample cohort can be made. In this dissertation work, this knowledge on sample quality was applied in a biomarker discovery study based on cerebrospinal fluid (CSF) from early-stage Parkinson’s disease (PD) patients. To date, no markers for the diagnosis of Parkinson’s disease are available. In this work, a non-targeted GC-MS approach is presented which shows significant changes in the metabolic profile of CSF from early-stage PD patients compared to matched healthy control subjects. Based on these findings, a biomarker signature for the prediction of early-stage PD has been developed by the application of sophisticated machine learning algorithms. This disease-specific signature is composed of metabolites involved in inflammation, glycosylation/glycation and the oxidative stress response. In summary, this dissertation illustrates the importance of sample quality monitoring in biomarker studies, which are often limited by small amounts of human body fluids. The monitoring of sample quality enhances the robustness and reproducibility of biomarker discovery studies. In addition, proper data analysis and powerful machine learning algorithms enable the generation of potential disease-diagnosis biomarker signatures.

In vitro Metabolic Studies of Dopamine Synthesis and the Toxicity of L-DOPA in Human Cells
Delcambre, Sylvie UL

Doctoral thesis (2016)

This work is divided into two parts. In the first, I investigated the effects of 3,4-dihydroxy-L-phenylalanine (L-DOPA) on the metabolism of human tyrosine hydroxylase (TH)-positive neuronal LUHMES cells. L-DOPA is the gold-standard treatment for Parkinson’s disease (PD) and its effects on cellular metabolism are controversial. It induced a re-routing of intracellular carbon supplies. While the glutamine contribution to tricarboxylic acid (TCA) cycle intermediates increased, the glucose contribution to the same metabolites decreased. The carbon contribution from glucose was decreased in lactate and was compensated by an increased pyruvate contribution. Pyruvate reacted with hydrogen peroxide generated during the auto-oxidation of L-DOPA and led to an increase of acetate in the medium. In the presence of L-DOPA, this acetate was taken up by the cells. In combination with an increased glutamate secretion, all these results seem to point towards a mitochondrial complex II inhibition. In the second part of this work, I studied and compared dopamine (DA)-producing in vitro systems. First, I compared the gene and protein expression of catecholamine (CA)-related genes. Then, I performed molecular engineering to increase TH expression in LUHMES and SH-SY5Y cells. This was sufficient to induce DA production in SH-SY5Y, but not in LUHMES cells, indicating that TH expression is not sufficient to characterize dopaminergic neurons. Therefore, I used SH-SY5Y cells overexpressing TH to study substrates for DA production. Upon overexpression of aromatic amino acid decarboxylase (AADC), LUHMES cells produced DA after L-DOPA supplementation. This model was useful to study L-DOPA uptake in LUHMES cells, and I showed that L-DOPA is imported via the large amino acid transporter (LAT). In conclusion, the expression of TH is not sufficient to obtain a DA-producing cell system; this work answered some questions about DA metabolism and opened many more.

IP Box Regime im Europäischen Steuerrecht
Schwarz, Paloma Natascha UL

Doctoral thesis (2016)

Doping, Defects And Solar Cell Performance Of Cu-rich Grown CuInSe2
Bertram, Tobias UL

Doctoral thesis (2016)

Cu-rich grown CuInSe2 thin-film solar cells can be as efficient as Cu-poor ones. However, record lab cells and commercial modules are grown exclusively under Cu-poor conditions. While the Cu-rich material’s bulk properties show advantages, e.g. higher minority carrier mobilities and quasi-Fermi level splitting, both indicating a superior performance, it also features some inherent problems that led to its widespread dismissal for solar cell use. Two major challenges can be identified that negatively impact the Cu-rich material’s performance: a too high doping density and recombination close to the interface. In this work, electrical characterisation techniques were employed to investigate the mechanisms that cause the low performance. Capacitance measurements are especially well suited to probe the electrically active defects within the space-charge region. Under a variation of applied DC bias they give insight into the shallow doping density, while frequency- and temperature-dependent measurements are powerful in revealing deep levels within the bandgap. CuInSe2 samples were produced via a thermal co-evaporation process and subsequently characterized utilizing the aforementioned techniques. The results have been grouped into two partial studies. First, the influence of the Se overpressure during growth on the shallow doping and deep defects is investigated, along with how this impacts solar cell performance. The second study revolves around samples that feature a surface treatment to produce a bilayer structure: a Cu-rich bulk and a Cu-poor interface. It is shown that via a reduction of the Se flux during absorber preparation the doping density can be reduced, and while this certainly benefits solar cell efficiency, a high deficit in open-circuit voltage still results in lower performance compared to the Cu-poor devices. Supplementary measurements trace this back to recombination close to the interface. Furthermore, a defect signature is identified that is not present in Cu-poor material. These two results are tied together via the investigation of the surface-treated samples, which do not show interface recombination and reach the same high voltage as the Cu-poor samples. The defect signature, normally native to the Cu-rich material, however, is not found in the surface-treated samples. It is concluded that this deep trap acts as a recombination centre close to the interface. Shifting it towards the bulk via the treatment is then related to the observed increase in voltage. Within this thesis a conclusive picture is derived that unites all measurement results and shows the mechanisms that work together and made it possible to produce a highly efficient Cu-rich thin-film solar cell.
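The shallow doping densities mentioned above are typically extracted from bias-dependent capacitance data via standard Mott-Schottky analysis (a textbook relation for a one-sided junction, not a formula quoted from the thesis):

```latex
\frac{1}{C^2} \;=\; \frac{2\,\bigl(V_{bi} - V\bigr)}{q\,\varepsilon_0 \varepsilon_r\, A^2\, N},
\qquad
N \;=\; -\,\frac{2}{q\,\varepsilon_0 \varepsilon_r\, A^2}\,
\left( \frac{\mathrm{d}\,(1/C^2)}{\mathrm{d}V} \right)^{-1},
```

where A is the device area, V_bi the built-in voltage and N the net shallow doping density; the slope of 1/C² versus the applied DC bias V thus yields the doping density probed within the space-charge region.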

Accession Treaties in the EU legal order
Prek, Miro UL

Doctoral thesis (2016)

In the present thesis, it is argued that (1) the Accession Treaties have been used in accordance with their nature and proclaimed objective: they have only brought about limited changes to primary law, appropriate to the needs of accession, and have not introduced any fundamental changes. The numerous and still growing arrangements that depart from the principle of the application of the acquis in toto on accession do not alter this conclusion. (2) The evolution, especially from the 2004 Accession Treaty onwards and predictable for future Accession Treaties (e.g. with Turkey), shows a tendency towards diversification of that legal instrument by (a) adding new and/or reinforced elements of conditionality, protracted from the pre-accession phase into the membership phase, and devising new mechanisms of conditionality and control (general and specific safeguard clauses, monitoring and verification mechanisms, a membership postponement clause), and thus (b) contributing to further differentiation in two respects: among the Member States, and with regard to the core acquis. Such differentiation already exists on the basis of the constitutive treaties (“in-built constitutive-treaty-induced differentiation”) and is accentuated by the Accession Treaties and their transitional arrangements (“Accession Treaty-induced differentiation”). Questions of differentiation acquired another dimension with the introduction of the citizenship of the EU.
(3) Finally, negotiations with certain candidate countries will show whether additional innovations are to be expected: (a) whether future instruments of accession will be used to increase the existing level of differentiation (and protract the pre-accession-phase logic well into the membership phase), with conditionality becoming the most important element of relations within an enlarged EU and thus paradoxically negating the nature of the integration itself; (b) whether they will perhaps be used to bring about more important modifications to the treaties; or (c) whether they will go as far as to provide a legal basis for permanent derogations with regard to certain new Member States (as explicitly envisaged, for instance, in the negotiating framework for Turkey).

Towards harmonization of proteomics methods. Implication for high resolution/accurate mass targeted methods.
Bourmaud, Adèle Gaëlle Annabelle UL

Doctoral thesis (2016)

Mass spectrometry plays a central role in proteomics, which has allowed the field to expand into biomedical research. In an effort to accelerate the understanding of the various aspects of protein biology, the comparison and integration of results across laboratories have gained importance. However, the variety of laboratory-specific protocols, instruments, and data-processing methods limits the reliability and reproducibility of proteomics datasets. The harmonization of LC-MS-based proteomics experiments is thus urgently needed to ensure that the workflows used are suitable for the intended purpose of the experiments and that they generate consistent and reproducible results. As a first step towards this harmonization, the critical components of each step of the workflow must be identified. Consolidated sample preparation methods with defined recovery and qualified platforms, along with systematic assessment of their performance, have to be established. They should ultimately rely on well-defined recommendations and reference materials. Towards these goals, the present project aimed to define, based on current proteomics practices and recent technologies, experimental protocols that will constitute reference methods for the community. The associated results will represent a baseline that can be used to benchmark workflows and platforms, and to conduct routine experiments. A quality control procedure was developed to routinely assess the uniformity of proteomics analyses. The combination of a simple protocol and the addition of reference materials at different stages of the workflow allowed straightforward monitoring of both sample preparation and LC-MS performance.
In addition, as high-resolution/accurate-mass instruments with fast scanning capabilities turned out to be particularly suited to targeted quantitative experiments, a significant part of the work consisted in evaluating the capabilities offered by such mass spectrometers, in order to promote good practice upon their introduction. The methods developed based on these emerging technologies were compared to existing workflows, allowing recommendations to be made for their implementation in fit-for-purpose experiments.

Soil Fatigue Due To Cyclically Loaded Foundations
Pytlik, Robert Stanislaw UL

Doctoral thesis (2016)

Cyclic loading on civil structures can lead to a reduction of the strength of the materials used. A literature study showed that, in contrast to steel structures and material engineering, there are no design codes or standards for the fatigue of foundations and the surrounding ground masses in terms of shear strength reduction. Scientific efforts to study the fatigue behaviour of geomaterials mainly focus on strain accumulation, while the reduction of the shear strength of geomaterials has not been fully investigated. It should be mentioned that a number of laboratory investigations have been carried out, and some models have already been proposed for strain accumulation and pore pressure increase, which can lead to liquefaction. Laboratory triaxial tests were performed in order to evaluate the fatigue of soils and rocks by comparing the shear strength parameters obtained in cyclic triaxial tests with the static ones. Correlations of fatigue with both the number of cycles and the cyclic stress ratio are given. In order to apply cyclic movements in a triaxial apparatus, a dedicated machine setup and configuration was devised. A special program was written in LabVIEW to control the applied stresses and the speed of loading, which allowed the natural loading frequencies to be simulated. Matlab scripts were also written to reduce the time required for data processing. Both cohesive and cohesionless geomaterials were tested: artificial gypsum and mortar as cohesive geomaterials, and sedimentary limestone and different sands as cohesionless and low-cohesive natural materials. The artificial gypsum, mortar, and natural limestone exhibit mostly brittle behaviour, whereas the crumbled limestone and the sands show typical ductile behaviour. All the sands, as well as the crumbled limestone, were slightly densified before testing; they can therefore be treated as dense sands.
The UCS of the crumbled limestone is 0.17 MPa with a standard error of estimate σest = 0.021 MPa, whereas for mortar UCS = 9.11 MPa with σest = 0.18 MPa, and for gypsum UCS = 6.02 MPa with a standard deviation of 0.53 MPa. All triaxial tests were conducted on dry samples in their natural state, without the presence of water (no pore pressure). The confining pressure ranged between 0 MPa and 0.5 MPa. The cyclic tests carried out were typical multiple-loading tests with a constant displacement ratio up to a certain stress level. The frequency was kept low to allow precise application of the cyclic load and accurate readings; moreover, this frequency corresponds to the natural loading of waves and winds. The number of applied cycles ranged from a few cycles up to a few hundred thousand (the maximum was 370,000). Due to the complex behaviour of the materials and the high scatter of the results, many tests were required. Two different strategies were used to investigate the fatigue of geomaterials: 1) the remaining shear strength curve: after a given number of cycles, a final single-load test was carried out until failure in order to measure the remaining shear strength of the sample; 2) the typical S-N curve (Wöhler curve): constant cyclic loading is simply applied until failure. The remaining shear strength (or strength reduction) curve was compared with the standard S-N curve and found to be very similar, because the cyclic stress ratio has little influence. Cyclic loading on geomaterials, which are assemblages of grains of different sizes and shapes with voids, showed different types of effects. Cohesionless materials show a shear strength increase during cyclic loading, while cohesive ones show a shear strength decrease. For the cohesive materials, the assumption was made that the friction angle remains constant; the fatigue of geomaterials can thus be seen as a reduction of the cohesion.
In this way, the fatigue of a cohesive geomaterial can be described by a remaining cohesion. The imperfections in the artificial gypsum have a significant impact on the results of the (especially cyclic) strength tests. Therefore, another man-made material was used: a mixture of sand and cement (mortar). As the first static test results were very promising, mortar was used in further tests. The cyclic tests, however, showed a similarly high scatter of results as the artificial gypsum. An unexpected observation for both materials was the lack of dependency of the remaining shear strength on the cyclic stress ratio. The stress-strain relationship in cyclic loading shows that the fatigue life of geomaterials can be divided into three stages, just as for creep. The last phase, with a fast increase in plastic strains, could be an indicator of impending failure. The accumulation of strains and the increase of internal energy could be good indicators too, but no strong correlation has been found. Similar to the shear strength, the stiffness changes during cyclic loading: for cohesive materials the stiffness increases, while for cohesionless ones it decreases. This could help to predict the remaining shear strength of a geomaterial using a non-destructive method.
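The S-N (Wöhler) strategy mentioned in this abstract is typically summarised by fitting a power-law curve to stress level versus cycles-to-failure data. The sketch below shows such a Basquin-type fit on made-up data points; the numbers are purely illustrative and are not measurements from the thesis.

```python
import numpy as np

def fit_basquin(cycles, stress):
    """Fit a Basquin-type S-N (Woehler) curve  S = A * N**b
    by linear least squares in log-log space."""
    log_n, log_s = np.log10(cycles), np.log10(stress)
    b, log_a = np.polyfit(log_n, log_s, 1)   # slope b, intercept log10(A)
    return 10.0**log_a, b

def remaining_strength(a, b, n):
    """Predicted strength after n cycles on the fitted curve."""
    return a * n**b

# Hypothetical cyclic test data: stress level (MPa) vs. cycles to failure.
cycles = np.array([1e2, 1e3, 1e4, 1e5, 3.7e5])
stress = np.array([8.5, 7.2, 6.1, 5.2, 4.8])
a, b = fit_basquin(cycles, stress)
print(f"S(N) = {a:.2f} * N^{b:.3f}")
```

The fitted exponent b is negative, reflecting the strength reduction with the number of cycles that the abstract describes for cohesive geomaterials.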

Socio-Technical Aspects of Security Analysis
Huynen, Jean-Louis UL

Doctoral thesis (2016)

This thesis seeks to establish a semi-automatic methodology for security analysis when users are considered part of the system. The thesis explores this challenge, which we refer to as ‘socio-technical security analysis’. We consider a socio-technical vulnerability to be the conjunction of a human behaviour, the factors that foster the occurrence of this behaviour, and a system. Therefore, the aim of the thesis is to investigate which human-related factors should be considered in system security, and how to incorporate these identified factors into an analysis framework. Finding a way to systematically detect, in a system, the socio-technical vulnerabilities that can stem from insecure human behaviours, along with the factors that influence users into engaging in these behaviours, is a long journey that we can summarise in three research questions: 1. How can we detect a socio-technical vulnerability in a system? 2. How can we identify, in the interactions between a system and its users, the human behaviours that can harm this system’s security? 3. How can we identify the factors that foster human behaviours that are harmful to a system’s security? A review of works that aim at bringing social science findings into security analysis reveals that there is no unified way to do so. Identifying the points where users can harm a system’s security, and clarifying which factors can foster insecure behaviour, is a complex matter. Hypotheses can arise about the usability of the system, or about aspects pertaining to the user or the organisational context, but there is no way to find and test them all. Further, there is currently no way to systematically integrate the results of tested hypotheses into a security analysis. Thus, we identify two objectives related to these methodological challenges that this thesis aims at fulfilling in its contributions: 1.
What form should a framework take that intends to identify behaviours harmful to security and to investigate the factors that foster their occurrence? 2. What form should a semi-automatic, or tool-assisted, methodology for the security analysis of socio-technical systems take? The thesis provides partial answers to these questions. First, it defines a methodological framework called STEAL that provides a common ground for an interdisciplinary approach to security analysis. STEAL supports the interaction between computer scientists and social scientists by providing a common reference model to describe a system with its human and non-human components, potential attacks and defences, and the surrounding context. We validate STEAL in two experimental studies, showing the role of context and graphical cues in Wi-Fi network security. The thesis then complements STEAL with a Root Cause Analysis (RCA) methodology for security, inspired by those used in safety. This methodology, called S·CREAM, aims at being more systematic than the research methods that can be used with STEAL (surveys, for instance) and at providing reusable findings for analysing security. To do so, S·CREAM provides a retrospective analysis to identify the factors that can explain the success of past attacks, and a methodology to compile these factors in a form that allows their potential effects on a system’s security to be considered, given an attacker Threat Model. The thesis also illustrates how we developed a tool, the S·CREAM assistant, that supports the methodology with an extensible knowledge base and computer-supported reasoning.

Logic and Games of Norms: a Computational Perspective
Sun, Xin UL

Doctoral thesis (2016)

Influence of interface conditioning and dopants on Cd-free buffers for Cu(In,Ga)(S,Se)2 solar cells
Hönes Geb. Wiese, Christian UL

Doctoral thesis (2016)

In the search for a non-toxic replacement of the commonly employed CdS buffer layer in Cu(In,Ga)(S,Se)2 (CIGSSe)-based solar cells, indium sulfide thin films deposited via thermal evaporation and chemical-bath-deposited (CBD) Zn(O,S) thin films are promising materials. However, while both materials have already been successfully utilized in highly efficient cells, solar cells with either material usually need an ill-defined post-treatment step in order to reach maximum efficiencies, putting them at a disadvantage for mass production. In this thesis, the influence of interface conditioning and dopants on the need for post-treatments is investigated for both materials, giving new insights into the underlying mechanisms and paving the way for solar cells with higher initial efficiencies. First, CIGSSe solar cells with In2S3 thin-film buffer layers deposited by thermal evaporation are presented in chapter 3. The distinctive improvement of these buffer layers upon annealing of the completed solar cell, and the change of this annealing behavior when the CIGSSe surface is treated by wet-chemical means prior to buffer layer deposition, are investigated. Additional model simulations lead to a two-part explanation for the observed effects, involving a reduction of interface recombination and the removal of a highly p-doped CIGSSe surface layer. Chapter 4 introduces a novel, fast process for the deposition of Zn(O,S) buffer layers on submodule-sized substrates. The resulting solar cell characteristics and the effects of annealing and prolonged illumination are discussed within the framework of theoretical considerations involving an electronic barrier for generated charge carriers. The most important influences on such an electronic barrier are investigated by model simulations and an experimental approach to each parameter.
This leads to an improved window layer deposition process, absorber optimization, and intentional buffer layer doping, all of which reduce the electronic barrier and therefore the necessity for post-treatments to some extent. The energetic barrier discussed above may be avoided altogether by effective interface engineering. Therefore, the controlled incorporation of indium as an additional cation into CBD-Zn(O,S) buffer layers, by means of a newly developed alkaline chemical bath deposition process, is presented in chapter 5. With an increasing amount of incorporated indium, the energetic barrier in the conduction band can be reduced. This is quantitatively assessed by a combination of photoelectron spectroscopy measurements and the determination of the buffer layer's optical band gap. This barrier lowering leads to less distorted current-voltage characteristics and efficiencies above 14 %, comparable to CdS reference cells, without extensive light soaking.

Repräsentationen der Kaiserin Elisabeth von Österreich nach dem Ende des Habsburgerreiches. Eine struktur-funktionale Untersuchung mythisierender Filmdarstellungen
Karczmarzyk, Nicole UL

Doctoral thesis (2016)

By tracing the evolution of the myth surrounding Empress Elisabeth of Austria since her death, one would very likely find an almost continuous sinusoidal curve. The aim of the thesis is to explore which mediating sociopolitical functions the myth fulfils and where its value lies within the national and historical myth systems (those of Austria and also of Prussia/Germany) by analysing audio-visual media, especially films. Since the representations of the Empress during the 20th century have mainly manifested within popular culture, printed media such as newspapers, magazines and biographies are also considered in the analysis, as well as theatrical works such as operas and musical comedies. The thesis stresses that the myth of Empress Elisabeth of Austria can be seen as a set of essential kernels, called ‘mythemes’ in structuralist theory, that can constantly be reassembled and retold. The mythemes and the actualizations of the myth itself are elaborated, as well as its different cultural functionalisations at different times, through diachronic and synchronic readings. The thesis attempts to fill the gap of the frequently noted absence of female figures in the field of myth research and points out the functions of a contemporary myth and its different appearances. Beyond that, those media procedures and strategies are addressed that allow a long-established myth to serve as a collective figure which can continuously be applied to new thematic contexts. The main proposition of the thesis is that the representations of Empress Elisabeth of Austria during the 20th century have been adjusted to the sociocultural contexts of the particular times in which they appeared. The character of the Empress has therefore been utilized as a carrier for different ideologies, e.g.
the idea of the multiethnic state of the Habsburg Empire, as well as the National Socialist idea of an ethnic community, the ‘Volksgemeinschaft’. The representations also adjust to changing female stereotypes and role models, such as the ideal wife or, in the late 20th century, the progressive feminist.

Automated Security Testing of Web-Based Systems Against SQL Injection Attacks
Appelt, Dennis UL

Doctoral thesis (2016)

Injection vulnerabilities, such as SQL injection (SQLi), are ranked amongst the most dangerous types of vulnerabilities. Despite having received much attention from academia and practitioners, SQLi vulnerabilities remain prevalent and the impact of their successful exploitation is severe. In this dissertation, we propose several security testing approaches that evaluate web applications and services for vulnerabilities, and assess common IT infrastructure components for their resilience against attacks. Each of the presented approaches covers a different aspect of security testing, e.g. the generation of test cases or the definition of test oracles, and in combination they provide a holistic approach. The work presented in this dissertation was conducted in collaboration with SIX Payment Services (formerly CETREL S.A.). SIX Payment Services is a leading provider of financial services in the area of payment processing, e.g. issuing of credit and debit cards, settlement of card transactions, online payments, and point-of-sale payment terminals. We analyse the challenges SIX is facing in security testing and base our testing approaches on assumptions inferred from our findings. Specifically, the devised testing approaches are automated, applicable in black-box testing scenarios, able to assess and bypass Web Application Firewalls (WAFs), and use an accurate test oracle. The devised testing approaches are evaluated on SIX’s IT platform, which consists of various web services that process several thousand financial transactions daily. The main research contributions of this dissertation are: - An assessment of the impact of Web Application Firewalls and Database Intrusion Detection Systems on the accuracy of SQLi testing. - An input mutation technique that can generate a diverse set of test cases, with a set of mutation operators specifically designed to increase the likelihood of generating successful attacks.
- A testing technique that assesses the attack detection capabilities of a Web Application Firewall (WAF) by systematically generating attacks that try to bypass it. - An approach that increases the attack detection capabilities of a WAF by inferring a filter rule from a set of bypassing attacks; the inferred rule can be added to the WAF’s rule set to prevent such attacks from bypassing it. - An automated test oracle that is designed to meet the specific requirements of testing in an industrial context and that is independent of any specific test case generation technique.
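The input mutation idea described in this abstract can be illustrated with a few syntactic, behaviour-preserving transformations of a seed attack. The operators below are illustrative stand-ins of the general kind (case toggling, inline comments, percent-encoding), not the operator set defined in the dissertation.

```python
import random

def mutate_case(attack):
    """Randomly toggle letter case: OR -> oR (SQL keywords are case-insensitive)."""
    return "".join(c.upper() if random.random() < 0.5 else c.lower()
                   for c in attack)

def mutate_comment(attack):
    """Replace spaces with inline comments, a classic filter-evasion trick."""
    return attack.replace(" ", "/**/")

def mutate_url_encode(attack):
    """Percent-encode characters that naive filters match literally."""
    return "".join("%%%02X" % ord(c) if c in "'=" else c for c in attack)

OPERATORS = [mutate_case, mutate_comment, mutate_url_encode]

def generate_variants(seed_attack, n):
    """Apply random chains of operators to diversify a seed attack."""
    variants = set()
    while len(variants) < n:
        attack = seed_attack
        for op in random.sample(OPERATORS, k=random.randint(1, len(OPERATORS))):
            attack = op(attack)
        variants.add(attack)
    return variants

for v in sorted(generate_variants("' OR 1=1 -- ", 5)):
    print(v)
```

Each variant is semantically equivalent to the seed for the database, which is what makes such mutations useful for probing whether a WAF detects the attack in all its syntactic disguises.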

Aspekte der Mehrsprachigkeit in Luxemburg. Positionen, Funktionen und Bewertungen der deutschen Sprache. Eine diskursanalytische Untersuchung (1983-2015).
Scheer, Fabienne UL

Doctoral thesis (2016)

The thesis provides a broad insight into the current position of the German language in Luxembourg. It describes the linguistic knowledge and behaviour of the different speech groups acting in the domains "education", "mass media", "immigration and integration", "xenophobic discourse", "language policy", "language and literature", "PR" and "languages for publicity", and shows how the dominant Luxembourgish speech group is adapting linguistically to the evolution of society. The conclusions are based on a corpus of 835 press articles, on interviews with experts from the different fields of society, and on further material (statistics, parliamentary debates, administrative writings, examples of German exercises written by pupils, ...).

Topology and Parameter Estimation in Power Systems Through Inverter Based Broadband Stimulations
Neshvad, Surena UL

Doctoral thesis (2016)

During the last decade, a substantial growth in renewable, distributed energy production has been observed in industrial countries. This phenomenon, coupled with the adoption of open energy markets, has significantly complicated the power flows on the distribution network, requiring advanced and intelligent system monitoring in order to optimize the efficiency, quality and reliability of the system. This thesis proposes a solution to several power network challenges encountered with increasing Distributed Generation (DG) penetration. The three problems that are addressed are islanding detection, online transmission line parameter identification, and system topology identification. These tasks are performed by requesting the DGs to provide ancillary services to the network operator. A novel and intelligent method is proposed for reprogramming each DG's pulse-width modulator, requesting each DG to inject a uniquely coded Pseudo-Random Binary Sequence (PRBS) along with the fundamental. Islanding detection is obtained by measuring the equivalent Thevenin impedance at the inverter's Point of Common Coupling, while system characterization is obtained by measuring the induced current frequencies at various locations in the grid. To process and evaluate the measured signals, a novel Weighted Least-Squares aggregation method is developed, through which measurements are combined and correlated in order to obtain an accurate snapshot of the power network parameters.
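Pseudo-random binary sequences of the kind injected by each DG are conventionally generated with a linear-feedback shift register (LFSR); a maximal-length sequence is spectrally flat, which is what makes it usable as a broadband probe, and distinct seeds or polynomials keep the per-DG codes distinguishable. A minimal sketch follows; the PRBS-7 polynomial and seeding are illustrative choices, not those of the thesis.

```python
def prbs(taps, nbits, seed=1):
    """Generate one period of a pseudo-random binary sequence from an
    n-bit Fibonacci LFSR. `taps` are feedback positions, 1-indexed
    from the most significant bit."""
    state = seed
    out = []
    for _ in range(2**nbits - 1):          # one full period of a maximal LFSR
        out.append(state & 1)              # output the least significant bit
        fb = 0
        for t in taps:
            fb ^= (state >> (nbits - t)) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

# PRBS-7 (x^7 + x^6 + 1): period 127; a maximal sequence is balanced,
# containing 64 ones and 63 zeros per period.
seq = prbs(taps=(7, 6), nbits=7)
print(len(seq), sum(seq))
```

Injecting such a low-amplitude code on top of the fundamental lets the operator correlate measured currents against each DG's known sequence.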

Dynamics of viscoelastic colloidal suspensions
Dannert, Rick UL

Doctoral thesis (2016)

The influence of different types of nanoparticles on the dynamics of glass-forming matrices has been studied by small-amplitude oscillatory shear rheology. Experimental measurements reveal that, besides the glass transition process of the matrix, an additional relaxation process occurs in the presence of nanoparticles. The latter is identified as the macroscopic signature of the microscopic temporal fluctuations of the intrinsic stress and is called Brownian relaxation. Besides the fact that Brownian relaxation had so far not been observed in colloidal suspensions whose matrix exhibits viscoelastic behaviour in the frequency range of the experimental probe, the study reveals another important feature: the evolution of the Brownian relaxation times depends non-monotonically on the filler concentration. This finding challenges the use of the classical Péclet time as the characteristic timescale for Brownian relaxation. The literature defines the Péclet time as the specific time needed by a particle to cover, via self-diffusion, a distance comparable to its own size. As a main result, it is shown that after replacing the particle size relevant for the Péclet time with the mean interparticle distance, which depends on the filler content, the non-monotonic evolution of the relaxation times can be fully described. Moreover, the introduction of this new characteristic length scale allows data from the literature to be included in the phenomenological description.
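The two timescales contrasted in this abstract both follow from the Stokes-Einstein diffusion coefficient: the classical Péclet time uses the particle radius as the length scale, while the modified timescale uses a concentration-dependent mean interparticle distance. The sketch below uses a simple random-close-packing estimate for that distance and illustrative material parameters; both are assumptions for illustration, not the expressions or values from the thesis.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant (J/K)

def stokes_einstein_d(radius, eta, temp=293.15):
    """Dilute-limit self-diffusion coefficient of a sphere (Stokes-Einstein)."""
    return K_B * temp / (6 * math.pi * eta * radius)

def peclet_time(radius, eta, temp=293.15):
    """Classical Peclet time: time to self-diffuse one particle radius."""
    return radius**2 / stokes_einstein_d(radius, eta, temp)

def interparticle_time(radius, phi, eta, temp=293.15, phi_max=0.64):
    """Modified timescale: time to diffuse the mean surface-to-surface
    distance, estimated from a random-close-packing argument (assumed)."""
    d = 2 * radius * ((phi_max / phi) ** (1 / 3) - 1)
    return d**2 / stokes_einstein_d(radius, eta, temp)

# 25 nm particles in a matrix of viscosity 10 Pa.s (illustrative values)
for phi in (0.05, 0.15, 0.30):
    print(f"phi={phi:.2f}: tau_Pe={peclet_time(25e-9, 10):.3g} s, "
          f"tau_d={interparticle_time(25e-9, phi, 10):.3g} s")
```

Unlike the Péclet time, the distance-based timescale varies with filler content, which is the qualitative point behind replacing the particle size with the interparticle distance.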

Singularités orbifoldes de la variété des caractères
Guerin, Clément UL

Doctoral thesis (2016)

In this thesis, we want to understand some singularities in the character variety. In the first chapter, we justify that the space of characters of irreducible representations from a Fuchsian group to a complex semi-simple Lie group is an orbifold; the orbifold locus then consists of the characters of bad representations. In the second chapter, we focus on the case where the Lie group is PSL(p,C) with p a prime number; in particular, we give an explicit description of this locus. In the third and fourth chapters, we describe the isotropy groups (i.e. the centralizers of bad subgroups) arising when the Lie group is a quotient of SL(n,C) (third chapter) and when the Lie group is a quotient of Spin(n,C) (fourth chapter).

Topic Identification Considering Word Order by Using Markov Chains
Kampas, Dimitrios UL

Doctoral thesis (2016)

Automated topic identification of text has gained significant attention, since a vast amount of documents in digital form are widespread and continuously increasing. Probabilistic topic models are a family of statistical methods that unveil the latent structure of documents by defining, a priori, the model that generates the text. They infer the topic(s) of a document under the bag-of-words assumption, which is unrealistic given the sophisticated structure of language. The result of such a simplification is the extraction of topics that are vague in terms of their interpretability, since they disregard any relations among the words that might resolve word ambiguity. Topic models thus miss significant structural information inherent in the word order of a document. In this thesis we introduce a novel stochastic topic identifier for text data that addresses the above shortcomings. The primary motivation of this work is the assertion that word order reveals text semantics in a human-like way. Our approach recognizes an on-topic document while trained solely on an on-class corpus. It incorporates word order in terms of word groups, to deal with the data sparsity of conventional n-gram language models, which usually require a large volume of training data. Markov chains hereby provide a reliable means to capture short- and long-range language dependencies for topic identification. Words are deterministically associated with classes to improve the probability estimates of the infrequent ones. We demonstrate our approach and motivate its suitability on several datasets of different domains and languages. Moreover, we present pioneering work by introducing a hypothesis-testing experiment that strengthens the claim that word order is a significant factor for topic identification. Stochastic topic identifiers are a promising initiative for building more sophisticated topic identification systems in the future.
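The core idea, scoring a document by class-to-class transition probabilities estimated from an on-class corpus, can be sketched as follows. The word-to-class mapping, the Laplace smoothing scheme, and the toy corpus are illustrative assumptions, not the models developed in the thesis.

```python
from collections import defaultdict
import math

class MarkovTopicScorer:
    """Toy class-based first-order Markov model: words are mapped to
    classes, and class transitions are estimated from on-topic text."""

    def __init__(self, word2class, smoothing=1.0):
        self.w2c = word2class
        self.alpha = smoothing                              # Laplace smoothing
        self.counts = defaultdict(lambda: defaultdict(float))

    def _classes(self, tokens):
        return [self.w2c.get(t, "<unk>") for t in tokens]

    def train(self, corpus):
        for doc in corpus:
            cs = self._classes(doc)
            for a, b in zip(cs, cs[1:]):
                self.counts[a][b] += 1

    def log_likelihood(self, tokens):
        """Average log-probability per transition (order-sensitive)."""
        cs = self._classes(tokens)
        vocab = len(self.counts) + 1
        total = 0.0
        for a, b in zip(cs, cs[1:]):
            row = self.counts.get(a, {})
            total += math.log((row.get(b, 0.0) + self.alpha) /
                              (sum(row.values()) + self.alpha * vocab))
        return total / max(len(cs) - 1, 1)

w2c = {"solar": "PV", "cell": "PV", "doping": "PV", "defect": "PV",
       "the": "FN", "a": "FN"}
scorer = MarkovTopicScorer(w2c)
scorer.train([["the", "solar", "cell", "doping"], ["a", "defect", "the", "cell"]])
on = scorer.log_likelihood(["the", "solar", "cell"])
off = scorer.log_likelihood(["banana", "runs", "fast"])
print(on > off)   # word sequences matching the trained transitions score higher
```

A threshold on this score then decides whether a document counts as on-topic, which mirrors the one-class training setting described above.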

Energy minimising multi-crack growth in linear-elastic materials using the extended finite element method with application to Smart-Cut™ silicon wafer splitting
Sutula, Danas UL

Doctoral thesis (2016)

We investigate multiple crack evolution under quasi-static conditions in an isotropic linear-elastic solid based on the principle of minimum total energy, i.e. the sum of the potential and fracture energies, which stems directly from Griffith's theory of cracks. The technique, which has been implemented within the extended finite element method (XFEM), enables minimisation of the total energy of the mechanical system with respect to the crack extension directions. This is achieved by finding the orientations of the discrete crack-tip extensions that yield vanishing rotational energy release rates about their roots. In addition, the proposed energy minimisation technique can be used to resolve competing crack growth problems. Comparisons of the fracture paths obtained by the maximum tension (hoop-stress) criterion and the energy minimisation approach via a multitude of numerical case studies show that both criteria converge to virtually the same fracture solutions, albeit from opposite directions. In other words, the converged fracture path lies in between those obtained by each criterion on coarser numerical discretisations. Upon further investigation of the energy minimisation approach within the discrete framework, a modified crack growth direction criterion is proposed that takes the average of the directions obtained by the maximum hoop stress and the minimum energy criteria. The numerical results show significant improvements in accuracy (especially on coarse discretisations) and in the convergence rates of the fracture paths. The XFEM implementation is subsequently applied to model an industry-relevant problem of silicon wafer cutting based on the physical process of Smart-Cut™ technology, where wafer splitting is the result of the coalescence of multiple pressure-driven micro-cracks growing within a narrow layer of the prevailing micro-crack distribution. A parametric study is carried out to assess the influence of some of the Smart-Cut™ process parameters on the post-split fracture surface roughness. The parameters investigated include: the mean depth of the micro-crack distribution, the distribution of micro-cracks about the mean depth, (isotropic) damage in the region of the micro-crack distribution, and the depth of the buried-oxide layer (a layer of reduced stiffness) beneath the micro-crack distribution. Numerical results agree acceptably well with experimental observations.
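The minimisation principle described above can be written in generic notation (not necessarily the author's) as follows. For discrete crack-tip extensions of fixed lengths Δa_i at angles θ_i, the total energy is the sum of the potential energy and the Griffith fracture energy, and the extension directions are those at which the rotational derivatives vanish:

```latex
\Pi(\theta_1,\dots,\theta_n)
  \;=\; \Pi_{\mathrm{pot}}(\theta_1,\dots,\theta_n)
  \;+\; \sum_{i=1}^{n} G_c\,\Delta a_i ,
\qquad
\frac{\partial \Pi}{\partial \theta_i} \;=\; 0
\quad \text{for } i = 1,\dots,n,
```

where G_c is the fracture energy per unit area. Since the fracture term does not depend on the angles, ∂Π/∂θ_i reduces to the rotational derivative of the potential energy about the root of the i-th extension, matching the vanishing rotational energy release rates mentioned in the abstract.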

Institutionalisierung der Naturwissenschaften in Preußen als Investition in die Zukunft. Die Friedrich-Wilhelms-Universität in Bonn und die Leopoldina (1818-1830)
Röther, Bastian

Doctoral thesis (2016)


Analysing the genesis of institutions of scientific research and information in Prussia using the university location of Bonn as an example 1818 – 1830 // The 19th century represented a profound turning point in the development of the sciences that was characterized by an enormous surge in development and the emergence of modern scientific disciplines. In the natural sciences, this development was evident by their gradual emancipation from the medical faculty and an increasing differentiation and scientification of the curriculum. During this period of reform in the early 19th century, Prussia faced the task of reforming its traditional education system and transferring this to its provinces in the Rhineland and in Westphalia. These efforts to create an educational state centred around the Central University in Berlin, founded in 1810, the University of Breslau and, above all, Friedrich Wilhelm University, which was founded on the Rhine in 1818 and was one of Prussia’s most important provincial universities. The Imperial Leopoldina Carolina German Academy of Natural Scientists was simultaneously moved, creating conditions that were on par with Prussia’s capital. By 1818, Bonn had two important scientific institutions at its disposal, one of which had an explicit medical-scientific connotation. The institutional collaboration that arose between the academy and the university therefore promised to give the natural sciences an opportunity to receive special support as explicitly stipulated in the development concept drawn up for the university by the head of the department, Altenstein. The fact that the scientific academies fell within the jurisdiction of the Ministry of Culture appeared particularly auspicious. Altenstein’s concept of emancipation and promotion of the natural sciences included a high degree of integration of the applied sciences. 
One question that had received little attention was how this concentration of institutions impacted the development of the service and research institutions of the natural sciences and to what extent Berlin was able to model this tighter relationship between the institutions. Older papers have placed the development of the individual scientific subjects during this phase of the university’s foundation in a much more negative light, referring time and again to the way in which Bonn’s representatives of natural philosophy hampered this development. In contrast, more recent research findings recognize the tremendously important role the natural sciences played in the ministerial concept of 1818 and the resulting extensive and excellent conditions for these subjects which were modelled after Berlin. As a result, they represent a much more differentiated picture of the foundation years. A source-based and cross-subject study that also scrutinized the role of the Academy of Sciences Leopoldina remained a desideratum. Moving the academy has generally been regarded as extremely necessary for the further development of Bonn as a location of science; its collections and facilities signifying an important foundation for good institutional development. The basis of this paper was the official correspondence between Bonn’s scientists and Altenstein, the Prussian department head, as well as Prussia’s State Chancellor Hardenberg. It focuses on the key aspects of the analysis of appointment policy, organisation of the institutes, reform efforts and the relationship to extramural institutions. Of great importance is the exchange of letters, edited as part of a Leopoldina project, between Altenstein and the director of the botanical garden in Bonn, Christian Gottfried Nees von Esenbeck who, as president of the Leopoldina, was critically important. The analysis includes the facilities of these natural sciences at the university, e.g. 
the chemical laboratory and the observatory, as well as the collections and institutes for natural history subjects, like botany, zoology and mineralogy, and the Natural History Museum and the Botanical Gardens. Furthermore, the paper looks at the university's relationship to societies organised outside the university, in particular the Leopoldina, as well as the Niederrheinische Gesellschaft für Natur- und Heilkunde (Lower Rhine Society for Natural History and Medical Studies), which was founded in 1818, and the Verein zur Beförderung der Naturstudien (the Society for Promoting the Study of Nature). Investigations have revealed that the culture minister's ideals to broadly support the sciences and their practical application during the university's phase of establishment extended far beyond the realm of financial possibilities. Nevertheless, it was possible to establish some excellently equipped institutes in Bonn. The already widely developed plans to incorporate practice-oriented education, however, could not be achieved in these initial years. These locational conditions, established on the basis of political and financial necessity, are reflected in the statistics on the frequency and attendance of lectures, analysed for the first time for the natural sciences in Bonn. Despite the formal separation of the natural sciences from the medical faculty, the physicians were integral to the success of the introductory lectures. On the other hand, from a statistical perspective, these special events are characterised by a high failure rate across all subjects due to a lack of participants. This particularly affected the lectures on natural philosophy, which were accepted to a lesser degree by the students during the period under investigation. The general reproach, based on Justus Liebig's philippic on “The State of Chemistry in Prussia” from 1840, that natural philosophy had hampered development, cannot be concretely substantiated for Bonn.
Unlike in Berlin, which had better locational conditions, during these foundation years Bonn lacked grammar school leavers and university freshmen who were prepared for studying the sciences and who could adequately take advantage of the good basic conditions established in Bonn. The complaints from the instructors in Bonn about the students' low level of education quickly led to various reform projects that targeted grammar school education and which were almost entirely unknown to research. The Seminar für die gesammten Naturwissenschaften (Seminar of General Sciences), established in 1825, should be mentioned first and foremost. It was the first of its kind in Germany to teach natural sciences as part of a cross-discipline education. Surprisingly, moving the German Academy of Sciences Leopoldina to Bonn played an insignificant role in the development of the scientific location. Conditions could hardly be compared to those in the capital. Ultimately the academy's institutions proved to be insufficient in supporting the establishment of modern service institutes for the natural sciences. The locational advantage was not exploited in the scientific practice of the natural science and medical disciplines. Thanks to Prussian subsidies, the Leopoldina was able to weather an existential crisis at the end of the 19th century in Bonn. Profound structural reforms were postponed by the academy's leadership, indicating the society's need to consolidate. With its only main task being the publication of the journal Nova acta, the academy was punching far below its weight. Using Prussia's provincial university in Bonn as an example, this paper reveals the extensive efforts made by the education reformers to provide an excellent institutional basis for the young scientific disciplines.
The opportunities created in the years when the university was being established could only be utilised by a handful of students due to a lack of scientific schooling, particularly since a link to practical and scientific educational concepts could not be financed. The service institutions therefore remained a promise for a future based on scientific research and teaching that was not to begin in Prussia's Rhineland until the second half of the 19th century.

Transmission Optimization for High Throughput Satellite Systems
Gharanjik, Ahmad

Doctoral thesis (2016)


Demand for broadband data services is increasing dramatically each year. Following terrestrial trends, satellite communication systems have moved from traditional TV broadcasting to providing interactive broadband services, even to urban users. While cellular and land-line networks are mainly designed to deliver broadband services to metropolitan and large urban centres, satellite-based solutions have the advantage of covering these demands over a wide geography, including rural and remote users. However, to stay competitive with economical terrestrial solutions, it is necessary to reduce the cost per transmitted bit by increasing the capacity of the satellite systems. The objective of this thesis is to design and develop techniques capable of enhancing the capacity of next-generation high throughput satellite systems. Specifically, the thesis focuses on three main topics: 1) Q/V band feeder link design, 2) robust precoding design for multibeam satellite systems, and 3) developing techniques for tackling the related optimization problems. The design of high-bandwidth and reliable feeder links is central to provisioning new services on the user link of a multibeam SatCom system. Towards this, utilization of the Q/V band and the exploitation of multiple gateways as a transmit diversity measure for overcoming severe propagation effects are considered. In this context, the thesis deals with the design of a feeder link comprising N + P gateways (N active and P redundant gateways). Towards satisfying the desired availability, a novel switching scheme is analyzed, and practical aspects such as prediction-based switching and the switching rate are discussed. Building on this result, an analysis of the N + P scenario leading to a quantification of the end-to-end performance is provided. On the other hand, frequency reuse in multibeam satellite systems, along with precoding techniques, can increase the capacity of the user link. Similar to terrestrial communication channels, satellite-based communication channels are time-varying, and for typical precoding applications the transmitter needs to know the channel state information (CSI) of the downlink channel. Due to fluctuations of the phase components, the channel is time-varying, resulting in outdated CSI at the transmitter because of the long round-trip delay. This thesis studies a robust precoder design framework considering requirements on availability and average signal to interference and noise ratio (SINR). Probabilistic and expectation-based approaches are used to formulate the design criteria, which are solved using convex optimization tools. The performance of the resulting precoder is evaluated through extensive simulations. Although a satellite channel is considered, the presented analysis is valid for any vector channel with phase uncertainty. In general, the precoder design problem can be cast as a power minimization problem or a max-min fairness problem, depending on the objectives and requirements of the design. The power minimization problem can typically be formulated as a non-convex quadratically constrained quadratic programming (QCQP) problem, and the max-min fairness problem as a fractional quadratic program. These problems are known to be NP-hard in general. In this thesis, the original design problem is transformed into an unconstrained optimization problem using specialized penalty terms. Efficient iterative optimization frameworks are proposed, based on a separate optimization of the penalized objective function over partitions of its variables at each iteration. Various aspects of the proposed approach, including the performance of the algorithm and its implementation complexity, are studied. This thesis was completed under a joint supervision agreement between KTH Royal Institute of Technology, School of Electrical Engineering, Stockholm, Sweden, and the University of Luxembourg, Luxembourg.
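As a generic illustration of the power minimization problem mentioned above (standard multiuser downlink notation, not the thesis's exact phase-uncertain formulation), the precoding vectors w_k minimise total transmit power subject to per-user SINR targets γ_k:

```latex
\min_{\{\mathbf{w}_k\}} \;\sum_{k=1}^{K} \|\mathbf{w}_k\|^2
\quad \text{s.t.} \quad
\frac{|\mathbf{h}_k^{H}\mathbf{w}_k|^2}
     {\sum_{j\neq k} |\mathbf{h}_k^{H}\mathbf{w}_j|^2 + \sigma_k^2}
\;\geq\; \gamma_k ,
\qquad k = 1,\dots,K,
```

where h_k is user k's channel vector and σ_k² the noise power. With perfect CSI this problem admits convex reformulations; it is the phase uncertainty in h_k, handled through probabilistic or expectation-based constraints, that yields the non-convex QCQP structure the thesis addresses with penalty-based iterative methods.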

Functional characterization of novel RhoT1 variants, which are associated with Parkinson's disease.
Grossmann, Dajana

Doctoral thesis (2016)


Parkinson's disease (PD) is a common neurodegenerative disease affecting up to 2 % of the population older than 65 years. Most PD cases are sporadic with unknown cause, and about 10 % are familially inherited. PD is a progressive neurodegenerative disease characterized by the loss of predominantly dopaminergic neurons, leading to typical symptoms like rigidity and tremor. Commonly involved pathogenic pathways are linked to mitochondrial dysfunction, e.g. increased oxidative stress, disruption of calcium homeostasis, decreased energy supply and mitochondrially controlled apoptosis. The mitochondrial outer membrane protein Miro1 is important for mitochondrial distribution, quality control and maintenance. To date, Miro1 has not been established as a risk factor for PD. Using a comprehensive mutation screening of RhoT1 in German PD patients, we dissected the role of the first PD-associated mutations in RhoT1, the gene encoding Miro1. Three mutations in RhoT1 were identified in three PD patients with a positive family history of PD. For the analysis of mitochondrial phenotypes, patient-derived fibroblasts from two of the three patients were available. The neuroblastoma cell line M17, with stable knockdown of endogenous RhoT1 and transient overexpression of the RhoT1 mutant variants, served as an independent cell model. Investigation of yeast with knockout of endogenous Gem1 (the yeast orthologue of Miro1) and overexpression of mutant Gem1 revealed that growth on a non-fermentable carbon source was impaired. These findings suggest that Miro1-mutant1 is a loss-of-function mutation. Interestingly, the Miro1 protein amount was significantly reduced in Miro1-mutant1 and Miro1-mutant2 fibroblast lines compared to controls. Functional analysis revealed that mitochondrial mass was decreased in Miro1-mutant2, but not in Miro1-mutant1 fibroblasts, whereas mitochondrial biogenesis was increased in Miro1-mutant2 fibroblasts, as indicated by elevation of PGC1α. A similar phenotype with reduction of mitochondrial mass was also observed in M17 cells overexpressing Miro1-mutant1 or Miro1-mutant2. Additionally, spare respiratory capacity was reduced in Miro1-mutant1 fibroblasts compared to Ctrl 1 fibroblasts. In contrast, Miro1-mutant2 fibroblasts showed increased respiratory activity compared to Ctrl 1, although citrate synthase activity was significantly reduced. Both alterations of respiratory activity led to mitochondrial membrane hyperpolarization in Miro1-mutant1 and Miro1-mutant2 fibroblasts, a phenotype which was also found in M17 cells with knockdown of RhoT1. The two Miro1 mutant fibroblast lines displayed different problems with cytosolic calcium buffering: in Miro1-mutant1 fibroblasts, histamine treatment increased the cytosolic calcium concentration significantly compared to Ctrl 1 fibroblasts, indicating that calcium homeostasis was impaired, whereas in Miro1-mutant2 fibroblasts the buffering capacity for cytosolic calcium was impaired. The results indicate that mutations in Miro1 cause significant mitochondrial dysfunction, which likely contributes to neurodegeneration in PD, and underline the importance of Miro1 for mitochondrial maintenance.

Model-Based Test Automation Strategies for Data Processing Systems
Di Nardo, Daniel

Doctoral thesis (2016)


Data processing software is an essential component of systems that aggregate and analyse real-world data, thereby enabling automated interaction between such systems and the real world. In data processing systems, inputs are often big and complex files that have a well-defined structure and often have dependencies between several of their fields. Testing of data processing systems is complex. Software engineers in charge of testing these systems have to handcraft complex data files of nontrivial size, while ensuring compliance with the multiple constraints to prevent the generation of trivially invalid inputs. In addition, assessing test results often means analysing complex output and log data. Complex inputs pose a challenge for the adoption of automated test data generation techniques; the adopted techniques should be able to deal with the generation of a nontrivial number of data items having complex nested structures while preserving the constraints between data fields. An additional challenge regards the automated validation of execution results. To address the challenges of testing data processing systems, this dissertation presents a set of approaches based on data modelling and data mutation to automate testing. We propose a modelling methodology that captures the input and output data and the dependencies between them by using Unified Modeling Language (UML) class diagrams and constraints expressed in the Object Constraint Language (OCL). The UML class diagram captures the structure of the data, while the OCL constraints formally describe the interactions and associations between the data fields within the different subcomponents. The work of this dissertation was motivated by the testing needs of an industrial satellite Data Acquisition (DAQ) system; this system is the subject of the empirical studies used within this dissertation to demonstrate the application and suitability of the approaches that we propose. We present four model-driven approaches that address the challenges of automatically testing data processing systems. These approaches are supported by the data models generated according to our modelling methodology. The results of an empirical evaluation show that the application of the modelling methodology is scalable, as the size of the model and constraints was manageable for the subject system. The first approach is a technique for the automated validation of test inputs and oracles; an empirical evaluation shows that the approach is scalable, as the input and oracle validation process executed within reasonable times on real input files. The second approach is a model-based technique that automatically generates faulty test inputs for the purpose of robustness testing, by relying upon generic mutation operators that alter data collected in the field; an empirical evaluation shows that our automated approach achieves slightly better instruction coverage than the manual testing taking place in practice. The third approach is an evolutionary algorithm to automate the robustness testing of data processing systems through optimised test suites; the empirical results obtained by applying our search-based testing approach show that it outperforms approaches based on fault coverage and random generation: higher coverage is achieved with smaller test suites. Finally, the fourth approach is an automated, model-based approach that reuses field data to generate test inputs that fit new data requirements for the purpose of testing data processing systems; the empirical evaluation shows that the input generation algorithm, based on model slicing and constraint solving, scales in the presence of complex data structures.
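A generic data-mutation operator of the kind described (boundary-value corruption of a field in a captured binary record) can be sketched as follows; the function name, field layout, and choice of boundary values are illustrative assumptions, not the dissertation's actual operators.

```python
import random
import struct

def mutate_uint16_field(record: bytes, offset: int, rng: random.Random) -> bytes:
    """Generic mutation operator: overwrite a big-endian 16-bit field in a
    captured binary record with a boundary value, producing a faulty-but-
    structured test input for robustness testing."""
    boundary = rng.choice([0, 1, 0xFFFE, 0xFFFF])  # typical boundary values
    mutated = bytearray(record)
    struct.pack_into(">H", mutated, offset, boundary)
    return bytes(mutated)
```

Applying such operators to data collected in the field corrupts one field at a time while leaving the rest of the record (and hence its overall structure) intact, which is what distinguishes robustness-oriented mutation from generating trivially invalid inputs.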

Signal Processing for Physical Layer Security with Application in Satellite Communications
Kalantari, Ashkan

Doctoral thesis (2016)


Wireless broadcast allows widespread and easy information transfer. However, it may expose the information to unintended receivers, which could include eavesdroppers. As a solution, cryptography at the higher network layers has been used to encrypt and protect data. Cryptography relies on the fact that the computational power of the adversary is not enough to break the encryption. However, as computing power increases, so does the power of the adversary. To further strengthen security and complement encryption, the concept of physical layer security has been introduced and has spurred an enormous amount of research. Broadly speaking, research in physical layer security can be divided into two directions: the information-theoretic and signal processing paradigms. This thesis starts with an overview of the physical layer security literature and continues with the contributions, which are divided into the two following parts. In the first part, we investigate the information-theoretic secrecy rate. In the first scenario, we study the confidentiality of a bidirectional satellite network consisting of two mobile users who exchange two messages via a multibeam satellite using the XOR network coding protocol. We maximize the sum secrecy rate by designing the optimal beamforming vector along with optimizing the return and forward link time allocation. In the second scenario, we study the effect of interference on the secrecy rate. We investigate the secrecy rate in a two-user interference network where one of the users, namely user 1, needs to establish a confidential connection. User 1 wants to prevent an unintended user of the network from decoding its transmission. User 1 has to adjust its transmission power such that its secrecy rate is maximized while the quality of service at the destination of the other user, user 2, is satisfied. We obtain closed-form solutions for optimal joint power control. In the third scenario, we study the ratio of secrecy rate to power, namely the "secrecy energy efficiency". We design the optimal beamformer for a multiple-input single-output system with and without considering the minimum required secrecy rate at the destination. In the second part, we follow the signal processing paradigm to improve security. We employ the directional modulation concept to enhance the security of a multi-user multiple-input multiple-output communication system in the presence of a multi-antenna eavesdropper. Enhancing the security is accomplished by increasing the symbol error rate at the eavesdropper, without requiring the eavesdropper's CSI. We show that when the eavesdropper has fewer antennas than the users, regardless of the received signal SNR, it cannot recover any useful information; in addition, it has to go through extra noise-enhancing processes to estimate the symbols when it has more antennas than the users. Finally, we summarize the conclusions and discuss promising research directions in physical layer security.
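For reference, the secrecy rate that the designs above build on takes, for the Gaussian wiretap channel, the standard textbook form (the thesis's exact expressions additionally involve interference and beamforming terms):

```latex
C_s \;=\; \Big[\log_2\!\big(1 + \mathrm{SNR}_{\mathrm{D}}\big)
        \;-\; \log_2\!\big(1 + \mathrm{SNR}_{\mathrm{E}}\big)\Big]^{+},
```

where SNR_D and SNR_E are the signal-to-noise ratios at the legitimate destination and at the eavesdropper, and [x]⁺ = max(x, 0). The "secrecy energy efficiency" studied in the third scenario is then the ratio of such a rate to the transmit power.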

Changing Commuter Behaviour through Gamification
Kracheel, Martin

Doctoral thesis (2016)


This thesis explores how the dynamic context of mobility, more specifically the commute to and from work in the region of Luxembourg, can be changed through gamified mobile applications. The goal is to get a better understanding of the innovative application area of gamified mobility and its potential, as well as to describe its implications for research and practice. This applied research is inspired by a participatory design approach, where information is gained by adopting a user perspective and through the process of conceptualising and applying methods in empirical studies. The four empirical studies described in this thesis employed a mixed-methodology approach consisting of focus group interviews, questionnaires and mobile applications. Within these studies four prototypes were developed and tested, namely Coffee Games, Driver Diaries, Commutastic and Leave Now. The studies show concrete possibilities and difficulties in the interdisciplinary field of gamifying mobility behaviour. This dissertation is composed of seven chapters: Chapter I introduces the topics mobility, games and behaviour; Chapter II presents a proof of concept study (Using Gamification and Metaphor to Design a Mobility Platform for Commuters); Chapter III explains the development and validation of a mobility research tool (Driver Diaries: a Multimodal Mobility Behaviour Logging Methodology); Chapter IV describes the development of a new gamified mobility application and its evaluation (Studying Commuter Behaviour for Gamifying Mobility); Chapter V provides an empirical assessment of the relevance of gamification and incentives for the evaluation of a mobile application (Changing Mobility Behaviour through Recommendations) and Chapter VI is a summary on how to change mobility behaviour through a multilevel design approach (Using Gamification to change Mobility Behaviour: Lessons Learned from two Approaches). 
The four prototypes help to address the primary goal of this thesis, which is to contribute to new approaches to urban mobility by exploring gamified mobility applications. Coffee Games is a proof-of-concept, low-fidelity implementation of a real-life game that tests gamification elements and incentives for changing indoor-mobility behaviour. The findings of two iterations with a total of 19 participants show the adaptability of the concept to different contexts. The approach to changing indoor-mobility behaviour with this mock-up game was successful. Driver Diaries is a methodology to assess mobility behaviour in Luxembourg. The aim of this mobile, digital travel diary is to study features of cross-border commuter mobility and activities in Luxembourg in order to identify suitable elements (activities etc.) for a gamified mobility application, such as Commutastic. After two rounds of data collection (Android and iOS), the records of 187 participants were analysed, and the results illustrate the mobility habits of the target audience. Commutastic is a mobility game application that motivates users to avoid peak-hour traffic by proposing alternative after-work activities. Analysing the data of 90 participants, we find that the timely offer of a nearby activity, along with gamification elements, involves users and motivates a third of them to engage in alternative mobility behaviours. Leave Now is a gamified recommendation application, which rewards users for leaving their workplace outside of their usual schedule and explores the role of specific gamification elements in user motivation. The study, which was conducted with 19 participants, shows differences between an individual-play and a group-play condition regarding changes in leaving time.
The contributions of this thesis to gamification and mobility research and practice span from mobility participation as a game and an integral part of our everyday life to methodologies for its successful implementation in the Luxembourgish context. The results show the advantages, disadvantages, and restrictions of gamification in urban mobility contexts. This is an important step towards gamifying mobility behaviour change and therefore towards research aiming at wellbeing and a better urban life.

Effekte eines multilingualen Unterrichtsansatzes auf die Sprachkompetenzen im Deutschen
Planta, Eric

Doctoral thesis (2016)


Abstract The reason to conduct this study was based on personal experiences in learning and teaching foreign languages, bilingual teaching methods and the steps of development children have to take in order to gain competencies in the German language. My participation in the module creation of the Teacher Education Program for the University of the Greater Region additionally aroused my awareness of the topic of multilingualism and finally was responsible for the decision to work on the scientific questions of this thesis. The dissertation describes the process of research in order to answer the question if pupils who attend an interregional school with a multilingual language concept are able to show better skills in reading and writing in the German language in comparison to pupils who attend a classical monolingual language-oriented High School. Furthermore it was observed how their language skills progressed in one year, how the status of competencies looks like in comparison to the results of a test of reading and writing skills in German which was taken in the school year 1973/74, how girls developed different from boys, how the language the children usually use at home influenced the results of their tests and how these results were affected by the educational level of the household the children originate from. The opinions and suggestions of the teachers and their pupils as the witnesses of school practice were also taken serious to create an overview of the difficulties. The research design of this study is presented as a polymethodological approach, which is combined with Mixed Methods. In the beginning and at the end of the school year 2013/14 the pupils of both school types were tested with reading and writing items of the Allgemeiner Deutscher Sprachtest ADST, which is a standardized German language test. Through this the effectiveness of the treatment, which corresponds to the school’s German curriculum could be tested as well. 
In addition, the current results could be compared with the outcomes of the same test battery carried out forty years ago. To support the conclusions drawn from the tests, the pupils and their teachers completed questionnaires providing further background information. These tools not only yielded additional background variables but also made practical problems of teaching German in schools easy to recognize, so that appropriate suggestions for improvement could be presented alongside. The data obtained in this combined longitudinal and cross-sectional study allowed most of the material to be analysed quantitatively, and parts of the teachers' questionnaire answers qualitatively as well. The sample comprised ten teachers and 208 pupils in total. The test results show that the general German reading and writing proficiency of pupils attending the school with the multilingual concept does not surpass that of children who learn the language in a traditional school with a monolingual German-based education combined with a classical foreign-language programme. Notably, the multilingually educated children improved more clearly over one school year than the comparable High School pupils, who nevertheless performed at a higher level of competency. The pupils of both schools developed their skills over the course of the school year, which underlines the effectiveness of the curriculum-defined treatment. Moreover, it became apparent that neither school type matched the results of the ADST sample of forty years ago. Nevertheless, the results of the two schools tested here give no reason to resign; on the contrary, they show that it is not too late to act.
The partial competencies in which today's pupils performed worse could be identified with the help of the insights presented here, which also offer direct leverage points for corrections and adaptations. In light of these outcomes, the work of language teaching research in the coming years can be defined as the imperative to support a sensible European multilingualism strategy, focused on local requirements, through the creation of a comprehensive language concept and curriculum. With such an approach, regional multilingualism might reach practitioners in the field as well as the public at large. A close look at gender differences in the test results revealed that girls performed better in both tests; this also held for the girls of the monolingual school compared with the multilingual one. The reasons for the gender gap in language achievement seem to lie outside the schools' responsibility, as underlined by the fact that the gap between boys' and girls' performance did not widen between the two tests. Some approaches to strengthening boys' language competences outside of school are presented and evaluated in the text. In addition, it could be shown that social origin, including the families' distance from or affinity to education, had no significant influence on the children's ADST performance, nor any direct influence on the development of their skills from the first to the second test. I could, however, show an influence of the languages children use at home on their achievements in the language tests.
Pupils who speak only German at home showed the best results, ahead of children who speak German with at least one parent; boys and girls who do not speak German at home at all obtained the worst results. The language habits at home, however, had no significant effect on skill development from pre-test to post-test. The central statements of the teachers' questionnaire included the request to improve German language teaching in a way that satisfies the needs of almost all pupils. Furthermore, it is necessary to combine all the languages present in a region and its schools into one overall curriculum and concept, so that an integrated language-supporting plan reaches the whole population. The starting point might be an adjusted educational policy together with adapted teacher education, preparing teachers for a new understanding of the role of languages in a twenty-first-century Europe. The thesis demonstrates and recommends a concrete example, including a practically relevant assessment instrument combined with a self-assessment tool for teachers and pupils, aimed at the specific implementation of language-aware teaching across school subjects. The study also includes implications for educational and language policy, with suggestions for making more effective use of our children's existing language resources. One of the most important findings of the pupils' questionnaire is their ranking of school subjects by importance for their own future professional life. This opinion is also reflected in the parents' behaviour concerning the enrolment of their children in institutes of higher education in the Saarland: both clearly indicate that the most important subjects in this respect are English, Maths and German.
This fact should be taken into consideration when determining future educational goals and in the current strategic focus of the Saarlandian France Strategy, the Lorraine Germany Strategy and the ambitions of Luxembourg, all of which rest on an educational policy meant to strengthen relationships across the border regions. An idea like the feuille de route, as well as the intended reinforcement of the French language, requires the persuasion of all people who will be affected by new language developments. The countries of the Greater Region moving closer together must not lead to a growing distance from other European countries, or from the rest of the world. I therefore personally argue for an overall Saarlandian Language Strategy that takes the German language as the foundation for further language learning, includes French and English as equal partners, and pays appropriate attention to the other languages of the region and its people. In the coming years it will be the task of Applied Linguistics and Language Teaching Research in the Greater Region, hand in hand with teachers in the field, to conceive an all-encompassing and realisable language concept and to combine it with a viable school curriculum. The present thesis should be able to offer interesting and important insights in support of this new language approach.

Data-driven Repair Models for Text Chat with Language Learners
Höhn, Sviatlana UL

Doctoral thesis (2016)


This research analyses participants' orientation to linguistic identities in chat and introduces data-driven computational models for communicative Intelligent Computer-Assisted Language Learning (communicative ICALL). Based on non-pedagogical chat conversations between native and non-native speakers, computational models of the following types are presented: exposed and embedded corrections, and explanations of unknown words following a learner's request. Conversation Analysis was used to obtain patterns from a corpus of dyadic chat conversations in a longitudinal setting, bringing together German native speakers and advanced learners of German as a foreign language. More specifically, this work presents a bottom-up, data-driven research design which takes "conversation" from its genuine personalised dyadic environment to a model of a conversational agent. It allows for an informal functional specification of such an agent, for which a technical specification of two specific repair types is provided. Starting from the open research objective of creating a machine that behaves like a language expert in an informal conversation, this research shows that various forms of orientation to linguistic identities are at participants' disposal in chat. It further shows that computational complexity can be managed by separating local models of specific practices from a high-level regulatory mechanism that activates them. More specifically, this work shows that learners' repair initiations may be analysed as turn formats containing resources for signalling trouble and for referencing the trouble source. Based on this finding, it shows how computational models for recognising repair initiations and extracting the trouble source can be formalised and implemented in a chatbot.
Further, this work makes clear which level of description of error corrections is required to satisfy computational needs, how these descriptions may be transformed into patterns for various error-correction formats, and which technological requirements they imply. Finally, this research shows which factors in interaction influence the decision to correct, and how a high-level decision model for error correction in a Conversation-for-Learning can be approached. In sum, this research enriches the landscape of communication setups between language learners and communicative ICALL systems by explicitly covering Conversations-for-Learning. It strengthens multidisciplinary connections by showing how the research field of ICALL benefits from including Conversation Analysis in its research paradigm, and it highlights the impact of a micro-analytic understanding of the actions accomplished by utterances within a specific speech exchange system on computational modelling, using chat with language learners as the example.
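The abstract above describes turn formats that combine a trouble-signalling resource with a reference to the trouble source. The thesis's actual models, derived from Conversation Analysis of German chat corpora, are not reproduced here; the following is only a minimal illustrative sketch of the general idea, in which the `detect_repair_initiation` helper, its two pattern types, and the example turns are all invented for illustration.

```python
import re

def detect_repair_initiation(turn):
    """Return the trouble source if the turn looks like a repair initiation.

    A repair-initiation turn format here combines a signalling resource
    (an explicit question or a bare question mark) with a reference to
    the trouble source (the unknown word, possibly quoted).
    """
    turn_lower = turn.lower().strip()
    # Explicit request format: 'was bedeutet X' / 'what does X ...'
    m = re.search(r"(?:was bedeutet|what does)\s+[\"']?(\w+)[\"']?", turn_lower)
    if m:
        return {"type": "explicit_request", "trouble_source": m.group(1)}
    # Candidate-check format: a quoted word followed by a question mark
    m = re.search(r"[\"'](\w+)[\"']\s*\?", turn)
    if m:
        return {"type": "candidate_check", "trouble_source": m.group(1)}
    return None

print(detect_repair_initiation('was bedeutet "Fernweh"?'))
print(detect_repair_initiation('Das Wetter ist schoen heute.'))
```

A real system in the spirit of the thesis would replace these two regular expressions with the full set of empirically derived turn formats and route detected initiations to the high-level regulatory mechanism described above.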

Der Schulbesuch in der Schweiz um 1800
Ruloff, Michael UL

Doctoral thesis (2016)


Analysis of school attendance in Switzerland in 1800 // The right to a free public education, and with it the duty to send children to school, is laid down in the Swiss Federal Constitution. At the end of the eighteenth century, school attendance was already mandatory in several cantons, such as Zurich or Vaud, and in 1800 the federal government introduced nationwide compulsory education. The question of actual school attendance, however, remains unsettled. Existing research conveys the image of rather poor and inconsistent school attendance in Switzerland at the end of the eighteenth century. Indeed, there were numerous reasons for children not to go to school: many had to work on their parents' farm in summer, so in many villages classes only took place in winter; but in winter, cold weather kept poorer children with insufficient clothing from school. In proto-industrialised regions, where the domestic system was the main source of family income, children had to help out at home in summer and winter alike, and in many cases had to work and earn their own money. Mothers and fathers also kept their children at home for various other reasons: since there were not enough schools in rural areas, many classrooms were overcrowded (sometimes more than 100 children gathered in one classroom) and in a bad state, with wet walls and no heating. Despite the striking fact that education is often depicted in the Swiss public as the country's one and only resource, there is little research on the history of Swiss public schools before 1830 (the years after 1830, when most cantons introduced constitutions, are commonly regarded as the birth date of the public school): there are almost no actual figures or school attendance rates, and only very few sources are to be found in the archives.
In the context of a multinational six-year research project, the data of the first nationwide school inquiry (from the year 1799) were transliterated, edited and published. The so-called Stapfer-Enquête, with answers from more than 2,400 schools, provides a unique opportunity to study school attendance around 1800. This standardized questionnaire is a unique source: it is the only nationwide school inquiry of that time answered by the teachers themselves, who reported on their income, the school building, the subjects they taught and the number of students in the classroom. With the help of this inquiry (as well as data from other recently discovered and edited regional school inquiries), this dissertation analyses school attendance in 1800. The core question is: how many children went to school in Switzerland in 1800? The first goal is to establish how many children really attended school. The analysis also aims to explain possible motives for good or poor attendance with respect to gender, denomination, distance to school, and economic and sociological factors. The question of relative school attendance, as an indicator of the level and quality of education in a society, is of public interest today as it was back then. As shown above, there is a clear perception of school attendance in 1800, and some theories about this supposedly low attendance persist in the history of education: attendance rates were said to be especially low in rural and catholic areas, and girls were said to attend school more rarely than boys. The results of this analysis show a different picture: numerous rural and catholic schools had quite good attendance rates, and girls went to school as well.
In order to obtain a clear picture of school attendance in Switzerland in 1800, this dissertation not only calculates attendance rates (the sample consists of 126 schools) but also explains the differences between high and low rates. Using quantitative and qualitative statistical methods, the attendance data are commented on and compared. In addition, the analysis connects the attendance data with historical theory in order to explain the results in a wider context. It refers to previous research and clarifies to what extent arguments and beliefs about school attendance in Switzerland in 1800 must be confirmed or contested, and it develops new theses about schools and school attendance in Switzerland at the end of the eighteenth century. An important thesis is that school attendance depended on the accessibility of the school (the distance between children's homes and the school building) as well as on the identification with, and the financial support of, public education in a local community. These three factors (accessibility, identification and financial support) influence the quality of school personnel, in 1800 as well as today.

Assisted Voluntary Return in Kosovo: A field analysis
Sacchetti, Sandra UL

Doctoral thesis (2016)


The thesis explores the institutional set-up, the underlying ideology and the workings of return assistance projects in Kosovo, applying Bourdieu's field theory as an analytical framework. It describes experiences with return assistance from the perspectives of the returned persons as well as of those administering the projects, and seeks to offer explanations for the discrepancies observed between stated policy principles and actual practices.

Diversity Preserving Genetic Algorithms - Application to the Inverted Folding Problem and Analogous Formulated Benchmarks
Nielsen, Sune Steinbjorn UL

Doctoral thesis (2016)


Protein structure prediction is an essential step in understanding the molecular mechanisms of living cells, with widespread applications in biotechnology and health. Among the open problems in the field, the Inverse Folding Problem (IFP), which consists in finding sequences that fold into a defined structure, is in itself an important research problem at the heart of most rational protein design approaches. In brief, solutions to the IFP are protein sequences that will fold into a given protein structure, contrary to conventional structure prediction, where the solution is the structure into which a given sequence folds. This inverse approach is a simplification in the sense that the near-infinite number of structural conformations of a protein can be disregarded; only sequence-to-structure compatibility needs to be determined. Additional emphasis was put on generating many sequences dissimilar from the known reference sequence, rather than finding only one solution. To solve the IFP computationally, a novel formulation of the problem was proposed in which candidate solutions are evaluated by how well their predicted secondary structure matches the target. In addition, two specialised Genetic Algorithms (GAs) were developed specifically for the IFP and compared with existing algorithms in terms of performance. Experimental results demonstrated the superior performance of the developed algorithms, both in model score and in the diversity of the generated sets of solutions, i.e. new protein sequences. A number of landscape analysis experiments were conducted on the IFP model, enabling the development of an original benchmark suite of analogous problems. These benchmarks were shown to share many characteristics with their IFP model counterparts, but run in a fraction of the time.
To validate the IFP model and the algorithm output, a subset of the generated solutions was selected for further inspection through full tertiary structure prediction and comparison to the original protein structure. Congruence was then assessed by superposition and secondary structure annotation statistics. The results demonstrated that an optimisation process relying on a fast secondary structure approximation, such as the IFP model, permits meaningful sequences to be obtained.
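The search described in the abstract above (evolve sequences that score well on a structure match while staying dissimilar from the known reference) can be sketched with a toy genetic algorithm. Everything below is invented for illustration: the `COMPATIBLE` residue-to-structure mapping, the scoring weights, and the parameters are stand-ins, where the thesis instead uses a secondary-structure prediction match and dedicated diversity-preservation mechanisms.

```python
import random

random.seed(0)  # reproducible toy run

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino-acid one-letter codes
TARGET = "HHHHCCCCEEEE"            # toy secondary-structure annotation
REFERENCE = "MKVLAAGILTAR"         # toy known sequence for this target

# Invented toy mapping: residues treated as compatible with each state
# (H = helix, E = strand, C = coil).
COMPATIBLE = {"H": set("AELMKR"), "E": set("VIFYWT"), "C": set("GPNDS")}

def fitness(seq):
    """Structure-match score minus a penalty for copying the reference."""
    match = sum(seq[i] in COMPATIBLE[TARGET[i]] for i in range(len(TARGET)))
    identity = sum(a == b for a, b in zip(seq, REFERENCE))
    return match - 0.5 * identity

def mutate(seq, rate=0.2):
    """Replace each residue with a random one at the given rate."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

def evolve(pop_size=40, generations=60):
    """Elitist GA: keep the top half, refill with mutated parents."""
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 1))
```

The identity penalty in `fitness` is only a crude stand-in for the dissimilarity objective; the thesis's diversity-preserving GAs maintain whole sets of mutually dissimilar solutions rather than penalising similarity to a single reference.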

Law of the Living: The Semiotic Structure and Dynamics of Law
Ellsworth, Jeffrey Amans UL

Doctoral thesis (2016)


Law is an evolving mental construct. Law, considered distinctly from a legal system, is only a constructed reality which our minds overlay upon the existent world that we experience. This reality is formed by signs and is indefinite and always changing. It gives meaning to our world; it turns brute violence into justice and papers into contracts. Law is meaning. Law is merely meaning. And all meaning is law: not in the legal sense of law, but in the general sense, which includes the legal sense. Law is semiosis. Hence the title Law of the Living, because semiosis is always alive in our minds, always contemporary and in the moment. The analysis predominantly employs the philosophy of Charles Peirce. The work focuses on the ontology of law and the problems of legal meaning. Specifically, the first part explicitly addresses the issue of ontology and advocates an inter-subjective ontological perspective, while also considering the value and limitations of textualism. The second part addresses the problems of legal meaning arising from the reductive nature of communication and the diversity of human perspectives. The former is done through a reconsideration of what law 'is' in terms of rules, principles and factual categories; the latter through an exploration of differing conceptions of community and their relevance to law and society. The work combines legal theory and sociolegal studies, culminating in the assertion that the general public must be provided a minimum level of legal education in order to experience legal reality, and not merely a generally analogous social reality, as well as to participate meaningfully in the ongoing legal discourse in society.

Transzendentaler Schematismus: Zum Verhältnis von Sinnlichkeit und Verstand in Kants Kritik der reinen Vernunft
Birrer, Mathias UL

Doctoral thesis (2016)


In my dissertation, I deal with one of the fundamental topics of Immanuel Kant's Critique of Pure Reason: the relation between sensibility and understanding as the two main a priori sources of human knowledge. I argue that most interpretations of the First Critique marginalise the role of sensibility as an isolable and irreducible representational capacity of the human mind, and underestimate the importance of that role for understanding some of the main Kantian arguments. Taking Kant's position as what contemporary philosophical debates call a non-conceptualist viewpoint, I provide a detailed exegesis of some of the landmarks of the First Critique: first, the Transcendental Aesthetic, which uncovers the purely sensible nature of aspects of our representation of space and time; second, the second step of the Transcendental Deduction, which relies on a sense of objectivity provided by sensibility alone; and finally, the Transcendental Schematism, which implements our pure conceptual capacity within a non-conceptual representational framework.

Criminal Liability of Managers for Excessive Risk-Taking?
Tosza, Stanislaw UL

Doctoral thesis (2016)


The thesis analyses and evaluates the criminalisation of excessively risky decisions taken by managers of limited liability companies. The potentially disastrous consequences of excessive risk-taking were powerfully highlighted by the most recent financial crisis, although its dangers are not limited to times of economic crisis. At the same time, risk-taking is at the very beginning and at the very core of business activity; by criminalising managers' excessive risk-taking, criminal law enters a sphere that is at the core of the activity it affects. This research examines the regulation of selected legal orders in which excessive risk-taking by managers is criminalised (England & Wales, Germany and France). It is followed by a more in-depth reflection on the role of criminal law in punishing acts of mismanagement that consist in exposure to excessive risk. This reflection takes the perspective of basic theories of criminalisation and of the ethical problems inherent to the topic, as well as the interference with other branches of law regulating the corporate environment. It demonstrates that criminalising excessive risk-taking is justified to a certain extent, and it formulates a blueprint for designing the criminalisation of such acts, taking into account the factual and legal background into which such criminalisation would have to be fitted. This proposal might serve national legislators as well as, potentially, the European one.
