Full Text
Peer Reviewed
Social Change for Sustainable Localized Food Sovereignty. Convergence between Prosumers and Ethical Entrepreneurs.
Reckinger, Rachel UL

in Sociologia del Lavoro (2018), 152(4)

Recently, some resourceful community-driven initiatives for local food production and retail have arisen in Luxembourg, where low organic agricultural rates are paradoxically paired with high consumer demand. This niche of social innovators combines agro-ecology with circular economy practices. Four cases of alternative food networks are of interest, studied with qualitative interviews and participant observation. One has been established since the 1980s and counts 200 employees, partly in social insertion measures. The more recent and smaller initiatives are characterized by cooperative governance, a community-supported agricultural outlook, hands-on workshops and time-banks, all enabled by social media. These initiatives are more radical in their agro-ecological or permaculture practices, focusing on regenerative land use without relying on imports and fostering the integration of consumers with varying degrees of prosumer involvement, as a politicized step beyond mere (possibly industrialized) organic production.

Full Text
Peer Reviewed
Sequential Resource Distribution Technique for Multi-User OFDM-SWIPT based Cooperative Networks
Gautam, Sumit UL; Lagunas, Eva UL; Chatzinotas, Symeon UL et al

Scientific Conference (2018, December)

In this paper, we investigate resource allocation and relay selection in a dual-hop orthogonal frequency division multiplexing (OFDM)-based multi-user network where amplify-and-forward (AF) enabled relays facilitate simultaneous wireless information and power transfer (SWIPT) to the end-users. In this context, we address an optimization problem to maximize the end-users’ sum-rate subject to transmit power and harvested energy constraints. Furthermore, the problem is formulated for both time-switching (TS) and power-splitting (PS) SWIPT schemes. We aim at optimizing the users’ SWIPT splitting factors as well as sub-carrier–destination assignment, sub-carrier pairing, and relay–destination coupling metrics. This kind of joint evaluation is combinatorial in nature, with a non-linear structure involving mixed-integer programming. In this vein, we propose a sub-optimal, low-complexity sequential resource distribution (SRD) method to solve the aforementioned problem. The performance of the proposed SRD technique is compared with a semi-random resource allocation and relay selection approach. Simulation results reveal the benefits of the proposed design under several parameter values and various operating conditions, illustrating the efficiency of SWIPT schemes for the proposed techniques.
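
To make the two SWIPT schemes concrete, here is a minimal Python sketch contrasting them; the channel and circuit values are hypothetical, and this is not the paper's system model, which additionally couples the splitting factors with sub-carrier and relay assignment.

```python
import math

def ps_rate_energy(p_rx, rho, noise=1e-9, eta=0.6, bandwidth=1.0):
    """Power splitting: a fraction rho of the received power feeds the
    information decoder, the remaining (1 - rho) the energy harvester."""
    rate = bandwidth * math.log2(1.0 + rho * p_rx / noise)
    energy = eta * (1.0 - rho) * p_rx          # harvested power (per unit time)
    return rate, energy

def ts_rate_energy(p_rx, alpha, noise=1e-9, eta=0.6, bandwidth=1.0):
    """Time switching: a fraction alpha of the slot is spent decoding,
    the remaining (1 - alpha) harvesting."""
    rate = alpha * bandwidth * math.log2(1.0 + p_rx / noise)
    energy = eta * (1.0 - alpha) * p_rx
    return rate, energy

# Hypothetical received power of 1 microwatt on one sub-carrier.
for split in (0.2, 0.5, 0.8):
    print("PS", split, ps_rate_energy(1e-6, split))
    print("TS", split, ts_rate_energy(1e-6, split))
```

Both schemes trade rate against harvested energy through a single splitting factor per user; the SRD method optimizes these factors jointly with the sub-carrier and relay assignment variables.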

Full Text
Peer Reviewed
Power and Load Optimization in Interference-Coupled Non-Orthogonal Multiple Access Networks
Lei, Lei UL; You, Lei; Yang, Yang UL et al

in IEEE Global Communications Conference (GLOBECOM) 2018 (2018, December)

Full Text
Peer Reviewed
A volume-averaged nodal projection method for the Reissner-Mindlin plate model
Ortiz-Bernardin, Alejandro; Köbrich, Philip; Hale, Jack UL et al

in Computer Methods in Applied Mechanics & Engineering (2018), 341

We introduce a novel meshfree Galerkin method for the solution of Reissner-Mindlin plate problems that is written in terms of the primitive variables only (i.e., rotations and transverse displacement) and is devoid of shear-locking. The proposed approach uses linear maximum-entropy approximations and is built variationally on a two-field potential energy functional wherein the shear strain, written in terms of the primitive variables, is computed via a volume-averaged nodal projection operator that is constructed from the Kirchhoff constraint of the three-field mixed weak form. The stability of the method is ensured by adding bubble-like enrichment to the rotation degrees of freedom. Some benchmark problems are presented to demonstrate the accuracy and performance of the proposed method for a wide range of plate thicknesses.
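
For orientation, the standard (textbook) Reissner-Mindlin ingredients referred to above are the bending and shear strains and the two-field potential energy; this is the generic form, not the paper's specific discrete functional:

```latex
% Standard Reissner-Mindlin plate quantities (textbook form):
% rotations \theta, transverse displacement w, bending stiffness D_b,
% shear correction factor k, shear modulus \mu, thickness t, load q.
\begin{align}
  \boldsymbol{\kappa} &= \tfrac{1}{2}\bigl(\nabla\boldsymbol{\theta}
      + \nabla\boldsymbol{\theta}^{\mathsf T}\bigr), \qquad
  \boldsymbol{\gamma} = \nabla w - \boldsymbol{\theta}, \\
  \Pi(w,\boldsymbol{\theta}) &= \tfrac{1}{2}\int_\Omega
      \boldsymbol{\kappa} : \mathbf{D}_b : \boldsymbol{\kappa}\,\mathrm{d}\Omega
    + \tfrac{1}{2}\int_\Omega k\,\mu\,t\,
      \boldsymbol{\gamma}\cdot\boldsymbol{\gamma}\,\mathrm{d}\Omega
    - \int_\Omega q\,w\,\mathrm{d}\Omega .
\end{align}
```

Shear locking arises because the shear term enforces the Kirchhoff constraint γ → 0 ever more strongly as the thickness t → 0, over-constraining low-order approximations; the volume-averaged nodal projection changes how γ is evaluated to avoid this.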

Educational assessment and its prospects in the 21st century
Greiff, Samuel UL

Speeches/Talks (2018)

Peer Reviewed
Small States in the European Union: Luxembourg
Danescu, Elena UL

in Hartly, Cathy (Ed.) WESTERN EUROPE 2019 (2018)

Full Text
Peer Reviewed
HITECS: A UML Profile and Analysis Framework for Hardware-in-the-Loop Testing of Cyber Physical Systems
Shin, Seung Yeob UL; Chaouch, Karim UL; Nejati, Shiva UL et al

in Proceedings of ACM/IEEE 21st International Conference on Model Driven Engineering Languages and Systems (MODELS’18) (2018, October)

Hardware-in-the-loop (HiL) testing is an important step in the development of cyber physical systems (CPS). CPS HiL test cases manipulate hardware components, are time-consuming, and their behaviors are impacted by the uncertainties in the CPS environment. To mitigate the risks associated with HiL testing, engineers have to ensure that (1) HiL test cases are well-behaved, i.e., they implement valid test scenarios and do not accidentally damage hardware, and (2) HiL test cases can execute within the time budget allotted to HiL testing. This paper proposes an approach to help engineers systematically specify and analyze CPS HiL test cases. Leveraging the UML profile mechanism, we develop an executable domain-specific language, HITECS, for HiL test case specification. HITECS builds on the UML Testing Profile (UTP) and the UML action language (Alf). Using HITECS, we provide analysis methods to check whether HiL test cases are well-behaved, and to estimate the execution times of these test cases before the actual HiL testing stage. We apply HITECS to an industrial case study from the satellite domain. Our results show that: (1) HITECS is feasible to use in practice; (2) HITECS helps engineers define more complete and effective well-behavedness assertions for HiL test cases, compared to when these assertions are defined without systematic guidance; (3) HITECS verifies in practical time that HiL test cases are well-behaved; and (4) HITECS accurately estimates HiL test case execution times.

Full Text
Peer Reviewed
Enabling Model Testing of Cyber-Physical Systems
Gonzalez Perez, Carlos Alberto UL; Varmazyar, Mojtaba UL; Nejati, Shiva UL et al

in Proceedings of ACM/IEEE 21st International Conference on Model Driven Engineering Languages and Systems (MODELS’18) (2018, October)

Applying traditional testing techniques to Cyber-Physical Systems (CPS) is challenging due to the deep intertwining of software and hardware, and the complex, continuous interactions between the system and its environment. To alleviate these challenges, we propose to conduct testing at early stages, over executable models of the system and its environment. Model testing of CPSs is, however, not without difficulties. The complexity and heterogeneity of CPSs render necessary the combination of different modeling formalisms to build faithful models of their different components. The execution of CPS models thus requires an execution framework supporting the co-simulation of different types of models, including models of the software (e.g., SysML), hardware (e.g., SysML or Simulink), and physical environment (e.g., Simulink). Furthermore, to enable testing in realistic conditions, the co-simulation process must be (1) fast, so that thousands of simulations can be conducted in practical time, (2) controllable, to precisely emulate the expected runtime behavior of the system, and (3) observable, by producing simulation data enabling the detection of failures. To tackle these challenges, we propose a SysML-based modeling methodology for model testing of CPSs, and an efficient SysML-Simulink co-simulation framework. Our approach was validated on a case study from the satellite domain.
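
The fast/controllable/observable requirements can be pictured with a minimal co-simulation master loop in Python; the two "models" below are invented toy stand-ins, not SysML or Simulink artifacts:

```python
# Minimal co-simulation master loop (toy stand-ins for the software
# and environment models; the paper co-simulates SysML and Simulink).

def software_model(sensor_value, state):
    """Toy controller: command the heater on below a threshold."""
    state["cmd"] = 1.0 if sensor_value < 20.0 else 0.0
    return state["cmd"]

def environment_model(cmd, state, dt):
    """Toy plant: first-order temperature response."""
    state["temp"] += dt * (5.0 * cmd - 0.1 * (state["temp"] - 15.0))
    return state["temp"]

def cosimulate(steps=100, dt=0.1):
    sw_state, env_state, trace = {"cmd": 0.0}, {"temp": 15.0}, []
    temp = env_state["temp"]
    for step in range(steps):
        cmd = software_model(temp, sw_state)    # macro-step signal exchange
        temp = environment_model(cmd, env_state, dt)
        trace.append((step * dt, cmd, temp))    # observability: full log
    return trace

for t, cmd, temp in cosimulate()[:5]:
    print(f"t={t:.1f}s cmd={cmd} temp={temp:.2f}")
```

At each macro-step the master exchanges signals between the models (controllability) and appends every value to a trace (observability), which is exactly the kind of simulation data a failure-detection oracle would consume.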

Full Text
Peer Reviewed
Model-Driven Trace Diagnostics for Pattern-based Temporal Specifications
Dou, Wei UL; Bianculli, Domenico UL; Briand, Lionel UL

in Proceedings of the 2018 ACM/IEEE 21st International Conference on Model Driven Engineering Languages and Systems (MODELS 2018) (2018, October)

Offline trace checking tools check whether a specification holds on a log of events recorded at run time; they yield a verification verdict (typically a boolean value) when the checking process ends. When the verdict is false, a software engineer needs to diagnose the property violations found in the trace in order to understand their cause and, if needed, decide on corrective actions to be performed on the system. However, a boolean verdict may not be informative enough to perform trace diagnostics, since it does not provide any useful information about the cause of the violation and because a property can be violated for multiple reasons. The goal of this paper is to provide a practical and scalable solution to the trace diagnostics problem, in the setting of model-driven trace checking of temporal properties expressed in TemPsy, a pattern-based specification language. The main contributions of the paper are: a model-driven approach for trace diagnostics of pattern-based temporal properties expressed in TemPsy, which relies on the evaluation of OCL queries on an instance of a trace meta-model; the implementation of this trace diagnostics procedure in the TemPsy-Report tool; and the evaluation of the scalability of TemPsy-Report when used for the diagnostics of violations of real properties derived from a case study of our industrial partner. The results show that TemPsy-Report is able to collect diagnostic information from large traces (with one million events) in less than ten seconds; TemPsy-Report scales linearly with respect to the length of the trace and keeps approximately constant performance as the number of violations increases.
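
As a toy illustration of why a bare boolean verdict is not enough, the Python sketch below checks a simple "absence between two events" pattern and reports where and when it is violated; the event names are invented, and TemPsy-Report itself works by evaluating OCL queries on an instance of a trace meta-model:

```python
# Toy offline trace diagnostics for an "absence" pattern:
# "event `bad` must never occur between `start` and `end`".

def diagnose_absence(trace, bad, start, end):
    violations, inside = [], False
    for index, (timestamp, event) in enumerate(trace):
        if event == start:
            inside = True
        elif event == end:
            inside = False
        elif inside and event == bad:
            # Record *where* and *when* the property was violated,
            # not just that it was violated.
            violations.append((index, timestamp, event))
    return violations

trace = [(0, "start"), (3, "ok"), (5, "bad"), (9, "end"), (12, "bad")]
print(diagnose_absence(trace, "bad", "start", "end"))  # [(2, 5, 'bad')]
```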

Full Text
Peer Reviewed
Static load deflection experiment on a beam for damage detection using the Deformation Area Difference Method
Erdenebat, Dolgion UL; Waldmann, Danièle UL; Teferle, Felix Norman UL

Scientific Conference (2018, October)

Reliable and safe infrastructure for both transport and traffic is becoming increasingly important today. The condition assessment of bridges remains difficult, and new methods must be found to provide reliable information. A meaningful in-situ assessment of bridges requires very detailed investigations which cannot be guaranteed by commonly used methods. It is known that the structural response to external loading is influenced by local damage. However, the detection of local damage depends on many factors such as environmental effects (e.g. temperature), construction layers (e.g. asphalt) and the accuracy of the structural response measurement. Within this paper, a new so-called Deformation Area Difference (DAD) method is presented. The DAD method is based on a load deflection experiment and does not require a reference measurement of the initial condition; it can therefore be applied to existing bridges. Moreover, the DAD method uses state-of-the-art technologies such as high-precision measurement techniques and attempts to combine digital photogrammetry with drone applications. The DAD method uses the curvature course obtained from a theoretical model of the structure and compares it to real measurements. The paper shows results from a laboratory load-deflection experiment on a steel beam which has been gradually damaged at distinct positions. The load size is chosen so that the maximum deflection does not exceed the serviceability limit state. With the data obtained from the laboratory experiment, the damage degree which can still be detected by the DAD method is described. Furthermore, the influence of measurement accuracy on damage detection is discussed.
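
The exact DAD indicator is defined in the paper; the Python sketch below, on synthetic deflection data with an invented noise level, only illustrates the generic ingredient of comparing curvature courses, obtained by differentiating the deflection line twice, between a theoretical model and a measurement:

```python
import numpy as np

np.random.seed(0)

# Hypothetical deflection lines of a simply supported beam under a
# static load: `w_model` from the intact theoretical model, `w_meas`
# from (noisy) photogrammetric measurements.
x = np.linspace(0.0, 4.0, 81)                  # beam axis [m]
w_model = -np.sin(np.pi * x / 4.0)             # toy deflection shape
w_meas = w_model.copy()
w_meas[38:43] *= 1.03                          # local stiffness loss -> extra deflection
w_meas += np.random.normal(0.0, 1e-5, x.size)  # measurement noise (invented level)

def curvature(w, x):
    """Approximate curvature as the second derivative of the deflection line."""
    return np.gradient(np.gradient(w, x), x)

diff = np.abs(curvature(w_meas, x) - curvature(w_model, x))
print("largest curvature deviation near x =", x[np.argmax(diff)], "m")
```

The measurement accuracy discussed in the paper matters precisely here, since numerical double differentiation amplifies measurement noise.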

Peer Reviewed
Security, reliability and regulation compliance in Ultrascale Computing System
Bouvry, Pascal UL; Varrette, Sébastien UL; Wasim, Muhammad Umer UL et al

in Carretero, J.; Jeannot, E. (Eds.) Ultrascale Computing Systems (2018)

Ultrascale Computing Systems (UCSs) are envisioned as large-scale complex systems joining parallel and distributed computing systems that will be two to three orders of magnitude larger than today’s systems (considering the number of Central Processing Unit (CPU) cores). It is very challenging to find sustainable solutions for UCSs due to their scale and the wide range of possible applications and involved technologies. For example, we need to deal with heterogeneity and cross-fertilization among HPC, large-scale distributed systems, and big data management. One of the challenges regarding sustainable UCSs is resilience. Another one, which has attracted less interest in the literature but becomes more and more crucial with the expected convergence with the Cloud computing paradigm, is the notion of regulation in such systems, to assess the Quality of Service (QoS) and Service Level Agreement (SLA) proposed for the use of these platforms. This chapter covers both aspects through the reproduction of two articles: [1] and [2].

Peer Reviewed
A Full-Cost Model for Estimating the Energy Consumption of Computing Infrastructures
Orgerie, Anne-Cecile; Varrette, Sébastien UL

in Carretero, J.; Jeannot, E. (Eds.) Ultrascale Computing Systems (2018)

Since its advent in the middle of the 2000s, the Cloud Computing (CC) paradigm has increasingly been advertised as a price-effective solution to many IT problems. This seems reasonable if we exclude the pure performance point of view, as many studies highlight a non-negligible overhead induced by the virtualization layer at the heart of every Cloud middleware when subjected to a High Performance Computing (HPC) workload. When this is the case, traditional HPC and Ultrascale computing systems are required, and then comes the question of the real cost-effectiveness, especially when compared to instances offered by Cloud providers. In this section, inspired by the work proposed in [1], we propose a Total Cost of Ownership (TCO) analysis of a medium-size in-house academic HPC facility (in particular the one operated at the University of Luxembourg since 2007, or within the Grid’5000 project [2]), and compare it with the investment that would have been required to run the same platform (and the same workload) over a competitive Cloud IaaS offer.
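
A back-of-the-envelope version of such a TCO comparison fits in a few lines of Python; every figure below is invented for illustration, whereas the chapter derives the real ones from the actual facilities:

```python
# Toy TCO comparison: amortized in-house cost per core-hour vs. a
# hypothetical Cloud IaaS price. All numbers are made up.

CORES         = 5000        # in-house HPC cores
CAPEX         = 2_500_000   # hardware purchase [EUR], amortized over YEARS
YEARS         = 5
OPEX_PER_YEAR = 400_000     # power, cooling, staff, housing [EUR/year]
UTILIZATION   = 0.70        # average fraction of cores busy

HOURS_PER_YEAR  = 24 * 365
used_core_hours = CORES * HOURS_PER_YEAR * UTILIZATION * YEARS
total_cost      = CAPEX + OPEX_PER_YEAR * YEARS

in_house_rate = total_cost / used_core_hours  # EUR per core-hour
cloud_rate    = 0.045                         # hypothetical IaaS price per core-hour

print(f"in-house: {in_house_rate:.4f} EUR/core-hour")
print(f"cloud   : {cloud_rate:.4f} EUR/core-hour")
print("in-house cheaper" if in_house_rate < cloud_rate else "cloud cheaper")
```

Note how strongly the verdict depends on the utilization assumption: a poorly used in-house facility quickly loses against on-demand instances.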

Full Text
Peer Reviewed
The Price of Privacy in Collaborative Learning
Pejo, Balazs UL; Tang, Qiang UL; Gergely, Biczok

Poster (2018, October)

Machine learning algorithms have reached mainstream status and are widely deployed in many applications. The accuracy of such algorithms depends significantly on the size of the underlying training dataset; in reality, a small or medium-sized organization often does not have enough data to train a reasonably accurate model. For such organizations, a realistic solution is to train machine learning models based on a joint dataset (which is a union of the individual ones). Unfortunately, privacy concerns prevent them from straightforwardly doing so. While a number of privacy-preserving solutions exist for collaborating organizations to securely aggregate the parameters in the process of training the models, we are not aware of any work that provides a rational framework for the participants to precisely balance the privacy loss and accuracy gain in their collaboration. In this paper, we model the collaborative training process as a two-player game where each player aims to achieve higher accuracy while preserving the privacy of its own dataset. We introduce the notion of Price of Privacy, a novel approach for measuring the impact of privacy protection on the accuracy in the proposed framework. Furthermore, we develop a game-theoretical model for different player types, and then either find or prove the existence of a Nash Equilibrium with regard to the strength of privacy protection for each player.
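
As a toy illustration of the game-theoretic setup, the Python sketch below lets each player pick a discrete privacy-protection level and searches for pure-strategy Nash equilibria by best-response checks; the payoff functions are invented here and differ from the paper's model:

```python
import itertools

# Toy two-player collaboration game. Each player chooses a privacy
# level in [0, 1] (0 = share raw data, 1 = full protection). Accuracy
# gained from the joint model shrinks with *both* players' protection;
# privacy loss shrinks with one's own protection.

LEVELS = [0.0, 0.25, 0.5, 0.75, 1.0]

def payoff(own, other, privacy_weight=0.8):
    accuracy_gain = (1.0 - own) * (1.0 - other)  # degraded by both players' noise
    privacy_loss = (1.0 - own)                   # only own data is exposed
    return accuracy_gain - privacy_weight * privacy_loss

def is_nash(a, b):
    best_a = max(payoff(x, b) for x in LEVELS)
    best_b = max(payoff(x, a) for x in LEVELS)
    return payoff(a, b) >= best_a and payoff(b, a) >= best_b

equilibria = [(a, b) for a, b in itertools.product(LEVELS, LEVELS) if is_nash(a, b)]
print("pure-strategy Nash equilibria:", equilibria)
```

With these particular payoffs the only equilibria are full sharing and full protection, which hints at why a principled way to price the privacy/accuracy trade-off is needed.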

Full Text
Report: The European Commission's e-Evidence Proposals
Robinson, Gavin UL

in European Data Protection Law Review (2018), (3)

In April 2018, the European Commission presented a legislative package intended to enable, foster and formalise cross-border access by national judicial authorities to electronic evidence controlled by private service providers. In particular, the public-private character of the ‘cooperation’ envisaged in the proposed set-up raises several questions at the interface of criminal procedure and data protection law. This report provides a brief overview of the proposed EU legislation and an introduction to the most salient attendant legal and policy-related issues.

Full Text
Peer Reviewed
Shared Access Satellite-Terrestrial Reconfigurable Backhaul Network Enabled by Smart Antennas at MmWave Band
Artiga, Xavier; Pérez-Neira; Baranda et al

in IEEE Network (2018)

Preventing and Resolving Conflicts of Jurisdiction in EU Criminal Law
Ligeti, Katalin UL; Robinson, Gavin UL; European Law Institute

Book published by Oxford University Press (2018)

Forum Choice and Cybercrime
Robinson, Gavin UL

in Ligeti, Katalin; Robinson, Gavin; European Law Institute (Eds.) Preventing and Resolving Conflicts of Jurisdiction in EU Criminal Law (2018)

Full Text
Peer Reviewed
TUNA: TUning Naturalness-based Analysis
Jimenez, Matthieu UL; Cordy, Maxime UL; Le Traon, Yves UL et al

in 34th IEEE International Conference on Software Maintenance and Evolution, Madrid, Spain, 26-28 September 2018 (2018, September 26)

Natural language processing techniques, in particular n-gram models, have been applied successfully to facilitate a number of software engineering tasks. However, in our related ICSME ’18 paper, we have shown that the conclusions of a study can drastically change with respect to how the code is tokenized and how the used n-gram model is parameterized. These choices are thus of utmost importance, and one must make them carefully. To show this and allow the community to benefit from our work, we have developed TUNA (TUning Naturalness-based Analysis), a Java software artifact to perform naturalness-based analyses of source code. To the best of our knowledge, TUNA is the first open-source, end-to-end toolchain to carry out source code analyses based on naturalness.
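
TUNA itself is a Java artifact; the Python toy below only illustrates the idea of naturalness scoring that it parameterizes: train an n-gram model on token sequences (here a bigram model with add-one smoothing, one of the choices the paper shows to matter) and score a snippet by its cross-entropy, where lower means more "natural":

```python
import math
from collections import Counter

def bigrams(tokens):
    padded = ["<s>"] + tokens + ["</s>"]
    return list(zip(padded, padded[1:]))

def train(corpus):
    big, uni, vocab = Counter(), Counter(), set()
    for tokens in corpus:
        for a, b in bigrams(tokens):
            big[(a, b)] += 1
            uni[a] += 1
            vocab.update((a, b))
    return big, uni, len(vocab)

def cross_entropy(tokens, big, uni, v):
    """Bits per token under add-one smoothing; lower = more 'natural'."""
    pairs = bigrams(tokens)
    log_prob = sum(math.log2((big[(a, b)] + 1) / (uni[a] + v)) for a, b in pairs)
    return -log_prob / len(pairs)

corpus = [["if", "(", "x", ">", "0", ")", "{"],
          ["if", "(", "y", ">", "0", ")", "{"]]
big, uni, v = train(corpus)
print(cross_entropy(["if", "(", "x", ">", "0", ")", "{"], big, uni, v))  # low
print(cross_entropy(["0", "{", "if", ">", ")"], big, uni, v))            # high
```

Swapping the tokenizer or the smoothing scheme changes these scores, which is precisely the sensitivity the related paper quantifies.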

Full Text
Peer Reviewed
Knowledge Graph-based Teacher Support for Learning Material Authoring
Grevisse, Christian UL; Manrique, Rubén; Mariño, Olga et al

in Advances in Computing - CCC 2018 (2018, September 26)

Preparing high-quality learning material is a time-intensive, yet crucial task for teachers of all educational levels. In this paper, we present SoLeMiO, a tool to recommend and integrate learning material in popular authoring software. As teachers create their learning material, SoLeMiO identifies the concepts they want to address. In order to identify relevant concepts in a reliable, automatic and unambiguous way, we employ state-of-the-art concept recognition and entity linking tools. From the recognized concepts, we build a semantic representation by exploiting additional information from Open Knowledge Graphs through expansion and filtering strategies. These concepts and the semantic representation of the learning material support the authoring process in two ways. First, teachers are recommended related, heterogeneous resources from an open corpus, including digital libraries, domain-specific knowledge bases, and MOOC platforms. Second, concepts are proposed for semi-automatic tagging of the newly authored learning resource, fostering its reuse in different e-learning contexts. Our approach currently supports resources in English, French, and Spanish. An evaluation of concept identification in lecture video transcripts and a user study based on the quality of tag and resource recommendations yielded promising results concerning the feasibility of our technique.
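
Real concept recognition and entity linking rely on external tools and Open Knowledge Graphs; the Python sketch below fakes both with small hand-written dictionaries (all URIs and URLs are placeholders) purely to show the recommend-and-tag flow:

```python
# Toy recommend-and-tag flow: recognize concepts in authored text,
# expand them over a miniature "knowledge graph", then suggest tags
# and related resources. Everything below is a placeholder.

CONCEPTS = {"binary search": "dbpedia:Binary_search_algorithm",
            "recursion": "dbpedia:Recursion_(computer_science)"}
RELATED = {"dbpedia:Binary_search_algorithm": ["dbpedia:Sorted_array"],
           "dbpedia:Recursion_(computer_science)": ["dbpedia:Divide-and-conquer_algorithm"]}
RESOURCES = {"dbpedia:Sorted_array": ["https://example.org/mooc/arrays"],
             "dbpedia:Binary_search_algorithm": ["https://example.org/lib/binsearch.pdf"]}

def recognize(text):
    text = text.lower()
    return [uri for name, uri in CONCEPTS.items() if name in text]

def expand(concepts):
    out = set(concepts)
    for c in concepts:
        out.update(RELATED.get(c, []))
    return out

slide = "Today we study binary search and its relation to recursion."
tags = expand(recognize(slide))
print("suggested tags:", sorted(tags))
print("recommended:", [r for t in sorted(tags) for r in RESOURCES.get(t, [])])
```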

Full Text
Peer Reviewed
Verlet buffer for broad phase interaction detection in Discrete Element Method
Mainassara Chekaraou, Abdoul Wahid UL; Rousset, Alban UL; Besseron, Xavier UL et al

Poster (2018, September 24)

The Extended Discrete Element Method (XDEM) is a novel and innovative numerical simulation technique that extends the dynamics of granular materials or particles as described by the classical discrete element method (DEM) with additional properties such as the thermodynamic state or stress/strain for each particle. Such DEM simulations, used by industries to set up their experimental processes, are complex and heavy in computation time. Therefore, simulations have to be precise, efficient and fast in order to be able to process hundreds of millions of particles. To tackle this issue, such DEM simulations are usually parallelized with MPI. One of the most expensive parts of a DEM simulation is the collision detection of particles. It is classically divided into two steps: the broad phase and the narrow phase. The broad phase uses simplified bounding volumes to perform an approximate but fast collision detection. It returns a list of particle pairs that could interact. The narrow phase is applied to the result of the broad phase and returns the exact list of colliding particles. The goal of this research is to apply a Verlet buffer method to (X)DEM simulations regardless of which broad phase algorithm is used. We rely on the fact that such DEM simulations are temporally coherent: the neighborhood only changes slightly from one time-step to the next. We use the Verlet buffer method to extend the list of pairs returned by the broad phase by stretching the particles' bounding volumes with an extension range. This allows the result of the broad phase to be re-used for several time-steps before an update is required, and thereby reduces the number of times the broad phase is executed. We have implemented a condition based on particle displacements to ensure the validity of the broad phase: a new one is executed to update the list of colliding particles only when necessary. This guarantees identical results, because the approximations introduced in the broad phase by our approach are corrected in the narrow phase, which is executed at every time-step anyway. We perform an extensive study to evaluate the influence of the Verlet extension range on execution performance in terms of computation time and memory consumption. We consider different test cases, partitioners (ORB, Zoltan, METIS, SCOTCH, ...), broad phase algorithms (link cell, sweep and prune, ...) and grid configurations (fine, coarse), both sequential and parallel (up to 280 cores). While a larger Verlet buffer increases the cost of the broad and narrow phases, it also allows a significant number of broad phase executions to be skipped (> 99%). As a consequence, our first results show that this approach can speed up the total execution time by up to a factor of 5 for sequential executions, and up to a factor of 3 for parallel executions on 280 cores, while maintaining a reasonable memory consumption.
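
The generic Verlet-buffer idea can be sketched in Python as follows; the all-pairs broad phase, the 2D motion model and all parameters are invented for illustration, whereas XDEM combines the buffer with link-cell or sweep-and-prune broad phases in parallel runs:

```python
import numpy as np

# Verlet-buffer sketch: stretch each bounding volume by a skin
# `r_skin` when building the pair list, reuse the list every step,
# and rebuild only once the maximum displacement since the last
# build exceeds r_skin / 2.

def broad_phase(pos, radius, r_skin):
    """All-pairs candidate search with stretched bounding volumes."""
    n = len(pos)
    cutoff = 2.0 * radius + r_skin
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if np.linalg.norm(pos[i] - pos[j]) < cutoff]

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(200, 2))
radius, r_skin = 0.1, 0.3

pairs, ref_pos, rebuilds = broad_phase(pos, radius, r_skin), pos.copy(), 1
for step in range(1000):
    pos += rng.normal(0.0, 0.005, size=pos.shape)   # toy particle motion
    # Validity condition: max displacement since the last broad phase.
    if np.max(np.linalg.norm(pos - ref_pos, axis=1)) > r_skin / 2.0:
        pairs, ref_pos = broad_phase(pos, radius, r_skin), pos.copy()
        rebuilds += 1
    # The narrow phase runs every step on `pairs` and filters exact contacts.
    contacts = [(i, j) for i, j in pairs
                if np.linalg.norm(pos[i] - pos[j]) < 2.0 * radius]

print("broad-phase executions over 1000 steps:", rebuilds)
print("contacts at final step:", len(contacts))
```

The skin/2 threshold is the classical validity condition: two particles each moving skin/2 towards each other close the full skin gap, so no interacting pair can be missed between two broad phase executions.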
