Lorentz, Joe
in Software and Systems Modeling (2022)
Models based on differential programming, like deep neural networks, are well established in research and able to outperform manually coded counterparts in many applications. Today, there is a rising interest in introducing this flexible modeling to solve real-world problems. A major challenge when moving from research to application is the strict constraints on computational resources (memory and time). It is difficult to determine and contain the resource requirements of differential models, especially during the early training and hyperparameter exploration stages. In this article, we address this challenge by introducing CalcGraph, a model abstraction of differentiable programming layers. CalcGraph allows modeling the computational resources that should be used, and CalcGraph's model interpreter can then automatically schedule the execution respecting the specifications made. We propose a novel way to efficiently switch models from storage to preallocated memory zones and vice versa to maximize the number of model executions given the available resources. We demonstrate the efficiency of our approach by showing that it consumes fewer resources than state-of-the-art frameworks like TensorFlow and PyTorch for single-model and multi-model execution.

Mouline, Ludovic
in SAC 2018: Symposium on Applied Computing, April 9-13, 2018, Pau, France (2018)
Distributed adaptive systems are composed of federated entities offering remote inspection and reconfiguration abilities. This is often realized using a MAPE-K loop, which constantly evaluates system and environmental parameters and derives corrective actions if necessary. The OpenStack Watcher project uses such a loop to implement resource optimization services for multi-tenant clouds. To ensure a timely reaction in the event of failures, the MAPE-K loop is executed with a high frequency. A major drawback of such reactivity is that many actions, e.g., the migration of containers in the cloud, take longer to complete, and for their effects to become measurable, than one MAPE-K loop iteration. Unfinished actions as well as their expected effects over time are not taken into consideration in MAPE-K loop processes, leading upcoming analysis phases to potentially take sub-optimal actions. In this paper, we propose an extended context representation for the MAPE-K loop that integrates the history of planned actions as well as their expected effects over time into the context representation. This information can then be used during the upcoming analysis and planning phases to compare measured and expected context metrics. We demonstrate on a cloud elasticity manager case study that such a temporal action-aware context leads to improved reasoners while remaining highly scalable.
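A minimal sketch of such an action-aware context, assuming illustrative names (none taken from the paper or from OpenStack Watcher): the knowledge base records each planned action together with its expected effect and deadline, and the analysis phase consults in-flight actions before reacting again.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Sketch of an action-aware context: alongside measured metrics, the
// knowledge base keeps every planned action together with the effect it
// is expected to have produced by a given deadline. All names here are
// illustrative, not taken from the paper or from OpenStack Watcher.
public class ActionAwareContext {

    record PlannedAction(String name, Instant started, Instant expectedEnd,
                         String metric, double expectedValue) {}

    private final List<PlannedAction> history = new ArrayList<>();

    public void recordAction(PlannedAction a) { history.add(a); }

    // During the analysis phase: a metric deviation is only treated as a
    // problem if no in-flight action is already expected to correct it.
    public boolean alreadyAddressedBy(String metric, double target, Instant now) {
        return history.stream().anyMatch(a ->
                a.metric().equals(metric)
                && a.expectedEnd().isAfter(now)          // action still in flight
                && Math.abs(a.expectedValue() - target) < 1e-6);
    }

    public static void main(String[] args) {
        ActionAwareContext ctx = new ActionAwareContext();
        Instant now = Instant.now();
        // A container migration started 10s ago, expected to halve host load
        ctx.recordAction(new PlannedAction("migrate-vm42", now.minusSeconds(10),
                now.plusSeconds(50), "host1.load", 0.5));
        // The planner should NOT schedule a second migration for the same goal
        System.out.println(ctx.alreadyAddressedBy("host1.load", 0.5, now)); // true
    }
}
```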
; Hartmann, Thomas
in 33rd Annual ACM Symposium on Applied Computing (SAC'18) (2018, April)
Time series are commonly used to store temporal data, e.g., sensor measurements. However, when it comes to complex analytics and learning tasks, these measurements have to be combined with structural context data. Temporal graphs, connecting multiple time series, have proven to be very suitable to organize such data and ultimately empower analytic algorithms. Computationally intensive tasks often need to be distributed and parallelized among different workers. For tasks that cannot be split into independent parts, several workers have to concurrently read and update these shared temporal graphs. This leads to inconsistency risks, especially in the case of frequent updates. Distributed locks can mitigate these risks but come at a very high performance cost. In this paper, we present a lock-free approach that allows concurrent modifications of temporal graphs. Our approach is based on a composition operator able to perform online reconciliation of concurrent modifications of temporal graphs. We evaluate the efficiency and scalability of our approach compared to lock-based approaches.
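The lock-free principle can be sketched as follows, assuming a simple last-write-wins union as the composition operator (the paper's reconciliation is richer): writers publish immutable copies of a node's timeline via compare-and-set and retry on contention instead of locking.

```java
import java.util.TreeMap;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the lock-free idea: each shared node holds an immutable
// snapshot of its timeline; writers build a merged copy and publish it
// with compareAndSet, retrying on contention instead of taking a lock.
// The composition operator here is a plain "last write wins per
// timestamp" union, standing in for the paper's reconciliation.
public class LockFreeTimeline {

    private final AtomicReference<TreeMap<Long, Double>> state =
            new AtomicReference<>(new TreeMap<>());

    public void insert(long time, double value) {
        while (true) {
            TreeMap<Long, Double> current = state.get();
            TreeMap<Long, Double> merged = new TreeMap<>(current); // copy
            merged.put(time, value);                               // compose
            if (state.compareAndSet(current, merged)) return;      // publish
            // else: another worker won the race; retry against its result
        }
    }

    public Double valueAt(long time) {
        var e = state.get().floorEntry(time); // latest value at or before t
        return e == null ? null : e.getValue();
    }

    public static void main(String[] args) throws InterruptedException {
        LockFreeTimeline node = new LockFreeTimeline();
        Thread a = new Thread(() -> { for (int t = 0; t < 1000; t += 2) node.insert(t, t); });
        Thread b = new Thread(() -> { for (int t = 1; t < 1000; t += 2) node.insert(t, -t); });
        a.start(); b.start(); a.join(); b.join();
        System.out.println(node.valueAt(999)); // no update lost, no lock taken
    }
}
```

The retry loop is the classic optimistic-concurrency pattern: under contention a worker simply redoes its merge against the winning state, so no update is lost and no thread ever blocks.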
Toader, Bogdan
in A New Modelling Framework over Temporal Graphs for Collaborative Mobility Recommendation Systems (2018, March 15)
Over the years, collaborative mobility proved to be an important but challenging component of the smart cities paradigm. One of the biggest challenges in the smart mobility domain is the use of data science as an enabler for the implementation of large-scale transportation sharing solutions. In particular, the next generation of Intelligent Transportation Systems (ITS) requires the combination of artificial intelligence and discrete simulations when exploring the effects of what-if decisions in complex scenarios with millions of users. In this paper, we address this challenge by presenting an innovative data modelling framework that can be used for ITS-related problems. We demonstrate that the use of graphs and time series in multi-dimensional data models can satisfy the requirements of descriptive and predictive analytics in real-world case studies with massive amounts of continuously changing data. The features of the framework are explained in a case study of a complex collaborative mobility system that combines carpooling, carsharing and shared parking. The performance of the framework is tested with a large-scale dataset, performing machine learning tasks and interactive real-time data visualization. The outcome is a fast, efficient and complete architecture that can easily be deployed, tested and used for research as well as in an industrial environment.

; Hartmann, Thomas
in 2017 ACM/IEEE 20th International Conference on Model Driven Engineering Languages and Systems (2017, September)
The conviction that big data analytics is a key to the success of modern businesses is growing deeper, and the mobilisation of companies into adopting it becomes increasingly important. Big data integration projects enable companies to capture their relevant data, to store it efficiently, turn it into domain knowledge, and finally monetize it. In this context, historical data, also called temporal data, is becoming increasingly available and delivers means to analyse the history of applications, discover temporal patterns, and predict future trends. Despite the fact that most of the data that today's applications are dealing with is inherently temporal, current approaches, methodologies, and environments for developing these applications don't provide sufficient support for handling time. We envision that Model-Driven Engineering (MDE) would be an appropriate ecosystem for a seamless and orthogonal integration of time into domain modelling and processing. In this paper, we investigate the state of the art in MDE techniques and tools in order to identify the missing bricks for raising time-awareness in MDE and outline research directions in this emerging domain.

Hartmann, Thomas
in Proceedings of the 29th International Conference on Software Engineering and Knowledge Engineering (2017, July)
Modern analytics solutions succeed in understanding and predicting phenomena in a large diversity of software systems, from social networks to Internet-of-Things platforms. This success challenges analytics algorithms to deal with more and more complex data, which can be structured as graphs and evolve over time. However, the underlying data storage systems that support large-scale data analytics, such as time-series or graph databases, fail to accommodate both dimensions, which limits the integration of more advanced analyses that take into account, for example, the history of complex graphs. This paper therefore introduces a formal and practical definition of temporal graphs. Temporal graphs provide a compact representation of time-evolving graphs that can be used to analyze complex data in motion. In particular, we demonstrate with our open-source implementation, named GREYCAT, that the performance of temporal graphs allows analytics solutions to deal with rapidly evolving large-scale graphs.
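The core mechanics behind such temporal graphs can be sketched as a node that stores a timeline per attribute and resolves every read at an explicit timestamp; names are illustrative and this is not GreyCat's actual API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch of time as a first-class dimension: a node does not store one
// value per attribute but a timeline of values, and every read is made
// "at" a timestamp. No snapshot copies of the graph are ever needed.
public class TemporalNode {

    private final Map<String, TreeMap<Long, Object>> attributes = new HashMap<>();

    public void set(long time, String attr, Object value) {
        attributes.computeIfAbsent(attr, k -> new TreeMap<>()).put(time, value);
    }

    // Resolve the state valid at 'time': the last write at or before it.
    public Object get(long time, String attr) {
        TreeMap<Long, Object> timeline = attributes.get(attr);
        if (timeline == null) return null;
        var e = timeline.floorEntry(time);
        return e == null ? null : e.getValue();
    }

    public static void main(String[] args) {
        TemporalNode meter = new TemporalNode();
        meter.set(100, "load", 1.5);
        meter.set(200, "load", 2.5);
        System.out.println(meter.get(150, "load")); // 1.5
        System.out.println(meter.get(250, "load")); // 2.5
    }
}
```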
Hartmann, Thomas
in Software and Systems Modeling (2017)
Machine learning algorithms are designed to resolve unknown behaviors by extracting commonalities over massive datasets. Unfortunately, learning such global behaviors can be inaccurate and slow for systems composed of heterogeneous elements which behave very differently, as is the case, for instance, for cyber-physical systems and Internet of Things applications. Instead, to make smart decisions, such systems have to continuously refine the behavior on a per-element basis and compose these small learning units together. However, combining and composing learned behaviors from different elements is challenging and requires domain knowledge. Therefore, there is a need to structure and combine the learned behaviors and domain knowledge together in a flexible way. In this paper we propose to weave machine learning into domain modeling. More specifically, we suggest to decompose machine learning into reusable, chainable, and independently computable small learning units, which we refer to as micro learning units. These micro learning units are modeled together with, and at the same level as, the domain data. We show, based on a smart grid case study, that our approach can be significantly more accurate than learning a global behavior, while the performance is fast enough to be used for live learning.
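A minimal sketch of the micro learning unit idea, under assumed names: each domain element carries its own tiny, independently updatable learner (here an online mean), and coarser units are obtained by composing the fine-grained ones.

```java
import java.util.List;

// Sketch of micro learning units: a small, independently computable
// learner is attached to each domain element (one per smart meter), and
// a coarser unit composes them. The paper models such units directly in
// the metamodel; all names here are illustrative.
public class MicroLearning {

    // One learning unit per element: an incremental mean, updated live.
    static class OnlineMean {
        private long n; private double mean;
        void learn(double x) { n++; mean += (x - mean) / n; }
        double infer() { return mean; }
    }

    static class Meter {
        final String id; final OnlineMean consumption = new OnlineMean();
        Meter(String id) { this.id = id; }
    }

    public static void main(String[] args) {
        Meter m1 = new Meter("meter-1"), m2 = new Meter("meter-2");
        m1.consumption.learn(1.0);  m1.consumption.learn(3.0);  // behaves one way
        m2.consumption.learn(40.0); m2.consumption.learn(60.0); // very differently
        // Composition: a substation-level unit chains the per-element units
        double substationLoad = List.of(m1, m2).stream()
                .mapToDouble(m -> m.consumption.infer()).sum();
        System.out.println(substationLoad); // 52.0, no coarse global model needed
    }
}
```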
Mouline, Ludovic
in Mouline, Ludovic; Hartmann, Thomas; Fouquet, François (Eds.) et al, Programming '17: Companion to the first International Conference on the Art, Science and Engineering of Programming (2017, April)
Smart systems are characterised by their ability to analyse measured data live and to react to changes according to expert rules. Therefore, such systems exploit appropriate data models together with actions triggered by domain-related conditions. The challenge at hand is that smart systems usually need to process thousands of updates to detect which rules need to be triggered, often even on restricted hardware like a Raspberry Pi. Although various approaches have been investigated to efficiently check conditions on data models, they either assume that models fit into main memory or rely on high-latency persistent storage systems that severely damage the reactivity of smart systems. To tackle this challenge, we propose a novel composition process, which weaves executable rules into a data model with lazy loading abilities. We show quantitatively, on a smart building case study, that our approach can handle large sets of rules on top of large-scale data models at low latency on restricted hardware.
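A sketch of the weaving idea with hypothetical names: each rule is attached to the attribute it depends on, so an update evaluates only the affected condition instead of scanning the whole model (the lazy loading of off-memory model parts is elided here).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// Sketch of weaving executable rules into the data model: an attribute
// carries the condition that depends on it, so an update only evaluates
// the affected rule instead of rescanning thousands of elements. The
// lazy-loading part (resolving off-heap neighbours on demand) is elided.
public class RuleWovenModel {

    record Rule(String name, Predicate<Double> condition, Runnable action) {}

    private final Map<String, Double> values = new HashMap<>();
    private final Map<String, Rule> rulesByAttribute = new HashMap<>();

    public void weave(String attribute, Rule rule) {
        rulesByAttribute.put(attribute, rule);
    }

    public void update(String attribute, double value) {
        values.put(attribute, value);
        Rule r = rulesByAttribute.get(attribute);   // only the woven rule runs
        if (r != null && r.condition().test(value)) r.action().run();
    }

    public static void main(String[] args) {
        RuleWovenModel model = new RuleWovenModel();
        model.weave("room1.temperature", new Rule("overheat",
                t -> t > 30.0, () -> System.out.println("start cooling")));
        model.update("room1.temperature", 25.0); // nothing triggered
        model.update("room1.temperature", 31.5); // prints "start cooling"
    }
}
```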
Hartmann, Thomas
Doctoral thesis (2016)
Advances in software, embedded computing, sensors, and networking technologies will lead to a new generation of smart cyber-physical systems that will far exceed the capabilities of today's embedded systems. They will be entrusted with increasingly complex tasks like controlling electric grids or autonomously driving cars. These systems have the potential to lay the foundations for tomorrow's critical infrastructures, to form the basis of emerging and future smart services, and to improve the quality of our everyday lives in many areas. In order to solve their tasks, they have to continuously monitor and collect data from physical processes, analyse this data, and make decisions based on it. Making smart decisions requires a deep understanding of the environment, internal state, and the impacts of actions. Such deep understanding relies on efficient data models to organise the sensed data and on advanced analytics. Considering that cyber-physical systems are controlling physical processes, decisions need to be taken very fast. This makes it necessary to analyse data live, as opposed to conventional batch analytics. However, the complex nature combined with the massive amount of data generated by such systems imposes fundamental challenges. While data in the context of cyber-physical systems shares some characteristics with big data, it holds a particular complexity. This complexity results from the complicated physical phenomena described by this data, which makes it difficult to extract a model able to explain such data and its various multi-layered relationships. Existing solutions fail to provide sustainable mechanisms to analyse such data live. This dissertation presents a novel approach, named model-driven live analytics. The main contribution of this thesis is a multi-dimensional graph data model that brings raw data, domain knowledge, and machine learning together in a single model, which can drive live analytic processes. This model is continuously updated with the sensed data and can be leveraged by live analytic processes to support decision-making of cyber-physical systems. The presented approach has been developed in collaboration with an industrial partner and, in the form of a prototype, applied to the domain of smart grids. The addressed challenges are derived from this collaboration as a response to shortcomings in the current state of the art. More specifically, this dissertation provides solutions for the following challenges: First, data handled by cyber-physical systems is usually dynamic (data in motion, as opposed to traditional data at rest) and changes frequently and at different paces. Analysing such data is challenging since data models usually can only represent a snapshot of a system at one specific point in time. A common approach consists in a discretisation, which regularly samples and stores such snapshots at specific timestamps to keep track of the history. Continuously changing data is then represented as a finite sequence of such snapshots. Such data representations would be very inefficient to analyse, since it would be necessary to mine the snapshots, extract a relevant dataset, and finally analyse it. For this problem, this thesis presents a temporal graph data model and storage system, which consider time as a first-class property. A time-relative navigation concept enables very efficient analysis of frequently changing data. Secondly, making sustainable decisions requires anticipating the impacts certain actions would have. In complex cyber-physical systems, situations can arise where hundreds or thousands of such hypothetical actions must be explored before a solid decision can be made. Every action leads to an independent alternative from where a set of other actions can be applied, and so forth. Finding the sequence of actions that leads to the desired alternative requires efficiently creating, representing, and analysing many different alternatives. Given that every alternative has its own history, this creates a very high combinatorial complexity of alternatives and histories, which is hard to analyse. To tackle this problem, this dissertation introduces a multi-dimensional graph data model (as an extension of the temporal graph data model) that enables the efficient live representation, storage, and analysis of many different alternatives. Thirdly, complex cyber-physical systems are often distributed, but to fulfil their tasks these systems typically need to share context information between computational entities. This requires analytic algorithms to reason over distributed data, which is a complex task since it relies on the aggregation and processing of various distributed and constantly changing data. To address this challenge, this dissertation proposes an approach to transparently distribute the presented multi-dimensional graph data model in a peer-to-peer manner and defines a stream processing concept to efficiently handle frequent changes. Fourthly, to meet future needs, cyber-physical systems need to become increasingly intelligent. To make smart decisions, these systems have to continuously refine behavioural models that are known at design time with what can only be learned from live data. Machine learning algorithms can help to resolve this unknown behaviour by extracting commonalities over massive datasets. Nevertheless, a coarse-grained common behaviour model can be very inaccurate for cyber-physical systems, which are composed of completely different entities with very different behaviour. For these systems, fine-grained learning can be significantly more accurate. However, modelling, structuring, and synchronising many fine-grained learning units is challenging. To tackle this, this thesis presents an approach to define reusable, chainable, and independently computable fine-grained learning units, which can be modelled together with and on the same level as domain data. This allows machine learning to be woven directly into the presented multi-dimensional graph data model. In summary, this thesis provides an efficient multi-dimensional graph data model to enable live analytics of complex, frequently changing, and distributed data of cyber-physical systems. This model can significantly improve data analytics for such systems and empower cyber-physical systems to make smart decisions live. The presented solutions combine and extend methods from model-driven engineering, models@run.time, data analytics, database systems, and machine learning.

Hartmann, Thomas
in 31st Annual ACM Symposium on Applied Computing (SAC'16) (2016, April)
Micro-generations and future grid usages, such as charging of electric cars, raise major challenges for monitoring the electric load in low-voltage cables. Due to the highly interconnected nature of the grid, real-time measurements are problematic, both economically and technically. This entails an overload risk in electricity networks when cables must be disconnected for maintenance reasons or are accidentally damaged. Therefore, it is of great interest for electricity grid providers to anticipate the load in networks and to detect failures more quickly. However, computing the electric load in cables requires computationally intensive power-flow calculations and live consumption measurements. Today's view of the grid is usually based on on-field documentation of cables, fuses, and measurements by technicians and is therefore often outdated. Thus, the electric load is usually only simulated in case of major topology variations. However, live measurements from smart meters provide new opportunities. In this paper we present a novel approach for near real-time electric load approximation by deriving the current electric topology and cable loads live from smart meter data. We leverage the models@run.time paradigm to combine live measurements with topology characteristics of the grid. Our approach enables the approximation of the load in cables, not only for the current grid topology, but also to simulate topology changes for maintenance purposes. We show that this allows a near real-time approximation while remaining very accurate (average deviation of 1.89% compared to offline power-flow calculation tools). Developed with a grid operator, this approach will be integrated into a monitoring and warning system and as an embeddable solution for on-field simulation.

Hartmann, Thomas
in 2015 IEEE International Conference on Smart Grid Communications (SmartGridComm) (2015, November)
The transition from today's electricity grid to the so-called smart grid relies heavily on the usage of modern information and communication technology to enable advanced features like two-way communication, automated control of devices, and automated meter reading. The digital backbone of the smart grid opens the door for advanced collecting, monitoring, and processing of customers' energy consumption data. One promising approach is the automatic detection of suspicious consumption values, e.g., due to physically or digitally manipulated data or damaged devices. However, detecting suspicious values in the sheer amount of meter data is challenging, especially because electric consumption heavily depends on the context. For instance, a customer's energy consumption profile may change during vacation or weekends compared to normal working days. In this paper we present an advanced software monitoring and alerting system for suspicious consumption value detection based on live machine learning techniques. Our proposed system continuously learns context-dependent consumption profiles of customers, e.g., daily, weekly, and monthly profiles, classifies them, and selects the most appropriate one according to the context, like date and weather. By learning not just one but several profiles per customer and, in addition, taking context parameters into account, our approach can minimize false alerts (low false-positive rate). We evaluate our approach in terms of performance (live detection) and accuracy based on a data set from our partner, Creos Luxembourg S.A., the electricity grid operator in Luxembourg.
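The profile-per-context idea of this monitoring system can be sketched as follows, with illustrative thresholds and context labels: a reading is judged against the profile learned for its context, which is what keeps the false-positive rate low.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of context-dependent profiling: one learned profile per context
// (weekday/weekend/vacation, ...); a reading is only flagged when it
// deviates from the profile selected for the current context. The
// three-sigma rule and all names are illustrative, not the paper's.
public class ConsumptionMonitor {

    static class Profile {
        long n; double mean; double m2; // Welford's online statistics
        void learn(double x) {
            n++; double d = x - mean; mean += d / n; m2 += d * (x - mean);
        }
        double std() { return n > 1 ? Math.sqrt(m2 / (n - 1)) : 0; }
        boolean suspicious(double x) { return n > 10 && Math.abs(x - mean) > 3 * std(); }
    }

    private final Map<String, Profile> profiles = new HashMap<>();

    Profile profileFor(String context) {
        return profiles.computeIfAbsent(context, c -> new Profile());
    }

    public static void main(String[] args) {
        ConsumptionMonitor monitor = new ConsumptionMonitor();
        for (int day = 0; day < 30; day++) {            // normal working days
            monitor.profileFor("weekday").learn(10 + Math.random());
        }
        // 0.1 kWh on a weekday is suspicious (manipulated or broken meter?)
        System.out.println(monitor.profileFor("weekday").suspicious(0.1)); // true
        // ...while the same value judged against a "vacation" profile may not be.
    }
}
```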
Moawad, Assaad
in Lethbridge, Timothy; Cabot, Jordi; Egyed, Alexander (Eds.), 2015 ACM/IEEE 18th International Conference on Model Driven Engineering Languages and Systems (MODELS) (2015, September)
Internet of Things applications analyze our past habits through sensor measures to anticipate future trends. To yield accurate predictions, intelligent systems not only rely on single numerical values, but also on structured models aggregated from different sensors. Computation theory, based on the discretization of observable data into timed events, can easily lead to millions of values. Time series and similar database structures can efficiently index the mere data, but quickly reach computation and storage limits when it comes to structuring and processing IoT data. We propose a concept of continuous models that can handle highly volatile IoT data by defining a new type of meta attribute, which represents the continuous nature of IoT data. On top of traditional discrete object-oriented modeling APIs, we enable models to represent very large sequences of sensor values by using mathematical polynomials. We show on various IoT datasets that this significantly improves storage and reasoning efficiency.
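A sketch of such a continuous attribute, reduced to degree-1 polynomials for brevity (the paper encodes higher degrees): a segment absorbs new samples for free as long as its polynomial stays within a tolerated error, so slowly varying signals compress to a few coefficients.

```java
import java.util.TreeMap;

// Sketch of a continuous attribute: samples are encoded as piecewise
// polynomials (degree 1 here for brevity). A segment absorbs new samples
// for free while its polynomial stays within a tolerated error; only
// then is a new segment opened. All names are illustrative.
public class ContinuousAttribute {

    // value(t) = a + b * (t - t0) on [t0, ...)
    record Segment(long t0, double a, double b) {
        double value(long t) { return a + b * (t - t0); }
    }

    private final TreeMap<Long, Segment> segments = new TreeMap<>();
    private final double tolerance;
    private Long pendingT; private Double pendingV; // first sample of a segment

    ContinuousAttribute(double tolerance) { this.tolerance = tolerance; }

    public void insert(long t, double v) {
        var e = segments.floorEntry(t);
        if (e != null && Math.abs(e.getValue().value(t) - v) <= tolerance) {
            return; // representable by the existing polynomial: store nothing
        }
        if (pendingT == null) { pendingT = t; pendingV = v; return; }
        double slope = (v - pendingV) / (t - pendingT); // fit a line on 2 points
        segments.put(pendingT, new Segment(pendingT, pendingV, slope));
        pendingT = null; pendingV = null;
    }

    public double get(long t) { return segments.floorEntry(t).getValue().value(t); }

    public static void main(String[] args) {
        ContinuousAttribute temp = new ContinuousAttribute(0.1);
        for (long t = 0; t < 1_000_000; t++) temp.insert(t, 20.0 + 0.001 * t);
        System.out.println(temp.get(500_000)); // ~520.0, from a single stored segment
    }
}
```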
Hartmann, Thomas
in Lethbridge, Timothy; Cabot, Jordi; Egyed, Alexander (Eds.), 2015 ACM/IEEE 18th International Conference on Model Driven Engineering Languages and Systems (MODELS) (2015, September)
The models@run.time paradigm promotes the use of models during the execution of cyber-physical systems to represent their context and to reason about their runtime behaviour. However, current modeling techniques do not allow coping at the same time with the large-scale, distributed, and constantly changing nature of these systems. In this paper, we introduce a distributed models@run.time approach, combining ideas from reactive programming, peer-to-peer distribution, and large-scale models@run.time. We define distributed models as observable streams of chunks that are exchanged between nodes in a peer-to-peer manner. A lazy loading strategy allows transparent access to the complete virtual model from every node, although chunks are actually distributed across nodes. Observers and automatic reloading of chunks enable a reactive programming style. We integrated our approach into the Kevoree Modeling Framework and demonstrate that it enables frequently changing, reactive distributed models that can scale to millions of elements and several thousand nodes.
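A sketch of the chunk-stream idea, with the peer-to-peer transport replaced by an in-memory stand-in and all names assumed (this is not the Kevoree Modeling Framework API): chunks are fetched lazily on first access, and observers re-run when a peer publishes a newer version.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of models as observable streams of chunks: a node only keeps
// the chunks it actually touched (lazy loading) and re-runs its
// observers whenever a peer publishes a newer version of a chunk. The
// peer-to-peer transport is faked by a shared in-memory "network".
public class ChunkedModel {

    record Chunk(String id, long version, Map<String, Object> data) {}

    static final Map<String, Chunk> network = new HashMap<>(); // stand-in for P2P

    private final Map<String, Chunk> localCache = new HashMap<>();
    private final Map<String, List<Consumer<Chunk>>> observers = new HashMap<>();

    public Chunk resolve(String id) { // lazy: fetch on first access only
        return localCache.computeIfAbsent(id, network::get);
    }

    public void observe(String id, Consumer<Chunk> obs) {
        observers.computeIfAbsent(id, k -> new ArrayList<>()).add(obs);
    }

    public void onPublished(Chunk c) { // called when a peer pushes an update
        localCache.put(c.id(), c);
        observers.getOrDefault(c.id(), List.of()).forEach(o -> o.accept(c));
    }

    public static void main(String[] args) {
        network.put("node42", new Chunk("node42", 1, Map.of("state", "ok")));
        ChunkedModel peer = new ChunkedModel();
        peer.observe("node42", c -> System.out.println("reacting to v" + c.version()));
        System.out.println(peer.resolve("node42").data()); // {state=ok}, loaded lazily
        peer.onPublished(new Chunk("node42", 2, Map.of("state", "hot"))); // reacting to v2
    }
}
```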
Moawad, Assaad
in The 30th Annual ACM Symposium on Applied Computing (2015, April)
Given the trend towards mobile computing, the next generation of ubiquitous "smart" services will have to continuously analyze surrounding sensor data. More than ever, such services will rely on data potentially related to personal activities to perform their tasks, e.g., to predict urban traffic or local weather conditions. However, revealing personal data inevitably entails privacy risks, especially when data is shared with high precision and frequency. For example, by analyzing precise electric consumption data, it can be inferred whether a person is currently at home; however, this same data can empower new services such as a smart heating system. Access control (forbidding or granting access) and anonymization techniques are not able to deal with such trade-offs, because they either completely prohibit access to data or lose source traceability. Blurring techniques, by tuning data quality, offer a wide range of trade-offs between privacy and utility for services. However, the number of ubiquitous services and their data quality requirements lead to an explosion of possible configurations of blurring algorithms. To manage this complexity, in this paper we propose a platform that automatically adapts (at runtime) blurring components between data owners and data consumers (services). The platform searches for the optimal trade-off between service utility and privacy risks using multi-objective evolutionary algorithms to adapt the underlying communication platform. We evaluate our approach on a sensor network gateway and show its suitability in terms of i) effectiveness in finding an appropriate solution, and ii) efficiency and scalability.
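One of the simplest blurring components can be sketched as quantization of a reading before it leaves the gateway; the step width is exactly the kind of knob the multi-objective search would tune per service (values and names below are illustrative, not the platform's).

```java
// Sketch of one simple blurring component: reducing the resolution of a
// reading before sharing it. Coarse rounding lowers the privacy risk
// (presence at home is harder to infer) but also the utility for the
// consuming service; a MOEA can search this trade-off space per service.
public class PrecisionBlur {

    private final double step; // e.g. 0.001 kWh (precise) ... 1.0 kWh (blurry)

    PrecisionBlur(double step) { this.step = step; }

    double blur(double reading) {
        return Math.round(reading / step) * step; // quantize to the chosen step
    }

    public static void main(String[] args) {
        double reading = 0.734; // kWh in the last 15 minutes
        System.out.println(new PrecisionBlur(0.001).blur(reading)); // ~0.734
        System.out.println(new PrecisionBlur(0.5).blur(reading));   // 0.5
        // An optimizer could pick 'step' per service: a billing service gets
        // fine data, a weather-correlation service only coarse aggregates.
    }
}
```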
Moawad, Assaad
in Hammoudi, Slimane; Pires, Luis Ferreira; Desfray, Philippe (Eds.) et al, MODELSWARD 2015 - Proceedings of the 3rd International Conference on Model-Driven Engineering and Software Development (2015, February)
Multi-Objective Evolutionary Algorithms (MOEAs) have been successfully used to optimize various domains such as finance, science, engineering, logistics, and software engineering. Nevertheless, MOEAs are still very complex to apply and require detailed knowledge about problem encoding and mutation operators to obtain an effective implementation. Software engineering paradigms such as domain-driven design aim to tackle this complexity by allowing domain experts to focus on domain logic over technical details. Similarly, in order to handle MOEA complexity, we propose an approach, using model-driven software engineering (MDE) techniques, to define fitness functions and mutation operators without MOEA encoding knowledge. Integrated into an open-source modelling framework, our approach can significantly simplify the development and maintenance of multi-objective optimizations. By leveraging modeling methods, our approach allows reusable optimizations and seamlessly connects the MOEA and MDE paradigms. We evaluate our approach on a cloud case study and show its suitability in terms of i) the complexity of implementing a multi-objective optimization problem, ii) the complexity of adapting (maintaining) this implementation in response to changes in the domain model and/or optimization goals, and iii) efficiency and effectiveness, which remain comparable to ad-hoc implementations.
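The spirit of the approach, in illustrative code that is not the framework's API: fitness and mutation are written directly against domain objects, with no problem-specific encoding; a trivial (1+1) hill climber stands in here for a full MOEA such as NSGA-II.

```java
import java.util.Random;

// Sketch of model-driven fitness: the "genome" IS a domain model (a
// cloud node allocation), so a domain expert writes objectives against
// it directly. A weighted sum and a (1+1) loop keep the sketch short;
// real MOEAs keep objectives separate and compute Pareto fronts.
public class ModelDrivenFitness {

    record Allocation(int nodes, double cpuPerNode) {       // the domain model
        double cost()    { return nodes * 2.5; }                  // objective 1
        double latency() { return 100.0 / (nodes * cpuPerNode); } // objective 2
    }

    static double fitness(Allocation a) { return a.cost() + 10 * a.latency(); }

    // Mutation is also expressed on the domain model, not on a bit string.
    static Allocation mutate(Allocation a, Random rnd) {
        return new Allocation(Math.max(1, a.nodes() + rnd.nextInt(3) - 1),
                              a.cpuPerNode());
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        Allocation best = new Allocation(1, 2.0);
        for (int i = 0; i < 100; i++) {
            Allocation candidate = mutate(best, rnd);
            if (fitness(candidate) < fitness(best)) best = candidate;
        }
        System.out.println(best); // converges towards a cheap-but-fast allocation
    }
}
```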
Hartmann, Thomas
in 2014 IEEE International Conference on Smart Grid Communications (SmartGridComm) (2014, November)
Today's electricity grid must undergo substantial changes in order to keep pace with the rising demand for energy. The vision of the smart grid aims to increase the efficiency and reliability of today's electricity grid, e.g., by integrating renewable energies and distributed micro-generations. The backbone of this effort is the facilitation of information and communication technologies to allow two-way communication and automated control of devices. The underlying communication topology is essential for the smart grid and is what enables the smart grid to be smart. Analyzing, simulating, designing, and comparing smart grid infrastructures, but also optimizing routing algorithms and predicting the impacts of failures, all rely on deep knowledge of a smart grid's communication topology. However, since smart grids are still in a research and test phase, it is very difficult to get access to real-world topology data. In this paper we provide a comprehensive analysis of the power-line communication topology of a real-world smart grid, the one currently deployed and tested in Luxembourg. Building on the results of this analysis, we implement a generator to automatically create random but realistic smart grid communication topologies. These can be used by researchers and industrial professionals to analyze, simulate, design, compare, and improve smart grid infrastructures.

Hartmann, Thomas
Poster (2014, July 02)
Intelligent systems continuously analyze their context to autonomously take corrective actions. Building a proper knowledge representation of the context is the key to taking adequate actions. This requires numerous and complex data models, for example formalized as ontologies or meta-models. As these systems evolve in a dynamic context, reasoning processes typically need to analyze and compare the current context with its history. A common approach consists in a temporal discretization, which regularly samples the context (snapshots) at specific timestamps to keep track of the history. Reasoning processes would then need to mine a huge amount of data, extract a relevant view, and finally analyze it. This would require lots of computational power and be time-consuming, conflicting with the near real-time response requirements of intelligent systems. This paper introduces a novel temporal modeling approach together with a time-relative navigation between context concepts to overcome this limitation. Similarly to time distortion theory, our approach enables building time-distorted views of a context, composed of elements coming from different times, which speeds up the reasoning. We demonstrate the efficiency of our approach with a smart grid load prediction reasoning engine.

Hartmann, Thomas
in Proceedings of the 26th International Conference on Software Engineering and Knowledge Engineering (2014, July)
Intelligent systems continuously analyze their context to autonomously take corrective actions. Building a proper knowledge representation of the context is the key to taking adequate actions. This requires numerous and complex data models, for example formalized as ontologies or meta-models. As these systems evolve in a dynamic context, reasoning processes typically need to analyze and compare the current context with its history. A common approach consists in a temporal discretization, which regularly samples the context (snapshots) at specific timestamps to keep track of the history. Reasoning processes would then need to mine a huge amount of data, extract a relevant view, and finally analyze it. This would require lots of computational power and be time-consuming, conflicting with the near real-time response requirements of intelligent systems. This paper introduces a novel temporal modeling approach together with a time-relative navigation between context concepts to overcome this limitation. Similarly to time distortion theory, our approach enables building time-distorted views of a context, composed of elements coming from different times, which speeds up the reasoning. We demonstrate the efficiency of our approach with a smart grid load prediction reasoning engine.
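A sketch of time-relative navigation with assumed names: every concept keeps its own timeline, and one reasoning pass may read different concepts at different timestamps, yielding a time-distorted view without materializing full snapshots.

```java
import java.util.TreeMap;

// Sketch of a time-distorted view: instead of loading full snapshots of
// the whole context per timestamp, the reasoner navigates from each
// element to the version relevant FOR IT, so one "view" mixes elements
// coming from different times (e.g. the latest meter reading together
// with a cable state written an hour earlier). Names are illustrative.
public class TimeDistortedView {

    static class Versioned<T> {
        private final TreeMap<Long, T> history = new TreeMap<>();
        void set(long t, T v) { history.put(t, v); }
        T at(long t) { var e = history.floorEntry(t); return e == null ? null : e.getValue(); }
    }

    public static void main(String[] args) {
        Versioned<Double> meterLoad = new Versioned<>();
        Versioned<String> cableState = new Versioned<>();
        meterLoad.set(1000, 3.2); meterLoad.set(2000, 7.9);
        cableState.set(500, "closed");

        // One reasoning pass, several time coordinates:
        long now = 2000, oneHourAgo = 1000;
        System.out.println(meterLoad.at(now));        // 7.9
        System.out.println(meterLoad.at(oneHourAgo)); // 3.2 (same object, older time)
        System.out.println(cableState.at(now));       // "closed", unchanged since 500
        // No snapshot of the full context was ever materialized.
    }
}
```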
Hartmann, Thomas
in Proceedings of the Second Open EIT ICT Labs Workshop on Smart Grid Security (SmartGridSec14) (2014, April)
Smart grids leverage modern information and communication technology to offer new perspectives to electricity consumers, producers, and distributors. However, these new possibilities also increase the complexity of the grid and make it more prone to failures. Moreover, new advanced features like remotely disconnecting meters create new vulnerabilities and make smart grids an attractive target for cyber attackers. We claim that, due to the nature of smart grids, unforeseen attacks and failures cannot be effectively countered by relying solely on proactive security techniques. We believe that a reactive and corrective approach can offer a long-term solution and is able both to minimize the impact of attacks and to deal with unforeseen failures. In this paper we present a novel approach combining a models@run.time-based simulation and reasoning engine with reactive security techniques to intelligently monitor and continuously adapt the smart grid to varying conditions in near real-time.

Hartmann, Thomas
in Dingel, Juergen; Schulte, Wolfram; Ramos, Isidro (Eds.) et al, Model-Driven Engineering Languages and Systems - 17th International Conference, MODELS 2014, Valencia, Spain, September 28 - October 3, 2014, Proceedings (2014)
Models@run.time provides semantically rich reflection layers enabling intelligent systems to reason about themselves and their surrounding context. Most reasoning processes require exploring not only the current state but also the past history to take sustainable decisions, e.g., to avoid oscillating between states. Models@run.time and model-driven engineering in general lack native mechanisms to efficiently support the notion of history, and current approaches usually generate redundant data when versioning models, which reasoners then need to navigate. Because of this limitation, models fail to provide suitable and sustainable abstractions to deal with domains relying on history-aware reasoning. This paper tackles this issue by considering history as a native concept of modeling foundations. Integrated, in conjunction with lazy load/storage techniques, into the Kevoree Modeling Framework, we demonstrate on a smart grid case study that this mechanism enables sustainable reasoning over massive historized models.
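The native-history idea can be sketched as per-attribute versioning with a movable temporal viewpoint (names assumed, not KMF's actual API): unchanged attributes are stored once no matter how many versions of the element exist, avoiding the redundancy of snapshot-based versioning.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch of history as a native modeling concept: versioning happens per
// attribute, not per model copy, so an unchanged attribute is stored once
// however many "versions" of the element exist. Reasoners then navigate
// history by shifting the viewpoint instead of diffing snapshots.
public class HistorizedElement {

    private final Map<String, TreeMap<Long, Object>> history = new HashMap<>();
    private long time; // the element's current temporal viewpoint

    HistorizedElement(long time) { this.time = time; }

    public void set(String attr, Object v) {
        history.computeIfAbsent(attr, k -> new TreeMap<>()).put(time, v);
    }

    public Object get(String attr) {
        var timeline = history.get(attr);
        var e = timeline == null ? null : timeline.floorEntry(time);
        return e == null ? null : e.getValue();
    }

    public HistorizedElement shiftTime(long t) { this.time = t; return this; }

    public static void main(String[] args) {
        HistorizedElement fuse = new HistorizedElement(0);
        fuse.set("state", "closed");
        fuse.set("owner", "substation-7");              // written once, at time 0
        fuse.shiftTime(1000).set("state", "open");      // only the change is stored
        System.out.println(fuse.shiftTime(500).get("state"));  // closed
        System.out.println(fuse.shiftTime(1500).get("state")); // open
        System.out.println(fuse.get("owner")); // substation-7, never duplicated
    }
}
```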