Last 7 days
Wang, Ziming, in ACM Transactions on Human-Robot Interaction (in press)
When flying robots are used in close-range interaction with humans, the noise they generate, also called consequential sound, is a critical parameter for user acceptance. We conjecture that there is a benefit in adding natural sounds to noisy domestic drones. To test this hypothesis experimentally, we carried out a mixed-methods study (N=56) of reported user perception of a sonified domestic flying robot with three sound conditions at three distances. The natural sounds were added to the robot's inherent noise during flight: a birdsong and a rain sound, plus a control condition with no added sound. The distances were set according to proxemics: near, middle, and far. Our results show that adding birdsong or rain sound affects participants' perceptions, and that proxemic distance plays a non-negligible role. For instance, participants liked the bird condition most when the drone was at the far distance, but disliked the same sound most at the near distance. We also found that participants' perceptions strongly depended on associations and interpretations derived from previous experience. We derive six concrete design recommendations.

Ligeti, Katalin, in Mitsilegas, Valsamis; Bergström, Maria; Konstadinides, Theodore (Eds.),
Research Handbook on EU Criminal Law (in press)

; Stecconi, Michele, in Annales de l'Institut Fourier (in press)

Vigano, Enrico, in IEEE Transactions on Software Engineering (in press)
Cyber-physical systems (CPSs) typically consist of a wide set of integrated, heterogeneous components; consequently, most of their critical failures relate to the interoperability of those components. Unfortunately, most CPS test automation techniques are preliminary, and industry still relies heavily on manual testing. With potentially incomplete, manually generated test suites, it is of paramount importance to assess their quality. Although mutation analysis has been shown to be an effective means of assessing test suite quality in some specific contexts, we lack approaches for CPSs. Indeed, existing approaches do not target interoperability problems and cannot be executed in the presence of black-box or simulated components, a typical situation with CPSs. In this paper, we introduce data-driven mutation analysis, an approach that assesses test suite quality by verifying whether the suite detects interoperability faults simulated by mutating the data exchanged by software components. To this end, we describe a data-driven mutation analysis technique (DaMAT) that automatically alters the data exchanged through data buffers. Our technique is driven by fault models in tabular form in which engineers specify how to mutate data items by selecting and configuring a set of mutation operators. We have evaluated DaMAT on CPSs in the space domain; specifically, the test suites for the software of a microsatellite and of nanosatellites launched into orbit last year.
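The tabular, fault-model-driven mutation of exchanged data described above can be sketched in a few lines; the buffer layout, field names, and operator names below are illustrative assumptions, not DaMAT's actual fault-model format:

```python
import struct

# Hypothetical tabular fault model: which field of the exchanged buffer to
# mutate and how. Field and operator names are invented for illustration.
FAULT_MODEL = [
    {"field": "altitude", "offset": 0, "fmt": "<f", "op": "freeze", "value": 0.0},
    {"field": "mode",     "offset": 4, "fmt": "<B", "op": "bitflip", "mask": 0x01},
]

def mutate_buffer(buf: bytearray, entry: dict) -> bytearray:
    """Apply one fault-model entry to a copy of the data buffer."""
    out = bytearray(buf)
    (val,) = struct.unpack_from(entry["fmt"], out, entry["offset"])
    if entry["op"] == "freeze":        # replace the value with a constant
        val = entry["value"]
    elif entry["op"] == "bitflip":     # flip the selected bits
        val ^= entry["mask"]
    struct.pack_into(entry["fmt"], out, entry["offset"], val)
    return out

# A 5-byte buffer: a float altitude followed by a one-byte mode flag.
original = bytearray(struct.pack("<fB", 123.5, 0x03))
mutants = [mutate_buffer(original, e) for e in FAULT_MODEL]
```

A test suite that still passes on every such mutant has failed to exercise the corresponding component interaction, which is the signal the analysis is after.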
Our results show that the approach effectively detects test suite shortcomings, is not affected by equivalent and redundant mutants, and entails acceptable costs.

Kasprzak, Mikolaj, in Annals of Applied Probability (in press)

Perucca, Antonella, in For the Learning of Mathematics (in press)

; ; Fisch, Christian, in Entrepreneurship and Regional Development (in press)
Technostress is an important by-product of information and communication technologies (ICT). The technostress literature suggests focusing on specific dimensions of technostress, such as techno-overload, which arises when ICT use demands working faster and longer. However, only a few studies have dealt with the technostress of small business owners, let alone techno-overload. This is surprising, since work overload in general has been identified as an important dimension of job stress for small business owners, and technostress as an important impediment for workers in general. The aim of the current study is to investigate the effect of techno-overload on well-being (a composite measure consisting of physical well-being, mental well-being, sleep quality, burnout, and loneliness) using three data sets of French small business owners. Our results indicate a strong negative correlation between techno-overload and our composite measure of well-being in all three data sets. We interpret our findings for several disciplines: information systems, small business and entrepreneurship, health and well-being, psychology, and organization studies.
Our data also allow for the identification of contextual effects of the COVID-19 pandemic, since one survey was conducted before, one at the start of, and one during the pandemic.

Neugebauer, Tibor, in Journal of Banking and Finance (in press)
Modigliani and Miller showed that the market value of a company is independent of its capital structure, and suggested that dividend policy makes no difference to this law of one price. We experimentally test the Modigliani-Miller theorem in a complete market with two simultaneously traded assets, employing two treatment variations. The first variation involves the dividend stream: the dividend payment order is either identical or independent. The second variation involves the market participation, or not, of an algorithmic arbitrageur. We find that the Modigliani-Miller law of one price holds on average, with or without an arbitrageur, when dividends are identical. The law of one price breaks down when the dividend payment order is independent, unless an arbitrageur participates.

Botev, Jean, in Proceedings of the 25th International Conference on Human-Computer Interaction (HCI International) (in press)

Priem, Karin, in Flury, Carmen; Geiss, Michael (Eds.), How Computers Entered the Classroom (1960-2000): Historical Perspectives (in press)
Numerous studies and handbooks in the history of education are devoted to the history of educational media and the evolution of educational technologies. This chapter puts an explicit focus on the implications and conceptual background of the United Nations Educational, Scientific and Cultural Organization's (UNESCO) technology-driven idea of education, which took shape even before the 1957 Sputnik shock. Eager to establish strong bonds between mass communication and education, UNESCO had by the late 1940s already begun to set up a powerful internal apparatus for media policy, which soon collaborated closely with its Education Division. From the late 1970s, UNESCO set out to establish a New World Information and Communication Order to further stabilize its global role in education and media policy. This chapter posits that textbooks, radio, TV, film, and computers served as interconnected elements of UNESCO's educational mission. By looking at these specific technological ecologies of education, I connect research into the history of education with research into UNESCO's media policies. This conceptual-history approach demonstrates that education is not only based on ethical norms, teaching, and learning but is also tied to technological properties that offer access to knowledge and its acquisition. In addition, when studying UNESCO it becomes evident that the organization's education-technology nexus is also closely connected with the media and publishing industries.

; Clark, Andrew, in Oxford Economic Papers (in press)
Economic insecurity has attracted growing attention, but there is no consensus as to its definition.
We characterize a class of individual economic-insecurity measures based on the time profile of economic resources. We apply this measure to political-preference data in the USA, UK, and Germany. Conditional on current economic resources, economic insecurity is associated with both greater political participation (support for a party or the intention to vote) and more support for conservative parties. In particular, economic insecurity predicts greater support both for Donald Trump before the 2016 US Presidential election and for the UK leaving the European Union in the 2016 Brexit referendum.

; De Beule, Christophe, in Physical Review B (in press)
We develop the theory of an Andreev junction, which provides a method to probe the intrinsic topology of the Fermi sea of a two-dimensional electron gas (2DEG). An Andreev junction is a Josephson π junction proximitizing a ballistic 2DEG, and it exhibits low-energy Andreev bound states that propagate along the junction. It has been shown that measuring the nonlocal Landauer conductance due to these Andreev modes in a narrow linear junction leads to a topological Andreev rectification (TAR) effect characterized by a quantized conductance that is sensitive to the Euler characteristic χ_F of the 2DEG Fermi sea. Here we expand on that analysis and consider more realistic device geometries that go beyond the narrow linear junction and fully adiabatic limits considered earlier. Wider junctions exhibit additional Andreev modes that contribute to the transport and degrade the quantization of the conductance.
Nonetheless, we show that an appropriately defined rectified conductance remains robustly quantized provided that large-momentum scattering is suppressed. We verify and demonstrate these predictions with extensive numerical simulations of realistic device geometries. We introduce a simple model system that demonstrates the robustness of the rectified conductance for wide linear junctions as well as point contacts, even when the nonlocal conductance is not quantized. Motivated by recent experimental advances, we model devices in specific materials, including InAs quantum wells as well as monolayer and bilayer graphene. These studies indicate that, for sufficiently ballistic samples, observation of the TAR effect should be within experimental reach.

; ; Briand, Lionel, in IEEE Transactions on Software Engineering (in press)
Deep Neural Networks (DNNs) have been extensively used in many areas, including image processing, medical diagnostics, and autonomous driving. However, DNNs can exhibit erroneous behaviors that may lead to critical errors, especially when used in safety-critical systems. Inspired by testing techniques for traditional software systems, researchers have proposed neuron coverage criteria, as an analogy to source code coverage, to guide the testing of DNNs. Despite very active research on DNN coverage, several recent studies have questioned the usefulness of such criteria in guiding DNN testing. Further, from a practical standpoint, these criteria are white-box, as they require access to the internals or training data of DNNs, which is often not feasible or convenient.
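One black-box alternative to such white-box criteria is to score the diversity of the test inputs themselves, without ever executing the DNN. A minimal sketch, where both the feature vectors and the choice of mean pairwise Euclidean distance are illustrative assumptions rather than the paper's exact metrics:

```python
import math
from itertools import combinations

def mean_pairwise_distance(features):
    """Diversity of an input set: mean Euclidean distance over all pairs of
    feature vectors. Needs no access to the DNN's internals or outputs."""
    pairs = list(combinations(features, 2))
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

# Toy 2-D feature vectors; real ones would be extracted from images.
homogeneous = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
diverse     = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
```

A more diverse input set scores higher, which is the property such metrics exploit when used to guide test selection.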
Measuring such coverage requires executing DNNs with candidate inputs to guide testing, which is not an option in many practical contexts. In this paper, we investigate diversity metrics as an alternative to white-box coverage criteria. For the reasons above, we require such metrics to be black-box, relying on neither the execution nor the outputs of the DNNs under test. To this end, we first select and adapt three diversity metrics and study, in a controlled manner, their capacity to measure actual diversity in input sets. We then analyze their statistical association with fault detection using four datasets and five DNNs. We further compare diversity with state-of-the-art white-box coverage criteria. As a mechanism to enable such analysis, we also propose a novel way to estimate fault detection in DNNs. Our experiments show that relying on the diversity of image features embedded in test input sets is a more reliable indicator than coverage criteria for effectively guiding DNN testing. Indeed, we found that one of the selected black-box diversity metrics far outperforms existing coverage criteria in terms of fault-revealing capability and computational time. The results also confirm the suspicion that state-of-the-art coverage criteria are not adequate to guide the construction of test input sets that detect as many faults as possible using natural inputs.

Biryukov, Alexei, in Tibouchi, Mehdi; Wang, Xiaofeng (Eds.), Applied Cryptography and Network Security. 20th International Conference, ACNS 2022, Rome, Italy, June 20-23, 2022, Proceedings (in press)
We propose a new cryptanalytic tool for differential cryptanalysis, called meet-in-the-filter (MiF).
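MiF targets ARX ciphers, and its flagship application is Speck. As background, one round of Speck32/64 together with its inverse (this is the public Speck round function, not part of the paper's contribution; inverting bottom rounds is the kind of analysis the technique relies on):

```python
MASK = 0xFFFF  # Speck32 operates on 16-bit words

def ror(x, r):
    return ((x >> r) | (x << (16 - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def speck32_round(x, y, k):
    """One forward ARX round: rotate, add mod 2^16, XOR round key, rotate, XOR."""
    x = ((ror(x, 7) + y) & MASK) ^ k
    y = rol(y, 2) ^ x
    return x, y

def speck32_round_inv(x, y, k):
    """Inverse round, undoing the XOR, modular addition, and rotations."""
    y = ror(x ^ y, 2)
    x = rol(((x ^ k) - y) & MASK, 7)
    return x, y
```

The only nonlinear operation is the modular addition, whose slow difference diffusion is exactly what makes the deeper bottom-round analysis of MiF affordable.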
It is suitable for ciphers with a slow or incomplete diffusion layer, such as those based on Addition-Rotation-XOR (ARX). The main idea of the MiF technique is to stop the difference propagation earlier in the cipher, allowing differentials with higher probability to be used. This comes at the expense of a deeper analysis phase in the bottom rounds, made possible by the slow diffusion of the target cipher. The MiF technique uses meet-in-the-middle matching to construct differential trails connecting the differential's output and the ciphertext difference. The proposed trails are used in the key-recovery procedure, reducing time complexity and allowing flexible time-data trade-offs. In addition, we show how to combine MiF with a dynamic counting technique for key recovery. We illustrate MiF in practice by reporting improved attacks on the ARX-based family of block ciphers Speck. We improve the time complexities of the best known attacks on up to 15 rounds of Speck32 and 20 rounds of Speck64/128. Notably, our new attack on 11 rounds of Speck32 has practical analysis and data complexities of 2^24.66 and 2^26.70, respectively, and was experimentally verified, recovering the master key in a matter of seconds. It significantly improves the previous deep-learning-based attack by Gohr from CRYPTO 2019, which has time complexity 2^38. As an important milestone, our conventional cryptanalysis method sets a new benchmark for cryptanalysis relying on machine learning to beat.

Huemer, Birgit, in Wetschanow, Karin; Kuntschner, Eva; Unterpertinger, Erika (Eds.),
et al., Neue Perspektiven auf die Schreibberatung (in press)

; Hansen, Christopher, in Entrepreneurship: Theory and Practice (in press)

; ; Bianculli, Domenico, in IEEE Transactions on Software Engineering (in press)
Trace checking is a verification technique widely used in cyber-physical system (CPS) development to verify whether execution traces satisfy or violate properties expressing system requirements. These properties often characterize complex signal behaviors and are defined using domain-specific languages, such as SB-TemPsy-DSL, a pattern-based specification language for signal-based temporal properties. Most trace-checking tools only yield a Boolean verdict. However, when a property is violated by a trace, engineers usually inspect the trace to understand the cause of the violation; such manual diagnosis is time-consuming and error-prone. Existing approaches that complement trace-checking tools with diagnostic capabilities either produce low-level explanations that are hardly comprehensible by engineers or do not support complex signal-based temporal properties. In this paper, we propose TD-SB-TemPsy, a trace-diagnostic approach for properties expressed in SB-TemPsy-DSL. Given a property and a trace that violates it, TD-SB-TemPsy determines the root cause of the violation. TD-SB-TemPsy relies on the concepts of violation causes, which characterize behaviors of the system that may lead to a property violation, and diagnoses, which are associated with violation causes and provide additional information to help engineers understand them.
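The difference between a bare Boolean verdict and a diagnosis can be illustrated with a toy checker for a single signal property ("the signal must drop back below a threshold within a deadline after exceeding it"). The property, trace format, and diagnosis structure are invented for illustration and are unrelated to SB-TemPsy-DSL syntax:

```python
def check_recovery(trace, threshold, deadline):
    """trace: list of (timestamp, value) samples.
    Returns (verdict, diagnosis); the diagnosis pinpoints the violating spike."""
    exceeded_at = None
    for t, v in trace:
        if v > threshold and exceeded_at is None:
            exceeded_at = t                 # a spike begins
        elif v <= threshold:
            exceeded_at = None              # recovered in time
        if exceeded_at is not None and t - exceeded_at > deadline:
            return False, {"cause": "late recovery", "spike_start": exceeded_at, "at": t}
    return True, None

ok_trace  = [(0, 1.0), (1, 3.5), (2, 1.2), (3, 0.9)]   # spike recovers in time
bad_trace = [(0, 1.0), (1, 3.5), (2, 3.6), (3, 3.8), (4, 3.9)]  # never recovers
```

The diagnosis dict is what saves the manual trace inspection: it names the violated behavior and localizes it in time instead of just reporting "false".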
As part of TD-SB-TemPsy, we propose a language-agnostic methodology for defining violation causes and diagnoses. In our context, its application resulted in a catalog of 34 violation causes, each associated with one diagnosis, tailored to properties expressed in SB-TemPsy-DSL. We assessed the applicability of TD-SB-TemPsy on two datasets, including one based on a complex industrial case study. The results show that TD-SB-TemPsy finished within a timeout of 1 min for ≈83.66% of the trace-property combinations in the industrial dataset, yielding a diagnosis in ≈99.84% of these cases; moreover, it yielded a diagnosis for all trace-property combinations in the other dataset. These results suggest that our tool is applicable and efficient in most cases.

Vigano, Enrico, in Proceedings of the 45th International Conference on Software Engineering (ICSE '23) (in press)
We present DaMAT, a tool that implements data-driven mutation analysis. In contrast to traditional code-driven mutation analysis tools, it mutates (i.e., modifies) the data exchanged by components instead of the source code of the software under test. Such an approach helps ensure that test suites appropriately exercise component interoperability, which is essential for safety-critical cyber-physical systems. A user-provided fault model drives the mutation process. We have successfully evaluated DaMAT on software controlling a microsatellite and on a set of libraries used in deployed CubeSats.
A demo video of DaMAT is available at https://youtu.be/s5M52xWCj84

; Pastore, Fabrizio, in IEEE Transactions on Software Engineering (in press)
Security testing aims at verifying that software meets its security properties. In modern Web systems, however, this often entails verifying the outputs generated when exercising the system with a very large set of inputs. Full automation is thus required to lower costs and increase the effectiveness of security testing. Unfortunately, to achieve such automation, in addition to strategies for automatically deriving test inputs, we need to address the oracle problem, which refers to the challenge, given an input for a system, of distinguishing correct from incorrect behavior (e.g., the response to be received after a specific HTTP GET request). In this paper, we propose Metamorphic Security Testing for Web-interactions (MST-wi), a metamorphic testing approach that integrates test input generation strategies inspired by mutational fuzzing and alleviates the oracle problem in security testing. It enables engineers to specify metamorphic relations (MRs) that capture many security properties of Web systems. To facilitate the specification of such MRs, we provide a domain-specific language accompanied by an Eclipse editor. MST-wi automatically collects the input data and transforms the MRs into executable Java code to automatically perform security testing. It automatically tests Web systems to detect vulnerabilities based on the relations and the collected data. We provide a catalog of 76 system-agnostic MRs to automate security testing in Web systems.
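How a metamorphic relation sidesteps the oracle problem can be illustrated in plain Python: instead of knowing the one correct output, an MR only compares outputs of related requests. The toy `handle` function and both relations below are invented for illustration (MST-wi itself compiles MRs written in its DSL to Java):

```python
def handle(path, params, session=None):
    """Stand-in for the Web system under test."""
    if path == "/profile":
        return "200 private data" if session == "valid" else "401"
    return "404"

def mr_extra_param(path, params, session):
    # MR1: adding an irrelevant extra parameter must not change the response.
    return handle(path, params, session) == handle(path, {**params, "x": "1"}, session)

def mr_requires_auth(path, params):
    # MR2: without a valid session, the response must differ from the
    # authenticated one (otherwise access control is broken).
    return handle(path, params, None) != handle(path, params, "valid")
```

Neither relation needs to know what the correct page content is; a violated relation flags a likely vulnerability, which is exactly how such MRs alleviate the oracle problem.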
It covers 39% of the OWASP security testing activities not automated by state-of-the-art techniques; further, our MRs can automatically discover 102 different types of vulnerabilities, which correspond to 45% of the vulnerabilities due to violations of security design principles according to the MITRE CWE database. We also define guidelines that enable test engineers to improve the testability of the system under test with respect to our approach. We evaluated the effectiveness and scalability of MST-wi on two well-known Web systems (Jenkins and Joomla). It automatically detected 85% of their vulnerabilities and showed high specificity (99.81% of the generated inputs do not lead to a false positive); our findings include a new security vulnerability detected in Jenkins. Finally, our results demonstrate that the approach scales, enabling automated security testing overnight.

Kornadt, Anna Elena, in Bauer, Jürgen; Denkinger, Michael; Becker, Clemens (Eds.) et al., Geriatrie (in press)