Last 7 days
Neugebauer, Tibor. Journal of Banking and Finance (in press).
Modigliani and Miller showed that the market value of a company is independent of its capital structure, and suggested that dividend policy makes no difference to this law of one price. We experimentally test the Modigliani-Miller theorem in a complete market with two simultaneously traded assets, employing two experimental treatment variations. The first variation involves the dividend stream: the dividend payment order is either identical or independent. The second variation involves the market participation, or not, of an algorithmic arbitrageur. We find that Modigliani-Miller's law of one price can be supported on average, with or without an arbitrageur, when dividends are identical. The law of one price breaks down when the dividend payment order is independent, unless there is arbitrageur participation.

Botev, Jean. In: Proceedings of the 25th International Conference on Human-Computer Interaction (HCI International) (in press).

; van Ryckeghem, Dimitri. Pain (in press).
Attentional biases have been posited as one of the key mechanisms underlying the development and maintenance of chronic pain and co-occurring internalizing mental health symptoms.
Despite this theoretical prominence, a comprehensive understanding of the nature of biased attentional processing in chronic pain and its relationship to theorized antecedents and clinical outcomes is lacking, particularly in youth. This study used eye-tracking to assess attentional bias for painful facial expressions and its relationship to theorized antecedents of chronic pain and clinical outcomes. Youth with chronic pain (n = 125) and without chronic pain (n = 52) viewed face images of varying levels of pain expressiveness while their eye gaze was tracked and recorded. At baseline, youth completed questionnaires to assess pain characteristics, theorized antecedents (pain catastrophizing, fear of pain, and anxiety sensitivity), and clinical outcomes (pain intensity, interference, anxiety, depression, and posttraumatic stress). For youth with chronic pain, clinical outcomes were reassessed at 3 months to assess for relationships with attentional bias while controlling for baseline symptoms. In both groups, youth exhibited an attentional bias for painful facial expressions. For youth with chronic pain, attentional bias was not significantly associated with theorized antecedents or clinical outcomes at baseline or 3-month follow-up. These findings call into question the posited relationships between attentional bias and clinical outcomes. Additional studies using more comprehensive and contextual paradigms for the assessment of attentional bias are required to clarify the ways in which such biases may manifest and relate to clinical outcomes.

Priem, Karin. In: Flury, Carmen; Geiss, Michael (Eds.), How Computers Entered the Classroom (1960-2000): Historical Perspectives (in press).
Numerous studies and handbooks in the history of education are devoted to the history of educational media and the evolution of educational technologies. This chapter puts an explicit focus on the implications and conceptual background of the United Nations Educational, Scientific and Cultural Organization's (UNESCO) technology-driven idea of education, which already took shape before the 1957 Sputnik shock. Eager to establish strong bonds between mass communication and education, UNESCO by the late 1940s had already begun to set up a powerful internal apparatus for media policy, which soon collaborated closely with its Education Division. From the late 1970s, UNESCO set out to establish a New World Information and Communication Order to further stabilize its global role in education and media policies. This chapter posits that textbooks, radio, TV, film, and computers served as interconnected elements of UNESCO's educational mission. By looking at these specific technological ecologies of education, I connect research into the history of education with research into UNESCO's media policies. This conceptual history approach demonstrates that education is not only based on ethical norms, teaching, and learning but is also connected to technological properties that offer access to knowledge and its acquisition. In addition, when studying UNESCO, it becomes evident that the organization's education-technology nexus is also closely connected with the media and publishing industries.

; Clark, Andrew. Oxford Economic Papers (in press).
Economic insecurity has attracted growing attention, but there is no consensus as to its definition.
We characterize a class of individual economic-insecurity measures based on the time profile of economic resources. We apply this economic-insecurity measure to political-preference data in the USA, UK, and Germany. Conditional on current economic resources, economic insecurity is associated with both greater political participation (support for a party or the intention to vote) and more support for conservative parties. In particular, economic insecurity predicts greater support for both Donald Trump before the 2016 US Presidential election and the UK leaving the European Union in the 2016 Brexit referendum.

; De Beule, Christophe. Physical Review B (in press).
We develop the theory of an Andreev junction, which provides a method to probe the intrinsic topology of the Fermi sea of a two-dimensional electron gas (2DEG). An Andreev junction is a Josephson π junction proximitizing a ballistic 2DEG, and exhibits low-energy Andreev bound states that propagate along the junction. It has been shown that measuring the nonlocal Landauer conductance due to these Andreev modes in a narrow linear junction leads to a topological Andreev rectification (TAR) effect characterized by a quantized conductance that is sensitive to the Euler characteristic χ_F of the 2DEG Fermi sea. Here we expand on that analysis and consider more realistic device geometries that go beyond the narrow linear junction and fully adiabatic limits considered earlier. Wider junctions exhibit additional Andreev modes that contribute to the transport and degrade the quantization of the conductance.
Nonetheless, we show that an appropriately defined rectified conductance remains robustly quantized provided large-momentum scattering is suppressed. We verify and demonstrate these predictions by performing extensive numerical simulations of realistic device geometries. We introduce a simple model system that demonstrates the robustness of the rectified conductance for wide linear junctions as well as point contacts, even when the nonlocal conductance is not quantized. Motivated by recent experimental advances, we model devices in specific materials, including InAs quantum wells as well as monolayer and bilayer graphene. These studies indicate that for sufficiently ballistic samples the observation of the TAR effect should be within experimental reach.

; ; Briand, Lionel. IEEE Transactions on Software Engineering (in press).
Deep Neural Networks (DNNs) have been extensively used in many areas, including image processing, medical diagnostics, and autonomous driving. However, DNNs can exhibit erroneous behaviours that may lead to critical errors, especially when used in safety-critical systems. Inspired by testing techniques for traditional software systems, researchers have proposed neuron coverage criteria, as an analogy to source code coverage, to guide the testing of DNNs. Despite very active research on DNN coverage, several recent studies have questioned the usefulness of such criteria in guiding DNN testing. Further, from a practical standpoint, these criteria are white-box, as they require access to the internals or training data of DNNs, which is often not feasible or convenient.
Measuring such coverage requires executing DNNs with candidate inputs to guide testing, which is not an option in many practical contexts. In this paper, we investigate diversity metrics as an alternative to white-box coverage criteria. For the previously mentioned reasons, we require such metrics to be black-box and not rely on the execution and outputs of DNNs under test. To this end, we first select and adapt three diversity metrics and study, in a controlled manner, their capacity to measure actual diversity in input sets. We then analyze their statistical association with fault detection using four datasets and five DNNs. We further compare diversity with state-of-the-art white-box coverage criteria. As a mechanism to enable such analysis, we also propose a novel way to estimate fault detection in DNNs. Our experiments show that relying on the diversity of image features embedded in test input sets is a more reliable indicator than coverage criteria to effectively guide DNN testing. Indeed, we found that one of our selected black-box diversity metrics far outperforms existing coverage criteria in terms of fault-revealing capability and computational time. Results also confirm the suspicion that state-of-the-art coverage criteria are not adequate to guide the construction of test input sets to detect as many faults as possible using natural inputs.

Biryukov, Alexei. In: Tibouchi, Mehdi; Wang, Xiaofeng (Eds.), Applied Cryptography and Network Security. 20th International Conference, ACNS 2022, Rome, Italy, June 20–23, 2022, Proceedings (in press).
We propose a new cryptanalytic tool for differential cryptanalysis, called meet-in-the-filter (MiF).
It is suitable for ciphers with a slow or incomplete diffusion layer, such as those based on Addition-Rotation-XOR (ARX). The main idea of the MiF technique is to stop the difference propagation earlier in the cipher, allowing the use of differentials with higher probability. This comes at the expense of a deeper analysis phase in the bottom rounds, made possible by the slow diffusion of the target cipher. The MiF technique uses meet-in-the-middle matching to construct differential trails connecting the differential's output and the ciphertext difference. The proposed trails are used in the key recovery procedure, reducing time complexity and allowing flexible time-data trade-offs. In addition, we show how to combine MiF with a dynamic counting technique for key recovery. We illustrate MiF in practice by reporting improved attacks on the ARX-based family of block ciphers Speck. We improve the time complexities of the best known attacks on up to 15 rounds of Speck32 and 20 rounds of Speck64/128. Notably, our new attack on 11 rounds of Speck32 has practical analysis and data complexities of 2^24.66 and 2^26.70, respectively, and was experimentally verified, recovering the master key in a matter of seconds. It significantly improves the previous deep learning-based attack by Gohr from CRYPTO 2019, which has time complexity 2^38. As an important milestone, our conventional cryptanalysis method sets a new benchmark to beat for cryptanalysis relying on machine learning.

Huemer, Birgit. In: Wetschanow, Karin; Kuntschner, Eva; Unterpertinger, Erika (Eds.)
et al., Neue Perspektiven auf die Schreibberatung (in press).

; Hansen, Christopher. Entrepreneurship: Theory and Practice (in press).

; ; Bianculli, Domenico. IEEE Transactions on Software Engineering (in press).
Trace checking is a verification technique widely used in cyber-physical system (CPS) development to verify whether execution traces satisfy or violate properties expressing system requirements. Often these properties characterize complex signal behaviors and are defined using domain-specific languages, such as SB-TemPsy-DSL, a pattern-based specification language for signal-based temporal properties. Most trace-checking tools only yield a Boolean verdict. However, when a property is violated by a trace, engineers usually inspect the trace to understand the cause of the violation; such manual diagnosis is time-consuming and error-prone. Existing approaches that complement trace-checking tools with diagnostic capabilities either produce low-level explanations that are hardly comprehensible by engineers or do not support complex signal-based temporal properties. In this paper, we propose TD-SB-TemPsy, a trace-diagnostic approach for properties expressed using SB-TemPsy-DSL. Given a property and a trace that violates the property, TD-SB-TemPsy determines the root cause of the property violation. TD-SB-TemPsy relies on the concepts of violation cause, which characterizes one of the behaviors of the system that may lead to a property violation, and diagnoses, which are associated with violation causes and provide additional information to help engineers understand the violation cause.
As part of TD-SB-TemPsy, we propose a language-agnostic methodology to define violation causes and diagnoses. In our context, its application resulted in a catalog of 34 violation causes, each associated with one diagnosis, tailored to properties expressed in SB-TemPsy-DSL. We assessed the applicability of TD-SB-TemPsy on two datasets, including one based on a complex industrial case study. The results show that TD-SB-TemPsy could finish within a timeout of 1 min for ≈ 83.66% of the trace-property combinations in the industrial dataset, yielding a diagnosis in ≈ 99.84% of these cases; moreover, it also yielded a diagnosis for all the trace-property combinations in the other dataset. These results suggest that our tool is applicable and efficient in most cases.

Vigano, Enrico. In: Proceedings of the 45th International Conference on Software Engineering (ICSE '23) (in press).
We present DaMAT, a tool that implements data-driven mutation analysis. In contrast to traditional code-driven mutation analysis tools, it mutates (i.e., modifies) the data exchanged by components instead of the source of the software under test. Such an approach helps ensure that test suites appropriately exercise component interoperability, which is essential for safety-critical cyber-physical systems. A user-provided fault model drives the mutation process. We have successfully evaluated DaMAT on software controlling a microsatellite and a set of libraries used in deployed CubeSats.
A demo video of DaMAT is available at https://youtu.be/s5M52xWCj84.

; Pastore, Fabrizio. IEEE Transactions on Software Engineering (in press).
Security testing aims at verifying that the software meets its security properties. In modern Web systems, however, this often entails the verification of the outputs generated when exercising the system with a very large set of inputs. Full automation is thus required to lower costs and increase the effectiveness of security testing. Unfortunately, to achieve such automation, in addition to strategies for automatically deriving test inputs, we need to address the oracle problem, which refers to the challenge, given an input for a system, of distinguishing correct from incorrect behavior (e.g., the response to be received after a specific HTTP GET request). In this paper, we propose Metamorphic Security Testing for Web-interactions (MST-wi), a metamorphic testing approach that integrates test input generation strategies inspired by mutational fuzzing and alleviates the oracle problem in security testing. It enables engineers to specify metamorphic relations (MRs) that capture many security properties of Web systems. To facilitate the specification of such MRs, we provide a domain-specific language accompanied by an Eclipse editor. MST-wi automatically collects the input data and transforms the MRs into executable Java code to automatically perform security testing. It automatically tests Web systems to detect vulnerabilities based on the relations and collected data. We provide a catalog of 76 system-agnostic MRs to automate security testing in Web systems.
It covers 39% of the OWASP security testing activities not automated by state-of-the-art techniques; further, our MRs can automatically discover 102 different types of vulnerabilities, which correspond to 45% of the vulnerabilities due to violations of security design principles according to the MITRE CWE database. We also define guidelines that enable test engineers to improve the testability of the system under test with respect to our approach. We evaluated the effectiveness and scalability of MST-wi with two well-known Web systems (i.e., Jenkins and Joomla). It automatically detected 85% of their vulnerabilities and showed high specificity (99.81% of the generated inputs do not lead to a false positive); our findings include a new security vulnerability detected in Jenkins. Finally, our results demonstrate that the approach scales, thus enabling automated security testing overnight.

Kornadt, Anna Elena. In: Bauer, Jürgen; Denkinger, Michael; Becker, Clemens (Eds.) et al., Geriatrie (in press).

Fahmy, Hazem. ACM Transactions on Software Engineering and Methodology (in press).
When Deep Neural Networks (DNNs) are used in safety-critical systems, engineers should determine the safety risks associated with failures (i.e., erroneous outputs) observed during testing. For DNNs processing images, engineers visually inspect all failure-inducing images to determine common characteristics among them. Such characteristics correspond to hazard-triggering events (e.g., low illumination) that are essential inputs for safety analysis. Though informative, such activity is expensive and error-prone.
To support such safety analysis practices, we propose SEDE, a technique that generates readable descriptions for commonalities in failure-inducing, real-world images and improves the DNN through effective retraining. SEDE leverages the availability of simulators, which are commonly used for cyber-physical systems. It relies on genetic algorithms to drive simulators towards the generation of images that are similar to failure-inducing, real-world images in the test set; it then employs rule learning algorithms to derive expressions that capture commonalities in terms of simulator parameter values. The derived expressions are then used to generate additional images to retrain and improve the DNN. With DNNs performing in-car sensing tasks, SEDE successfully characterized hazard-triggering events leading to a DNN accuracy drop. Also, SEDE enabled retraining that led to significant improvements in DNN accuracy, up to 18 percentage points.

Solanki, Sourabh. Short-Packet Communication Assisted Reliable Control of UAV for Optimum Coverage Range (in press).
The reliability of command and control (C2) operation of a UAV is one of the crucial aspects for the success of UAV applications in beyond-5G wireless networks. In this paper, we focus on short-packet communication to maximize the coverage range of reliable UAV control. We quantify the reliability performance of the C2 transmission from a multi-antenna ground control station (GCS), which also leverages maximal-ratio transmission beamforming, by deriving a closed-form expression for the average block error rate (BLER).
To obtain additional insights, we also derive an asymptotic expression for the average BLER in the high transmit-power regime and subsequently analyze the possible UAV configuration space to find the optimum altitude. Based on the derived average BLER, we formulate a joint optimization problem to maximize the range up to which a UAV can be reliably controlled from a GCS. The solution to this problem yields the optimal resource allocation parameters, including blocklength and transmit power, while exploiting the vertical degrees of freedom for UAV placement. Finally, we present numerical and simulation results to corroborate the analysis and to provide various useful design insights.

Börnchen, Stefan. In: Bay, Hansjörg; Hamann, Christof; Osthues, Julian (Eds.) et al., Handbuch Literatur und Reise (in press).

Solanki, Sourabh. MEC-assisted Low Latency Communication for Autonomous Flight Control of 5G-Connected UAV (in press).
Proliferating applications of unmanned aerial vehicles (UAVs) impose new service requirements, leading to several challenges. One of the crucial challenges in this vein is to facilitate the autonomous navigation of UAVs. Concretely, the UAV needs to individually process the visual data and subsequently plan its trajectories. Since the UAV has limited onboard storage constraints, its computational capabilities are often restricted, and it may not be viable to process the data locally for trajectory planning. Alternatively, the UAV can send the visual inputs to the ground controller, which, in turn, feeds back the command and control signals to the UAV for its safe navigation.
However, this process may introduce delays, which is not desirable for the safe and reliable navigation of autonomous UAVs. Thus, it is essential to devise techniques and approaches that can potentially offer low-latency solutions for planning the UAV's flight. To this end, this paper analyzes a multi-access edge computing aided UAV and aims to minimize the latency of task processing. More specifically, we propose an offloading strategy for a UAV by optimally designing the offloading parameter, local computational resources, and altitude of the UAV. Numerical and simulation results are presented to offer various design insights, and the benefits of the proposed strategy are also illustrated in contrast to other baseline approaches.

; Wille, Christian. In: Nesselhauf, Jonas; Weber, Florian (Eds.), Handbuch Kulturwissenschaftliche Studies (in press).

Huemer, Birgit. In: Szurawitzki, Michael; Wolf-Farré, Patrick (Eds.), Handbuch Deutsch als Fach- und Fremdsprache (in press).
This chapter traces the historical development of the student term paper (Hausarbeit) and the functions it fulfils today in the German-speaking higher education landscape. It first outlines the main research approaches to the Hausarbeit as a text genre and to students' academic writing in the German-speaking world. General challenges for writing pedagogy in the teaching and learning of academic writing are discussed before turning to subject-specific and foreign-language aspects of writing term papers. The chapter concludes by highlighting current problem areas and research trends.