References of "Volp, Marcus 50008905"
Peer Reviewed
I-GWAS: Privacy-Preserving Interdependent Genome-Wide Association Studies
Pascoal, Túlio UL; Decouchant, Jérémie; Boutet, Antoine et al

in Proceedings on Privacy Enhancing Technologies (2023)

Genome-wide Association Studies (GWASes) identify genomic variations that are statistically associated with a trait, such as a disease, in a group of individuals. Unfortunately, careless sharing of GWAS statistics might give rise to privacy attacks. Several works attempted to reconcile secure processing with privacy-preserving releases of GWASes. However, we highlight that these approaches remain vulnerable if GWASes utilize overlapping sets of individuals and genomic variations. In such conditions, we show that even when relying on state-of-the-art techniques for protecting releases, an adversary could reconstruct the genomic variations of up to 28.6% of participants, and that the released statistics of up to 92.3% of the genomic variations would enable membership inference attacks. We introduce I-GWAS, a novel framework that securely computes and releases the results of multiple possibly interdependent GWASes. I-GWAS continuously releases privacy-preserving and noise-free GWAS results as new genomes become available.
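To make concrete what "released GWAS statistics" refers to here, the following is a minimal sketch (illustrative only, not code from I-GWAS) of the kind of per-variant association statistic such a study publishes: a Pearson chi-square computed over a 2x2 allele-count table of cases versus controls.

```python
# Hypothetical illustration of a per-variant GWAS release statistic:
# a Pearson chi-square test of association between an allele and a trait,
# computed from a 2x2 allele-count contingency table (cases vs. controls).

def chi_square_2x2(case_alt, case_ref, control_alt, control_ref):
    """Pearson chi-square statistic for a 2x2 allele-count table."""
    table = [[case_alt, case_ref], [control_alt, control_ref]]
    total = case_alt + case_ref + control_alt + control_ref
    row_sums = [sum(row) for row in table]
    col_sums = [table[0][j] + table[1][j] for j in range(2)]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under independence of allele and trait.
            expected = row_sums[i] * col_sums[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Cases carry the alternate allele more often than controls.
print(chi_square_2x2(60, 40, 40, 60))  # prints: 8.0
```

Releasing such per-variant statistics across many studies with overlapping participants and variants is precisely the setting in which the reconstruction and membership inference risks described above arise.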

Peer Reviewed
Secure and Distributed Assessment of Privacy-Preserving Releases of GWAS
Pascoal, Túlio UL; Decouchant, Jérémie; Volp, Marcus UL

in ACM/IFIP International Middleware Conference (2022, November)

Genome-wide association studies (GWAS) identify correlations between genetic variants and an observable characteristic such as a disease. Previous works presented privacy-preserving distributed algorithms for a federation of genome data holders that spans multiple institutional and legislative domains to securely compute GWAS results. However, these algorithms have limited applicability, since they still require a centralized instance to decide whether GWAS results can be safely disclosed, which is in violation of privacy regulations such as the GDPR. In this work, we introduce GenDPR, a distributed middleware that leverages Trusted Execution Environments (TEEs) to securely determine a subset of the potential GWAS statistics that can be safely released. GenDPR achieves the same accuracy as centralized solutions, but requires transferring significantly less data, because the TEEs exchange only intermediary results and no genomes. Additionally, GenDPR can be configured to tolerate all-but-one honest-but-curious federation members colluding with the aim of exposing genomes of correct members.

Peer Reviewed
Methods for increasing the dependability of High-performance, Many-core, System-on-Chips
Graczyk, Rafal UL; Memon, Md Saad UL; Volp, Marcus UL

in Graczyk, Rafal; Memon, Md Saad; Volp, Marcus (Eds.) IAC 2022 congress proceedings, 73rd International Astronautical Congress (IAC) (2022, September 21)

Future space exploration and exploitation missions will require significantly increased autonomy of operation for mission planning, decision-making, and adaptive control techniques. Spacecraft will integrate new processing and compression algorithms that are often augmented with machine learning and artificial intelligence capabilities. This functionality will have to be provided with high levels of robustness, reliability, and dependability for conducting missions successfully. High-reliability requirements for space-grade processors have led to trade-offs in terms of costs, energy efficiency, and performance to obtain robustness. However, while high-performance / low-robustness configurations are acceptable in the Earth's vicinity, where assets remain protected by the planet's magnetosphere, they cease to work in more demanding environments, like cis-lunar or deep space, where high-energy particles will affect modern components heavily, causing temporary or permanent damage and ultimately system failures. The above has led to a situation where state-of-the-art processing elements (processors, co-processors, memories, special-purpose accelerators, and field-programmable gate arrays (FPGAs), all possibly integrated into System-on-a-Chip (SoC) designs) are superior to their high-reliability, space-qualified counterparts in terms of processing power or energy efficiency. For example, one can expect modern, state-of-the-art (SOTA) devices to deliver a 2-3 order-of-magnitude performance-per-watt improvement over space-grade equipment. Likewise, one finds a gap of approximately nine technology nodes between such devices, which translates into a factor-of-25 difference in operations per watt.
In this paper, we demonstrate how to utilize part of this enormous performance advantage to increase the robustness and resilience of otherwise susceptible semiconductor devices while harnessing the remaining processing power to build affordable space systems capable of hosting the compute-intensive functionality that future space missions require. We bridge this performance-reliability gap by researching the enabling building blocks for constructing reliable and secure, space-ready Systems-on-a-Chip from SOTA processing elements.

Towards practical Genome-Wide Association Studies: Overview and Challenges
Pascoal, Túlio UL; Decouchant, Jérémie UL; Volp, Marcus UL

Scientific Conference (2022, July 11)

The popularization of large-scale federated Genome-Wide Association Studies (GWAS), where multiple data owners share their genome data to conduct federated analytics, uncovers new privacy issues that have remained unnoticed or not been given proper attention. Indeed, as soon as diverse types of interested parties (e.g., private or public biocenters and governmental institutions from around the globe) and individuals from heterogeneous populations participate in cooperative studies, interdependent and multi-party privacy emerge as crucial issues that are currently not adequately assessed. In fact, in federated GWAS environments, the privacy of individuals and parties no longer depends solely on their own behavior, but also on that of others, because a collaborative environment opens new credible adversary models. For instance, one might want to tailor the privacy guarantees to withstand the presence of potentially colluding federation members aiming to violate other members' data privacy, or the privacy deterioration that might occur in the presence of interdependent genomic data (e.g., due to the presence of relatives in studies or the perpetuation of previous genomic privacy leaks in future studies). In this work, we catalog and discuss the features, unsolved problems, and challenges to tackle toward truly end-to-end private and practical federated GWAS.

Peer Reviewed
From Graphs to the Science Computer of a Space Telescope. The power of Petri Nets in Systems Engineering
Graczyk, Rafal UL; Bujwan, Waldemar; Darmetko, Marcin et al

in Graczyk, Rafal; Bujwan, Waldemar; Darmetko, Marcin (Eds.) et al From Graphs to the Science Computer of a Space Telescope. The power of Petri Nets in Systems Engineering (2022, June 23)

Space system engineering has to follow a rigorous design process to manage performance/risk trade-offs at each development stage, possibly across several functional and organizational domains. The process is further complicated by the co-development of multiple solutions, each contributing differently to the mission goal and coming with different trade-offs. Moreover, the design process is iterative, involving changing requirements and specifications along the different paths that lead toward the mission goal. The above requires rigorous modeling that, in addition, must be easily extensible and maintainable across organizational units. Using the example of the PROBA-3 science computer (instrument control unit, CCB DPU), we show how Petri nets can serve as such a simple-to-maintain, holistic model, combining finite-state characterizations with the dynamic system behavior caused by hardware-software interactions, to express the component-state-dependent end-to-end performance characteristics of the system. The paper elaborates on how the proposed Petri-net modeling scheme allows for system-architecture optimizations that result in safely reduced technical margins and, in turn, substantial savings in component costs. We show that performance metrics obtained from simulation correlate well with the real performance characteristics of the flight model of PROBA-3's science computer.

Peer Reviewed
To verify or tolerate, that’s the question
Pinto Gouveia, Ines UL; Sakr, Mouhammad UL; Graczyk, Rafal UL et al

Scientific Conference (2021, December 06)

Formal verification carries the promise of absolute correctness, guaranteed at the highest level of assurance known today. However, inherent to many verification attempts is the assumption that the underlying hardware, the code-generation toolchain, and the verification tools are correct, all of the time. While this assumption creates interesting recursive verification challenges, which have already been addressed successfully for all three of these elements, the coverage of this assumption remains incomplete, in particular for hardware. Accidental faults, such as single-event upsets, transistor aging, and latch-ups, keep causing hardware to behave arbitrarily in situations where such events occur, and require other means (e.g., tolerance) to safely operate through them. Targeted attacks, especially physical ones, have a similar potential to cause havoc. Moreover, faults of the above kind may well manifest in such a way that their effects extend to all software layers, causing incorrect behavior even in proven-correct software. In this position paper, we take a holistic, system-architectural point of view on the role of trusted execution environments (TEEs), their implementation complexity, and the guarantees they can convey that we want preserved in the presence of faults. We find that, if absolute correctness is to remain our visionary goal, TEEs can and should be constructed differently, with tolerance embedded at the lowest levels and with verification playing an essential role. Verification should both assure the correctness of the TEE construction protocols and mechanisms and help protect the applications executing inside the TEEs.

Peer Reviewed
EphemeriShield - Defence Against Cyber-Antisatellite Weapons
Graczyk, Rafal UL; Volp, Marcus UL

Scientific Conference (2021, October 14)

Mitigating the risks associated with space system operations, especially in Low Earth Orbit, requires a holistic approach that addresses, in particular, cybersecurity challenges, in addition to meeting the mission's data-acquisition requirements. Space traffic management systems are no exception to this rule, but are further constrained by backward-compatibility requirements that are sometimes based on decades-old foundations. As a result, some space situational awareness systems continue to operate with object catalogues and data-dissemination architectures that are prone to failures, not to mention adversarial actions. Proof-of-concept papers demonstrating this vulnerability in example attacks on space-object ephemerides distribution channels have already been published and show the urgency of rethinking the way we build such highly critical infrastructure. Leveraging recent developments in distributed-systems theory and concepts from multi-party consensus in limited-trust environments and in the presence of malicious actors, we designed a more secure system for orbital-object ephemerides distribution, ultimately aiming to increase the safety of satellite operations. This paper presents EphemeriShield, a distributed ephemerides storage and distribution system that maintains safety and security guarantees in the presence of an active attacker or unfortunate fault. Using our EphemeriShield prototype setup, we were able to prove its ability to mask attacks and local faults that would otherwise lead to unnecessary maneuvers. Wide adoption of EphemeriShield may help make satellite system operations safer and less vulnerable to intentionally adversarial activities, improving the overall sustainability of space.

Peer Reviewed
Threat Adaptive Byzantine Fault Tolerant State-Machine Replication
Simoes Silva, Douglas UL; Graczyk, Rafal UL; Decouchant, Jérémie et al

Scientific Conference (2021, September)

Critical infrastructures have to withstand advanced and persistent threats, which can be addressed using Byzantine fault tolerant state-machine replication (BFT-SMR). In practice, unattended cyberdefense systems rely on threat level detectors that synchronously inform them of changing threat levels. However, to have a BFT-SMR protocol operate unattended, the state-of-the-art is still to configure it to withstand the highest possible number of faulty replicas f it might encounter, which limits performance, or to make the strong assumption that a trusted external reconfiguration service is available, which introduces a single point of failure. In this work, we present ThreatAdaptive, the first BFT-SMR protocol that is automatically strengthened or optimized by its replicas in reaction to threat level changes. We first determine under which conditions replicas can safely reconfigure a BFT-SMR system, i.e., adapt the number of replicas n and the fault threshold f, so as to outpace an adversary. Since replicas typically communicate with each other over an asynchronous network, they cannot rely on consensus to decide how the system should be reconfigured. ThreatAdaptive avoids this pitfall by proactively preparing, while it optimizes its performance, the reconfiguration that may be triggered by an increasing threat. Our evaluation shows that ThreatAdaptive can meet the latency and throughput of BFT baselines configured statically for a particular threat level, and adapts 30% faster than previous methods, which make stronger assumptions to provide safety.
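As background for the parameters n and f above, the standard BFT-SMR sizing relations (textbook arithmetic, not ThreatAdaptive's reconfiguration protocol itself) can be sketched as:

```python
# Textbook Byzantine fault tolerance sizing (background only):
#   - tolerating f Byzantine replicas requires n >= 3f + 1 replicas in total;
#   - quorums of size 2f + 1 ensure any two quorums overlap in f + 1 replicas,
#     i.e., in at least one correct replica.

def min_replicas(f: int) -> int:
    """Smallest replica count n that tolerates f Byzantine faults."""
    return 3 * f + 1

def quorum_size(f: int) -> int:
    """Quorum size used with n = 3f + 1 replicas."""
    return 2 * f + 1

def max_faults(n: int) -> int:
    """Largest fault threshold f a group of n replicas can withstand."""
    return (n - 1) // 3

# Raising the threat level from f = 1 to f = 2 means growing the group
# from 4 to 7 replicas -- the kind of adaptation the protocol prepares for.
print(min_replicas(1), quorum_size(1), min_replicas(2), max_faults(7))  # prints: 4 3 7 2
```

This arithmetic is why statically configuring for the worst-case f is costly: every increment of f adds three replicas and enlarges every quorum, whether or not the threat materializes.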

Peer Reviewed
Randomization as Mitigation of Directed Timing Inference Based Attacks on Time-Triggered Real-Time Systems with Task Replication
Krüger, Kristin; Vreman, Nils; Pates, Richard et al

in LITES: Leibniz Transactions on Embedded Systems (2021)

Time-triggered real-time systems achieve deterministic behavior using schedules that are constructed offline, based on scheduling constraints. Their deterministic behavior makes time-triggered systems suitable for usage in safety-critical environments, like avionics. However, this determinism also allows attackers to fine-tune attacks that can be carried out after studying the behavior of the system through side channels, targeting safety-critical victim tasks. Replication, i.e., the execution of task variants across different cores, is inherently able to tolerate both accidental and malicious faults (i.e., attacks) as long as these faults are independent of one another. Yet, targeted attacks on the timing behavior of tasks, which utilize information gained about the system behavior, violate the fault-independence assumption that fault tolerance is based on. This violation may give attackers the opportunity to compromise all replicas simultaneously, in particular if they can mount the attack from already compromised components. In this paper, we analyze vulnerabilities of time-triggered systems, focusing on safety-certified multicore real-time systems. We introduce two runtime mitigation strategies to withstand directed timing inference based attacks: (i) schedule randomization at slot level, and (ii) randomization within a set of offline-constructed schedules. We evaluate these mitigation strategies with synthetic experiments and a real case study to show their effectiveness and practicality.

Peer Reviewed
Bridging the space systems performance-reliability gap for future deep space resources exploration and exploitation.
Pinto Gouveia, Ines UL; Graczyk, Rafal UL; Volp, Marcus UL

Poster (2021, April)

In building equipment for space exploitation, one has to trade system robustness for high processing capabilities and low energy consumption. The high-performance, low-robustness approach is acceptable, especially in Earth's vicinity. However, in more demanding (especially high-radiation) environments, such attempts have had disappointing outcomes. The processing-reliability gap between highly reliable and highly performant systems spans 2-3 orders of magnitude. This gap brings hope that some of the excess processing power can be utilized to build a combination of hardware and software mechanisms capable of increasing the robustness and resilience of otherwise susceptible semiconductor devices, while harnessing the remaining processing power to build affordable space systems with large degrees of autonomy, rich functionality, and high bandwidth. At the CritiX research group, we aim to bridge this performance-reliability gap by researching the enabling building blocks for constructing more reliable and more secure Systems-on-Chip.

PriLok: Citizen-protecting distributed epidemic tracing
Esteves-Verissimo, Paulo UL; Decouchant, Jérémie UL; Volp, Marcus UL et al

E-print/Working paper (2020)

Contact tracing is an important instrument for national health services to fight epidemics. As part of the COVID-19 situation, many proposals have been made for scaling up contact tracing capacities with the help of smartphone applications, an important but highly critical endeavor due to the privacy risks involved in such solutions. Extending our previously expressed concern, we clearly articulate in this article the functional and non-functional requirements that any solution has to meet when striving to serve not mere collections of individuals, but the whole of a nation, as required in the face of such potentially dangerous epidemics. We present a critical information infrastructure, PriLok, a fully open preliminary architecture proposal and design draft for privacy-preserving contact tracing, which we believe can be constructed in a way to fulfill the former requirements. Our architecture leverages the existing regulated mobile communication infrastructure and builds upon the concept of "checks and balances", requiring a majority of independent players to agree to effect any operation on it, thus preventing abuse of the highly sensitive information that must be collected and processed for efficient contact tracing. This is enforced with a largely decentralised layout and highly resilient state-of-the-art technology, which we explain in the paper, finishing by giving a security, dependability and resilience analysis, showing how it meets the defined requirements even while the infrastructure is under attack.

Behind the Last Line of Defense -- Surviving SoC Faults and Intrusions
Pinto Gouveia, Ines UL; Volp, Marcus UL; Esteves-Verissimo, Paulo UL

E-print/Working paper (2020)

Today, leveraging the enormous modular power, diversity and flexibility of manycore systems-on-a-chip (SoCs) requires careful orchestration of complex resources, a task left to low-level software, e.g., hypervisors. In current architectures, this software forms a single point of failure and a worthwhile target for attacks: once compromised, adversaries gain access to all information and full control over the platform and the environment it controls. This paper proposes Midir, an enhanced manycore architecture, effecting a paradigm shift from SoCs to distributed SoCs. Midir changes the way platform resources are controlled, by retrofitting tile-based fault containment through well-known mechanisms, while securing low-overhead quorum-based consensus on all critical operations, in particular privilege management and, thus, management of containment domains. Allowing versatile redundancy management, Midir promotes resilience for all software levels, including the lowest. We explain this architecture, its associated algorithms and hardware mechanisms, and show, for the example of a Byzantine fault tolerant microhypervisor, that it outperforms the highly efficient MinBFT by one order of magnitude.

Peer Reviewed
Privacy-Preserving Processing of Filtered DNA Reads
Fernandes, Maria UL; Decouchant, Jérémie UL; Volp, Marcus UL et al

Scientific Conference (2019, October 22)

Peer Reviewed
DNA-SeAl: Sensitivity Levels to Optimize the Performance of Privacy-Preserving DNA Alignment
Fernandes, Maria UL; Decouchant, Jérémie UL; Volp, Marcus UL et al

in IEEE Journal of Biomedical and Health Informatics (2019)

The advent of next-generation sequencing (NGS) machines made DNA sequencing cheaper, but also put pressure on the genomic life-cycle, which includes aligning millions of short DNA sequences, called reads, to a reference genome. On the performance side, efficient algorithms have been developed and parallelized on public clouds. On the privacy side, since genomic data are utterly sensitive, several cryptographic mechanisms have been proposed to align reads more securely than the former, but with lower performance. This manuscript presents DNA-SeAl, a novel contribution to improving the privacy × performance product in current genomic workflows. First, building on recent works that argue that genomic data need to be treated according to a threat-risk analysis, we introduce a multi-level sensitivity classification of genomic variations designed to prevent the amplification of possible privacy attacks. We show that the usage of sensitivity levels reduces future re-identification risks, and that their partitioning helps prevent linkage attacks. Second, after extending this classification to reads, we show how to align and store reads using different security levels. To do so, DNA-SeAl extends a recent read filter to classify unaligned reads into sensitivity levels, and adapts existing alignment algorithms to the reads' sensitivity. We show that using DNA-SeAl enables high performance gains whilst enforcing high privacy levels in hybrid cloud environments.

Peer Reviewed
Sustainable Security and Safety: Challenges and Opportunities
Paverd, Andrew; Volp, Marcus UL; Brasser, Ferdinand et al

in OpenAccess Series in Informatics (OASIcs) (2019), 73

A significant proportion of today's information and communication technology (ICT) systems are entrusted with high value assets, and our modern society has become increasingly dependent on these systems operating safely and securely over their anticipated lifetimes. However, we observe a mismatch between the lifetimes expected from ICT-supported systems (such as autonomous cars) and the duration for which these systems are able to remain safe and secure, given the spectrum of threats they face. Whereas most systems today are constructed within the constraints of foreseeable technology advancements, we argue that long term, i.e., sustainable security & safety, requires anticipating the unforeseeable and preparing systems for threats not known today. In this paper, we set out our vision for sustainable security & safety. We summarize the main challenges in realizing this desideratum in real-world systems, and we identify several design principles that could address these challenges and serve as building blocks for achieving this vision.

Peer Reviewed
Vulnerability Analysis and Mitigation of Directed Timing Inference Based Attacks on Time-Triggered Systems
Krüger, Kristin; Volp, Marcus UL; Fohler, Gerhard

in LIPIcs-Leibniz International Proceedings in Informatics (2018), 106

Much effort has been put into improving the predictability of real-time systems, especially in safety-critical environments, which provides designers with a rich set of methods and tools to attest safety in situations with no or a limited number of accidental faults. However, with the increasing connectivity of real-time systems and the wide availability of increasingly sophisticated exploits, security and, in particular, the consequences of predictability on security become concerns of equal importance. Time-triggered scheduling with offline-constructed tables provides determinism and simplifies timing inference; at the same time, however, it creates vulnerabilities by allowing attackers to target their attacks to specific, deterministically scheduled and possibly safety-critical tasks. In this paper, we analyze the severity of these vulnerabilities by assuming successful compromise of a subset of the tasks running in a real-time system and by investigating the attack potential that attackers gain from them. Moreover, we discuss two ways to mitigate direct attacks: slot-level online randomization of schedules, and offline schedule diversification. We evaluate these mitigation strategies with a real-world case study to show their practicability for mitigating not only accidental faults, but also malicious behavior triggered by attackers on purpose.

Peer Reviewed
Intrusion-Tolerant Autonomous Driving
Volp, Marcus UL; Verissimo, Paulo UL

in Proceedings of 2018 IEEE 21st International Symposium on Real-Time Distributed Computing (ISORC) (2018, May 29)

Fully autonomous driving is one, if not the, killer application for the upcoming decade of real-time systems. However, in the presence of increasingly sophisticated attacks by highly skilled and well-equipped adversarial teams, autonomous driving must not only guarantee timeliness and hence safety; it must also consider the dependability of the software with respect to these properties while the system is facing attacks. For distributed systems, fault-and-intrusion-tolerance toolboxes already offer a few solutions to tolerate partial compromise of the system behind a majority of healthy components operating in consensus. In this paper, we present a concept of an intrusion-tolerant architecture for autonomous driving. In such a scenario, predictability and recovery challenges arise from the inclusion of increasingly complex software on increasingly less predictable hardware. We highlight how an intrusion-tolerant design can help solve these issues by allowing timeliness to emerge from a majority of complex components being fast enough, often enough, while preserving safety under attack through pre-computed fail-safes.

Peer Reviewed
Velisarios: Byzantine Fault-Tolerant Protocols Powered by Coq
Rahli, Vincent UL; Vukotic, Ivana UL; Volp, Marcus UL et al

in ESOP 2018 (2018, April)

Our increasing dependence on complex and critical information infrastructures, together with the emerging threat of sophisticated attacks, calls for extended efforts to ensure the correctness and security of these systems. Byzantine fault-tolerant state-machine replication (BFT-SMR) provides a way to harden such systems. It ensures that they maintain correctness and availability in an application-agnostic way, provided that the replication protocol is correct and at least n-f out of n replicas survive arbitrary faults. This paper presents Velisarios, a logic-of-events-based framework implemented in Coq, which we developed to implement and reason about BFT-SMR protocols. As a case study, we present the first machine-checked proof of a crucial safety property of an implementation of the area's reference protocol: PBFT.

Peer Reviewed
Accurate filtering of privacy-sensitive information in raw genomic data
Decouchant, Jérémie UL; Fernandes, Maria UL; Volp, Marcus UL et al

in Journal of Biomedical Informatics (2018)

Sequencing thousands of human genomes has enabled breakthroughs in many areas, among them precision medicine, the study of rare diseases, and forensics. However, mass collection of such sensitive data entails enormous risks if it is not protected to the highest standards. In this article, we follow the position and argue that post-alignment privacy is not enough, and that data should be automatically protected as early as possible in the genomics workflow, ideally immediately after the data is produced. We show that a previous approach for filtering short reads cannot extend to long reads, and present a novel filtering approach that classifies raw genomic data (i.e., whose location and content are not yet determined) into privacy-sensitive (i.e., more affected by a successful privacy attack) and non-privacy-sensitive information. Such a classification allows the fine-grained and automated adjustment of protective measures to mitigate the possible consequences of exposure, in particular when relying on public clouds. We present the first filter that can be applied indistinctly to reads of any length, making it usable with any recent or future sequencing technology. The filter is accurate, in the sense that it detects all known sensitive nucleotides except those located in highly variable regions (fewer than 10 nucleotides remain undetected per genome, compared with 100,000 in previous works). It has far fewer false positives than previously known methods (10% instead of 60%) and can detect sensitive nucleotides despite sequencing errors (86% detected instead of 56%, with 2% of mutations). Finally, practical experiments demonstrate high performance, both in terms of throughput and memory consumption.
