References of "Oliveira, Daniela"
4.2 Social Dynamics Metrics - Working Group Report
Benenson, Zinaida; Bleikertz, Sören; Foley, Simon N. et al

in Socio-Technical Security Metrics (2015)

Individuals continually interact with security mechanisms when performing tasks in everyday life. These tasks may serve personal or work goals, and may be individual or shared. The interactions can be influenced by peers and superiors in the respective environments (workplace, home, public spaces), by the personality traits of the users, and by contextual constraints such as available time, cognitive resources, and perceived available effort. We believe all of these influencing factors should be considered in the design, implementation, and maintenance of good socio-technical security mechanisms. To do so, we need to collect reliable socio-technical data and then transform them into meaningful and helpful metrics for user interactions and influencing factors. More precisely, the group discussed three main questions: 1. What data do we need to observe, and which of these data can we actually observe and measure? 2. How can we observe and measure them? 3. What can we do with the results of the observations?

Maybe Poor Johnny Really Cannot Encrypt - The Case for a Complexity Theory for Usable Security
Benenson, Zinaida; Lenzini, Gabriele; Oliveira, Daniela et al

in Proc. of the New Security Paradigm Workshop (2015)

This paper discusses whether usable security is unattainable for some security tasks due to intrinsic bounds of human cognitive capacities. Will Johnny ever be able to encrypt? The psychology and neuroscience literature shows that there are upper bounds on the human capacity for executing cognitive tasks and for processing information. We argue that the usable security discipline should develop a scientific understanding of human capacities for security tasks, i.e., of what we can realistically expect from people. We propose a framework for evaluating human capacities in security that assigns socio-technical systems to complexity classes according to their security and usability. The upper bound of human capacity is taken to be the point at which people start experiencing cognitive strain while performing a task, because cognitive strain demonstrably leads to errors in task execution. The ultimate goal of the work we initiate in this paper is to provide designers of security mechanisms or policies with the ability to say: "This feature of the security mechanism X or this security policy element Y is inappropriate, because this evidence shows that it is beyond people's capacity."
