Scientific congresses, symposiums and conference proceedings : Paper published in a book
Engineering, computing & technology : Computer science
http://hdl.handle.net/10993/23567
Maybe Poor Johnny Really Cannot Encrypt - The Case for a Complexity Theory for Usable Security
English
Benenson, Zinaida [University of Erlangen-Nuremberg > IT Security Infrastructures > > Lecturer]
Lenzini, Gabriele [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT)]
Oliveira, Daniela [University of Florida > Department of Electrical and Computer Engineering > > Associate Professor]
Parkin, Simon [University College London - UCL > Department of Computer Science > > Research Associate]
Uebelacker, Sven [TUHH > SVA > > Research Associate]
2015
Maybe Poor Johnny Really Cannot Encrypt - The Case for a Complexity Theory for Usable Security
ACM
Proceedings of the 2015 New Security Paradigms Workshop
85-99
Yes
No
International
978-1-4503-3754-0
New Security Paradigms Workshop
from 08-09-2015 to 11-09-2015
Enschede
The Netherlands
[en] Socio-Technical Security ; Usable Security
[en] This paper discusses whether usable security is unattainable for some security tasks due to intrinsic bounds of human cognitive capacities. Will Johnny ever be able to encrypt? Psychology and neuroscience literature shows that there are upper bounds on the human capacity for executing cognitive tasks and for information processing. We argue that the usable security discipline should scientifically understand human capacities for security tasks, i.e., what we can realistically expect from people. We propose a framework for evaluation of human capacities in security that assigns socio-technical systems to complexity classes according to their security and usability. The upper bound of human capacity is considered the point at which people start experiencing cognitive strain while performing a task, because cognitive strain demonstrably leads to errors in the task execution. The ultimate goal of the work we initiate in this paper is to provide designers of security mechanisms or policies with the ability to say: “This feature of the security mechanism X or this security policy element Y is inappropriate, because this evidence shows that it is beyond people’s capacity.”
SnT
Fonds National de la Recherche - FnR
STAST
Researchers
10.1145/2841113.2841120
http://dl.acm.org/citation.cfm?id=2841120
FnR ; FNR1183245 > Gabriele Lenzini > STAST > Socio-Technical Analysis of Security and Trust > 01/05/2012 > 30/04/2015 > 2011