Abstract:
Formal verification carries the promise of absolute correctness,
guaranteed at the highest level of assurance known today. However,
inherent to many verification attempts is the assumption that
the underlying hardware, the code-generation toolchain and the
verification tools are correct, all of the time. While this assumption
creates interesting recursive verification challenges, which have
already been tackled successfully for all three of these elements,
the coverage of this assumption remains incomplete, particularly
for hardware. Accidental faults, such as single-event upsets,
transistor aging, and latch-ups, can cause hardware to behave
arbitrarily when they occur and require other means (e.g., fault
tolerance) to operate safely through them. Targeted attacks,
especially physical ones, have a similar potential to cause havoc.
Moreover, faults of this kind may well manifest in such a way that
their effects propagate to all software layers, causing incorrect
behavior even in software that has been proven correct. In this position paper,
we take a holistic system-architectural point of view on the role
of trusted-execution environments (TEEs), their implementation
complexity, and the guarantees they can convey and that we want
preserved in the presence of faults. We find that if absolute
correctness is to remain our visionary goal, TEEs can and should
be constructed differently, with fault tolerance embedded at the
lowest levels and with verification playing an essential role.
Verification should both assure the correctness of the TEE
construction protocols and mechanisms and help protect the
applications executing inside the TEEs.