CRYPTO CORNER
Editors: Peter Gutmann, David Naccache, and Charles C. Palmer

Automated Proof and Flaw-Finding Tools in Cryptography
Graham Steel | Cryptosense

One of the few things cryptographers can agree on is that cryptography is a complex subject—so complex that even after expert peer review, flaws are regularly found in security proofs, APIs, protocols, and protocol implementations. Computer scientists have long hoped that computers could do the work of checking hardware and software for flaws, but this idea—called formal analysis—has only recently been successfully implemented in cryptography. Unlike conventional software verification, cryptographic analysis requires verifiers to account for all possible actions of a malicious intruder trying to break into the system. To facilitate this task, researchers have turned to algebraic modeling as a way to produce abstract models of crypto protocols.

Dolev and Yao’s Verification Model
In 1983, Daniel Dolev and Andrew C. Yao proposed modeling cryptographic protocols and a hypothetical attacker in an abstract way to allow verification of the protocols’ security.1 By modeling bitstrings as terms in abstract algebra, cryptographic algorithms as functions of those terms, and an intruder as a set of deduction rules on the abstract algebra, Dolev and Yao (DY) were able to verify simple protocols in their model. Formal tools such as model checkers and theorem provers that matured through the 1990s all used the DY protocol verification model. By the early 2000s, several purpose-built tools were capable of verifying abstract DY models of large protocols for unbounded numbers of protocol sessions.
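To make this concrete, here is a minimal sketch, in Python and not taken from the column, of the DY idea: messages are symbolic terms rather than bitstrings, and the intruder’s knowledge is the closure of what it has observed under a few deduction rules, here splitting pairs and decrypting with keys it already holds. The term and key names below are invented for illustration, and real DY tools also model message composition, asymmetric primitives, and unbounded sessions.

```python
# A toy Dolev-Yao-style secrecy check, purely illustrative. Terms are atoms
# (strings) or tagged tuples; the intruder closes its knowledge under
# decomposition rules: split pairs, and decrypt a ciphertext whenever the
# decryption key is already known.

def pair(a, b):
    return ("pair", a, b)

def enc(message, key):
    # Symbolic symmetric encryption of message under key.
    return ("enc", message, key)

def dy_closure(knowledge):
    """Return every term the intruder can deduce by decomposition."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for term in list(known):
            if not isinstance(term, tuple):
                continue  # atoms (names, keys, nonces) cannot be decomposed
            derived = set()
            if term[0] == "pair":                        # learn both halves
                derived |= {term[1], term[2]}
            elif term[0] == "enc" and term[2] in known:  # key known: learn plaintext
                derived.add(term[1])
            new = derived - known
            if new:
                known |= new
                changed = True
    return known

# Hypothetical trace: the intruder observes the pair ({secret}_k1, {k1}_k2)
# on the network and already knows k2 (say, a compromised long-term key).
observed = {pair(enc("secret", "k1"), enc("k1", "k2")), "k2"}
print("secret derivable?", "secret" in dy_closure(observed))  # prints True
```

A secrecy question thus becomes a reachability question: can the term "secret" ever enter the intruder’s closure? Automating that search for full protocols, with composition rules and unbounded sessions, is what the purpose-built tools mentioned above do.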
However, the DY model is usually an approximation rather than an abstraction of the protocol implementation. This means that attacks found in a DY model don’t always correspond to real attacks on an implemented protocol and—more worryingly—that an absence of attacks in the DY model doesn’t generally imply an absence of attacks on the real protocol. In the mid- to late 2000s, several groups worked to close the gap between DY models and real protocols. They were particularly interested in examining when DY model proofs might imply security proofs in the more detailed “standard model” of cryptography, wherein intruders are modeled as arbitrary probabilistic polynomial-time Turing machines and bitstrings are modeled as real bitstrings.

An alternative approach is to use computers to assist with direct proofs in the standard model. Typically, these proofs show that any attack on the protocol can be efficiently translated into an attack on a mathematical problem believed to be “hard.” This means that if an attack on the protocol were found, it could be used, with current computing power, to solve a problem considered “impossible.” Therefore, logically, there can’t be any such attack.

Formal Proofs in the Standard Model
Proofs in the standard model can be strong but are notoriously difficult to get right. One approach is to formalize the proof using a proof assistant, a piece of software that mechanizes logical reasoning with a very high degree of confidence in its correctness. This high confidence comes from the fact that the trusted code base—the part of the software we have to believe is correct to trust the proofs—is very small and has been tested extensively.

Successful applications of this method in other fields include using the Coq proof assistant to verify the proof of the four-color theorem (http://research.microsoft.com/en-us/um/people/gonthier/4colproof.pdf) and, more recently, using the Isabelle and HOL Light proof assistants to verify Johannes Kepler’s conjecture (http://arxiv.org/abs/1501.02155). The Computer-Aided Cryptography group at the IMDEA Software Institute and INRIA Sophia Antipolis developed the EasyCrypt proof assistant and used it to verify security proofs for several well-known cryptographic primitives (such as OAEP2) and protocols (such as NAXOS3). EasyCrypt is technically impressive but requires significant user expertise and effort.
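To give a flavor of what a machine-checked proof looks like, here is a toy theorem checked by the Lean proof assistant, chosen here purely for brevity; Lean is not one of the tools named above, and EasyCrypt’s game-based proofs look quite different. The proof is accepted only if the small trusted kernel can verify every step, which is where the high confidence comes from.

```lean
-- Toy, purely illustrative statement: XOR-ing a Boolean twice with the same
-- value restores the original. `cases` splits the goal into the four concrete
-- cases and `decide` evaluates each resulting closed equality.
theorem xor_twice (a b : Bool) : Bool.xor (Bool.xor a b) b = a := by
  cases a <;> cases b <;> decide
```

EasyCrypt applies machine-checked reasoning of this general kind to probabilistic, game-based statements about cryptographic schemes, which is part of what makes it both powerful and demanding to use.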
Although proving the security of a protocol specification is useful, security flaws are often introduced at the implementation level. Several research groups have tackled this problem over the past 10 years. One idea is to take a formally verified protocol specification and compile it into an implementation that’s guaranteed to be secure.4 Another approach is to prove an implementation’s security in a high-level language that’s amenable to formal verification, enabling the specification of rich properties that translate to cryptographic security guarantees.

An example of this approach is the miTLS (www.mitls.org) software for the widely used Web protocol SSL/TLS. Developed by Karthikeyan Bhargavan’s team at INRIA in the functional programming language F#, this software includes a full implementation of the standard protocol, including multiple cipher suites, resumption, and renegotiation of sessions. The use of the memory-safe F# language guarantees an absence of the kinds of buffer-overflow bugs that led to the widely publicized Heartbleed vulnerability discovered in 2014 (http://heartbleed.com).

Bhargavan’s team then used the F7 programming language to specify the security of TLS, and F7’s refinement type checker to verify that the implementation instantiates the specification. Their work shows how cryptographic security guarantees, which are typically seen only for low-level primitives and protocols, can be scaled up to a real implementation of a complex protocol. While completing the specification and implementation of their software, which requires a deep understanding of the protocol, the miTLS team discovered several exploitable flaws in the most widely used implementations (www.secure-resumption.com; www.smacktls.com).

Other automated analysis approaches focus on finding vulnerabilities in real implementations without looking at the source code. Erik Poll’s group at the Institute for Computing and Information Sciences at Radboud University has recently shown how to determine a protocol’s state machine from its black-box implementation using the L* algorithm. Once a model has been created, it can be explored with a model checker, allowing the discovery of paths leading to bugs. Comparing several implementations of the same protocol has also proved fruitful in finding flaws. Poll’s team applied this approach to chip and PIN cards and, more recently, to implementations of TLS. In doing so, they uncovered several anomalies and security bugs.5

API-Level Attacks
When developing new enterprise applications that will use cryptography, developers typically use existing software or hardware implementations of cryptographic algorithms rather than creating their own. These algorithms are accessed via an API. For this reason, it’s also necessary to study cryptographic security at the API level to see whether, for example, a rogue application could extract cryptographic keys from a piece of secure crypto hardware by calling an unexpected sequence of commands.

Examples of formal analysis being used to search for attacks on APIs date back to the early 1990s,6 but the subject came to prominence in the early 2000s when a series of such attacks was discovered on the hardware security modules (HSMs) widely used in banking and other security-critical applications.7,8 These attacks were discovered by manual analysis or ad hoc tools, so several research groups wanted to see if the general API security problem could be tackled using formal tools similar to those used for DY-style protocols.

In 2010, Matteo Bortolozzo and his colleagues demonstrated Tookan—a tool capable of learning an implementation model of RSA PKCS#11, the most widely used standard for cryptographic hardware—by sending a call sequence and observing the error responses.9 After learning a model, Tookan queries a model checker to search for sequences of calls that could reveal the value of a sensitive cryptographic key. If such a sequence is found, Tookan converts the sequence of steps in the model back into a sequence of real PKCS#11 API calls and executes the attack on the hardware being tested. Using this methodology, Tookan was able to find key-recovery attacks on several commercially available PKCS#11 smart cards and HSMs. Following industrial interest, a spin-off company of INRIA called Cryptosense now commercializes the tool under the name Cryptosense Analyzer. Many large banks and security agencies use it to audit cryptographic hardware and ensure that the crypto infrastructure is configured securely.
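To show what such a key-revealing call sequence can look like, here is a minimal, self-contained simulation of the classic "wrap/decrypt" conflict on PKCS#11-style tokens: a key that is allowed both to wrap other keys and to decrypt arbitrary data can be asked to wrap a sensitive key and then to decrypt the resulting blob, returning the sensitive key in the clear. This is a toy model, not a real PKCS#11 stack and not Tookan itself; the handles, simplified attribute names, and XOR "cipher" are stand-ins for illustration.

```python
# A toy simulation of a PKCS#11-style token with a flawed key policy. This is
# not a real PKCS#11 implementation and not Tookan; handles, attribute names,
# and the XOR "cipher" are simplified stand-ins.

class SimulatedToken:
    def __init__(self):
        self.keys = {}  # handle -> (key bytes, set of attributes)

    def create_key(self, handle, value, attrs):
        self.keys[handle] = (bytes(value), set(attrs))

    def _cipher(self, key, data):
        # Stand-in for a real block cipher: repeating-key XOR.
        return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

    def wrap_key(self, wrapping_handle, target_handle):
        """Analogue of C_WrapKey: export the target key encrypted under another key."""
        wkey, attrs = self.keys[wrapping_handle]
        assert "wrap" in attrs, "wrapping key lacks the wrap attribute"
        target, _ = self.keys[target_handle]
        return self._cipher(wkey, target)

    def decrypt(self, handle, ciphertext):
        """Analogue of C_Decrypt: decrypt arbitrary data under the given key."""
        key, attrs = self.keys[handle]
        assert "decrypt" in attrs, "key lacks the decrypt attribute"
        return self._cipher(key, ciphertext)


token = SimulatedToken()
# Flawed configuration: k1 is allowed both to wrap other keys and to decrypt data.
token.create_key("k1", b"\x11" * 16, {"wrap", "decrypt"})
# k2 is a sensitive key whose value should never leave the token in the clear.
token.create_key("k2", b"\x22" * 16, {"sensitive"})

wrapped = token.wrap_key("k1", "k2")   # step 1: export {k2}_k1
print(token.decrypt("k1", wrapped))    # step 2: decrypt it -> k2's plaintext value
```

On a real device, the same two steps would go through the genuine C_WrapKey and C_Decrypt calls; the value of a tool like Tookan is that it checks automatically whether a particular token’s configuration actually rules such sequences out.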
Side-Channel Attacks
Side-channel attacks are significant threats to real cryptography implementations. These attacks rely on information leaked through power consumption, execution timing, electromagnetic or acoustic emissions, or the error messages given by a cryptographic application. New side channels are continually being discovered, and each typically has its own specific set of countermeasures. However, the idea of automatically verifying the effectiveness of a given side channel’s countermeasures has only recently begun to receive attention.

One defense against attacks that rely on variations in execution time is to implement cryptography in such a way that the execution time is constant regardless of the data or the value of the cryptographic key. Tools that can verify whether a piece of code will execute in constant time are beginning to emerge (https://github.com/agl/ctgrind); however, this analysis isn’t easy. Code that seems constant time at the source code level might not always produce constant-time machine code, so recent work has looked directly at machine code.10 Many open challenges remain in verifying the absence of side-channel attacks.
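As a small illustration of the constant-time coding style (a sketch, not taken from the column), compare an early-exit byte comparison, whose running time depends on the position of the first mismatching byte, with an accumulator-style comparison that touches every byte regardless. Python is used here only to show the shape of the two loops; as noted above, genuine constant-time behavior has to be verified on the compiled machine code, and in Python one would simply use the standard library’s hmac.compare_digest.

```python
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    # Early-exit comparison: running time depends on where the first
    # mismatching byte occurs, which can leak information about a secret tag.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def accumulator_equal(a: bytes, b: bytes) -> bool:
    # Accumulator-style comparison: every byte is processed whether or not an
    # earlier byte already differed. (Illustrative only; genuine constant-time
    # behavior must be checked on the compiled code, not the source.)
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

expected = bytes.fromhex("9f86d081884c7d65")
attempt  = bytes.fromhex("9f86d081884c7d66")
print(leaky_equal(expected, attempt))          # False, but timing reflects the matching prefix
print(accumulator_equal(expected, attempt))    # False, same work for any input
print(hmac.compare_digest(expected, attempt))  # the standard-library equivalent
```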
Current trends such as cloud computing, smartphones, and the Internet of Things are creating a growing need for security that will be provided by cryptography. As a consequence, developers are now required to implement cryptography without necessarily developing the expertise needed to do it securely. It’s safe to say we’ll see more automated analysis technologies for all aspects of cryptography in the years to come.

References
1. D. Dolev and A. Yao, “On the Security of Public Key Protocols,” IEEE Trans. Information Theory, vol. 29, no. 2, 1983, pp. 198–208.
2. M. Bellare and P. Rogaway, “Optimal Asymmetric Encryption—How to Encrypt with RSA,” Advances in Cryptology—Eurocrypt 94 Proc., LNCS 950, 1995; https://cseweb.ucsd.edu/~mihir/papers/oae.pdf.
3. B. LaMacchia, K. Lauter, and A. Mityagin, “Stronger Security of Authenticated Key Exchange,” Provable Security, LNCS 4784, 2007, pp. 1–16.
4. D. Cadé and B. Blanchet, “Proved Generation of Implementations from Computationally Secure Protocol Specifications,” D. Basin and J.C. Mitchell, eds., Principles of Security and Trust, LNCS 7796, Springer, 2013, pp. 63–82.
5. F. Aarts, E. Poll, and J. de Ruiter, “Formal Models of Bank Cards for Free,” Proc. 6th Int’l Conf. Software Testing, Verification and Validation Workshops (ICSTW 13), 2013, pp. 461–468.
6. D. Longley and S. Rigby, “An Automatic Search for Security Flaws in Key Management Schemes,” Computers and Security, vol. 11, no. 1, 1992, pp. 75–89.
7. M. Bond and R. Anderson, “API-Level Attacks on Embedded Systems,” Computer, vol. 34, no. 10, 2001, pp. 67–75.
8. J. Clulow, “The Design and Analysis of Cryptographic APIs for Security Devices,” master’s thesis, Dept. Mathematics, Univ. of Natal, 2003; www.cl.cam.ac.uk/~mkb23/research/Clulow-Dissertation.pdf.
9. M. Bortolozzo et al., “Attacking and Fixing PKCS#11 Security Tokens,” Proc. 17th ACM Conf. Computer and Communications Security (CCS 10), 2010, pp. 260–269.
10. G. Barthe et al., “System-Level Non-interference for Constant-Time Cryptography,” Proc. 2014 ACM SIGSAC Conf. Computer and Communications Security (CCS 14), 2014, pp. 1267–1279.

Graham Steel is CEO and cofounder of Cryptosense, a spin-off of INRIA that produces software for security analysis of cryptographic systems. Contact him at graham.[email protected].

Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.