source: sent by Rob Wiltbank, Galois

Our friends at Galois (thanks to Artemus fan Rob Wiltbank!) have been gracious in allowing us to post their take on the recent "KRACK" (WPA2) vulnerability. We reported the vulnerability in our last issue of "Artemus Spotlights". Take a look at that posting here.

On Monday, October 16, the KRACK vulnerability in WPA2 was revealed in a paper by Mathy Vanhoef and Frank Piessens. KRACK enables a range of attacks against the protocol, resulting in a total loss of the privacy that the protocol attempts to guarantee. For more technical details on the attack, the website and the Key Reinstallation Attacks (KRA) paper are the best places to look. The paper presents the problem clearly, and you will learn about a protocol that you use constantly. Furthermore, it presents a number of compelling attacks that show exactly how big a problem KRACK is. This post will discuss what the KRACK paper has to teach us about formal methods and cryptography standards.
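To see concretely why a reinstalled key is so damaging, consider a toy sketch (not real WPA2/CCMP code; the keystream function here is an invented stand-in) of a stream cipher whose keystream depends on a key and a per-packet nonce. KRACK tricks a client into reinstalling its key, which resets the nonce, so two packets end up encrypted under the same keystream; XORing their ciphertexts then cancels the keystream entirely:

```python
# Toy illustration (NOT real WPA2/CCMP): why KRACK's nonce reset matters.
# A stream cipher derives its keystream from (key, nonce). Reinstalling the
# key resets the nonce, so two packets are encrypted with the SAME keystream.
import hashlib

def keystream(key: bytes, nonce: int, length: int) -> bytes:
    # Toy hash-based keystream; WPA2 really uses AES in CCMP mode,
    # but the nonce-reuse failure mode is the same.
    out = b""
    counter = 0
    while len(out) < length:
        block = key + nonce.to_bytes(8, "big") + counter.to_bytes(4, "big")
        out += hashlib.sha256(block).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: int, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"session-key"
p1 = b"attack at dawn!!"
p2 = b"defend at dusk!!"

c1 = encrypt(key, nonce=1, plaintext=p1)  # normal transmission
c2 = encrypt(key, nonce=1, plaintext=p2)  # after key reinstallation: nonce reset

# The attacker never learns the key, yet XORing the two ciphertexts
# cancels the shared keystream and reveals the XOR of the plaintexts.
leaked = bytes(a ^ b for a, b in zip(c1, c2))
assert leaked == bytes(a ^ b for a, b in zip(p1, p2))
```

With known or guessable plaintext in one packet (headers, protocol constants), this XOR recovers the other packet outright, which is exactly the decryption capability the KRA paper demonstrates.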

It’s a little surprising that a protocol as widely used as WPA2 still harbors critical vulnerabilities. Even more surprising is that portions of the protocol have been formally verified (mathematically proved) to be secure! Why don’t these factors guarantee that the protocol is free of such critical vulnerabilities? The KRA paper raises the following concerns about standards and formal verification. These provide us with valuable insight into pitfalls to avoid as we perform and present our work:

  1. Specifications for a protocol may not be precise enough to guarantee security.
  2. Real-world implementations may not match formal specifications used in proofs.
  3. Formal proofs might lead to complacency, discouraging future audits and inspections.

At Galois, we believe strongly in the value of formal verification, so we think it’s worth examining each of these points. In doing so, we gain some insights into real-world cryptography verification.

Concern: Specifications may not be precise enough

It is impossible to test for security. Security is a property of all possible behaviors of a system, so the time to get security right is when the system is defined. KRACK is a vulnerability in the specification of the WPA2 protocol, and it is exacerbated in some cases by decisions that implementors made in the face of an ambiguous specification. Each of these decisions allows the implementation to function correctly. After all, we’ve been successfully using WPA2 for a long time without noticing any significant functionality shortcomings. In the face of the KRACK vulnerability, however, these ambiguities allow for significantly more damaging attacks.

We can ensure that specifications are unambiguous by making them more formal. Often, ambiguity hides in natural language specifications in ways that are difficult to understand until the specifications are represented formally. A formal specification serves as an intermediate point between the easily ingestible natural language specifications we typically see today and the more complicated implementations.
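As a hypothetical illustration (this is not drawn from the WPA2 standard, and the property and trace format are invented for the sketch), even a lightweight executable specification can force an ambiguity into the open. Suppose a natural-language spec says "the transmit nonce is incremented for every packet" but never says whether a key reinstallation may reset the counter. Writing the property as code makes that choice explicit, and a trace exhibiting the KRACK behavior fails the check:

```python
# Hypothetical executable spec: "for each key, transmit nonces must be
# strictly increasing, even across (re)installations of the same key."
# A natural-language spec can leave the reinstallation case ambiguous;
# an executable one cannot.

def spec_nonce_sequence(events):
    """Check a trace of (key, nonce) packet events against the property."""
    last_nonce = {}  # key -> highest nonce seen so far
    for key, nonce in events:
        if key in last_nonce and nonce <= last_nonce[key]:
            return False  # nonce reuse: the behavior KRACK exploits
        last_nonce[key] = nonce
    return True

good_trace = [("k1", 1), ("k1", 2), ("k1", 3)]
krack_trace = [("k1", 1), ("k1", 2), ("k1", 1)]  # key reinstalled, nonce reset

assert spec_nonce_sequence(good_trace)
assert not spec_nonce_sequence(krack_trace)
```

The point is not this particular check, but that an executable artifact has no room for "the spec didn't say": every implementation decision either satisfies the property or visibly violates it.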

A successful specification language for cryptography allows the cryptographic community to express itself precisely. The specifications should be readable by mathematicians and software engineers alike. They should allow natural language specification to sit side by side with code that looks almost exactly like the mathematics that occurs in the research literature, RFCs, and standards. Examples of languages that can be used for cryptographic specification include our own Cryptol language, F* (as used in miTLS), and Coq (in particular the FCF library and Fiat). All of these are open-source tools that are free for anyone to use.

For an example of how implementation and natural language specifications can coexist, see our ChaCha20 specification. That file can be compiled to an attractive PDF for printing and reading, and it can also be typechecked and executed by Cryptol. This is a “best of both worlds” combination of readability and rigor.

Of course, rigorous specifications are only useful if they are available for inspection by the scientific community. IEEE’s policy of putting standards behind a paywall discourages scientists from examining and reasoning about those standards. Matt Green does an excellent job of explaining this in his blog post on the topic.

Concern: Real-world implementations might not match specifications

This concern is absolutely correct. There is not much value in proving specifications correct if the specifications don’t reflect the reality of implementations. The technologies to match protocol specifications to implementations are becoming more usable (and more widely used) every day. We call such proofs end-to-end proofs, as they carry properties all the way from high-level security proofs down to source code, even in challenging languages such as C. Examples of these techniques in the cryptography world include recent work using Coq and VST to verify HMAC and DRBG, the code-generation approach of Project Everest, and our own efforts to verify parts of s2n.

The KRA paper, however, is really concerned that the models used in proofs don’t fully match the specification. Of course, this also implies that the models in proofs won’t match implementations, but implementations are not needed to discover this. Matching proofs to specifications is a challenging problem because it requires a mapping between imprecise, natural-language protocol specifications and models that are typically created with a certain property in mind. But the models used for previous proofs of WPA2 match the specification (as far as we know today).

The discovery of KRACK doesn’t invalidate either of the original security proofs, because those proofs don’t cover the behavior where KRACK appears. This is not an oversight; it is intentional. Every security proof is part of a larger security picture. The authors of the KRA paper were able to understand what the proofs were about, and why they don’t cover the KRACK vulnerability. Even though the original proofs didn’t reveal security flaws, a principled approach would use them to discover where to look.

Imagine software as an adhesive sheet with bubbles (bugs or vulnerabilities) under it. Your goal is to get the bubbles out without ever looking at the sheet. You could either start from a good location and methodically work the bubbles out, or you could poke at the sheet randomly until you hit some bubbles.

Now try to do the same exercise with a team. Each team member gets to work for 5 minutes in succession, only providing the next member with a description of the work already done. If you smooth an entire area, you can tell the next person “The bottom left 5 square inches are clear, ignore them”. If you’ve poked around you will have a much harder time explaining what you’ve done.

Formal proofs of software and protocols are analogous to the smoothing approach. They provide well-defined areas that are completely free of a certain class of problem. In the case of WPA2, the formal proofs didn’t reveal any problems, but other formal approaches have been much more successful. For example, the miTLS project used formal methods to discover many of the recent high-profile vulnerabilities in TLS implementations. Changhua He and John Mitchell discovered a problem in an early version of WPA2 that resulted in an improvement to the specification; their paper is cited as related work in the KRACK paper. A key to making this all work is the description of the work done. While the formal proofs carry rigorous definitions, they can be very difficult to understand. Like software and protocol specifications, theorem statements must combine rigor and readability.

Concern: Formal proofs might lead to complacency

Looking back at the adhesive smoothing example, formal proofs should discourage future work only in the areas they cover. If the verification is presented correctly, it should not induce complacency. Improper presentation of verification, however, can be a serious problem. The value of formal methods, as opposed to more manual techniques, is that once they are done they discourage redundant work in clear and well-defined areas. For this to work, however, the proof presentation must be clear and precise. If readers come away thinking a proof is even a little broader than it really is, that opens up space for bugs and vulnerabilities to hide.

This is a risk that is all too tempting for formal methods researchers to take. When we complete our proofs, we know what we have proved, and we fear that it won’t be interesting to others. To sell our work, we then make grand statements in our abstracts and introductions, such as “we have proved this protocol secure” — statements that will be held up as evidence of the inefficacy of formal methods when a vulnerability is found in the protocol we verified.

We must be as clear in our documentation and publications as we are in our theorem statements. In the words of Edsger Dijkstra, “… we take the position that it is not only the programmer’s responsibility to produce a correct program but also to demonstrate its correctness in a convincing manner, …”. Formal methods are a tool that can be used to guide everyone, from other verification experts to engineers looking to write test libraries. When formal methods are explainable and accessible to the security research community at large, the authors of the next security vulnerability paper can brag that they were guided by previous formal proofs about the system, instead of trumpeting that they found a vulnerability in spite of them.

This post was written with input from Brent Carmer, Thomas DuBuisson, Mike Dodds, Stephen Magill, Alex Malozemoff, Shpat Morina, Kevin Quick, Aaron Tomb, and Dan Zimmerman.