December 21, 2022

5 Myths about Formally Verifying Smart Contracts

For a long time it puzzled me how something so expensive, so leading edge, could be so useless, and then it occurred to me that a computer is a stupid machine with the ability to do incredibly smart things, while computer programmers are smart people with the ability to do incredibly stupid things. They are, in short, a perfect match.
Bill Bryson, author, from Notes from a Big Country

Formal verification uses mathematical models and computer algorithms to prove or disprove the correctness of a system’s design with respect to formal specifications expressed as properties. It is a well-established field of computer science. Given a program, we want to show that it is correct and behaves as expected, in the same way we prove a mathematical theorem; alternatively, the goal is to show that there exists a scenario in which the code violates its formal specification. The theory of formal verification goes back to Alan Turing in 1949. The field has advanced dramatically in the last three decades, with tools such as interactive theorem provers, static program analyzers, and mathematical constraint solvers enabling verifications that previously existed only as concepts, if at all. As techniques have been refined and technological barriers have fallen, industry adoption has spread. The hardware industry has already integrated formal verification, preventing bugs in chips and saving tremendous sums of money across industries.

Recently it has become feasible to build tools for formally verifying smart contracts. Bugs in smart contracts can be expensive and hard to fix as the code is immutable. Formal verification can be used to prevent these catastrophic bugs.

Myth 1: Formal verification only proves the absence of bugs

“Program testing can be used to show the presence of bugs, but never to show their absence!”
Edsger W. Dijkstra

A particularly common belief about formal verification techniques is that they are only helpful in proving the absence of errors once the code is mature. Arguably, however, the most exciting application of formal verification is to prevent security bugs in the implementation. Model-checking tools like CBMC highlight this capability. It is also demonstrated by formal verification tools like the Certora Prover that compare the code to its requirements, finding bugs both before and after code deployment.

When a formal verification tool locates a potential test case that violates a requirement during development, there is an opportunity to correct the program before any harm has been done. These bugs do not tend to make the news since they never have the chance to be discovered in the wild. Naturally, it is also helpful when a tool proves that a property holds, but these instances require particular diligence to ensure that the verified property describes the correct behavior.

Bugs found after deployment can be much higher profile and potentially catastrophic. The MakerDAO team, for example, recently used the Certora Prover to discover that the foundational invariant of the Maker protocol did not actually hold.

Myth 2: The most complex problem in formal verification is its computational cost

“For the mathematician there is no Ignorabimus, and, in my opinion, not at all for natural science either. … The true reason why [no one] has succeeded in finding an unsolvable problem is, in my opinion, that there is no unsolvable problem. In contrast to the foolish Ignorabimus, our credo avers: We must know, we shall know!”
David Hilbert

Rice’s theorem states that no algorithm can automatically decide any non-trivial semantic property of arbitrary programs; as a result, every verification tool will fail to verify the correctness of some programs, and the cost of proving even a tiny program can be massive. It is therefore easy to think that computational cost is the limiting factor for the adoption and implementation of formal verification.

In reality, a greater challenge in formal verification is defining the required properties of the software. The approach advocated by Certora, heeding Dijkstra, is that the programmer expresses desired invariants about the program. However, it is challenging to express invariants that are both correct and strong enough to catch the various possible mistakes in the code.

The Trident protocol provides an example of such a property. Trident implements a liquidity pool (LP), a structure where users called liquidity providers add funds to create a market. In exchange for keeping their funds in the pool, they earn fees from swapping activities proportional to their share of the total liquidity. Should they decide to withdraw their liquidity, providers can redeem their shares to reclaim the underlying funds.

One property of this system is that the liquidity and the LP shares must either both be zero or both be non-zero. In other words, if any funds are present in the pool, then LP shares should exist, because someone is providing liquidity to the pool. This property is an example of an invariant: a rule that should never be broken.
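As a sketch of what such an invariant looks like in code, here is a toy pool model in Python. The Pool class, its 1:1 initial share minting, and the small checks at the end are all illustrative assumptions for exposition, not Trident’s actual implementation:

```python
# Toy liquidity pool illustrating the invariant:
# total liquidity and total LP shares are both zero, or both non-zero.
class Pool:
    def __init__(self):
        self.liquidity = 0   # funds held by the pool
        self.shares = 0      # LP shares outstanding

    def add_liquidity(self, amount: int) -> int:
        assert amount > 0
        # mint shares proportional to the deposit (1:1 for the first deposit)
        if self.shares == 0:
            minted = amount
        else:
            minted = amount * self.shares // self.liquidity
        self.liquidity += amount
        self.shares += minted
        return minted

    def remove_liquidity(self, shares: int) -> int:
        assert 0 < shares <= self.shares
        amount = shares * self.liquidity // self.shares
        self.shares -= shares
        self.liquidity -= amount
        return amount

    def invariant(self) -> bool:
        # funds present if and only if shares exist
        return (self.liquidity == 0) == (self.shares == 0)

pool = Pool()
assert pool.invariant()        # empty pool: both zero
pool.add_liquidity(100)
assert pool.invariant()        # funded pool: both non-zero
pool.remove_liquidity(pool.shares)
assert pool.invariant()        # fully drained: both zero again
```

A formal verification tool would check this property over all reachable states and all possible call sequences, rather than the handful of concrete runs sketched here.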

If this invariant is violated, it means that there are either shares worth nothing or funds that cannot be claimed. Before Trident’s contract was deployed, a violation of this property was found by the Certora Prover, uncovering a bug in the code which was subsequently addressed.

Myth 3: Formal Verification has to be done once when the code is stable

“The trouble with programmers is that you can never tell what a programmer is doing until it’s too late.”
Seymour Cray

Smart contracts are ideal applications for formal verification since the bugs are costly and the code is typically small or modular and evolves over time. Formal verification improves the security of smart contracts by finding bugs and mathematically proving their absence throughout development and deployment.

A common belief among many DeFi projects is that formal verification must come last, after auditing and once the code is stable. In our experience, this practice leads to reduced security and productivity.

We argue it is best to start formal verification as early as possible in the pipeline, before any code freezes, and before the code is even feature complete. Such an upfront approach has several benefits. First, thinking of — and formally stating — invariants of a system before coding clarifies deficiencies in initial designs and guides the ultimate implementation. Second, starting formal verification early, especially when using automated tooling, can identify hard-to-find bugs before code passes through the review process and can quickly identify when such bugs have snuck in. In other words, the earlier a bug is detected, especially the difficult and tricky bugs found by automated formal verification, the easier it is to engineer a fix. Finally, by starting verification early, you can write code designed for formal verification from the beginning rather than discovering after development that entire pieces require expensive rewrites to make them amenable to formal verification. One of the long-term values of formal verification tools comes from integrating into CI/CD and running automatic checks every time the code changes to ensure no bugs are introduced. Such automated tools include mechanized verifiers like the Certora Prover, or fuzzers like Foundry and Echidna.

Formal verification can live alongside and complement auditing. One benefit of combining the two is economic: starting formal verification before manual auditing can reduce costs. Furthermore, auditors can review specifications to ensure that the formal guarantees provided by automated tools match programmers’ expectations. As published research such as “Efficient Detection of Vacuity in Temporal Model Checking” has found time and time again, even power users can make many mistakes when writing specifications.

Formal verification is becoming more and more accepted not just as a helpful tool but as a must-have for smart contract security. Starting formal verification early (and using it often) is crucial to successfully integrating it into any project.

Myth 4: Formal verification has to be performed precisely, modeling the exact machine semantics

“Being abstract is something profoundly different from being vague… The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.”
Edsger W. Dijkstra

Blockchains are incredibly complex distributed systems whose correctness depends on the coordination of thousands of untrusted nodes running sophisticated software. Formally modeling the details of the blockchain implementation would be prohibitively expensive in terms of both development time and computational time, but that doesn’t mean that formal verification is not a useful tool for verifying smart contracts.

On the contrary, smart contracts are an excellent target for formal verification, because blockchains provide an abstract interface that has simple and well-defined semantics. Smart contract developers do not have to reason about complex details like network communication or cryptographic primitives, because their contracts do not (and cannot) interact directly with these primitives. Instead, smart contracts are limited to the interface provided by a specialized virtual machine with a small number of instructions and clearly defined mechanisms for interacting with other contracts. Since the system implementation is part of the trusted computing base, developers do not need to think about it.

Formal verification tools are also freed from reasoning about the underlying implementation. Like contract developers, they can take the correct operation of the blockchain as an assumption, and simply reason about the behavior of the smart contracts using the abstract language provided by the virtual machine. By abstracting away the details of the underlying implementation, verification tools can provide proofs in a reasonable amount of time, and give counterexamples in a language that the smart contract developers understand.

In fact, the process of abstraction can be taken even further: instead of modeling the exact operation of a smart contract using the semantics of the virtual machine, it is often sufficient to prove properties of an approximation of the program. Tools like Dafny and the Certora Prover allow users to replace a complex program that computes a value with a simple assumption about the computed value, which can dramatically reduce the verification time. As long as the replaced program fragment satisfies the assumptions, the proof about the approximation remains valid for the real implementation.

For example, nonlinear operations like taking a square root can be difficult for SMT solvers to reason about (and algorithms for computing them even more so), but many important properties of DeFi protocols can be proved just using the fact that the square root is an increasing function. A logical formula representing the implementation of a square root function is orders of magnitude larger than a formula representing an arbitrary increasing function; replacing one with the other can take the verification time from hours or even days down to minutes.
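The idea can be made concrete with a minimal Python sketch. The bounded-checking “verifier,” the chosen property, and the summary functions below are illustrative assumptions for exposition; a real SMT-based prover reasons symbolically over all inputs rather than enumerating a small domain:

```python
import math
from itertools import product

def isqrt(n: int) -> int:
    # Concrete integer square root: expensive for a solver to reason about.
    return math.isqrt(n)

def check_property(sqrt_fn) -> bool:
    # Property: adding liquidity never decreases the pool invariant
    # sqrt(x * y), checked here over a small bounded domain of reserve
    # states (x, y) and deposits (dx, dy).
    for x, y, dx, dy in product(range(1, 20), repeat=4):
        if sqrt_fn(x * y) > sqrt_fn((x + dx) * (y + dy)):
            return False
    return True

# The property holds with the concrete square root...
assert check_property(isqrt)

# ...and the proof only ever uses monotonicity, so any increasing summary
# function works just as well. Here we spot-check one alternative summary.
assert check_property(lambda n: n)
```

Because the argument depends only on the square root being increasing, a verifier can substitute the cheap abstract summary for the expensive concrete formula and reach the same conclusion far faster.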

Just because a system is complex does not mean that formal verification is impossible. By choosing the right level of abstraction, formal verification tools can prove important properties of extremely complex systems.

Myth 5: Formal verification produces bulletproof code

“Beware of bugs in the above code; I have only proved it correct, not tried it.”
Donald Knuth

The biggest myth about formal verification is that a formally verified program cannot go wrong. For several reasons, code can contain errors even after it has been formally verified.

The most common cause is an inadequate specification. A specification can be vacuous, i.e., independent of the code. For example, an invariant of the form x = 5 => x > 0 is a tautology: independent of the value of x, it holds. Either x equals 5, and thus it is greater than zero, or it is not equal to 5, and the implication is vacuously true. Proving such an invariant gives no guarantee about the code and may even create a false sense of confidence in the code’s correctness.
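A quick Python sketch makes the vacuity concrete (the function name and the bounded range are illustrative):

```python
# Logical form of the invariant x = 5 => x > 0, written as (not P) or Q.
def vacuous_invariant(x: int) -> bool:
    return (not (x == 5)) or (x > 0)

# The check passes for every value of x, so it says nothing about the
# program that produced x -- including clearly buggy negative values.
assert all(vacuous_invariant(x) for x in range(-1000, 1000))
```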

Humans can make mistakes when writing a formal specification. The Certora Prover includes simple vacuity checks that report vacuous invariants.

Another cause for incorrect code after formal verification is missing specifications. A formal verification tool will not be able to find a bug if no specification was written that is violated by that bug. The specification writer should aim to cover as many areas of the program as possible, with priority given to the most critical parts with the worst consequences in case of failure.

We are implementing mutation testing, which checks that the set of correctness rules is adequate by mutating the program and checking whether formal verification catches the erroneous mutants. If bugs are missed, the specification is missing some key properties.
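Here is a minimal Python sketch of the idea. The withdraw function, the injected mutant, and the bounded specification check are all hypothetical; real mutation testing generates many mutants automatically and runs the prover against each one:

```python
def withdraw(balance: int, amount: int) -> int:
    # Original code: withdrawing reduces the balance.
    return balance - amount

def mutant(balance: int, amount: int) -> int:
    # Injected bug: subtraction flipped to addition.
    return balance + amount

def spec_holds(fn) -> bool:
    # Specification: after withdrawing a positive amount from a positive
    # balance, the balance strictly decreases (checked on a bounded domain).
    return all(fn(b, a) < b for b in range(1, 50) for a in range(1, 50))

assert spec_holds(withdraw)   # the spec passes on the original code
assert not spec_holds(mutant) # a good spec "kills" the mutant
```

If the specification had also passed on the mutant, that would signal a gap: the rules are too weak to notice a flipped sign, and a key property is missing.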

There is no silver bullet to avoiding computer bugs. Formal verification is another tool to add on top of human efforts and other mechanisms to increase code security.

Myths vs. reality

While the field of formal verification has been developing for decades, it is still not commonly used in many software applications. As a result, misconceptions about formal verification are widespread. We hope this article gave you a better understanding of this emerging technology, its challenges, and its limitations. Our mission at Certora is to make formal verification a standard practice for smart contracts and improve the ecosystem’s security.
