Daniel Jackson (email@example.com)
Just a few weeks ago, Rari Capital, a new platform for lending and borrowing cryptocurrency, lost $80 million. It was only the latest fiasco in which hackers were able to exploit a bug in a “smart contract” that turned out to be less smart than expected.
In mid-April, Akudreams—an NFT project from former baseball player Micah Johnson—suffered a failure on launch when $34 million of the founders’ stake was locked up forever. By the standards of such failures, these are small: players who bought coins in the Ronin gaming network lost over $625 million from a single attack in March. Over $10.5 billion was reportedly lost to hacks in 2021.
Decentralized finance, or DeFi, promises all kinds of transactions—loans, payments, trading, auctions, insurance, etc.—but without banks or any centralized authority. Eliminating the need to trust any such authority, along with prejudices that have favored some clients over others, is key to the success of DeFi. Contracts negotiated and interpreted by humans are replaced by smart contracts, computer programs that operate with perfect objectivity. The very objectivity of such contracts, however, makes them, and those who participate in them, vulnerable to subtle errors in the code.
Programmers have dealt with bugs since the advent of computing. So why are smart contract bugs any different?
For many software apps, bugs are surprisingly benign. They can be fixed as soon as they are detected, and—unless they impact physical processes, such as nuclear reactors—their effects can be mitigated. A bank, for example, can usually reverse a mistaken transfer. Moreover, although many bugs go undetected in code, they usually lie dormant and are never activated. Few bugs represent security risks, and hackers have a hard time finding them because companies’ most critical code is executed on private servers behind firewalls and is thus invisible to outsiders.
None of these extenuating factors apply to bugs in smart contracts. The code is published on a blockchain for all to see, and size limits make the code easier to analyze. Any small bug in a contract offers the possibility of huge windfalls, so incentives to find and exploit bugs are high. And worst of all, smart contracts are by definition immutable, so a developer who discovers a bug in a deployed contract cannot fix it.
Testing is the standard strategy for finding bugs. The program is executed on a “test suite,” a collection of specially chosen inputs, and the results are checked against expectations. Testing is reasonably effective for finding the most egregious bugs, but it performs poorly at exposing subtle errors, because it’s hard to devise suitable inputs without knowing the errors in advance. Generating a large number of random inputs might seem promising, but the space of possible inputs is so vast, even for simple programs, that it rarely helps much.
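To see why random testing struggles, consider a sketch (in Python, with entirely hypothetical names and a contrived bug) of a function whose error is triggered by just one input out of 2^64. Even a hundred thousand random probes will almost certainly miss it:

```python
import random

# Hypothetical example: a fee calculation with a bug that fires only on
# one "magic" input value out of 2**64. Not real contract code.
TRIGGER = 0x8000000000000000

def withdraw_fee(amount: int) -> int:
    """Intended behaviour: always charge a 1% fee."""
    if amount == TRIGGER:
        return amount                    # bug: fee silently skipped
    return amount - amount // 100

def fee_charged(amount: int) -> bool:
    """Does the function match its intended behaviour on this input?"""
    return withdraw_fee(amount) == amount - amount // 100

# Random testing: 100,000 random 64-bit inputs, with a fixed seed.
random.seed(0)
samples = [random.randrange(2**64) for _ in range(100_000)]

print(all(fee_charged(a) for a in samples))   # the bug goes unnoticed
print(fee_charged(TRIGGER))                   # yet the bug is real
```

The chance that any random sample hits the single bad value is about 100,000 in 2^64—effectively zero—which is why exhaustive guarantees require a different approach.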
Over the last few decades, computer science researchers have developed a very different and more powerful approach known as verification, in which the program is checked not against some set of inputs but against every possible input.
The trick is to “execute” the program not with concrete inputs, but symbolically. As a trivial example, imagine a program that is supposed to calculate the square of a number, but which instead just multiplies it by ten. If you tested this program on zero or ten itself, you’d get the right answer, and you might infer, wrongly, that the program is correct. But if you fed the program a symbolic value x and got back 10x instead of x^2, you’d see immediately that it’s computing the wrong value. Moreover, by solving the inequalities 10x < x^2 and 10x > x^2 (for the two cases in which the computed and expected results fail to match) you’d find inputs, such as 11 for the first and 1 for the second, that demonstrate the bug.
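The square-versus-times-ten example above can be made concrete. In this Python sketch, the symbolic step is done by hand: since the program computes 10x where the specification demands x^2, solving 10x < x^2 yields counterexamples like 11, and solving 10x > x^2 yields counterexamples like 1:

```python
def buggy_square(x: int) -> int:
    return 10 * x      # bug: multiplies by ten instead of squaring

def spec(x: int) -> int:
    return x ** 2      # intended behaviour

# Concrete testing: on these two chosen inputs the results happen to
# coincide (10*0 == 0**2 and 10*10 == 10**2), so the bug stays hidden.
for x in (0, 10):
    assert buggy_square(x) == spec(x)

# Symbolic reasoning, done here by hand: the program returns 10x, the
# spec demands x**2. Solving 10x < x**2 gives x > 10 (e.g. 11);
# solving 10x > x**2 gives 0 < x < 10 (e.g. 1). Both expose the bug:
for x in (11, 1):
    assert buggy_square(x) != spec(x)
```

Verification tools automate exactly this kind of inequality solving, using constraint solvers rather than hand algebra, so that no input—tested or not—can hide a mismatch.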
Until now, verification has had limited application, because programs are often very large, so the resulting inequalities become too hard to solve. But smart contracts are much smaller than typical programs, and thus more amenable to such analysis. Moreover, the technology of verification has been steadily advancing.
The risk posed by bugs in smart contracts has spurred a growing market segment of companies specializing in auditing. Most of these companies employ experts to review the code, but such reviews are error-prone and incomplete.
Certora, a new VC-backed company founded by Mooly Sagiv, a leading verification researcher, combines inspection by human experts with automatic verification. Their Certora Prover can guarantee that a smart contract’s code meets its specification for all inputs, giving greater confidence to their clients than would be possible using conventional techniques.
The approach is not foolproof. Unlike our simple example of a program that is supposed to square a number, a smart contract cannot be so easily summarized. Its specification must cover special cases and must indicate not only what the immediate effect should be, but also which additional effects should not occur. To mitigate this problem, Certora’s engineers work closely with a smart contract’s developers to formulate a watertight specification.
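Certora’s tool has its own specification language; purely as an illustration, here is a Python sketch (all names hypothetical) of what such a specification must cover for a toy transfer operation—not just the intended effect, but also the absence of any other effect:

```python
# Toy ledger and transfer operation -- an illustration, not real contract code.
balances = {}

def transfer(src: str, dst: str, amount: int) -> bool:
    if balances.get(src, 0) >= amount:
        balances[src] -= amount
        balances[dst] = balances.get(dst, 0) + amount
        return True
    return False

def check_spec(src: str, dst: str, amount: int) -> None:
    """A specification checked dynamically: intended effect AND no side effects."""
    before = dict(balances)
    ok = transfer(src, dst, amount)
    if ok:
        # Intended effect: exactly `amount` moves from src to dst.
        assert balances[src] == before[src] - amount
        assert balances[dst] == before.get(dst, 0) + amount
    else:
        # A failed transfer must change nothing at all.
        assert balances == before
    # Absence of other effects: total supply is conserved...
    assert sum(balances.values()) == sum(before.values())
    # ...and third parties' balances are untouched.
    for acct in before:
        if acct not in (src, dst):
            assert balances[acct] == before[acct]

balances.update({"alice": 100, "bob": 50, "carol": 7})
check_spec("alice", "bob", 30)      # a successful transfer
check_spec("bob", "alice", 999)     # insufficient funds: must be a no-op
```

A verifier proves such properties for every possible sequence of calls, not just the two exercised here; omitting any one clause (say, conservation of total supply) is exactly the kind of specification gap the article describes.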
Gaps will inevitably remain, and even Certora’s verification-based auditing will not catch every bug. But their new approach raises the bar considerably, and companies are already lining up to have their contracts checked.
Certora’s website lists an impressive array of companies whose smart contracts have been checked to date, along with the verification reports showing bugs that have been caught prior to deployment. The potential cost of a smart contract bug is so high that companies fielding smart contracts will increasingly see auditing as an essential step in their development, and companies like Certora will become central to their efforts.