Over the last day, with the community’s help, we have crowdsourced a list of all of the major bugs with smart contracts on Ethereum so far, including both the DAO as well as various smaller 100-10000 ETH thefts and losses in games and token contracts.
This list (original source here) is as follows:

- The DAO (obviously)
- The “payout index without the underscore” ponzi (“FirePonzi”)
- The casino with a public RNG seed
- Governmental (1100 ETH stuck because the payout exceeds the gas limit)
- 5800 ETH swiped (by whitehats) from an ETH-backed ERC20 token
- The King of the Ether game
- Rubixi: fees stolen because the constructor function had an incorrect name, allowing anyone to become the owner
- A rock-paper-scissors game, trivially cheatable because the first player to move shows their hand
- Various instances of funds lost because a recipient contained a fallback function that consumed more than 2300 gas, causing sends to them to fail
- Various instances of call stack limit exceptions
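The “fallback function consuming more than 2300 gas” failures deserve a quick illustration. Solidity’s send forwards only a small fixed gas stipend to the recipient, so a recipient whose fallback does anything expensive (such as writing to storage) makes every plain send to it fail. The following toy Python model (not real EVM gas accounting; the class names and gas costs are illustrative assumptions) shows the mechanic:

```python
# Toy model of the 2300-gas send stipend: a recipient whose fallback
# needs more gas than the stipend causes plain sends to it to fail.
# Costs and names here are illustrative, not real EVM accounting.

SEND_STIPEND = 2300  # gas forwarded by Solidity's `send`


class Recipient:
    def __init__(self, fallback_gas_cost):
        self.fallback_gas_cost = fallback_gas_cost
        self.balance = 0

    def fallback(self, value, gas):
        # The fallback only completes if enough gas was forwarded.
        if gas < self.fallback_gas_cost:
            raise RuntimeError("out of gas")
        self.balance += value


def send(recipient, value):
    """Mimic `send`: forward a fixed stipend, return success/failure."""
    try:
        recipient.fallback(value, gas=SEND_STIPEND)
        return True
    except RuntimeError:
        return False


cheap = Recipient(fallback_gas_cost=2000)       # e.g. fallback just logs
expensive = Recipient(fallback_gas_cost=10000)  # e.g. fallback writes storage

print(send(cheap, 1))      # True: the fallback fits in the stipend
print(send(expensive, 1))  # False: the send fails, no funds move
```

A contract that loops over such sends without checking their return values can end up with funds permanently undeliverable, which is the failure mode listed above.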
We can categorize the list by the type of bug:

- Variable/function naming mixups: FirePonzi, Rubixi
- Public data that should not have been public: the public-RNG-seed casino, the cheatable rock-paper-scissors game
- Re-entrancy (A calling B calling A): the DAO, Maker’s ETH-backed token
- Sends failing due to the 2300 gas limit: King of the Ether
- Arrays/loops and gas limits: Governmental
- Much more subtle game-theoretic weaknesses, where at the limit people even debate whether or not they are bugs: the DAO
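Of these categories, re-entrancy is the one behind the largest losses, so it is worth spelling out. The pattern is that a contract sends funds before updating its own books, and the recipient’s code re-enters the withdrawal function while the stale balance still stands. A minimal sketch in plain Python (no EVM; all class names here are illustrative) simulating the mechanic:

```python
# Toy simulation of re-entrancy (A calling B calling A): the vault pays
# out *before* zeroing the caller's balance, so a malicious recipient's
# callback can re-enter withdraw() and drain other depositors' funds.

class Vault:
    def __init__(self):
        self.balances = {}
        self.total = 0

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.total += amount

    def withdraw(self, who):
        amount = self.balances.get(who, 0)
        if amount and self.total >= amount:
            self.total -= amount       # pay out first...
            who.receive(self, amount)  # ...callback may re-enter here...
            self.balances[who] = 0     # ...balance is zeroed too late


class Attacker:
    def __init__(self):
        self.loot = 0

    def receive(self, vault, amount):
        self.loot += amount
        if vault.total >= amount:      # re-enter while balance is stale
            vault.withdraw(self)


class Honest:
    def receive(self, vault, amount):
        pass


vault = Vault()
honest = Honest()
vault.deposit(honest, 90)
attacker = Attacker()
vault.deposit(attacker, 10)
vault.withdraw(attacker)
print(attacker.loot)  # 100: the attacker drained all deposits with only 10
```

Swapping the last two lines of withdraw (zero the balance before sending) stops the drain, which is the “update state before making external calls” rule that the DAO contract violated.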
There have been many solutions proposed for smart contract safety, ranging from better development environments to better programming languages to formal verification and symbolic execution, and researchers have started developing such tools. My personal opinion regarding the topic is that an important primary conclusion is the following: progress in smart contract safety is necessarily going to be layered, incremental, and dependent on defense-in-depth. There will be further bugs, and we will learn further lessons; there will not be a single magic technology that solves everything.
The reason for this fundamental conclusion is as follows. All instances of smart contract theft or loss (in fact, the very definition of smart contract theft or loss) are fundamentally about differences between implementation and intent. If, in a given case, implementation and intent are the same thing, then any instance of “theft” is in fact a donation, and any instance of “loss” is voluntary money-burning, economically equivalent to a proportional donation to the ETH token holder community by means of deflation. This leads to the next challenge: intent is fundamentally complex.
The philosophy behind this fact has been best formalized by the friendly AI research community, where it bears the names of “complexity of value” and “fragility of value”. The thesis is simple: we as human beings have very many and very complex values, so complex that we ourselves are not capable of fully expressing them, and any attempt to do so will inevitably leave some corner case uncovered. The concept matters to AI research because a super-intelligent AI would in fact search through every corner, including corners that we find so unintuitive that we do not even think of them, in order to maximize its objective. Tell a superintelligent AI to cure cancer, and it will get 99.99% of the way there through some moderately complex tweaks in molecular biology, but it will soon realize that it can bump that up to 100% by triggering human extinction through a nuclear war and/or a biological pandemic. Tell it to cure cancer without killing humans, and it will simply force all humans to freeze themselves, reasoning that it is not technically killing them because it could wake the humans up if it wanted to; it just won’t. And so forth.
In smart contract land, the situation is similar. We believe that we value things like “fairness”, but it’s hard to define what fairness even means. You may want to say things like “it should not be possible for someone to just steal 10000 ETH from a DAO”, but what if, for a given withdrawal transaction, the DAO actually approved of the transfer because the recipient provided a valuable service? But then, if the transfer was approved, how do we know that the mechanism for deciding this wasn’t fooled through a game-theoretic vulnerability? What is a game-theoretic vulnerability? What about “splitting”? In the case of a blockchain-based market, what about front-running? If a given contract specifies an “owner” who can collect fees, what if the ability for anyone to become the owner was actually part of the rules, to add to the fun?
All of this is not a strike against experts in formal verification, type theory, weird programming languages and the like; the smart ones already know and appreciate these issues. However, it does show that there is a fundamental barrier to what can be accomplished, and “fairness” is not something that can be mathematically proven in a theorem; in some cases, the set of fairness claims is so long and complex that you have to wonder whether the set of claims itself might have a bug.
That said, there are plenty of areas where the divergence between intent and implementation can be greatly reduced. One category is to take common patterns and hardcode them: for example, the Rubixi bug could have been avoided by making owner a keyword that can only be initialized to equal msg.sender in the constructor and possibly transferred in a transferOwnership function. Another category is to create as many standardized mid-level components as possible; for example, we may want to discourage every casino from creating its own random number generator, and instead direct people to RANDAO (or something like my RANDAO++ proposal, once implemented). A more important catego
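The hardcoded-owner idea above can be sketched as follows, in Python rather than Solidity for the sake of a runnable toy: ownership is fixed to the deployer at construction (the analogue of initializing owner to msg.sender) and can only change through an explicit, owner-guarded transferOwnership. The class and method names here are illustrative, not part of any real language proposal:

```python
class OwnedContract:
    """Toy model of the hardcoded `owner` pattern: the field is set once
    at construction to the deployer (the analogue of msg.sender) and can
    only change via transfer_ownership, callable by the current owner."""

    def __init__(self, deployer):
        self._owner = deployer  # no other way to set ownership

    @property
    def owner(self):
        return self._owner

    def transfer_ownership(self, caller, new_owner):
        if caller != self._owner:
            raise PermissionError("only the owner may transfer ownership")
        self._owner = new_owner

    def collect_fees(self, caller):
        if caller != self._owner:
            raise PermissionError("only the owner may collect fees")
        return "fees sent to " + self._owner


c = OwnedContract(deployer="alice")
print(c.owner)                        # alice
c.transfer_ownership("alice", "bob")  # allowed: alice is the owner
# c.transfer_ownership("mallory", "mallory") would raise PermissionError:
# unlike Rubixi, there is no misnamed constructor left callable after deploy.
```

Because ownership can only be established in the constructor, the Rubixi failure mode of a misnamed constructor remaining publicly callable simply cannot arise.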