
The History of Casper Chapter 2


This chapter describes the game theory and economic security modelling we were doing in the Fall of 2014. It recounts how the “bribing attacker model” led our research directly to a radical solution to the long range attack problem.

Chapter 2: The Bribing Attacker, Economic Security, and the Long Range Attack Problem

Vitalik and I had each been reasoning about incentives as part of our research before we ever met, so the proposition that “getting the incentives right” was crucial in proof-of-stake was never a matter of debate. We were never willing to take “half of the coins are honest” as a security assumption. (It’s in bold because it’s important.) We knew that we needed some kind of “incentive compatibility” between bonded node incentives and protocol security guarantees.

We always saw the protocol as a game, one that could easily produce “bad outcomes” if the protocol’s incentives encouraged that behaviour. We regarded this as a potential security problem. Security deposits gave us a clear way to punish bad behaviour: slashing conditions, which are basically programs that decide whether to destroy the deposit.

We had long observed that Bitcoin was more secure when the price of bitcoin was higher, and less secure when it was lower. We also now knew that slashing security deposits made Slasher more economically efficient than slashing rewards alone. It was clear to us that economic security existed, and we made it a high priority.

The Bribing Attacker

I’m not sure how much background Vitalik had in game theory (though it was clear he had more than I did). My own game theory knowledge at the start of the story was even more minimal than it is at the end. But I knew how to recognize and calculate Nash Equilibriums. If you haven’t learned about Nash Equilibriums yet, this next paragraph is for you.

A Nash Equilibrium is a strategy profile (the players’ strategy choices) with a corresponding payoff (giving $ETH or taking $ETH away) where no player individually has an incentive to deviate. “Incentive to deviate” means “they get more $ETH if they somehow change what they’re doing”. If you remember that, and every time you hear “Nash Equilibrium” you think “no points for individual strategy changes”, you’ll have it.

Some time in late summer of 2014, I first ran into “the bribing attacker model” when I made an offhand response to an economic security question Vitalik asked me on a Skype call (“I can just bribe them to do it”). I don’t know where I got the idea. Vitalik then asked me again about this maybe a week or two later, putting me on the spot to develop it further.

By bribing game participants you can modify a game’s payoffs, and through this operation change its Nash Equilibriums. Here’s how this might look:


[Figure: payoff matrices for the Prisoner’s Dilemma game, before and after the bribe]

The bribe attack changes the Nash Equilibrium of the Prisoner’s Dilemma game from (Up, Left) to (Down, Right). The bribing attacker in this example has a cost of 6 if (Down, Right) is played.
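The payoff shift described above can be made concrete in a few lines of code. The payoff numbers below are an assumption for illustration (the article’s original figure is not reproduced here): a Prisoner’s-Dilemma-style game whose only pure Nash Equilibrium is (Up, Left), plus a briber who pays the row player 3 for “Down” and the column player 3 for “Right”, shifting the equilibrium to (Down, Right) at a cost of 6 to the attacker:

```python
from itertools import product

ROWS, COLS = ("Up", "Down"), ("Left", "Right")

# Payoffs as {(row_move, col_move): (row_payoff, col_payoff)}.
# These exact numbers are assumed, not taken from the article's figure.
base = {
    ("Up", "Left"): (1, 1),
    ("Up", "Right"): (5, 0),
    ("Down", "Left"): (0, 5),
    ("Down", "Right"): (3, 3),
}

def pure_nash(game):
    """Return profiles where no player gains by deviating unilaterally."""
    eq = []
    for r, c in product(ROWS, COLS):
        rp, cp = game[(r, c)]
        row_ok = all(game[(r2, c)][0] <= rp for r2 in ROWS)  # row can't do better
        col_ok = all(game[(r, c2)][1] <= cp for c2 in COLS)  # column can't either
        if row_ok and col_ok:
            eq.append((r, c))
    return eq

def bribe(game, row_bribe=3, col_bribe=3):
    """Attacker pays the row player for 'Down' and the column player for 'Right'."""
    return {
        (r, c): (rp + (row_bribe if r == "Down" else 0),
                 cp + (col_bribe if c == "Right" else 0))
        for (r, c), (rp, cp) in game.items()
    }

print(pure_nash(base))         # [('Up', 'Left')]
print(pure_nash(bribe(base)))  # [('Down', 'Right')] -- attacker pays 3 + 3 = 6
```

The operation never touches the rules of the game, only the payoffs, which is exactly why the model is so easy to apply to any protocol.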

The bribing attacker was our first useful model of economic security.

Before the bribing attack, we usually thought about economic attacks as hostile takeovers by foreign, extra-protocol purchasers of tokens or mining power. A pile of external capital would have to come into the system to attack the blockchain. With the bribe attack, the question became “what is the price of bribing the currently existing nodes to get the desired outcome?”.

We hoped that bribing attacks on our yet-to-be-defined proof-of-stake protocol would require spending a lot of money to compensate the bribed nodes for lost deposits.

Debate about “reasonableness” aside, this was our first step in learning to reason about economic security. It was fun and simple to use a bribing attacker. You just see how much you have to pay the players to do what the attacker wants. And we were already confident that we would be able to make sure that an attacker would have to pay security-deposit-sized bribes to revert the chain in an attempted double-spend. We knew we could recognize “double-signing”. So we were pretty sure that this would give proof-of-stake a quantifiable economic security advantage over a proof-of-work protocol facing a bribing attacker.
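As a sketch of why “recognizing double-signing” can be mechanical, a slashing condition can be thought of as a predicate over signed messages that decides whether a deposit is destroyed. All names and structures below are assumptions for illustration, not any real client’s code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedBlock:
    validator: str   # who signed
    height: int      # chain height the block claims
    block_hash: str  # which block was signed

def double_sign_evidence(a: SignedBlock, b: SignedBlock) -> bool:
    """True iff the two messages prove one validator signed two
    different blocks at the same height -- a slashable offence."""
    return (a.validator == b.validator
            and a.height == b.height
            and a.block_hash != b.block_hash)

def apply_slashing(deposits: dict, a: SignedBlock, b: SignedBlock) -> dict:
    """Destroy the offender's deposit if the evidence checks out."""
    if double_sign_evidence(a, b):
        deposits[a.validator] = 0
    return deposits

deposits = {"bob": 500}
ev1 = SignedBlock("bob", 7, "0xaaa")
ev2 = SignedBlock("bob", 7, "0xbbb")
print(apply_slashing(deposits, ev1, ev2))  # {'bob': 0}
```

Because the evidence is self-contained (two conflicting signatures), anyone can check it, which is what makes the lost deposit a credible lower bound on the bribe an attacker must pay.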

The Bribing Economics of the Long Range Attack

Vitalik and I applied the bribing attacker to our proof-of-stake research. We found that PoS protocols without security deposits could be trivially defeated with small bribes. You simply pay coin holders to move their coins to new addresses and give you the keys to their now-empty addresses. (I’m not sure who originally thought of this idea.) Our insistence on using the briber model easily ruled out all of the proof-of-stake protocols we knew about. I liked that. (At the time we had not yet heard of Jae Kwon’s Tendermint, of Dominic Williams’s now-defunct Pebble, or of Nick Williamson’s Credits.)

This bribe attack also posed a challenge to security-deposit-based proof-of-stake: The moment after a security deposit was returned to its original owner, the bribing adversary could buy the keys to their bonded stakeholder address at minimal cost.

This attack is identical to the long range attack: acquiring old keys to take control of the blockchain. It meant that the attacker could create “false histories” at will, but only starting from a height at which all deposits had expired.

Before working on setting the incentives for our proof-of-stake protocol, therefore, we needed to address the long-range attack problem. If we didn’t address the long range attack problem, then it would be impossible for clients to reliably learn who really had the security deposits.

We did know that developer checkpoints could be used to deal with the long-range attack problem. We thought this was clearly way too centralized.

In the weeks following my conversion to proof-of-stake, while I was staying at Stephan Tual’s house outside of London, I discovered that there was a natural rule for client reasoning about security deposits. Signed commitments are only meaningful if the sender currently has a deposit. That is to say, after the deposit is withdrawn, the signatures from these nodes are no longer meaningful. Why would I trust you after you withdraw your deposit?

The bribing attack model demanded it. It would cost the bribing attacker almost nothing to break the commitments after the deposit is withdrawn.

This meant that a client would hold a list of bonded nodes, and stop blocks at the door if they were not signed by one of those nodes. Ignoring consensus messages from nodes who don’t currently have security deposits circumvents the long-range attack problem.

Instead of authenticating the current state based on the history starting from the genesis block, we authenticate it based on a list of who currently has deposits.
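The validity rule above fits in a few lines of code. Everything here (`BondedSet`, `ConsensusMessage`, the method names) is illustrative, a minimal sketch rather than any real client implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsensusMessage:
    signer: str      # validator identifier
    payload: bytes   # block hash, vote, etc.

class BondedSet:
    """The client's current list of validators with live deposits."""

    def __init__(self):
        self._deposits = {}  # signer -> deposit size

    def bond(self, signer, amount):
        self._deposits[signer] = amount

    def withdraw(self, signer):
        # Once the deposit is gone, this signer's messages stop mattering.
        self._deposits.pop(signer, None)

    def accept(self, msg: ConsensusMessage) -> bool:
        # "Stop blocks at the door" unless the signer currently has a deposit.
        return msg.signer in self._deposits

validators = BondedSet()
validators.bond("alice", 100)
print(validators.accept(ConsensusMessage("alice", b"block")))  # True
validators.withdraw("alice")
print(validators.accept(ConsensusMessage("alice", b"block")))  # False
```

Note that nothing in `accept` looks at history back to genesis; the only input is the current deposit list, which is exactly the subjectivity discussed below.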

This is radically different from proof-of-work.

In PoW, a block is valid if it is chained to the genesis block, and if the block hash meets the difficulty requirement for its chain. In this security-deposit-based model, a block is valid if it was created by a stakeholder with a currently existing deposit. This meant that you would need to have current information in order to authenticate the blockchain. This subjectivity has caused a lot of people a lot of concern, but it is necessary for security-deposit-based proof-of-stake to be secure against the bribing attacker.

This realization made it very clear to me that the proof-of-work security model and the proof-of-stake security model are fundamentally not compatible. I therefore abandoned any serious use of “hybrid” PoW/PoS solutions. Trying to authenticate a proof-of-stake blockchain from genesis, the way proof-of-work clients do, simply does not fit this security model.
