The Consensus Path To A Bitcoin Hard Fork: Part 1

Rusty Russell
Aug 23, 2017 · 5 min read

TL;DR: Best-hope ingredients for consensus seem to be: the witness discount for non-segwit transactions, a gradual weight increase of 14–17% per annum, a 12–18 month deployment period, a hardfork nVersion bit, the spoonnet coinbase refinements, and opt-in 2-way replay protection.

Previously I talked about the problem of bitcoin not scaling enough. But that doesn’t mean it can’t improve; there have been massive strides in how scalable the bitcoin protocol and code are, across block validation, propagation, and storage. And it’s all been done in a careful, backward-compatible way.

But at some point, we’re still going to need a backward-incompatible, everyone-must-upgrade change, aka a hard fork. The most obvious change is to increase the blockweight limit (which currently restricts blocks to 4MB in the worst case), but there has been so much noise on this that most core developers have withdrawn to wait and see whether broad consensus emerges.

But there are plenty of other useful changes, and I want to credit Luke Jr’s continued engagement on this topic (even if I disagree with him!).

Things We Won’t Do

If you have a valid, standard transaction today, it has to be usable after the fork. People do have timelocked transactions, and you don’t become a store of value by saying “please don’t make any transactions today”. This means, for example, that we can’t require BIP143 hashing for old transactions (which is what the Bcash fork did).

If you have a spendable output on the blockchain today, it has to be spendable after the hard fork. Otherwise we’re confiscating people’s money.

Generally, this means that transaction format cleanups are not helpful: future code will have to support both old and new formats forever, and the segregated witness format is already pretty optimal, as well as upgradeable. Block format cleanups are helpful: eventually new code could simply forget the old format (or only support enough to handle the ancient chain).

But theoretical problems like the timewarp attack might not be worth fixing, depending on the code complexity.

Things We Could Definitely Do

There’s a great project called spoonnet by Dr Johnson Lau: he’s trying out a range of different features in his hardfork (hardspoon?) proposal, which I recommend reading. Many are pretty subtle and technical, though.

Obviously this needs winnowing: I think the coinbase changes are really interesting and very nice futureproofing, for example. I’m not a fan of the more complex block size measurement: introducing multiple factors significantly complicates block construction.

One thing I’d like to see in transactions is a change in the txid calculation: instead of a linear hash, use a merkle tree of the (inputs + tx core) on one side, and the outputs on the other. This makes it possible to prove an output, without having to provide a complete transaction. This would have to be opt-in (if we allow both styles of txid at once it would double the size of the UTXO index), say using the top nVersion bit of the transaction.
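
To make that concrete, here’s a minimal sketch of the shape of the idea (purely illustrative, not spoonnet’s actual encoding; the serialization and function names are made up): the txid becomes a hash of two branch hashes, so proving an output needs only the output data plus one 32-byte sibling hash rather than the whole transaction.

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's usual double-SHA256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_style_txid(inputs_and_core: bytes, outputs: bytes) -> bytes:
    # Left branch covers version, inputs, locktime etc.; right branch covers outputs.
    left = dsha256(inputs_and_core)
    right = dsha256(outputs)
    return dsha256(left + right)

def output_proof(inputs_and_core: bytes, outputs: bytes):
    """To prove the outputs, reveal them plus the 32-byte left branch hash."""
    return dsha256(inputs_and_core), outputs

def verify_output_proof(txid: bytes, left_branch: bytes, outputs: bytes) -> bool:
    return dsha256(left_branch + dsha256(outputs)) == txid
```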

We could simply limit old pre-segwit transactions to 100kB (actually, that’s just a soft fork). That limits the damage from sighashing attacks, and transactions larger than this are already non-standard, so they won’t propagate through the network and would need the direct help of a miner anyway.
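
The reason a size cap helps is that legacy (pre-BIP143) signature hashing re-hashes roughly the whole transaction once per input, so the work grows quadratically with transaction size. A back-of-envelope model (the input counts below are illustrative, not exact):

```python
# Rough model only: legacy sighash hashes ~the whole transaction once per input.
def legacy_sighash_bytes(tx_size: int, n_inputs: int) -> int:
    return n_inputs * tx_size

# A 1MB transaction stuffed with inputs forces gigabytes of hashing;
# capping legacy transactions at 100kB cuts that worst case by roughly 100x.
print(legacy_sighash_bytes(1_000_000, 5_000))  # ~5 GB hashed
print(legacy_sighash_bytes(100_000, 500))      # ~50 MB hashed
```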

Increasing the Blocksize?

There seems to be emerging developer consensus on two points: that the safest path to increasing blocksize is to allow pre-segwit transactions to receive the witness discount, and that block size should increase slowly over time.

Spreading The Discount

When witnesses were segregated (with a new transaction type), the witnesses (basically, the signatures) were chosen to count as only 1/4 the weight of the rest of the transaction. This reflects the reality that:

  1. a node could discard the witnesses once it had checked they were valid, and
  2. this makes it cheaper to spend outputs, which is good for the network: everyone has to remember all the unspent outputs. Before this, wallets trying to create the cheapest (ie. smallest) transaction were better off creating new outputs than consuming existing ones.
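
For reference, this is just the BIP141 weight calculation: non-witness bytes count four times, witness bytes once, against a 4,000,000-weight limit per block, which is where the 4MB worst case mentioned above comes from.

```python
# BIP141: weight = 4 * base_size + witness_size (limit of 4,000,000 per block).
def tx_weight(base_size: int, witness_size: int) -> int:
    return 4 * base_size + witness_size

print(tx_weight(1_000_000, 0))        # 4,000,000 weight from 1MB of base data
print(tx_weight(100_000, 3_600_000))  # 4,000,000 weight from ~3.7MB of mostly-witness data
```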

Luke-Jr pointed out that if we’re going to hard-fork, we could give this same discount to old-style, non-segwit transactions. That has the immediate effect of roughly doubling transaction capacity for them, without making the worst case (a 4MB megablock) any worse. I’d like one slight technical modification: non-segwit witnesses can’t be completely discarded, so we should add 32 bytes to the effective size to reflect that we still have to remember the transaction ID. But I think this is a winning approach.
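
Here’s how I’d sketch that combination (this is my interpretation, not a spec; in particular, charging the 32-byte txid overhead at the full non-witness rate is an assumption):

```python
# Today every byte of a legacy transaction counts at the full 4x rate.
def legacy_tx_weight_today(non_scriptsig_size: int, scriptsig_size: int) -> int:
    return 4 * (non_scriptsig_size + scriptsig_size)

# Proposed (my reading): scriptSig bytes get the 1x witness rate, plus a 32-byte
# full-rate charge because the txid can never be discarded.
def legacy_tx_weight_proposed(non_scriptsig_size: int, scriptsig_size: int) -> int:
    TXID_OVERHEAD = 32  # assumption: charged at the non-witness rate
    return 4 * (non_scriptsig_size + TXID_OVERHEAD) + scriptsig_size
```

For signature-heavy transactions the scriptSig bytes dominate, which is where the rough doubling of capacity comes from.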

It would be possible to ramp this effect in to avoid a shock to the fee market, but that would be completely artificial and arbitrary, thus unlikely to achieve wide support.

Ramping The Total Size

Two years ago, Pieter Wuille posted a draft BIP, “Block size following technological growth”, which increased the block size by 17.7% per year (our then-best guess at average bandwidth growth). I’ve produced predictions between 17% and 19%, though I note the Cisco VNI is currently predicting 14% growth for the next 5 years.

This approach has been endorsed by Luke-Jr as well, though with arguments over the starting point: the generally-perceived risk of trying to estimate future growth seems to be outweighed by the political and operational risk of requiring another hardfork to increase again.

A conservative proposal might begin ramping up a year (52596 blocks?) after the initial hard fork, to allow the market to adjust to the new capacity.
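
To illustrate the kind of schedule being discussed (the growth rate, the one-year pause, and the 4,000,000 starting weight below are placeholders drawn from the numbers above, not a concrete proposal):

```python
# Placeholder schedule, not a proposal: flat for the first year after the fork,
# then compounding at a fixed annual rate.
BLOCKS_PER_YEAR = 52_596      # 365.25 days * 144 blocks/day
START_WEIGHT = 4_000_000
ANNUAL_GROWTH = 0.17          # somewhere in the 14-17% range discussed above

def max_block_weight(blocks_since_fork: int) -> int:
    ramp_blocks = max(0, blocks_since_fork - BLOCKS_PER_YEAR)
    years_of_growth = ramp_blocks / BLOCKS_PER_YEAR
    return int(START_WEIGHT * (1 + ANNUAL_GROWTH) ** years_of_growth)

# At ~17% per year, capacity doubles roughly every 4.4 years once the ramp starts.
for years in (0, 1, 2, 5, 10):
    print(years, max_block_weight(years * BLOCKS_PER_YEAR))
```

A real consensus rule would want stepwise, integer-only arithmetic so every node computes an identical limit; the floating point here is just for readability.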

Flexible Blocksize

The idea of giving blocksize “burst” capacity has been floating around for a while (“flexcap”), but the schemes I have seen work best once fee revenue greatly exceeds the subsidy; otherwise miners are motivated to just produce empty blocks (eg. SPV mining), which doesn’t help network throughput. There are also a large number of knobs to tune, with no clear guidance on how to select them.

So unless there’s new research soon, I’m reluctant to suggest such a change (it could be soft-forked in, but in that case it can only reduce block size). Someone please prove me wrong :)

Measuring Consensus

It’s nice to discuss a consensus hard fork in the ideal, but how would we know? That’s the subject of part 2…


Rusty Russell

Rusty is a Linux kernel dev who wandered into Blockstream, and is currently trying to produce a prototype and spec for bitcoin lightning. Hodls bitcoin (only).