To The Moon With Larger Blocks

I’ve written a number of articles lately related to the bitcoin block size debate, but I’ve never really laid out my reasons for supporting an increase. That occurred to me while reading Greg Maxwell’s “trip to the moon” reddit post.

In that post, Greg more or less gave his reasons for not supporting a block size increase. They come down to:

1) Bitcoin can’t handle transaction volumes similar to credit card companies without payment systems layered on top of it.

2) Attempting to increase transaction volumes without using these layered solutions makes bitcoin less secure.

Are we on the verge of VISA-level transaction volumes?

I don’t disagree with #1, but it seems to be a bit of a straw man. I’m not aware of anyone in the bitcoin space saying that it can handle those kinds of loads today, nor is anyone seriously proposing that the block size be increased by that large an amount today. What people actually do is look forward to a future in which we know CPU, memory, storage, bandwidth, etc. will be cheaper and more plentiful, and conclude that at some point bitcoin may be able to handle those kinds of volumes.

We know this is absolutely true if we consider the state of technology in, say, the year 2100, at which point fitting all global transactions onto the block chain will be a trivial task. The more relevant question is: will technology grow fast enough for Bitcoin to handle the volume of transactions we are actually likely to see?

Keep in mind that VISA-level transaction volume isn’t likely to happen anytime soon. If we assume transaction volume continues to double every year, we won’t hit VISA-level volume for about a decade. Will transaction volume continue to double? Will its growth rate accelerate or slow down? It’s hard to know. If it continues to double (or even accelerates), the block size limit will probably need to be used to throttle the growth of on-chain transactions at some point. If growth slows even a little, it’s possible the network never runs out of capacity, as the technology might very well grow faster.
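For a rough sense of that decade figure, here’s the back-of-envelope math. The throughput numbers are my own loose assumptions (Bitcoin handles a few transactions per second today; VISA averages on the order of 2,000), not precise measurements:

```python
import math

btc_tps = 3      # assumed: Bitcoin's rough throughput today, tx/second
visa_tps = 2000  # assumed: VISA's approximate average load, tx/second

# If on-chain volume doubles every year, the number of years until we
# reach VISA-level volume is the base-2 log of the ratio.
years = math.log2(visa_tps / btc_tps)
print(f"~{years:.1f} years of doubling to reach VISA volume")  # ~9.4 years
```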

This was more or less Satoshi’s view, after all:

> By Moore’s Law, we can expect hardware speed to be 10 times faster in 5 years and 100 times faster in 10. Even if Bitcoin grows at crazy adoption rates, I think computer speeds will stay ahead of the number of transactions.

So, yes, bitcoin can’t handle VISA-level volumes today. But it can handle today’s transaction volumes (which the current block size limit is preventing it from doing), as well as the volumes we are likely to see in the near future. And it might be able to handle much higher volumes still, depending on how fast transactions grow relative to how fast the technology does.

It’s nice to know that layered payment solutions, like the lightning network, could (theoretically) handle high transaction volumes if we experienced a massive burst in Bitcoin popularity that pushed the network beyond capacity. But the existence of such layers doesn’t change the likelihood of that happening.

Do larger blocks make Bitcoin less secure?

If you read Greg’s reddit post you almost certainly came away with the notion that allowing larger blocks will harm bitcoin and make it less secure. Why would this be the case? Let’s run through some of the common reasons.

Larger blocks increase the costs of running a bitcoin node. Increased costs = fewer nodes = less security.

I think there are a number of reasons why this equation is overly simplistic to the point of being misleading. First, as another author put it, the current costs of running a bitcoin node are “somewhere between infinitesimally small and very small”.

We do have to worry that the increasing cost of running a node might eventually price people out of doing so, but we are a ways from that. Consider that my current laptop, on my home internet connection, could likely handle block sizes upwards of 100 megabytes (assuming some optimizations, which I’ll get to later). So not only can I easily handle today’s volumes, I can also handle volumes we aren’t likely to see for another six or seven years, at no extra cost to me. And note that in six or seven years I will have a much faster computer and a much faster internet connection.
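To put the bandwidth side of that claim in perspective, here’s the raw arithmetic (my own rough figures, ignoring transaction relay and upload to peers, which multiply the requirement severalfold):

```python
block_size_mb = 100     # assumed: a hypothetical 100 MB block
block_interval_s = 600  # one block every ten minutes, on average

# Sustained download rate needed just to keep up with new blocks.
mbits_per_s = block_size_mb * 8 / block_interval_s
print(f"~{mbits_per_s:.2f} Mbit/s sustained")  # ~1.33 Mbit/s
```

That is well within an ordinary home broadband connection, though bandwidth is only one of several costs (CPU for validation, disk for storage) a node bears.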

“Ok, ok, you can handle those volumes, but you aren’t exactly typical of the average person.” True, but that brings me to a larger question. Do people need to run full nodes from their home? Moreover, does everyone need to run a full node?

Satoshi recognized that running a full node could be too burdensome for the average user and designed the protocol so that most people don’t have to. There’s a section in the whitepaper specifically outlining “simplified payment verification” which provides close to the same amount of security and fraud prevention as a full node, but leaves a very tiny footprint on your computer.

It’s true that SPV wallets offer slightly less fraud prevention than a full node, but the level of security is certainly high enough for average use (far better than what we currently have with banks and credit cards). Considering that the largest incoming payment most people accept is from their employer, who isn’t going to try to double spend, SPV wallets would be an acceptable replacement for a checking account in a full bitcoin world. Note that even if the block size stays at 1 MB so that most people can run a full node from home, 99% of them still won’t do so. It’s just not how end users want and expect consumer-oriented software to behave.
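For readers unfamiliar with the mechanics, an SPV client downloads only the (small) block headers and asks full nodes for a Merkle branch proving that its transaction is included in a block. Here is a minimal sketch of the verification step, using Bitcoin’s double-SHA-256 hashing (the function names are mine, not from any particular wallet’s code):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes everything with two rounds of SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_branch(tx_hash: bytes, branch: list[bytes],
                         position: int, merkle_root: bytes) -> bool:
    """Walk up the Merkle tree, combining our hash with each sibling.

    `position` is the transaction's index in the block; its low bit at
    each level tells us whether we are the left or the right child."""
    h = tx_hash
    for sibling in branch:
        if position & 1:                    # we are the right child
            h = double_sha256(sibling + h)
        else:                               # we are the left child
            h = double_sha256(h + sibling)
        position >>= 1
    return h == merkle_root
```

If the recomputed root matches the Merkle root in a header buried under sufficient proof of work, the client can be confident the payment is real without ever downloading the full chain.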

People who process either high-value payments or large volumes of payments would most likely want maximum protection and run a full node. Basically we’re talking small businesses and up. So here’s likely a major difference between me and the small block folks: I believe the minimum system requirements of full node software should be designed such that anyone processing large volumes of payments (small businesses and up) can run a node at a “reasonable cost”, as opposed to “anyone can run a full node from home with their six-year-old Windows XP laptop”. Even if we designed for the latter, most people won’t do it.

Now we can differ about what a “reasonable cost” is, but I’d say somewhere around $30-$50/month. Anything more than that and you are probably pricing some small businesses out of running a node. But note that at those costs, not only can you process 2 MB, 4 MB, or 8 MB blocks, but also blocks much larger. Like I stated earlier, if transaction volume increases to the point where the cost of running a node starts to become unreasonable for a small business, it’s likely to be closer to a decade from now, and only if transaction volume continues to double every year.

And finally, note that while I view businesses as the primary users of full nodes, running a node at home wouldn’t be out of reach for a hobbyist with a decent computer and reasonable internet connection. As I said earlier, I expect my computer and home internet connection to be able to run a full node for many more years.

But don’t we need everyone running full nodes from their home? Don’t more full nodes help keep Bitcoin decentralized?

I think this is one of the biggest misconceptions in Bitcoin. It’s exacerbated by the charts that circulate showing the number of full nodes declining as transaction volume has been increasing (of course, we can’t say one caused the other, as there’s no control group!).

But the broader question is: does the raw number of nodes matter for decentralization? The answer depends on how people use their nodes. The reason full nodes matter is that they serve as a check on the miners. If, say, 51% of the miners colluded and decided they were going to increase the inflation rate to pay themselves more, the coins they generate would not be accepted by people running full nodes. In other words, full nodes enforce the network rules and prevent any shenanigans like this.

But note here that simply running a node doesn’t provide any check at all; you must be using that node to accept payments. To elaborate: if I run an SPV wallet, which cannot verify the inflation rate, a miner might be able to fraudulently send me inflated coins in exchange for goods. If I’m running a full node, my client will not accept those coins. The fact that miners won’t have anywhere to spend those coins is what provides the check and disincentivizes them from trying to change the rules.
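To make “enforce the network rules” concrete, here is a simplified sketch of one such rule, the subsidy schedule. This is my own toy Python, not Bitcoin Core’s actual code, but it mirrors the real consensus check: the subsidy halves every 210,000 blocks, and a full node rejects any block whose coinbase pays out more than subsidy plus fees, no matter how much hash power produced it:

```python
COIN = 100_000_000          # satoshis per bitcoin
HALVING_INTERVAL = 210_000  # blocks between subsidy halvings (~4 years)

def block_subsidy(height: int) -> int:
    """Subsidy in satoshis: 50 BTC at launch, halved every 210,000 blocks."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:      # right-shifted away to nothing
        return 0
    return (50 * COIN) >> halvings

def coinbase_is_valid(height: int, coinbase_output: int, fees: int) -> bool:
    """Reject any block that inflates beyond subsidy + collected fees."""
    return coinbase_output <= block_subsidy(height) + fees
```

An SPV wallet never runs this check, which is exactly why it’s the nodes that accept payments, not the nodes that merely relay, that keep miners honest.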

All those people who are simply running a node to “contribute” to the network, without actually processing payments, are doing nothing for decentralization. At best, they spread out the burden of serving lightweight wallets like Bitcoin Wallet for Android and Breadwallet, but that’s about it (and serving those wallets is optional anyway). If those nodes went offline, the network would not miss them.

So what we care about is people who are actually selling things of value (those small businesses and up again) running full nodes. The fact that one of those malicious miners can’t purchase my used snowboard from me, because I’m running a full node on my home computer, does next to nothing to prevent miners from changing the rules.

So when we talk about node count, it’s the quality of those nodes that matters, not the absolute quantity.

Larger blocks will take a long time to propagate and can create a lot of problems for the network.

It’s true that issues would arise if blocks started taking a while to transmit around the network. This problem has already been partially addressed, however, by the creation of the high-speed mining relay network, which uses compression techniques to transmit blocks between miners very quickly.

Looking forward, a number of proposed optimizations could improve on the relay network and extend its functionality to all nodes. If something like IBLT (invertible bloom lookup tables) were adopted, only an 80-byte block header plus a relatively small IBLT object would need to be transmitted between nodes, not the full 10 megabyte (or whatever size) block. The result would (theoretically) be that transactions only need to be transmitted around the network once, as opposed to twice as they are today, representing a significant bandwidth savings in addition to high-speed transmission.
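To make the IBLT idea concrete, here is a toy sketch of the data structure (my own simplified Python, not the actual proposal’s wire format; real designs size the table and checksums far more carefully). The sender transmits the small cell table built from the block’s transaction IDs; the receiver deletes its own mempool’s IDs from it and “peels” what’s left to learn exactly which transactions differ:

```python
import hashlib

K = 3    # hash functions per key
M = 120  # cells; sized to roughly 1.5x the expected set difference

def _cells_and_checksum(key: bytes):
    """Derive K distinct cell indices (one per subtable) plus a checksum."""
    digest = hashlib.sha256(key).digest()
    sub = M // K
    cells = [i * sub + int.from_bytes(digest[4 * i:4 * i + 4], "big") % sub
             for i in range(K)]
    checksum = int.from_bytes(digest[12:16], "big")
    return cells, checksum

class IBLT:
    def __init__(self):
        # Each cell holds a count, an XOR of keys, and an XOR of checksums.
        self.count = [0] * M
        self.key_sum = [0] * M
        self.chk_sum = [0] * M

    def _update(self, key: bytes, delta: int):
        cells, chk = _cells_and_checksum(key)
        k = int.from_bytes(key, "big")
        for i in cells:
            self.count[i] += delta
            self.key_sum[i] ^= k
            self.chk_sum[i] ^= chk

    def insert(self, key: bytes):
        self._update(key, +1)

    def delete(self, key: bytes):
        self._update(key, -1)

    def peel(self):
        """Recover keys with net count +1/-1; decoding fails (returns a
        partial result) if the table was sized too small."""
        added, removed = set(), set()
        progress = True
        while progress:
            progress = False
            for i in range(M):
                if abs(self.count[i]) == 1:
                    key = self.key_sum[i].to_bytes(32, "big")
                    _, chk = _cells_and_checksum(key)
                    if chk == self.chk_sum[i]:  # a "pure" cell
                        (added if self.count[i] == 1 else removed).add(key)
                        self._update(key, -self.count[i])
                        progress = True
        return added, removed

# Usage sketch: the miner inserts the block's 32-byte txids and sends
# the table. The receiver deletes every txid in its own mempool from
# the received table, then peels: `added` holds txids it is missing,
# `removed` holds mempool txids that didn't make it into the block.
```

The key property is that the table’s size scales with the *difference* between the block and the receiver’s mempool, not with the block itself, which is why most transactions would only cross the wire once.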

If block propagation weren’t going to be fixed I’d be more skeptical about larger blocks, but I think there’s reason to be optimistic here.

But don’t transaction fees need to go up to pay for network security?

Newly generated bitcoins are currently paid out to miners as compensation for securing the network. Satoshi designed the inflation rate to be cut in half every four years, meaning the compensation to miners (and their incentive to secure the network) is trending toward zero. The only other way we have to compensate miners is transaction fees. For the entire history of bitcoin, transaction demand has been below supply (the amount of space in blocks dictated by the block size limit). This has resulted in very low transaction fees (the only reason they aren’t zero is that a minimum fee is enforced to prevent flooding).

The argument goes that we need demand to exceed supply (the block size limit) to put upward pressure on transaction fees, replacing the dwindling inflation rate so that the network doesn’t suffer a catastrophic loss of security.

Of course, higher fees are one way to compensate the miners, but they aren’t the only way. The other way is simply higher volume. It would take an average transaction fee of about $5 to completely replace today’s block reward if blocks stayed at 1 MB, whereas it would take about 63 MB blocks if transaction fees stayed around today’s level of about 8¢. As I’ve said before, 63 MB isn’t really infeasible, assuming we continue to optimize. And also note that we don’t need to totally replace the block reward today. We’re looking at probably decades from now, at which point I’m willing to bet 63 MB will be relatively trivial to process.
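Here’s my rough reconstruction of the arithmetic behind those two figures, using assumed ballpark numbers from around the time of writing (a 25 BTC block reward, a price near $450, and roughly 2,250 transactions fitting in 1 MB):

```python
block_reward_btc = 25  # assumed: current subsidy per block
btc_price_usd = 450    # assumed: approximate market price
txs_per_mb = 2250      # assumed: rough transactions per MB of block space

reward_usd = block_reward_btc * btc_price_usd  # ~$11,250 per block

# Average fee needed to replace the reward if blocks stay at 1 MB:
fee_at_1mb = reward_usd / txs_per_mb
print(f"${fee_at_1mb:.2f} per transaction")    # ~$5.00

# Block size needed if fees stay around 8 cents:
mb_needed = reward_usd / 0.08 / txs_per_mb
print(f"~{mb_needed:.1f} MB blocks")           # ~62.5 MB, i.e. the ~63 MB figure
```

The security budget is the same either way; the question is whether it’s funded by a few expensive transactions or many cheap ones.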

So yes, we could jack up fees as a way to pay for security, but we could also simply allow the network to grow within its technological limits. I prefer the latter.

Why on-chain transactions?

My preference for on-chain transactions comes from the fact that, up until this point, they have been the gold standard in decentralized, censorship-resistant payments. Prior to the publication of the Lightning Network paper last year, the best alternative proposal from small block advocates was to just use a centralized third party, like Coinbase, to pay people. That is, of course, grossly inferior. What good is having the underlying Bitcoin protocol remain decentralized if it’s too expensive for everyday use and the payment layers we actually use are horrifically centralized?

The Lightning Network will hopefully be better than that, but until we see it in action we won’t know whether it’s a perfect substitute for an on-chain transaction. I suspect it will still be a downgrade from how we use Bitcoin today. Once I see it working, I’ll be able to form a better opinion of it.

In either case, I think we should allow as many on-chain transactions as the network can safely handle without sacrificing decentralization ― which is much more than 1 MB.

Summary

So to sum up, Maxwell and the other small block folks have failed to convince me that transaction volumes that we are likely to see in the near future are a threat to the security and decentralization of bitcoin. Yes, if we experienced an explosion of growth, it could end up pricing some small businesses out of running a bitcoin node (while bad, I suspect even that wouldn’t be a complete disaster), but this is why most proposals revolve around keeping the block size limit in line with what the network will likely be able to handle, which is much more than 1 MB today.

Higher-level payment layers like the lightning network are nice to have, especially if there is an explosion in bitcoin adoption, but their (potential) existence doesn’t justify holding on-chain transaction volume substantially below what the network can handle.


6 thoughts on “To The Moon With Larger Blocks”

      • To be fair, I share Kess’ sentiment. This post doesn’t acknowledge multiple realities, like the reality that you don’t need to raise the block limit to get the effects that Chris wants (increased capacity), and the reality that increasing capacity is something that is happening already.

        Another unacknowledged reality is that we are already losing full nodes at the 1MB limit, and that this loss will most likely only continue. Increasing the blocksize is unnecessary (since you can increase capacity without it), and would accelerate the loss of full nodes, which would certainly harm Bitcoin’s decentralization (which is Bitcoin’s #1 priority).

        Nor does it acknowledge the reality that hard forks in general are a very bad idea in and of themselves and should be extremely difficult to accomplish, for very good reasons that are spelled out here:

        http://bitledger.info/hard-fork-risks-and-why-95-should-be-the-standard/

        And here:

        http://bitledger.info/why-a-hard-fork-should-be-fought-and-its-not-evil-to-discuss/

      • Greg, the last couple posts of mine you’ve commented on, it seems like you didn’t read them.

        > This post doesn’t acknowledge multiple realities, like the reality that you don’t need to raise the block limit to get the effects that Chris wants (increased capacity)

        I dedicated a section to this. I haven’t seen anything proposed that is as robust as making an on-chain transaction. We will see how the lightning network works in practice, but I wrote an entire other blog post explaining why I’m skeptical it will be as good as an on-chain transaction. Maybe they will prove me wrong, but until I see it in action I’m going to favor a block size increase.

        > Other unacknowledged realities is that we are already losing full nodes at the 1MB limit, and that this loss will most likely only continue.

        I also spent an entire section addressing that, and why the number of nodes isn’t the grand decentralization metric it’s made out to be. Feel free to criticize that argument, but just repeating something I already addressed doesn’t do it.

        > Nor does it acknowledge the reality that hard forks in general are a very bad idea in of themselves

        And I’ve written two articles addressing this. Hard forks are all around better than soft forks, in my opinion.

  1. > Now we can differ about what a “reasonable cost” is, but I’d say somewhere around $30-$50/month

    And by this you mean datacenters, right? Not owning the hardware is our base acceptable level, is it? How about you redo these calculations assuming people actually want to run this node on localhost. 5 years at $50 a month is a “whopping” $3000. That’s barely enough to afford a single high-performance server. The btcd team estimates you’ll need clustered computing before 32MB: https://blog.conformal.com/btcsim-simulating-the-rise-of-bitcoin/

    > But note that at those costs, not only can you process 2 MB, 4 MB, or 8 MB blocks, but also blocks much larger

    That all depends on what you mean by “much larger”: https://blog.conformal.com/btcsim-simulating-the-rise-of-bitcoin/

    You’re certainly not going to be able to do all credit card txs, let alone cash txs, microtxs and “blockchain 2.0”.

    > Like I stated earlier, if transaction volume increases to the point where the cost of running a node starts to become unreasonable for a small business, it’s likely to be closer to a decade from now, and only if transaction volume continues to double every year.

    What happens if tx volume grows, and grows beyond what even these “small business owners” are willing to afford? Would you have me believe the /r/btc hate machine will reverse its ironclad views towards scaling because suddenly nodes are important to them? Do you suggest these people will suddenly reverse course and accept Layer-2 after demonizing it and its creators for *years*? No, it’s not going to happen that way.

    > As I said earlier, I expect my computer and home internet connection to be able to run a full node for many more years.

    Many more years according to you? To me? I expect so too, but I also recognize the futility of making any future predictions about Bitcoin. We don’t know what the network looks like in 6 weeks. To make long term predictions like that is buffoonery.

    > If, say, 51% of the miners colluded and decided they were going to increase the inflation rate to pay themselves more, the coins they generate would not be accepted by people running full nodes. In other words, full nodes enforce the network rules and prevent any shenanigans like this.

    Unrealistic. As this debate has shown: miners will want to follow the exchanges and the VC-backed Corporations with the loudest voices. And increasingly, miners are huge Corporations themselves. If the trends remain on course, this should be deeply unsettling for your argument. These Corporations tend to be backed by well-known personalities who wouldn’t possibly risk losing face, let alone doing jail time, if the government were to ever cast an unfriendly eye towards them.

    Honestly, I can’t even express with words how disappointed I am reading this crap from you, Chris. It’s just mentally lazy. You’re not asking yourself any of the hard questions, you’re just making warm and fuzzy best-case-scenario predictions. Whatever helps you sleep at night.

    As if “full nodes” will enforce the network rules YOU want them to and not what rules the government wants them to when they’re all controlled by Corporations in datacenters…

    > We’re looking at probably decades from now

    This is yet another optimistic projection on your part. Whatever happened to you people and the “Fidelity Effect” – I guess that only gets shoved out of your keyboards when it suits your agenda? Do you understand that Corporations will take advantage of unlimited free or nearly free distributed storage space, and will do so possibly without even buying BTC for themselves? Nowhere does it say Bitcoin miners *must* be paid with BTC – as rewards decrease we can also pay them with fiat currency colored coins. And with Gavin Andresen saying he doesn’t think proof of work is long term viable with this datacenter tx validation model, it really makes me wonder what the future is of BTC as a store of value.

    > the best alternative proposal from small block advocates was to just use a centralized third party, like Coinbase, to pay people

    Voting pools and sidechains. Wrong again. It’s also rather ridiculous to pretend as if exchanges like Coinbase won’t play a dominant role in the future no matter what – lots of people prefer having someone else deal with the security and would prefer trusting insured funds to regulated entities. This is going to be true even if blocks are 100TB.

    > What good is having the underlying Bitcoin protocol remain decentralized if it’s too expensive for everyday use and the payment layers we actually use are horrifically centralized?

    How about we ask an Argentine what the value is of having a currency not controlled by their ruling State?

    > So to sum up, Maxwell and the other small block folks have failed to convince me that transaction volumes that

    Wow, we failed to convince you! Who would’ve guessed it?! Gee, I guess that means we should fork over to Classic because you guys really seem to care **so much** about decentralization you’re willing to turn it all over to datacenters while rationalizing your decision with warm and fuzzy fucking BULLSHIT.

    • @ananobread this type of comment is exactly why nobody believes small block people. It’s filled with uninformed hysteria and doesn’t account for optimizations that can be made (and would need to be made) to support larger blocks.

      For example, you link to the conformal analysis from two years ago, which estimates 10 minutes of signature verification for 32 MB blocks. I’m pretty sure that’s not considering the CPU gains from libsecp256k1, with which verification doesn’t take anywhere near that amount of time.

      Also, a prerequisite for larger blocks is a fix to block propagation. Ideally transactions would only be verified once, at the time they are received, not again when the block is transmitted. That makes verifying a block super fast, like a second or two, nowhere near the 10 minutes in the article you link to.

      Your analysis assumes no optimization. If that were the case, the network couldn’t support larger blocks. But my entire article assumes those optimizations will be made.

      The rest of your comments follow the same pattern. This is why your side has very little support. People who follow this stuff know that the bottlenecks you point out can be fixed with optimizations and it makes your side seem disingenuous when you pretend like those optimizations don’t exist.
