Rolling for initiative

At the start of the year, I wrote out some thoughts about Bitcoin priorities, probably most simply summed up as:

it’s probably more important to work on things that reinforce [Bitcoin’s] existing foundations, than neat new ideas to change them

In that post, I also wrote:

I’m particularly optimistic about an as yet unannounced approach that DCI has been exploring, which (if I’ve understood correctly) aims to provide long term funding for a moderate sized team of senior devs and researchers to focus on keeping Bitcoin stable and secure […]

It wasn’t something I’d even considered as a possibility at the time, but the world works in mysterious ways, and as it turns out, I’m now joining the Digital Currency Initiative to work on making that approach live up to its promise.

There are, I think, two ways to make systemic improvements in security. One is code and tooling improvements — reworking the code directly to remove bugs and make it more robust, and building tools like linters, continuous integration and fuzz testers that will then automatically eliminate potential bugs before they’re written or merged. I expect that will be where we’ll devote most of the effort.

But I think an equally important part of doing security well is having it be an integral part of development, not an add-on — while certainly some people will have more expertise than others, you want everyone thinking about security; in a similar way to wanting everyone to be thinking about performance if you want your system to work efficiently, or wanting everyone to be thinking about user experience if you want a smooth and consistent experience. That is, the other part of making systemic improvements in security is maintaining a culture that deems security a critical priority, and worth thinking about at all levels.

That may mean that I want to walk back my earlier conclusion that “neat new ideas [that] change [Bitcoin’s existing foundations]” are something to deprioritise. Because it certainly seems like people do want exciting new features, and given that, it quickly becomes super important that the people working on those features aren’t a separate group from the people who are deeply security-conscious, if we want to ensure those new features don’t end up compromising Bitcoin’s foundations. The alternative is to continually fight a rearguard action to either debug or prevent adoption of each neat new idea that hasn’t been developed with an appropriately adversarial mindset.

In particular, that may mean that working on things like ANYPREVOUT and TAPLEAFUPDATEVERIFY might have two ways of fitting into the “improve Bitcoin’s security” framework: it makes it easier to use bitcoin securely (ANYPREVOUT hopefully makes lightning safer by enabling eltoo and thus reduces the risks of toxic state; TAPLEAFUPDATEVERIFY may make improvements in cold storage possible, making on-chain funds safer from theft), but developing them in a way that puts security as a core goal (as compared to other priorities, eg “time to market”) might help establish traditions that improve security more broadly too.

(And I don’t mean to criticise the way things are going in Bitcoin Core so far — it’s a great project where security does take a front row seat pretty much all the time. The question I’m thinking about is how to make sure things stay that way as we scale up.)

Also, just to get it on the record: “security” means, in some sense, “the system works the way it’s intended to”, at least in regard to who can access/control what; but “who is intended to have what level of access/control” is a question you need to answer first. For me, Bitcoin’s fundamentals are that it’s decentralised, and that it’s a store of value that you, personally, can keep secure and choose to transfer if and when you please — which is really just another way of saying that it’s “peer-to-peer electronic cash”.

I don’t think Bitcoin gets anywhere by compromising on decentralisation: better to leave that to competing moneys, whether that be central bank issued currencies or altcoin tokens on the one hand, and higher layers that build on Bitcoin, like Liquid or exchanges, on the other. If those things succeed, that’s great — but having a money that’s a level playing field for everyone, powerful or not, is a fundamentally different thing that’s worth trying to make work.

There are plenty of details that go into that, and plenty of other things that are also important (for instance, I think you could also argue that many of Bitcoin’s other priorities, such as its fixed supply, privacy, or censorship resistance, can only be obtained by having a decentralised system); but I think it’s worth trying to pick the principles you’re going to stand for early, and for Bitcoin, I think the best place to start is decentralisation.

Sturm und drang und taproot activation

The Shipwreck – Joseph Mallord William Turner 1775-1851

Back at the end of 2019, I said on Stephan Livera’s podcast that activation of taproot is “something a lot of people in the community have very strong opinions of; so it’s probably going to be a Twitter flamefest or whatever about it.” It’s turned out both better and worse than I expected — better in that we got decent agreement on an activation method, merged it into core, and so far appear to be getting uptake by miners more rapidly than I was expecting; worse in that the “UASF” side of the debate seems to have gone weirdly off the rails.

(I’ve written in the past about activating soft forks in Bitcoin so I’ll just leave that link there if you want some general context for what the heck I’m talking about and otherwise dive right in)

Speedy Trial

The activation method included in Bitcoin Core 0.21.1 is called “Speedy Trial” and it’s implemented as a variant of BIP 9 (which was used for activating segwit and CSV/relative timelocks), modified in a few ways:

  • rather than having signalling not start for a month after the release of the new version, signalling was scheduled for just a few weeks after the merge of the activation parameters, and ended up starting on the same day the software was actually released
  • rather than having signalling continue for a year, it only continues for a bit over three months (ending sometime after August 11)
  • rather than having activation occur two weeks after lock in occurs, it is delayed until block 709632 (expected around mid November)
  • rather than requiring 95% of blocks to signal to lock in activation, only 90% of blocks are required to signal

The main idea is that we don’t have any reason to expect problems with taproot activation, so let’s just do something quickly: if we’re right, and there are no problems, the quick approach will work fine, and if we’re wrong and there are problems then we can do something else.

Shortening the timeframe (starting and ending signalling sooner than BIP 9’s recommendations) means that we can move on to dealing with problems much more quickly, while delaying activation helps ensure that there’s still time for users to upgrade and ensure miners play fair, despite signalling starting so quickly. Finally, the reduced threshold recognises that the BIP 9 mechanism doesn’t need as high a threshold as the old IsSuperMajority approach, so lowers it somewhat while remaining fairly conservative.

The broader rationale behind this approach was documented by Matt in his Modern Soft Fork Activation email early last year (point 4): adoption by the vast majority of hashpower reduces the risk to the network of activation while the rest of the network upgrades — users who don’t upgrade will follow the chain with the new rules because any chain that doesn’t follow the new rules will quickly become much shorter (the 90% figure means transactions in a chain invalid under the new rules only have ~1% chance of getting three confirmations before being reorged to a chain that’s valid under the new rules).
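As a sanity check on that ~1% figure (this is just my own back-of-the-envelope sketch, not something from Matt’s email), you can model the ~10% of hashpower that hasn’t upgraded racing the 90% that has, and ask how often the non-upgraded side finds three blocks before the upgraded side does:

    from math import ceil, comb

    # 90% of a 2016-block retarget period:
    print(ceil(0.9 * 2016))  # 1815 blocks need to signal for lock-in

    # Chance that the ~10% of hashpower mining a chain that's invalid under the
    # new rules finds its third block before the 90% enforcing the new rules
    # finds theirs (a simple negative-binomial race; ignores withholding etc):
    q, p = 0.10, 0.90
    print(sum(comb(2 + k, k) * q**3 * p**k for k in range(3)))  # ~0.0086, ie roughly 1%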

That strategy fits well with voluntary signalling by miners: if miners upgrade quickly, it’s safe to activate the new rules fairly quickly. If they don’t upgrade quickly, well then we need to wait for users to upgrade to have a UASF-style activation — but if users have all upgraded, it doesn’t much matter what miners do: if they mine blocks invalid under the new rules, they’ll just be ignored, the same as BCH or BSV blocks are ignored by Bitcoin nodes. So “Speedy Trial” just deals with the easy case — let’s see if things will work well, and get it over with if they do. If it doesn’t work well, it’ll all be over quickly, and we can move on to an activation method that doesn’t rely on miners, knowing that it’s needed.

“UASF”

While most people were happy to release Bitcoin Core with the Speedy Trial method, that’s not true of everyone, and a few people are instead encouraging use of an alternative implementation, forked from the Bitcoin Core codebase and using a different activation methodology, which requires taproot to be activated via signalling by a particular block height that is expected to arrive around November 2022.

I don’t recommend running that client under any circumstances.

The simplest reason is that it’s poorly maintained: for example, the change to set the activation parameters is PR#9, which was merged without any (public) review comments, about nine hours after it was filed, and about 29 hours before the meeting that was going to make the decision on those parameters. It also has a red-cross “failed CI tests” marker, mainly because various software-as-a-service CI systems have limits on how many jobs they’ll run for free, and in order to run all the CI tests for bitcoin, you either have to pay to get them run quickly, or you have to wait for a long time. And because it was forked from Bitcoin Core 0.21.0, none of the backports targeted to Bitcoin Core 0.21.1 have been merged, such as #20901, #21469, or #21640 — the lack of #21469 in particular means the UASF client won’t correctly parse bech32m addresses when paying to taproot addresses following BIP 350, rendering it incompatible with any taproot wallets following the recommended address format. Are these serious bugs that will lose you money today? No, probably not. But is this the best software to secure your BTC so you won’t lose money tomorrow? Also, no.
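In case it’s not obvious why a missing backport like #21469 matters: following the BIP 173/350 reference code, bech32m keeps bech32’s checksum algorithm but verifies it against a different constant, so software that only knows the old constant will reject every v1+ (ie taproot) address a wallet hands it. Roughly:

    def bech32_polymod(values):
        # BIP 173 checksum over the address's 5-bit values
        GEN = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
        chk = 1
        for v in values:
            top = chk >> 25
            chk = (chk & 0x1ffffff) << 5 ^ v
            for i in range(5):
                chk ^= GEN[i] if ((top >> i) & 1) else 0
        return chk

    def hrp_expand(hrp):
        return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

    BECH32_CONST = 1             # BIP 173: used for v0 (segwit) addresses
    BECH32M_CONST = 0x2bc830a3   # BIP 350: used for v1+ (taproot) addresses

    def checksum_ok(hrp, data, const):
        return bech32_polymod(hrp_expand(hrp) + data) == const

A client without the bech32m change computes the same polymod but compares it against 1, and so fails to parse exactly the addresses a taproot wallet will be generating.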

Beyond poor quality, it’s also marketed deceptively — eg, announcing it as “the Bitcoin Core 0.21.0 build w/ Taproot added in” and naming it “Bitcoin Core version 0.21.0-based Taproot Client 0.1” rather than making it clear that it’s being worked on by an entirely separate group of people from the Bitcoin Core developers. Perhaps if you look carefully at their bitcointaproot.cc site, after skipping past the big, bold “Bitcoin Core” heading in the middle of your screen when you first load it up, you might see the “Who maintains the Taproot Client software?” question and click through to see “Bitcoin Mechanic, Shinobi and stortz”, though you’ll likely have no way of figuring out who those people are.

“lot=false to lot=true”

It was (in my opinion) always likely we’d end up with one activation method in Bitcoin Core and a different UASF client published by Luke and others — when I did the survey of some devs, two-thirds didn’t want to go straight to a “flag day” style activation, but the remaining one-third did, and the BIP 148 experience already demonstrated that releasing a forked client was a realistic gambit if things weren’t going your way.

The original pseudonymous author of BIP 8 approved Luke’s patch adding himself as a co-author of that document in June last year, and in the same patch set, Luke introduced the lockinontimeout parameter, which seemed (to me at least) like a way that the same codebase could satisfy both goals. So for the eight or so months following that, I spent a bunch of time (see #950, #1019, #1020, #1021, #1023, #1063) trying to refine that so it would work as smoothly as possible, even if miners or others were deliberately trying to game the system.

The advantage of that approach over where we are today is that when a few opinionated people decide that a UASF is the only reasonable approach, it would be much easier to maintain a high quality fork of Bitcoin Core that has that feature — you’re only changing a single “false” to “true” and otherwise just updating documentation and the name, so there’s no particular difficulty in porting over other patches. (In contrast, when you’re using an entirely different mechanism, you have to touch code in lots of different places, and each of those has a chance of conflicting with any other patches you might also want to include.)
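To illustrate what I mean (with a made-up structure, not Bitcoin Core’s actual chainparams code, and placeholder heights rather than proposals), the consensus-relevant difference between the two clients under that approach boils down to something like:

    # Hypothetical sketch of BIP 8 deployment parameters for taproot:
    taproot_deployment = {
        "bit": 2,                  # version bit used for signalling
        "startheight": None,       # placeholder
        "timeoutheight": None,     # placeholder
        "lockinontimeout": False,  # the UASF fork flips only this to True
    }

Everything else (signalling, thresholds, state transitions) is shared code, so keeping such a fork up to date with upstream fixes stays close to trivial.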

By mid February it was looking (to me at least) pretty much like that was how things would play out, and we would merge BIP 8 into core with taproot set as lot=false, but with the code ready for a switch to lot=true. There was still work to be done to make lot=true safe in the adversarial conditions where it might have any value, but it seemed plausible that we could make progress on that over the next few months — pulling in some of the fixes that were already done for the BIP 148 code in 2017, and adding improvements on top of that.

But that was about the point any chance of consensus collapsed: the suggestion that “core should just release both clients and let the people choose”, and the consensus risks that implies (which are complicated enough that they’d take a whole article in themselves to cover), concerned Suhas seriously enough to set up a blog and concerned Matt enough to go back to square one on activation methods. Meanwhile on the other side, Luke declared “LOT=False is dangerous and shouldn’t be used”.

The “UASF (LOT=true) kick off meeting” was then announced as happening a couple of days later (though work on the UASF website had already begun a week earlier, prior to the “LOT=false is dangerous” post), which ended up including some gloating about the confusion, along with promises to make it hard to come to consensus (“<luke-jr> personally I plan to NACK any LOT=False; but I doubt I would need to at this point (devs pushing against LOT=True seem to be off shed-painting other bad ideas now)”).

Perhaps someone with more ranks in the Diplomacy skill could’ve done better and we could have stayed on that track, but, at least for me, those were pretty clear “Dead End” signs.

Speedy Trial was proposed a few days later, providing a new track. It took under six hours to get a first draft PR implementing that proposal up, but then an additional 57 days to actually get it included in a release. Some of that was due to problems with the original draft, some was due to improving test coverage, some was due to the regular release candidate process, and some was certainly due to an unexpected certificate revocation, but a lot of time was effectively wasted: eg, there was a port of Speedy Trial on top of the BIP 8 patches proposed a few days after the initial PR above, effectively doubling the review load and splitting the development effort for most of that time, and then there were the promised NACKs, along with long delays with getting BIP updates merged.

I say “effectively wasted”, but perhaps that’s not fair: exploring alternative ways of writing code helps you understand what’s going on, even if you throw it away (certainly, I learned something from it: notably that height based signalling isn’t compatible with testnet behaviour). Given the sudden failure of the previous lot=false approach, I don’t think there was ever any realistic hope of achieving a better degree of consensus, but of course, I could be wrong, and some things are worth trying even when it seems hopeless. So, obviously, draw your own conclusions on whether the time spent there was worthwhile or not.

Speedy Trial vs UASF

I think there are fundamentally three reasons why some people are still sticking to the UASF approach rather than being happy with the Speedy Trial approach — whether in practice, despite the poor implementation, or more in principle, eg by suggesting that Bitcoin Core should be deploying some sort of UASF backstop now, rather than solely doing Speedy Trial.

I think the simplest of these reasons is something along the lines of “BIP 8 was proposed in 2017, why go to all this hassle instead of just doing it?” (eg adam3us or michaelfolkson) or more assertively something like “We already agreed to do BIP 8, why are you violating community consensus?” (eg luke-jr or MarkFriedenbach) The problem with that is that the BIP 8 we have today is not the BIP 8 that was proposed in 2017 or even the one we had in January this year — over time it’s had the lot=true/false parameter added, had a compulsory signalling phase added, had numerous tweaks to the state behaviour added, and most recently had a lock-in delay added. It’s never been used outside of the regression test environment, and bugs were being found and fixed as recently as February and throughout March. That’s not unexpected — what we had in 2017 was an idea, but as with most ideas, things get more complicated when you try to actually make them a reality. And sometimes it turns out that your original idea wasn’t so great in the first place.

Another reason is something along the lines of “If we think a UASF might be necessary in a few months in the event Speedy Trial doesn’t hit 90%, why not do it now?” There are two answers to that: the first is that the approach we were working towards had collapsed, and it would likely take months to get agreement on an alternative one — by which time Speedy Trial would have finished anyway, whether it succeeds or fails. (That it took two months to even get Speedy Trial out suggests that might even be an underestimate.) So why not get started with the easy part while we rethink the hard part? The second is that we’re likely to learn things from Speedy Trial, and that can inform our decisions on how to deploy the UASF. From my perspective we’ve already learnt some things:

  • miners/pools didn’t start signalling prior to the activation logic reaching the STARTED state — that probably means there’s less “false” signalling than some of us feared/expected
  • pools have been upgrading to signal fairly quickly, adding credence to their prior statements as recorded on taprootactivation.com
  • Poolin have reported that some ASIC firmware doesn’t support signalling via version bits (see the sketch after this list) — maybe that’s an easy fix, or maybe we should move signalling to a different mechanism; as a result they’ve only enabled signalling for some of their servers, and maybe 1THash has done the same
  • maybe we should be expecting teething problems like this and in the future encourage signalling in advance of it actually mattering
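For reference, here’s roughly what “signalling via version bits” amounts to, as a sketch of the BIP 9 style check with bit 2 being the bit assigned to the taproot deployment:

    # A block signals for taproot if its nVersion uses the "001" top-bits
    # pattern and has the deployment's assigned bit (2) set. Firmware that
    # can't set arbitrary version bits can't produce such blocks.
    def signals_taproot(n_version, bit=2):
        return (n_version & 0xE0000000) == 0x20000000 and bool((n_version >> bit) & 1)

    print(signals_taproot(0x20000004))  # True: top bits ok, bit 2 set
    print(signals_taproot(0x20000000))  # False: bit 2 not set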

An obvious thing we’ll learn if Speedy Trial fails is that we can’t reach 90% of miners signalling in three months — if we discover that’s for practical reasons (like the issue poolin described), then making signalling mandatory is probably a bad idea if a significant amount of hashrate can’t actually do it — so perhaps lowering the threshold further would be a good idea, or changing the way we signal to something that more mining hardware is compatible with would be worthwhile, or perhaps changing the approach entirely might be a better bet. We’ll also likely learn how enthusiastic businesses and node operators are to upgrade to support taproot — the faster everyone does that, the less time we need to wait before triggering a flag day. But there’s a chicken and egg problem — you can’t pick a flag day without knowing how fast people will upgrade, people can’t upgrade until there’s a client, you can’t release a flag day/UASF client until you pick a date for the flag day, you can’t pick a flag day without knowing how fast people will upgrade… rinse, repeat. So being able to release a client that doesn’t set a flag day lets you break that cycle and get a somewhat informed answer. And beyond all that, perhaps there are unknown unknowns that we’ll find out about.

The third reason for advocating for a UASF now, in my opinion, is just that a bunch of people enjoyed the BIP 148/no2x drama and want to play it out again in much the same way. Looked at from the right viewpoint it was a really straightforward heroic saga: a small band of misfits get together to fight the big bad, against the advice of the establishment, build up a popular movement, and win the battle without a shot being fired. Total feel-good Hollywood blockbuster plot line.

You can see little demonstrations of this sentiment every now and then, eg when the BIP 148 big bad started signalling, adam3us’s response was “don’t thank them too much – last time they were among the ring leaders in tactical veto games. 148 lurking. never forget.” or zndtoshi’s take “Dude I wish that ST would fail so that miners get it that we can enforce via uasf. But I have a feeling they did learn from the segwit saga and will just signal now.”

And I mean, that’s fine — if you’ve got an empowering narrative for why you’re doing what you’re doing, good for you! But it becomes a problem if remembering the past blinds you to the present and your plotline doesn’t actually match reality — just because you won the last battle with a cavalry charge, doesn’t mean it’s necessarily a good idea for this battle, and just because it was fun fighting an enemy in the past doesn’t mean it’s a smart idea to find more enemies now.

Anyway, that’s my collection of thoughts on where we’re at. No particular conclusion from them — I guess I have a whole other set of thoughts on what to do next — but I wanted to get these written down somewhere before moving on.

Fixing UASF

Ambiguous titles are tight.

I’ve always been a little puzzled by the way the segwit/uasf/uahf/segsignal drama played out back in 2017 — there was a lot of drama about the UASF for a while, and then, when push came to shove, suddenly miners switched to being 100% in favour of it, and there were no problems at all. There was even the opportunity for a bit of a last minute “screw you”: BIP91 could have been activated so that it only locked in after BIP148 activated, potentially resulting in a segwit-enforcing chain that wasn’t valid according to BIP148 — not quite “if I can’t have it, nobody can”, but at least a way to get a final punch in prior to losing the fight. But it just didn’t happen that way.

So what if “losing the fight” wasn’t really what happened — what if the fight had been fixed right from the start?

"We are preparing a UAHF to the market. We will have two kinds of Bitcoin if UASF is activated. Big block vs 1MB block. Let us trade." - Jihan Wu Arp 5, 2017

I think that was a day or two before Greg posted about ASICBoost to the bitcoin-dev list — which is interesting since, prior to the ASICBoost factor being revealed, I don’t think the UASF approach had all that much traction. For example, here’s ebliever commenting on reddit in response to the ASICBoost reveal:

I didn’t like the UASF when first proposed because it seemed radical and a bad precedent. But given the crisis if Bitmain can’t be stopped in short order, to save the rest of the mining industry I’d favor the UASF as an emergency measure ASAP.

Why reveal plans for a UAHF to defend against a UASF before the UASF even has significant support?

Well, one reason might be that you wanted to do a UAHF all along. The UAHF became BCH, and between August 2017 and November 2018, BCH had the “advantage” over regular bitcoin in that you could do covert ASICBoost mining on it. (In November 2018 BCH was changed in a way that also prevented covert ASICBoost, and wonder of wonders, a new hard fork of BCH instantly appeared, BSV)

After all, there weren’t many other outcomes on the table that would have allowed covert ASICBoost to continue — the New York Agreement was aiming to do bigger blocks and segwit, which still would have blocked it, the “Bitcoin Unlimited” split had basically failed, and stalling segwit probably wouldn’t work forever.

There’s a history of the BCH fork from Haipo Yang of ViaBTC. I think it’s pretty interesting in its own right, but for the purposes of this post the interesting stuff is in section 2 — with Bitcoin Unlimited’s failure to achieve a split occurring just prior to that section. In particular, it includes the argument:

Fortunately, small-hashrate fork can be done without others’ support, and it seemed to be the only feasible direction for the big-block supporters.

Even at this time, most of the big block advocates still placed their hope on the SegWit+2MB plan reached by the New York Consensus. I made it clear that this road was not going to work

It also gives the following timeline leading up to the BCH split:

Wu Jihan has got a Plan B. While supporting the New York Consensus, he took the small-hashrate fork as a backup. […] At that time, the core members of the Core team launched the UASF (User Activated Soft Fork) campaign, and planned to force the activation of SegWit on August 1, 2017. So we decided to activate UAHF (User Activated Hard Fork) on the same day.

So at least according to that timeline, the NYA was already written off as a failure and the BCH UAHF was already being worked on prior to UASF being a thing, and picking the same day to do the UAHF was just a matter of convenience, not a desperate attempt to save Bitcoin from a UASF at all. That’s not confirmation that the UAHF was planned from the start in order to save covert ASICBoost — but it is at least in line with the argument that UAHF was the goal all along, rather than a side effect of trying to oppose the UASF.

The thing is, that leaves nobody having been opposed to the UASF: the BCH camp was just using it as a distraction while they split off for their own reasons; the NYA camp were supporting it as the first step of S2X; conservative folks thought it was risky but were happy to see segwit activated; and obviously bip148 supporters were over the moon.

And that’s relevant today to discussions of the “bip8 lot=true” approach, which proposes using the same procedure as bip148 — by some only as a response to delaying tactics, by others as the primary or sole method.

Because despite claims that running a UASF client has no risks, that is fundamentally not true. There are at least two pretty serious risks: the first is that you’ll go out of consensus with the network, nobody will mine blocks you consider valid, and you’ll be unable to receive payments until you abandon your UASF client — that alone is likely enough risk for businesses and exchanges to not be willing to run a UASF client; and the second is that people split down the middle on supporting and opposing the UASF and we have an actual chainsplit, resulting in significant work as one or both sides figure out how to avoid being spammed by nodes following the other chain, add replay protection, and protect their preferred system from suffering from a difficulty bomb that makes it uneconomic to continue mining blocks.

All of that’s fine if you’re confident any UASF you support will easily win the day — shades of Trump’s “trade wars are good, and easy to win” there — but if you’re relying on the experience with segwit and bip148 as your evidence that UASF’s will easily win, perhaps the above is some cause for doubt. It is for me, at any rate.

(Of course, not being easy to win doesn’t mean unwinnable or too scary to even risk fighting; but it does mean building up your strength before picking a fight. For bitcoin, at a minimum, that means a lot more work on making p2p robust against the potential of a more-work invalid chain that your peers may consider valid.)

Bitcoin in 2021

I wrote a post at the start of last year thinking about my general priorities for Bitcoin and I’m still pretty happy with that approach — certainly “store of value” as a foundation feels like it’s held up!

I think over the past year we’ve seen a lot of people starting to hold a Bitcoin balance, and that we’ll continue to do so — which is a win, but following last year’s logic it also means we’ll want to start paying more attention to the later parts of the funnel as well: if we (for instance) double the number of people holding Bitcoin, we also want to double the number of people doing self-custody and double the number of people transacting over lightning; and ideally we’d want that in addition to whatever growth in self-custody and layer 2 transactions we’d already been aiming for if Bitcoin adoption had remained flat.

That said, I’m not sure I’m in a growth mindset for Bitcoin this year, rather than a consolidation one: consider the BTC price at the start of the past few years: 2016: ~$450, 2017: $1000, 2018: $13,000, 2019: $3700, 2020: $8000, 2021: $30,000. Has there been an 8x increase in security and robustness during 2016, 2017 and 2018 to match the 8x price increase from Jan 2016 to Jan 2019? Yeah, that’s probably fair. Has there been another 8x increase in security and robustness during 2019 and 2020 to match the 8x price increase there? Maybe. What if you’re thinking of a price target of $200,000 or $300,000 sometime soon — doesn’t that require yet another 8x increase in security and robustness? Where’s that going to come from? And those 8x factors are multiplicative: if you want something like $250k by December 2021, that’s not a “three-eights-are-24” times increase in robustness over six years (2016 to 2022), it’s an “eight-cubed-is-512” times increase in robustness! And your mileage may vary, but I don’t really think Bitcoin’s already 500x more robust than it was five years ago.
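Or, putting the same compounding argument in numbers (rounding freely, and treating “robustness should scale with price” as a rule of thumb rather than a law):

    start_2016 = 450  # ~BTC price, Jan 2016
    for label, price in [("Jan 2019", 3_700), ("Jan 2021", 30_000), ("a $250k target", 250_000)]:
        print(f"{label}: ~{price / start_2016:.0f}x the Jan 2016 price")
    # ~8x, ~67x (roughly 8*8), ~556x (roughly 8*8*8) -- hence "eight-cubed", not "three eights"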

So as excited as I am about taproot and the possibilities that opens up (PTLCs and eventually eltoo on lightning, scriptless scripts and discreet log contracts, privacy preserving proof of reserves, cheap multisig — the list might not be infinite but at least seems computationally intractable), I think I’m now even more of the view than I was this time last year that it’s probably more important to work on things that reinforce the existing foundations than on neat new ideas to change them.

There are already a bunch of areas where Bitcoin’s approach to security and robustness has improved technically over the past few years: we’ve got more people doing reviews (eg, via the PR review club, or getting introduced to Bitcoin via the Chaincode Residency etc), we’ve got deeper and more diverse continuous integration testing (thanks both to more integrations being enabled via github, and travis becoming unreliable enough to force looking at other approaches), fuzz testing has improved a lot and become a bit more broadly used, and I think static analysis of the codebase has improved a bit. There have been a bunch of improvements in code standards (eg using safe pointers, locking annotations, spans instead of raw pointers) too, I think it’s fair to say. I haven’t done an analysis here, just going from gut feel and recollection.

With a focus on robustness, to me, the areas to prioritise in the short term are probably:

  1. Modularisation — eg, so that we can better leverage process separation to reduce security impacts, and better use fuzz testing to catch bugs in edge cases. There’s already work to split the gui and wallet into separate processes, and while that’s merged, it’s not part of the standard build yet. Having the p2p-network-facing layer also be a separate process might be another good win. While it’s a tempting goal, I think libconsensus is still a ways off — p2p, mempool management, and validation rules are currently pretty tightly coupled — but there are steps we can make towards that goal that will be improvements on their own, I think.
  2. The P2P network — This is the obvious way to attack Bitcoin since by its nature everyone has access to it. There are multiple levels to this: passively monitoring the p2p network may allow you to violate users’ privacy expectations, while actively isolating users onto independent networks can break Bitcoin’s fundamental assumptions (you can’t extend the longest chain if you can’t communicate with any of the people who have the longest chain). There are also plenty of potential problems that someone could cause in between those extremes that could, eg, break assumptions that L2 systems like lightning make. Third-party (potentially centralised) alternatives as backups for the p2p network may also be valuable support here — things like Blockstream Satellite, or block relay over ham radio, or headers over DNS: those can mend splits in the p2p network that the p2p layer itself can’t automatically fix. Or efficiency improvements like erlay or block-relay-only can allow a higher degree of connectivity making attacks harder.
  3. CI, static analysis, reproducible builds — Over the past year, travis seems like it’s gone from having the occasional annoying problem to being pretty much unusable for open source projects. CI is an important part of both development and review; having it break makes both quite a lot harder. What we’ve got at this point seems pretty good, but it’s new and not really time-tested yet, so I’d guess a year of smoothing out the rough edges is probably needed. I think there’s other “CI”-ish stuff that could be improved, like more automated IBD testing (eg, I think bitcoinperf is about 3 months out of date). Static analysis serves a similar goal to tests in a different way; and while we’ve already got a lot of low hanging fruit of this nature integrated into CI via linters and compiler options, I suspect there’s still some useful automation that could happen here. Finally, nailing down the last mile to ensure that people are running the software the devs are testing is always valuable, and I think the nixos work is showing some promise there.
  4. Third-party validation — We’ve had a few third-party monitoring tools arise lately — various sites monitoring feerate and mempool sizes, forkmonitor checking for stale blocks (and double-spends), or, at a stretch, optech’s reviews of wallet behaviour and segwit support. There’s probably a lot of room for more of this.

I’d love to list formal verification at the consensus layer as a priority, but I think there’s too much yak-shaving needed first: it would probably need all the refactoring to get to libconsensus first, then would likely need that separated into its own process, which you could only then start defining a formal spec for, which in turn would give you something you could start doing formal verification against. I suspect we’ll want to be at that point within another cycle or two though.

I’m not concerned about mining — eventually there might be a risk that the subsidy is too small and there’s not enough fee income, but that’s not going to happen while the price doubles faster than the pre-scheduled halvings. There’s certainly centralisation risks, whether in ASIC manufacture, in hardware ownership/control, or at the pool level, but my sense of things is that’s not getting worse, and is not the biggest immediate risk. Maybe I’m wrong; if so, hopefully the people who think so are working on preventing problems from arising, rather than taking advantage of them.

There are other levels of robustness and security beyond just “keep the network working”, if you consider the question of “how to prevent my coins from being lost/stolen?” more broadly. The phishing attacks and potential for physical attacks resulting from the Ledger leak are an easy example of a problem of this sort, but exchange hacks/failures in general, malware swapping addresses so your funds go to an attacker instead of the intended recipient, and lost access to keys are also pretty bad. I think descriptors, miniscript and taproot multisig probably provide a good path forward to help prevent losing access to keys; and it’s possible that progress on BIP322 (signing messages against a Bitcoin address) may provide a path to avoiding address swapping attacks.
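To give a concrete (if simplified) picture of that first part, a 2-of-3 policy written as an output descriptor is the sort of thing that’s straightforward to back up, share with cosigners, and recover from (the xpubs below are just placeholders):

    # Illustrative only: a 2-of-3 multisig expressed as an output descriptor.
    # The xpubs are placeholders; a real setup would use your cosigners' keys,
    # and a taproot wallet could express a similar policy with a tr() descriptor.
    cosigners = ["xpubAAAA.../0/*", "xpubBBBB.../0/*", "xpubCCCC.../0/*"]
    descriptor = "wsh(sortedmulti(2,{}))".format(",".join(cosigners))
    print(descriptor)
    # Losing any one key (or one hardware signer) doesn't lose the funds.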

Technical solutions are, in some sense, all you can hope for if you’re doing self-custody; but where a bank/custodian is involved (good) regulation might be useful too: requirements to keep customer data protected or destroyed, third-party audits to ensure the best-practices procedures you’re claiming to follow are actually being followed, etc. If custodians store funds in taproot addresses, it may be feasible to do privacy preserving (zero-knowledge) proofs of solvency, eg, making it harder for fly-by-night folks to run ponzi schemes or otherwise steal their customers’ funds.

Obviously where possible these sorts of standards should be implemented via audited open source code rather than needing extensive implementation costs by each company. But one other thing to think about is whether regulations of this nature could be set up as industry standards (“we comply with the industry standard, competitor X doesn’t”) rather than necessarily coming from a government regulator — for one, it certainly seems questionable whether government regulators have the background to pick good best practices for cryptocurrency systems to follow. Though perhaps it would be better to have something oriented towards “consumer rights” than “industry” per se, to avoid it just being a vector for regulatory capture.

I think there’s been good progress on stabilising Bitcoin development — in 2015 through 2017 we were in a phase where people were seriously thinking of replacing Bitcoin’s developers — devs were opposing a quick blocksize increase, so the obvious solution was to replace them with people who weren’t opposed. If you think of Bitcoin as an experimental, payments-oriented, tech startup, that’s perhaps not a bad idea; but if you think of it as a store of value it’s awful: you don’t get a reliable system by replacing experts because they think your plan is wrong-headed, and you don’t get a good store of value without a reliable system. But whatever grudges might show up now and then on twitter, that seems to be pretty thoroughly in the past, and there now seems to be much broader support for funding devs, and much better consensus on what development should happen (though perhaps only because people who disagree have moved to different projects, and new disagreements haven’t yet cropped up).

But while that might be near enough the 64x improvement to support today’s valuation, I think we probably need a lot more to be able to support continued growth in adoption.

Hopefully this is buried enough to not accidentally become a lede, but I’m particularly optimistic about an as yet unannounced approach that DCI has been exploring, which (if I’ve understood correctly) aims to provide long term funding for a moderate sized team of senior devs and researchers to focus on keeping Bitcoin stable and secure — that is auditing code, developing tools to find and prevent bugs, and doing targeted research to help the white hats stay ahead in the security arms race. I’m not sure it will get off the ground or pass the test of time, and if it does, it will probably need to be replicated by other groups to avoid becoming worryingly centralising, but I think it’s a promising approach for supporting the next 8x improvement in security and robustness, and perhaps even some of the one after that.

I’ve also chatted briefly with Jeremy Rubin who has some interesting funding ideas for Judica — the idea being (again, if I haven’t misunderstood) to try to bridge the charitable/patronage model of a lot of funding of open source Bitcoin dev, with the angel funding approach that can generate more funds upfront by having a realistic possibility of ending up with a profitable business and thus a return on the initial funding down the road.

That seems much more blue-sky to me, but I think we’ll need to continue exploring out-there ideas in order to avoid centralisation by development-capture: that is, if we just expand on what we’re doing now, we may end up where only a few companies (or individuals) have their quarterly bottom line directly affected by development funding, and are thus shouldering the majority of the burden while the rest of the economy more-or-less freeloads off them, and then having someone see an opportunity to exploit development control and decide to buy them all out. A mild example of this might be Red Hat’s purchase of CentOS (via an inverse-acquihire, I suppose you could call it), and CentOS’s recent strategy change that reduces its competition with Red Hat’s RHEL product.

(There are also a lot of interesting funding experiments in the DeFi/ethereum space in general, though so far I don’t think they feed back well into the “ongoing funding of robustness and security development work” goal I’m talking about here)

There are probably three “attacks” that I’m worried about at present, all related to the improvements above.

One is that the “modularisation” goal above implies a lot of code being moved around, with the aim of not really changing any behaviour. But because the code that’s being changed is complicated, it’s easy to change behaviour by accident, potentially introducing irritating bugs or even vulnerabilities. And because reviewers aren’t expecting to see behaviour changes, it can be hard to catch these problems: it’s perhaps a similar problem to semi-autonomous vehicles or security screening — most of the time everything is fine so it’s hard to ensure you maintain full attention to deal with the rare times when things aren’t fine. And while we have plenty of automated checks that catch wide classes of error, they’re still far from perfect. To me this seems like a serious avenue for both accidental bugs to slip through, and a risk area for deliberate vulnerabilities to be inserted by attackers willing to put in the time to establish themselves as Bitcoin contributors. But even with those risks, modularisation still seems a worthwhile goal, so the question is how best to minimise the risks. Unfortunately, beyond what we’re already doing, I don’t have good ideas how to do that. I’ve been trying to include “is this change really a benefit?” as a review question to limit churn, but it hasn’t felt very effective so far.

Another potential attack is against code review — it’s an important part of keeping Bitcoin correct and secure, and it’s one that doesn’t really scale that well. It doesn’t scale for a few reasons — a simple one is that a single person can only read so much code a day, but another factor is that any patch can have subtle impacts that only arise because of interactions with other code that’s not changing, and being aware of all the potential subtle interactions in the codebase is very hard, and even if you’re aware of the potential impacts, it can take time to realise what they are. Having more changes thus is one problem, but dividing review amongst more people is also a problem: it lowers the chance that a patch with a subtle bug will be reviewed by someone able to realise that some subtle bug even exists. Similarly, having development proceed quickly and efficiently is not always a win here: it reduces the time available to realise there’s a problem before the change is merged and people move on to thinking about the next thing. Modularisation helps here at least: it substantially reduces the chance of interactions with entirely different parts of the project, though of course not entirely. CI also helps, by automating review of classes of potential issues. I think we already do pretty well here with consensus code: there is a lot of review, and things progress slowly; but I do worry about other areas. For example, I was pretty surprised to see PR#20624 get proposed on a Friday and merged on Monday (during the lead up to Christmas no less); that’s the sort of change that I could easily see introducing subtle bugs that could have serious effects on p2p connectivity, and I don’t think it’s the sort of huge improvement that justifies a merge-first-review-later approach.

The final thing I worry about is the risk that attackers might try subtler ways of “firing the devs” than happened last time. After all, if you can replace all the people who would’ve objected to what you want to do, there’s no need to sneak it in and hope no one notices in review, you can just do it; and even if you don’t get rid of everyone who would object, you at least lower the chances that your patch will get a thorough review by whoever remains. There are a variety of ways you can do that. One is finding ways of making contributing unpleasant enough that your targets just leave on their own: constant arguments about things that don’t really matter, slowing down progress so it feels like you’re just wasting time, and personal attacks in the media (or on social media), for instance. Another is the cancel-culture approach of trying to make them a pariah so no one else will have anything to do with them. Or there’s the potential for court cases (cf Angela Walch’s ideas on fiduciary duties for developers) or more direct attempts at violence.

I don’t think there’s a direct answer to this — even if all of the above fail, you could still get people to leave by offering them bunches of money and something interesting to do instead, for example. Instead, I think the best defense is more cultural: that is, having a large group of contributors, with strong support for common goals (eg decentralisation, robustness, fixed supply, not losing people’s funds, not undoing transactions) that’s also diverse enough that they’re not all vulnerable to the same attacks.

One of the risks of funding most development in much the same way is that it encourages conformity rather than diversity — an obvious rule for getting sponsored is “don’t bite the hand that feeds you” — eg, BitMEX’s Developer Grant Agreement includes “Not undertaking activities that are likely to bring the reputation of … the Grantor into disrepute”. And I don’t mean to criticise that: it’s a natural consequence of what a grant is. But if everyone working on Bitcoin is directly incentivised to follow that rule, what happens when you need a whistleblower to call out bad behaviour?

Of course, perhaps this is already fine, because there are enough devs who’ll happily quit their jobs if needed, or enough devs who have already hit their FU-money threshold and aren’t beholden to anyone?

To me though, I think it’s a bit of a red flag that LukeDashjr hasn’t gotten one of these funding gigs — I know he’s applied for a couple, and he should superficially be trivially qualified: he’s a long time contributor, he’s been influential in calling out problems with BIP16, in making segwit deployment feasible, in avoiding some of the possible disasters that could have resulted from the UASF activation of segwit, and in working out how to activate taproot, and he’s one of the people who’s good at spotting subtle interactions that risk bugs and vulnerabilities of the sort I talked about above. On the other hand, he’s known for having some weird ideas, can be difficult to work with, and maybe his expectations are unrealistic. What’s that add up to? Maybe he’s a test case for this exact attack on Bitcoin. Or maybe he’s just had a run of bad luck. Or maybe he just needs to sell himself better, or adopt a more business-friendly attitude — and I guess that’s the attitude to adopt if you want to solve the problem yourself rather than rely on someone else to help.

But… if we all did that, aren’t we hitting that exact “conformity” problem; and doesn’t that more or less leave everyone vulnerable to the “pariah” attack, exploitable by someone pushing your buttons until you overreact at something that’s otherwise innocuous, then tarring you as the sort of person that’s hard to work with, and repeating that process until that sticks, and no one wants to work with you?

While I certainly (and tautologically) like working with people who I like working with, I’m not sure there’s a need for devs to exclusively work with people they find pleasant, especially if the cost is missing things in review, or risking something of a vulnerable monoculture. On the other hand, I tend to think of patience as a virtue, and thus that people who test my patience are doing me a service in much the same way exams in school do — they show you where you’re at and what you need to work on — so it might also be that I’m overly tolerant of annoying people. And I did also list “making working on Bitcoin unenjoyable” as another potential attack vector. So I don’t know that there’s an easy answer. Maybe promoting Luke’s github sponsors page is the thing to do?

Anyway, conclusion.

Despite my initial thoughts above that taproot might be less of a priority this year in order to focus on robustness rather than growth, I think the “let wallets do more multisig so users’ funds are less likely to be lost” idea is still a killer feature, so I think that’s still #1 for me. I think trying to help with making p2p and mempool code be more resilient, more encapsulated and more testable might be #2, though I’m not sure how to mitigate the code churn risk that creates. I don’t think I’m going to work much on CI/tests/static analysis, but I do think it’s important so will try to do more review to help that stuff move forward.

Otherwise, I’d like to get the anyprevout patches brought up to date and testable. In so far as that enables eltoo, which then allows better reliability of lightning channels, that’s kind-of a fit for the robustness theme (and robustness in general, I think, is what’s holding lightning back, and thus fits in with the “keep lightning growing at the same rate as Bitcoin, or better” goal as well). It’s hard to rate that as highly as robustness improvements at the base Bitcoin layer though, I think.

There are plenty of other neat technical things too; but I think this year might be one of those ones where you have to keep reminding yourself of a few fundamentals to avoid getting swept up in the excitement, so keeping the above as foundations is probably a good idea.

Otherwise, I’m hoping I’ll be able to continue supporting other people’s dev funding efforts — whether blue sky, or just keeping on with what’s working so far. I’m also hoping to do a bit more writing — my resolution last year was meant to be to blog more, and didn’t really work out, so why not double down on it? Probably a good start (aside from this post) would be writing a response to the Productivity Commission Right to Repair issues paper; I imagine there’ll probably be some more crypto related issues papers to respond to over this year too…

If for whatever reason you’re reading this looking for suggestions you might want to do rather than what I’m thinking about, here are some that come to my mind:

  • Money: consider supporting or hiring Luke, or otherwise supporting (or, if it’s in your wheelhouse, doing) Bitcoin dev work, or supporting MIT DCI, or funding/setting up something independent from but equally as good as MIT DCI or Chaincode (in increasing order of how much money we’re talking). If you’re a bank affected by the recent OCC letter on payments, making a serious investment in lightning dev might be smart.
  • Bitcoin code: help improve internal test coverage, static analysis, and/or build reproducibility; setup and maintain external tests; review code and find bugs in PRs before they get merged. Otherwise there’s a million interesting features to work on, so do that.
  • Lightning: get PTLCs working (using taproot on signet or ecdsa-based), anyprevout/eltoo, improve spam prevention. Otherwise, implementing and fine-tuning everything already on lightning’s TODO list.
  • Other projects: do more testing on signet in general, test taproot integration on signet (particularly for robustness features like multisig), monitor blockchain and mempool activity for oddities to help detect and prevent potential attacks asap.

(Finally, just in case it’s not already obvious or clear: these are what I think are priorities today, there’s not meant to be any implication that anything outside of these ideas shouldn’t be being worked on)

Activating soft forks in Bitcoin

General background: Bitcoin is a consensus system — it works because there are a set of rules on how Bitcoin transactions work, and everyone agrees on what they are. Changing those rules is called “forking” — when some people want to change the rules in a non-backwards compatible way while others don’t, that results in a new altcoin that follows the changed rules (eg BCH), while Bitcoin’s rules stay the same; and when everyone agrees to change the rules in a backwards compatible way, we have what’s called a soft fork. Most of the interesting development in Bitcoin doesn’t require changes to the consensus rules, but some changes do. In essence, these sorts of changes touch the fundamentals of Bitcoin, and thus warrant extra care and attention.

Specific background: The proposed taproot soft fork is something we’ve been working on for quite a while now, and the underlying code changes got merged into the bitcoin codebase a bit over a week ago, just in time for the 0.21 feature freeze. Those changes allow the new taproot rules to be used for testing purposes on the regtest chain, and also on the new signet chain, but do not change how things work on the real, live, Bitcoin network. The idea there is to allow people to check that the major upgrade to 0.21 works as expected and is safe to widely deploy, and only after that’s done worry about the soft fork. Exactly how to activate the soft fork is something of an open question though — while we’ve done a number of them in the past, the last one ended up a bit of a debacle. Back in July, we started discussing activation methods more seriously, and came up with some ideas.

At the time, I wanted to get a better idea of what people thought of the fundamental constraints, so I tried writing up a survey and sent an email to a bunch of smart dev-type people inviting them to fill it in if they were interested:

We seem to be getting to the point where people are making memes about activation methods [0] [1] [2], but I think we’re also still at the point where pretty smart people still have big differences of opinion over technical issues [3].

I feel like we’ve made some progress on ##taproot-activation, but talking with harding after he did his wiki summary of the state of things, I didn’t feel like that was quite getting at the heart of the differences. So I’ve had a go at writing up a survey for (what I think are) the underlying questions that are driving the differences between the proposals. There’s only 10 real questions there, but I’ve added a whole bunch of text that (hopefully) neutrally explains most of the tradeoffs between the choices, hopefully without introducing too much of my own bias. I’m hoping it covers all the choices people are currently favouring, even if they’re “comically moronic”, and, ideally at least, will give some clue as to the tradeoffs people are considering/ignoring that’s leading them to that preference. Ideally the results might indicate where there’s already widespread agreement, what might be worth talking through more, and what productive ways there might be of dealing with any remaining disagreements…

If there’s some important issues / responses the survey doesn’t cater for, that would be good to know. And, obviously, if you’re happy to fill in the survey, that would be awesome

My thought is, assuming the response isn’t “this is a stupid, counter-productive idea”, to post the url at the next weekly core dev irc meeting for a broader but still cluey audience, and post to bitcoin-dev and ##taproot-activation afterwards, and then do something about collating and publishing the results, which might hopefully help promote intelligent discussion vs meme wars…

I’ve bcc’ed people so they don’t get included in replies if they’re not interested; but fwiw the list is […]. Random collection of people who have participated in recent discussions, might have varying strong opinions on some of the topics, and/or who did bunches of work and who I’d be embarrassed to exclude.

Steve Lee, A. Jonas, and Mike Schmidt helped with drafting and will hopefully help with/do all the work of collating responses; Dave Harding, Russell O’Connor both offered helpful comments that assisted significantly with early drafting. Any remaining stupid counter-productivity is mine of course.

(I’m hoping this survey will help result in a better idea of what to do about activation which will then inform what we actually do. But either way it’s certainly not a “vote by sms now, and whichever answers get the most votes will be your new american idol, uh, taproot activation method” thing, or even a “nope, everyone else voted X, your opinion is unimportant”. Hopefully that didn’t need to be said.)

I sent the survey to about 20 people and got 13 responses (including my own). I figure not identifying who responded or tying responses with people is probably best, since that avoids tying anyone to their opinion from a month or three ago, and thus maybe makes it easier for people to adjust their views to new information and eventually come to an agreement.

If you’re interested in the details around this topic, I think the survey’s worth a read, and I’ve left it open in case anyone wants to fill in their own answers.

The results turned out harder to collate than I expected — mostly because google’s CSV export isn’t that great when you have “choose as many as you like” questions that each have full sentences for the answers, but also because there were fewer obvious patterns than I expected. But anyway.

Results for the first set of questions, about activation via enforcement by a supermajority of hashpower, ended up being:

  • What do you consider a reasonable threshold for activation by hashpower supermajority?
    • Eight people selected 90%-95%, 85%-95%, 90% or 95%
    • Four people selected 60%/70%/75% as the lower bound and 95% as the upper
    • One person selected just 75%
  • If everything goes well, how long will it take miners to upgrade and enable signalling for activation by hashpower supermajority?
    • Six people chose “up to 12 months”
    • Five people chose “up to 3 months”
    • One person chose “up to 6 months”
    • One person didn’t answer
  • How long should it be at minimum between software release and activation actually taking effect?
    • Five people chose “6 retarget periods” (3 months)
    • Four people chose “4 retarget periods” (2 months)
    • Two people chose “2 retarget periods” (1 month)
    • One person didn’t answer
    • One person gave a free form answer: “Unpopular opinion: between 3 and 6 months. Need to give time for users to update too. Otherwise miners can do play dirty (I suppose but I haven’t thought deeply about this). “

For the “flag day activation” section, the answers were:

  • What concerns do you think should be taken into account in choosing a flag day?
    • Eleven people chose “plenty of people will enforce the rules, after the flag day, though maybe not the flag day itself”
    • Eleven people chose “sufficient number of people enforcing the flag day that ignoring it will be economically unviable”
    • Seven people chose “almost every node will enforce the flag day”
    • Five people chose “not introducing precedents that will cause problems”
    • Four people chose “soon enough to keep development momentum”
  • How long away should the flag day be?
    • Seven people found 12 or 18 months acceptable. Of those, six found 12, 18 or 24 months acceptable, and two of them also considered 36 or 48 months acceptable.
    • One person found only 6 or 12 months acceptable.
    • One person found only 36 or 48 months acceptable.
    • Two people only indicated 12 months.
    • One person only indicated 18 months.
    • One person chose “never”.
  • When should we decide on the flag day?
    • Nine people chose answers that depend on uptake (seven wanted to see how users upgrade; six wanted to see how miners behave; five wanted to be sure a flag day is actually needed)
    • Four people chose “before the first activation attempt”, though two of those also wanted to see how users upgrade, and one also selected the “never” option (not the same person that chose “never” for the previous question)
  • How should disagreement on a choice of flag day be resolved?
    • Six people indicated “whatever the BIP authors and core maintainers agree on is fine”
    • Four people indicated “only do a flag day if there’s clear consensus” (no overlap with the previous six)
    • Four people chose “Pick my answer (or a later one)”; one of those also chose “Pick the average answer”
    • There were a bunch of free form answers as well: “Pick a reasonable answer”, “6 months or 1 year only, unless there’s a clear reason more time is required (e.g., fixing timestamp overflow bugs in far future). Anything in between 6 months and 1 year is bikeshedding, anything less than 6 months is too fast, and anything further than 1 year is too far out”, “Pick the Nth percentile wait, where N is pretty high. I’m fine waiting longer, I just want the flag day locked in”, “rough consensus and running code”, “A thought I had during the segwit2x debacle is that I don’t think there is consensus for playing games of chicken with consensus. I think taproot is a good idea, but I don’t think chain splits are, and I think we should take our time to be careful about deploying consensus changes in a way that is not likely to produce a chain split. No one has any reason to think that taproot won’t activate, so let’s not rashly move forward in a way that could provoke a chain split due to errors or oversights.”
  • How will we know there is community support for a flag day by default?
    • Ten people chose “enough time for reasonable objections to be reported, but none have been”
    • Nine people chose “uptake of software supporting hashpower supermajority activation”
    • Eight people chose “we see manual signalling” (everyone chose at least one of these three responses, except for one person who only entered a free form response)
    • Five people chose “uptake of opt-in activation”
    • Four people chose “we see price information”
    • Four people chose “we already do”
    • One person chose “we never will” (along with other options)
    • There were also a few free form additions: “Every softfork is a user-activated softfork”, “when anyone on reddit/reading coindesk would understand that there are no objections, and understand the care that went into design.”, “Know it when we see it, but should only be used if necessary”
  • How should users opt-in to flag day activation?
    • Seven people chose “never opt-in, should be the default for everyone once community support is established”
    • Five people chose “upgrading to a new (optional) version of bitcoin core” — eleven people chose at least one of these two options
    • Four people chose “setting a configuration flag”
    • Four people chose “an alternative forked client”
    • Six people chose “editing the source and recompiling”, however all of those people also chose at least one other option
    • The only free form comment was: “Configs can be set wrong accidentally and is hard to test, a bit harder to run wrong binary for a long time. (speaking against config flag option)”
  • Signalling a flag-day activation?
    • Six people chose “mandatory signalling only when bringing activation forward”
    • Five people chose “always require signalling prior to activation” (nine people chose one of these options)
    • One person chose “never mandate signalling”
    • Two people gave no answer, and one just gave the free form response: “No opinion at this time”
    • Remaining free form comments were: “I think forced signaling flag days are really only interesting for two phase deployments where the first phase doesn’t know about the flag day but hasn’t timed out, and where the flag day is far enough out that disruption from it can be minimized (e.g. miners can get told to at least adjust their versions)”, “We want mandatory signalling to bring to ensure activation on nodes that do not enforce the flag day. The “proposed update to BIP 8 [3]” is a very good solution to this.”

I don’t think the “opinion weighting” answers ended up being very interesting:

  • How informed are your opinions?
    • Six people chose “based on years of experience with multiple activations”
    • Two people chose “in depth study of bitcoin activation”
    • Four people chose “knowledge about other aspects of bitcoin and reading the questions”
    • One person chose “you wanted my opinion, you got it. caveat emptor”.
  • How confident are you about your opinions?
    • Eight people chose “right balance of tradeoffs”
    • Three people chose “not very”
    • One person chose “anything else will be a disaster”

Overall free form comments were:

  • Your choices for how sure I am are pretty rough. I mean, there probably isn’t anyone with more activation experience than me (though several equal) but no one has activated anything in the current network, no one has activated taproot. etc. A sentiment I hoped to be able to express was support for nested activations like harding’s start now and improve later. Forced activation specifics are likely to be complicated and painful to decide and that decision would be greatly simplified by initial robust deployment. … plus I think there are good odds that forced activation will be unnecessary (esp if its clear that we will use it if needed)– so why serialize getting this stuff activated on figuring out forced activation details? Better to do whatever thing has good odds of getting it activated fast assuming miners cooperate then worry about more dramatic things if they don’t. Without the initial attempt everyone is just guessing– guessing on uptake– guessing on miner behaviour– etc. Plus people who want more aggressive and less aggressive approaches differ a lot based just on how pessimistic they are about miners, a question that will be resolved by seeing what miners do. The primary counter argument to this approach is that if we don’t plan for a flagday in advance there is a risk that the moment miners drag their feet at all, the pitchforks will come out and the least reasonable people will immediately move forward with a 30 day flagday or whatever. I think that this can be avoided by the author of the parameters making a clear statement of intent. That if users adopt but miners holdback the intention will be to flag day and we’ll start discussing the details of that in 3 months… or something like that.
  • I can’t answer about how “correct” my opinions are… My feelings about activation methods are strongest when it comes to the narrative around them, and least strong when it comes to the specifics (provided that they are reasonably in line with a good narrative). I think we could take almost any activation method and tell a story about it that is terrible — miner-activation means that miners dictate the rules; user-activation means that developers dictate the rules, etc. In my view the most important thing here is to have as strong a sense as reasonably possible that no one is opposed to the consensus change and that activation is very likely to not cause a chain split (at the time of activation or down the road). How we get there is a matter of debate and discussion, but if we can agree that those two principles are paramount and other issues are secondary, then I think I’d be on board with any number of proposals that are crafted around such a narrative.
  • To summarize my current thinking:
    • deploy bip8(false) in Bitcoin Core
    • If it becomes clear that the miners use their veto to prevent activation let users coordinate on a flag day. In order to opt-in to flag day activation (bip8(true)) users should create their own fork of Bitcoin Core. For this to work properly, it would be ideal to use Anthony Town’s suggested changes to BIP 8. If the users fail to cooperate and the softfork doesn’t activate, then that’s fine too, but maybe the softfork wasn’t useful enough then. We can propose another softfork that hopefully gets more user support (sigagg, etc.).
    • Thanks for making this! Looking forward to see what comes out of it.
  • I think that we’ve micro-managed soft-forking patterns as the biggest threat that bitcoin faces and worthy of all the attention and fuss… There are bigger problems and challenges facing bitcoin centralization that we can work on (e.g., proliferation of storing funds in custodial wallets), but that feel more outside our control, so we focus on something that is in our control. I think any reasonable choice that allows us to ship small soft-forks when recommended by devs and users is the right tradeoff when we’re already suffering from other centralization vectors.
  • Perhaps not a disaster for Taproot, but some deviation could set very harmful precedents. Looking at upgrade history, I worry we ought to set a minimum 1 year after software release before starttime, but the question only asked about a *minimum*.
  • Waiting too long means hats come out of closets and we get something much less organised and safe. Let’s get a flag day set far enough in the future and move on with our lives

Hopefully the length of that summary, when there’s only 13 responses, serves as a good explanation of why I didn’t summarise this earlier, or try getting more responses first…

One thing I’ve been thinking more and more is that the exact activation method isn’t really what’s important. I think the whole “BIP 9 allows an activation attempt to fail” framing has been somewhat misleading — while it’s technically true, the fact that the activation could succeed at all is more important, and that possibility implies we have to be absolutely confident that a deployment is definitely worth activating before we risk activation. And if we’re absolutely confident it’s worthwhile, then presumably everyone else will be too, and we’ll eventually activate it one way or another, and the details of exactly how that happens are just that: details. And while details matter, it’s much more important to make sure the idea is sound, and to do that first — so I think it’s actually a bigger deal that, in addition to all the review and unit tests and guides and explanations, we’ve now also got taproot activated on signet, which should make third-party development feasible, and in some sense allows an extra layer of testing.

But anyway, as far as activation parameters go, your takeaways from the above might be different, but I think mine are:

  • Activation threshold should stay at 95% (or at most be reduced to 90%)
  • We don’t really know how quickly miners will react; so hope for a quick response within a few months, but plan for it taking up to a year, even if everything goes well
  • Setting the startheight to be about a month or two worth of blocks away is probably about right (along with a retarget period in each of STARTED and LOCKED_IN, this gives at least two or three months of deployment time before activation is possible)
  • Almost everyone is open to the idea of a flag day in some circumstances
  • If there’s a flag day, we should expect it to be a year or two away (though it might turn out sooner than that, or later)
  • There probably isn’t support for setting a flag day initially (only 4/13 support choosing a day early enough; only (a different) 4/13 think we already know there’s sufficient community support; 9/13 want to see adoption of hashpower-based activation to establish there’s consensus for a flag day)
  • Almost everyone wants to see as many nodes as possible enforce the rules after activation
  • Most people seem to be willing to accept bringing a flag day forward by mandatory signalling
  • There’s not a lot of support for opting-in to flag day activation by setting a configuration option

So I think to me that means:

  • Initial activation parameters should be included in a minor update release (eg, version 0.21.1 or 0.21.2) and be something like:
    • lockinontimeout = false
    • startheight = release height + 3 retarget periods, rounded up (to get a two or three month delay before activation is possible)
    • timeoutheight = startheight + 209,664 blocks (4 years’ worth — in case the 12-24 month estimates turn out to be too low; see the sketch below for the block arithmetic)
    • threshold = 95% (no point changing it)
  • We then see what happens — if miners activate it within the first three to twelve months: great. If they don’t, we’ll have more data, and use that to refine the deployment strategy. For example, if miners have been having legitimate problems deploying the new software, we’ll help them fix that; if not, and there’s plenty of uptake of the new software and other support, we’ll run some numbers on the rate at which users are upgrading, and pick a date for a flag day based on that.
  • When we work out what flag day is best supported by the data (suppose for the sake of example that it’s startheight + 18 months, to be roughly in line with what people expected per the above), then we’d update the deployment parameters to:
    • lockinontimeout = true
    • startheight = unchanged
    • timeoutheight = startheight + 78,624 blocks (18 months’ worth)
    • threshold = unchanged
  • The updated activation parameters should be backported for each major version (eg, if startheight is March 2021 and timeoutheight is in September 2022, that might be 0.21.5, 0.22.3, and 0.23.1, and already included in master by the time 0.24.0 is branched off)

This is more or less the “gently discourage apathy” approach, though with a longer initial timeout.
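
If you want to check where the 209,664 and 78,624 block figures above come from, here’s a rough sketch of the arithmetic (purely illustrative; the 2016-block retarget period and 144 blocks per day are protocol targets, not guarantees):

```python
# Back-of-envelope block arithmetic for the activation parameters above.

RETARGET = 2016        # blocks per difficulty retarget period (~2 weeks)
BLOCKS_PER_DAY = 144   # at the 10 minute block interval target

def periods_for_months(months):
    """Approximate number of whole retarget periods in `months` months."""
    return round(months * 30.4 * BLOCKS_PER_DAY / RETARGET)

# startheight: ~3 retarget periods after release, plus a period each for
# STARTED and LOCKED_IN before activation can take effect.
print(3 * RETARGET / BLOCKS_PER_DAY, "days until startheight")          # 42.0
print(5 * RETARGET / BLOCKS_PER_DAY, "days until earliest activation")  # 70.0

# The timeoutheight offsets used above:
print(periods_for_months(48) * RETARGET)  # 104 periods = 209,664 blocks (~4 years)
print(periods_for_months(18) * RETARGET)  # 39 periods  =  78,624 blocks (~18 months)
```

The five periods before activation is possible (three to get to startheight, plus one each for STARTED and LOCKED_IN) work out to about ten weeks, which is where the “two or three months” figure above comes from.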

Note that with 13 version bits reasonably available for use (BIP320 reserves the remainder, and miners are actively using them), a four-year timeout still allows for a new soft fork every four months on average without having to overlap version bits or come up with a new signalling method, which seems likely to be more than sufficient.
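
Back-of-envelope version of that claim:

```python
# With 13 usable version bits and a four year timeout per deployment, a new
# soft fork can start signalling roughly every 48/13 months before any bit
# would need to be reused.
usable_bits = 13
timeout_months = 4 * 12
print(timeout_months / usable_bits)   # ~3.7 months between new deployments
```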

Compared to “modern soft fork activation”, I think the main differences are that it plans for an earlier flag day (though only if that’s actually supportable via adoption data), does not include a config parameter for updating to flag day activation but instead requires upgrading to a new minor release (unavoidable given the flag day isn’t decided in advance, and manually setting the flag day would be too easy to get wrong, which risks breaking consensus), and requires mandatory signalling if the flag day occurs.

If you want to maximise the number of nodes that will enforce the rules should a flag day occur, but also only choose the flag day after an initial activation attempt is already widely deployed, then you have no choice but to make signalling mandatory when the flag day occurs. I think it’s a good idea to do a little more work to minimise the costs that mandatory signalling might impose on miners, so I have proposed some updates to BIP 8 to that effect — one to not require signalling during LOCKED_IN, and one to reduce signalling during MUST_SIGNAL from 100% of blocks down to the threshold figure; I think the latter is also potentially somewhat protective against miner gamesmanship, as noted in the link. That’s still not zero-impact on miners in the way the “modern soft fork activation” approach is, but I think it’s near enough.
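
To make the second of those concrete, here’s a minimal sketch of the idea (this is not the actual BIP 8 code, and the names are made up): at the end of a MUST_SIGNAL retarget period, the period would only be considered valid if at least the normal threshold of its blocks signalled, rather than requiring every single block to signal.

```python
PERIOD = 2016
THRESHOLD = 1815   # e.g. 90% of 2016 blocks; a 95% threshold would be 1916

def must_signal_period_valid(block_signalled):
    """block_signalled: one bool per block in a completed MUST_SIGNAL period."""
    assert len(block_signalled) == PERIOD
    return sum(block_signalled) >= THRESHOLD
```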

Apart from that, I think the current BIP 8 spec/code should more or less work for the above already.

A Paradigm Shift

(I would have liked to have come up with a more original post title, but found myself unable to escape this one’s event horizon)

I’ve been at Xapo for a bit over a couple of years now, and it’s been pretty great. Earlier this year, we’d been coming up to performance review time, so, as you do, I’d been thinking about what changes would be cool — raise, promotion, different responsibilities, career growth, whatever — and, largely, coming up blank, particularly given we’d recently taken on Amiti as an additional dev working on bitcoin upstream. I mean, no one’s going to say no to more money for doing the same thing, but usually if you want significant changes you have to make significant change, and I was feeling pretty comfortable: good things to work on, good colleagues to work with, and not too much bureaucratic nonsense getting in the way. In many ways, my biggest concern was I was maybe getting complacent. So, naturally, come Good Friday, after responding to a late night ping on slack, I found out I was being fired — and being a remote worker, without even a kiss on the cheek as is traditional for betrayal at that time of year!

Okay, that’s not a precisely accurate take: I got made redundant along with plenty of others as part of a pretty major realignment/restructuring at Xapo. This was pretty unexpected, since the sale of the institutions part of the business to Coinbase had seemed like it had given Xapo a really long runway to avoid having to make painful cuts, though on the other hand I had been concerned enough about the lack of focus (or a nice brief elevator pitch for what Xapo was) to have been mailing Wences ideas about it last year, so some sort of big realignment was not a total surprise either. It’s summarised in a post on the Xapo blog in May as “relaunching as a digital bank”, which I don’t think is really all that clear; and there’s a later post with a bunch of FAQs which is helpful for the details, but not really the big picture. The difference between “custodial wallet” and “bank” has always seemed pretty minor to me, so Xapo has always seemed pretty bank-like anyway — although it’s still worth distinguishing between a bank where all the customers’ balances are fully backed, and the more normal ones with fractional reserve where funds in deposit accounts are mostly backed by other customers’ debts, and are thus at risk of bank runs, which requires deposit insurance backed by central bank money printing and so on.

I think it’s fair to describe Xapo’s new direction as a change of focus from something like “bitcoin’s cool, we’ll help you with it” to something like “protecting your wealth is cool, we’ll help you with it” — but when you do that, bitcoin becomes just one answer, with things like USD or gold or even some equities as other answers, just as they are for Libra. That’s also a focus that matches Wences’ attitude (or life story?) better — protecting you from currency collapses and the like is a mission; playing with cool new technology is a hobby. And while I think it’s a good mission in general, I think it’s particularly timely now with governments/banks/currencies facing pretty serious challenges as a result of response to the covid19 pandemic. It’s also a much tighter focus than Xapo’s had over the time I’ve been with the company — unless you’re a massive conglomerate like Google or Disney, it’s important to be able to say “no — that’s a good idea but it’s not for us, at least not yet” so that you limit the things you’re working on to things that you can do well, so I think that’s also a big improvement for Xapo. And as a result, I can’t even really object to Xapo not retaining a bitcoin core dev spot — in my opinion a focus on wealth preservation for bitcoin mostly means not screwing things up (at least for now) rather than developing new things. Hopefully once Xapo reopens to new customers and those customers are relying on bitcoin as a substantial store of wealth, and the numbers are all going up, it will make sense to have in-house expertise again, but, well, one of the benefits for companies that build on open source platforms is that you can free-ride for a while, and it doesn’t make much sense to begrudge that. I think it’s definitely going to be a challenging time for Xapo to re-establish itself especially with the big personnel changes, but I’m hopeful that it will work out well. I have exercised my stock options for what that’s worth, though I don’t know if that counts as skin in the game or a conflict of interest.

Wences was kind enough to provide a few months’ notice rather than terminating the contract immediately (not something that he was able to do for many of the other Xapo folks who were made redundant around the same time, as I understand it), and even kinder to provide some introductions to people who might fund me in continuing in the same role. It’s certainly a bad negotiating tactic, but the Paradigm guys (they’re a California based company, so guys still counts as gender neutral, right?) were Wences’ first recommendation, and after getting some surprisingly positive reports about them, talking to them, and reading some of their writings, I didn’t really see much need to look elsewhere. Like I said, complacent. (Or, if you prefer, perhaps “lacking even first-world problems” is a better description). Once word filtered through the grapevine a little, I did get an offer from the Chaincode folks to see if I needed some support so that I didn’t have to worry about urgently getting a new job in the midst of a global pandemic, but I figured it’s “better for bitcoin” for a company like Paradigm that hasn’t supported development directly until now to get some experience learning how to do it than for me to join an existing company that’s already doing pretty much everything right, and it didn’t feel like too much of a risk on my part. So maybe at least there I managed a not-completely-complacent choice? And while there’s no particular change in job description, I’m hoping working with folks like Arjun and Dan might help me actually finish fleshing out and writing up some ideas that aren’t able to be directly turned into code, and I’m hopeful for some cross-pollination from some of the ideas in the DeFi space that they pay attention to, which I’ve mostly been studiously ignoring so far, so I hope there’s a bit of potential for growth there.

Anyway, given I’m doing the same job just with a different company, there wasn’t really any impetus to write this up, but I’ve been using it as an excuse to get some of the things I’ve been working on over the past little while actually published; hence the ANYPREVOUT update and the activation method draft in particular. Both of those I’d been hoping to publish at or shortly after the coredev meeting in March, but covid19 cancelled that for us, and the times since have been kind of distracting.

In conclusion, the moral of the story: take performance reviews more seriously in future.

COVID19 Thoughts

A month and a bit ago, I wrote up my take on covid19 on facebook. At the time, Australia was at 1300 cases, numbers were doubling twice a week, and I’d been pessimistically assuming two weeks between infection and detection. That led me to estimate that we’d be at 20,000 cases by Easter, and we’d be close to capacity for our hospital system, but I was pretty confident that the measures we’d put in place by then would be starting to have an effect and we’d avoid having an utter catastrophe. I’d predicted by late April we’d be “arguing about how to get out of the shutdown” and have a gradual reopening plan by May — that looks like it’s come about now, with the PM and state premiers coordinating on how that should work, and the Queensland one, at least, beginning next week.

The other “COVID SAFE checks” also seem good to me: widespread testing, effective tracking and tracing of outbreaks, and having each stage conditional on the outbreaks being contained. We’re in a much better state to do those things than we were two months ago. There’s also (as I understand it) been a lot of progress on increasing the capacity of hospitals to respond to outbreaks, so as far as “flattening the curve” goes, so that we can go back to living a normal-ish life without exponential growth causing a disaster, I think we’re doing great.

It’s a more cautious reopening than I would have expected though: the four week minimum time between stages is perhaps twice as long as the theoretical minimum, and even that was twice as long as the minimum time I’d have expected people to tolerate at a political level. It’s not clear to me how bad the economic damage is — I think we’ll get the first real economic stats next week, but the numbers I’m seeing so far (7% of payroll employees out of work, eg) aren’t as bad as I was expecting, while the forecasts (which are expecting a sluggish recovery) are worse. Maybe that just means we’ll be able to maintain patience in the short term, but should still expect things to be painful while the world tries to recover its supply chains over the next year or two?

The thing that has perhaps most impressed me about Australia’s response, especially compared to the US, has been the lack of politicisation. I don’t think you can have an effective emergency response when the people in charge of that response are pointing fingers at each other, and wasting time with gotcha questions to make each other look stupid. The National Cabinet approach, the willingness of the federal government to bend to some of the states’ concerns (particularly Victoria’s push to close schools prior to Easter), the willingness of states to coordinate under federal leadership and be aligned where possible, and above all mostly managing to work together rather than the usual policy of exaggerating disagreements, has been great. Unlike Soraya Lennie I think that’s a massive achievement by the PM and also the opposition leader. Morrison cancelling his trip to the footy was a good move, and Dan Tehan’s walkback of his criticism of Daniel Andrews was too — but forgiving both those mistakes rather than the usual approach of continually bringing them back up is also important.

Where I got things wrong was that it appears the virus is easier to limit than I’d expected. While I thought we’d be screwed for weeks yet, instead we started turning the corner just five days after my post, which itself was ten days after the government had started issuing bans on large gatherings and requiring overseas travelers to start self-isolating. We’ve also apparently had a much lower percentage of cases end up in the ICU — I think 1.75% of cases ended up in ICU in NSW versus figures like 5% from China, or 2.6% from Italy? We’re currently at 97 deaths out of 6913 confirmed cases, which is 1.4%, so double the 0.7% reported from non-Wuhan China.

That fatality rate figure still makes it hard for me to find “herd immunity” strategies plausible — you probably need about 60% or more of the population to have been infected to get herd immunity, but 0.7% of 60% of Australia’s population is 103,000 deaths; compared to 3500 deaths per year from the regular flu in Australia, that seems unacceptably many to me — and perhaps you have to double that to match our observed 1.4% fatality rate anyway. And conversely, it makes it seem pretty unlikely that there’s already herd immunity anywhere — if there haven’t been that many unexplained deaths, it’s pretty unlikely that covid19 swept through somewhere prior to this, granting everyone left alive herd immunity.

Nevertheless, that seems to be the strategy Sweden is taking; currently they have over 3000 deaths, so if the 0.7% ratio holds that’s 430,000 cases, fewer if the ratio’s more like Australia’s 1.4%. However they are currently only reporting 24,000 cases — which adds up to a 12.5% fatality rate instead. Things seem to have stabilised for them at about 60-100 deaths per day; so to get from 430k cases to 6M to achieve herd immunity, that’s presumably going to result in a further 39,000 deaths, which at 80 deaths per day will take another 16 months. And Sweden’s reportedly doing some lockdown measures anyway, so even if that number of deaths is acceptable, it’s not clear to me that it’s an argument for “life as normal” rather than “we can deal with this via modest restrictions over quite a long time”. And additionally, I think Sweden has doubled their normal ICU capacity, and may have needed that extra capacity already.
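
Working through those Sweden numbers explicitly (all the inputs are the rough figures quoted above, so treat this as back-of-envelope only):

```python
deaths = 3000
fatality_rate = 0.007                    # the non-Wuhan China figure
implied_cases = deaths / fatality_rate   # ~430,000 cases so far
population = 10_000_000
herd_immunity_cases = 0.6 * population   # ~6 million cases needed
further_deaths = (herd_immunity_cases - implied_cases) * fatality_rate
months = further_deaths / 80 / 30        # at ~80 deaths per day
print(round(implied_cases), round(further_deaths), round(months, 1))
# -> 428571 39000 16.2
```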

Still, that Sweden’s death rate has stabilised rather than continuing to double also seems to be evidence that the virus does end up limited almost no matter what — though my guess is that this is more because once it becomes obvious to everyone, people start voluntarily limiting their exposure without needing government to mandate it. So perhaps that means the best thing governments can do here is force people to make good choices early, when they have access to good advice that hasn’t percolated through to the rest of the public, then ease off once that advice has spread. Having leaders do the opposite, and spread bad advice early — Florence’s “hug a chinese” day, New York’s “keep going to restaurants” or Boris Johnson “shaking hands with everybody” — might therefore have been spectacularly harmful.

The US numbers don’t make sense to me at present: the CDC reports 1.2 million cases and 73 thousand deaths, but that’s a 6% fatality rate. If the deaths figure is accurate, but the real fatality rate is more like Australia’s 1.4%, that would mean there are really 5.2 million cases in the country (which is still only 1.6% of the population, miles away from herd immunity); while if the cases figure is accurate, a fatality rate like Australia’s would imply only 17 thousand of the deaths were due to covid19, and 56 thousand were misreported. There have certainly been reports of deaths being wrongly attributed to covid19 in the US, but there are also plenty of indications there hasn’t been enough testing, which would lead to the reported case numbers being way too low.

I don’t really have a further prediction at this point; I think there’ll be people worried the staged reopening is both too slow (people need to get back to work) and too fast (there’ll be actual outbreaks that could perhaps have been prevented if we’d stayed in lockdown), and maybe the timeline will get tweaked as a result, but there’s already some flexibility built in via the “COVID SAFE Plan” that will presumably allow things to open up further after some sort of government/health review, and the ability to defer stages if there’s an undue risk. As far as the economy goes, I mostly expect we’ll see a quicker than expected recovery: tourism and exporters will find it difficult but scrape by, I think — lack of international competition will probably mean some tourist places end up with a blow-out year; industries relying on immigration, such as higher ed and real estate, will still be in trouble for a while; but I can’t put a figure on where that will all end up. The budget will be a mess, and worse for the fact that we didn’t get back into surplus between dealing with the last crisis and this one coming along. I expect we’ll be stuck with having to put effort into avoiding covid19 until it either mutates into something more like a normal flu, dies out everywhere, or we get a vaccine, which seems likely to be years away.

Bitcoiner Maximalism

I’ve been trying to come up with a good way of thinking about what to prioritise in Bitcoin work for a little while now — there’s so much interesting stuff going around, all of it Good For Bitcoin, that you need some way to figure out which bits are more important or urgent than others. One way to think about it is “what will make the price go up?”, another is “how do we beat all the altcoins?”, but both of those seem a bit limited in scope. Maybe an alternative is to think about it backwards: if Bitcoin gets better, more people will want to be Bitcoiners; so what would it take to make more people Bitcoiners? That sort of question is a pretty common one in sales/marketing, and they tend to use “sales funnels” for analysing it — before becoming a customer, people have to hear about a product, be interested in it, and find it for sale somewhere, and you get some attrition at each step; reducing the attrition at any step (without making it worse at any other) then increases your sales and your numbers go up.

One way of looking at that might be to consider the normal sorts of things Bitcoiners do: they buy some Bitcoin, set up their own wallet to have control over their funds, run a full node, and maybe eventually start giving some input into Bitcoin’s development (whether that be in the form of code, discussion, investment or making bets over twitter). The problem with thinking about things that way is that while there are some clear incentives for the first steps (Bitcoin’s increasing in value so a good investment or at least better than earning negative rates; self-custody reduces the risk of some company running off with all the coins you thought were yours), there’s a breakdown after that: having a hardware wallet under your mattress is cheap and easy, but running a full node constantly is an ongoing cost and maintenance burden, and what’s the actual direct benefit to you? If you look at the numbers, those steps are something like 8B to 160M (2%) to 4M (2.5%) to 50k (1.25%) to maybe 900 (1.8%), but there are no obvious levers to use to increase either the 2.5% or 1.25% figures, so that approach doesn’t seem that useful.
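
For what it’s worth, here’s that funnel with the conversion rate at each step spelled out (the absolute figures are just the ballpark guesses above):

```python
funnel = [
    ("people", 8_000_000_000),
    ("own some bitcoin", 160_000_000),
    ("self-custody", 4_000_000),
    ("run a full node", 50_000),
    ("contribute to development", 900),
]
for (prev_name, prev_count), (name, count) in zip(funnel, funnel[1:]):
    print(f"{prev_name} -> {name}: {100 * count / prev_count:.2f}%")
# -> 2.00%, 2.50%, 1.25%, 1.80%
```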

A different way of looking at it might be to first break out people who regularly transact with their Bitcoin balance, rather than just buying and holding. The idea being that this covers traders who actively manage their Bitcoin investment, merchants who sell products for Bitcoin, people who get paid in Bitcoin, and so on. I’ve got no idea what a valid number for this is — BitPay claims to be “Trusted by thousands of businesses — worldwide” which makes it sound like the number probably isn’t in the millions, so I’ve picked a quarter of a million. Going from “actively transacting” to “self-custody” is a different step than self-custody for “buying-and-holding” — think of it not as installing a mobile wallet or buying a hardware wallet, but as using software like btcpay or lightning rather than hosted solutions like bitpay or travelbybit. I’ve picked 15k as the number there, based on the number of lightning nodes reported by 1ml.com, and rounded up a bit.

The nice thing about that approach is that the incentives at each stage are a fair bit clearer. You maintain a Bitcoin balance if it works as a store of value and fits into your investment strategy. You go from just holding a Bitcoin balance to actively transacting with it if spending Bitcoin is less of a pain than spending from your bank account — which makes it pretty clear why that step has a 99.85% attrition rate and what to do about it. Likewise, you go from transacting in general to self-custody when you decide that the costs of using a Bitcoin bank outweigh the benefits — risk of loss of funds or censorship, KYC frustrations, privacy concerns versus ease of setup and someone else taking care of ongoing maintenance. Having that option is hopefully a good incentive for businesses (and regulators) to keep those risks, frustrations and concerns relatively rare for everyone that doesn’t self-custody as well. Going from actively using Bitcoin to helping it develop is still a big step, but it’s also a fairly natural one (or so it seems to me). I think those levels also fit fairly well with business models: getting people into Bitcoin in the first place is financial education/advice and exchange services; actively transacting is banking and merchant services; self-custody is hardware wallets, and things like btcpay and lightning nodes; even consensus participation has been monetized by the likes of bitfinex’s chain-split tokens. (A nice thing about this approach is that self-custody for people actively transacting, generally implies running a node for technical reasons, and at that point the costs of running a node are a much smaller deal: you’re getting regular benefits from your regular transactions, so the small regular costs of running a full node are much easier to justify)

One way to view those levels might be as “pre-coiners”, “store-of-value”, “method-of-payment”, “self-sovereign” and “decentralised” — with each level implicitly depending on the previous levels. You can’t pay for things with money that nobody values; there’s no point being in control of money that no one will accept or that’s not worth anything; there’s no point having decentralised money if it can be stolen from you, etc. There’s some circularity too though: there’s no point storing value if you can’t eventually transfer it, and a significant part of the value proposition of Bitcoin for store of value or method of payment is that you can control your own funds and that there isn’t a central group able to inflate the money supply, confiscate funds or block transactions.

What does that mean for priorities? I think there’s a few general principles you can draw from the above:

  • From an industry-growth point-of-view, increasing the percentages for the top two levels and maintaining the percentages for the bottom two seems like a good focus: getting a billion people owning Bitcoin, and hundreds of millions transacting using it, even with “only” 12M (6% of 200M) people running their own full nodes (due to self-hosting their lightning balance), and 750k (6% of 12M) people actively paying attention to how Bitcoin works and evolves seems like it could work out.
  • This approach has “store of value” as a foundation that the other properties of Bitcoin rely on — if that makes sense, it probably means messing with the “store of value” features of Bitcoin is a really risky idea. Instead, it’s probably more important to work on things that reinforce the existing foundations, than neat new ideas to change them.
  • The “having Bitcoin” to “transacting with Bitcoin” step is the one that needs the most work — probably in a million areas: not just all the things on the todo list for lightning, but UX stuff, and working with regulators to avoid knee-jerk money-laundering concerns, or with tax agencies to reduce the reporting burden due to Bitcoin valuation changes, to deploying point-of-sale systems, and whatever else.
  • If we do manage to get lots more people holding Bitcoin, and/or lots more people transacting with it, then maintaining the percentages of people doing self-custody or contributing in general will be hard, and require a lot of effort.

So for me (with an open source developer’s perspective), I think that adds up to:

  • Number one priority is keeping Bitcoin working technically — trying to avoid bugs, resist potential attacks (both ones we already know about, and those people have yet to come up with), stay backwards compatible, do clean upgrades. Things to work on here include monitoring, tests, code analysis, code reviews, etc. This also means keeping development of bitcoin itself relatively slow, since all these things take time and effort.
  • Number two priority is, I think, lightning: it seems the best approach for payments, both for people who want to do self-custody, and as the underlying payments mechanism for Bitcoin custodians to use when their customers instruct them to make a payment. There’s a lot of work to be done there: routing, reliability, spam/attack-resistance, privacy, wallet integration, etc. Other payments related things (like btcpay) are also probably pretty high impact.
  • After that, I think being prepared for growth is the next thing: finding ways of doing things more efficiently (eg, batching, consolidation), coping dynamically with changes to the system (eg, fee estimation), developing standards to make it easy to interoperate with new entrants to the ecosystem (eg, psbt, miniscript), and having good explanations of how Bitcoin works and why it works that way to newcomers (podcasts, books, academic papers, etc).

And more particularly, I think that means that I want to prioritise stability over new features (so work on analysis and reviews and tests and no rushing the taproot soft-fork), and as far as new features go, I’m more interested in ones that can provide boosts to lightning or payments in general (so taproot and ANYPREVOUT stay high on my list), but growth and interoperability are still important (so I don’t have to ignore cool things like CTV fortunately).

Libra, hot-take

Hot-take on Facebook and friends’ cryptocurrency. Disclaimer: I work at Xapo, and Xapo’s a founding member of the Libra Association; thoughts are my own, and are only based on public information.

So, first, the stated goal is “Libra is a simple global currency and financial infrastructure that empowers billions of people”. That’s pretty similar to Xapo’s mission (“We created Xapo to give everyone the freedom and security to be more and do more with their money” eg). It’s also something that Bitcoin per se isn’t really good at: the famous “7 transactions per second” limit means 220 million transactions per year, which doesn’t seem like it really scales to billions of people for instance. And likewise Libra’s monetary policy (backed by a basket of “bank deposits and short-term government securities”) isn’t very interesting compared to just holding funds in USD, EUR, AUD or similar; but probably is pretty compelling compared to holding Bolivars, Zimbabwe dollars or Argentinian pesos. That could make it a death-knell for badly managed central banks in just a few years, which could be pretty interesting.
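
The 220 million figure is just straightforward arithmetic:

```python
# Seven transactions a second, every second of the year.
tps = 7
per_year = tps * 60 * 60 * 24 * 365
print(f"{per_year:,}")   # 220,752,000 -- roughly 220 million transactions a year
```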

It doesn’t sound very censorship resistant — if you want to use it to buy hookers or guns or support political causes unpopular with Silicon Valley, you’re probably out of luck. Likewise if you want to pay for a VPN out of China, or similar. It seems like all of the association members will have access to all the transactions, and there’ll only be at most a few hundred megacorps to lean on to fully deanonymise everyone, so while it’s not a positive for shady central banks, I think it’s totally compatible with fascist police states and oppressing freedom of association/speech/thought. Not sure if it’s better or worse than today with almost everything done via credit card or bank transfers. Certainly much worse than cash (or lightning).

The amazing thing about Bitcoin is that there wasn’t a baked-in rule along the lines of “Satoshi gets all the moneys” — instead Satoshi just ran the software in the same way any other early adopter could, and all the early adopters benefited essentially equally. So one thing that’s always interesting to me is to see the ways in which new cryptocurrencies have their rules tilted to favour the founders. In this case it looks like there are three ways: (1) founders get to run validators which means they get to see all the data, control access to it, and (presumably) be paid in “gas” for the privilege; (2) the backing funds are invested in interest-bearing instruments, and the founders collect the interest, while Libra holders bear the investment risk; (3) the backing funds aren’t accessible to most users, but instead only to “authorized resellers” who will presumably charge a spread; these resellers are authorised by the association, which will presumably charge them a membership fee for the privilege.

The consensus model they use is Byzantine consensus, rather than proof-of-work. So it’s immediately final (in much the same way as the Liquid sidechain is), rather than forcing people to have to worry about reorgs of 6 blocks or 100 blocks or 1000 blocks, etc. But that assumes that more than 2/3rds of players are honest — with 28 initial validators, if you had 10 nodes under your control, and could split the remaining 18 honest nodes into two groups of 9, you could collaborate with one group to create one history, and the other group to create a different history, and induce double spends. Essentially the coin’s security becomes vulnerable to a 34% attack, rather than Bitcoin’s nominal 51% attack vulnerability. There’s nothing particularly wrong with that, it just means you need to be careful not to let more than a third of nodes be vulnerable to attack. Probably not good to suggest “For organizations that would like to run a validator node via a cloud service provider …” on your website though.
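
Spelling that arithmetic out (this is just the standard BFT quorum maths, nothing Libra-specific):

```python
# With n validators, BFT-style protocols tolerate f = (n - 1) // 3 Byzantine
# nodes, and commitment needs a quorum of n - f votes (just over two thirds).
n = 28
f = (n - 1) // 3          # 9 tolerated faults
quorum = n - f            # 19 votes

attacker = 10                       # more than a third of the validators
honest_half = (n - attacker) // 2   # split the 18 honest nodes into 9 + 9

# The attacker can vote differently with each half; if attacker + one half
# reaches quorum, two conflicting histories can both appear committed.
print(attacker + honest_half >= quorum)   # True: 10 + 9 = 19
```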

Unlike proof-of-work, Byzantine consensus doesn’t scale in the number of validators. From their whitepaper: “Our goal was to choose a protocol that would initially support at least 100 validators and would be able to evolve over time to support 500–1,000 validators”. But that’s a feature, not a bug, if you want to make a profit by being part of a small oligopoly. I’m a little dubious about how reliable you can realistically make it too — to have a transaction confirm, 2/3rds of the global set of validators have to see it, so losing links between countries means an entire country’s ecommerce systems become unavailable, and if there are breaks or even just slow-downs between significant subsets of validators, potentially the entire currency becomes unavailable. Bitcoin is small enough that you can route around this via satellite links or SMS or similar, but Libra needs to be able to reliably throw lots of data around.

The whitepaper claims “The association does not set a monetary policy.” which seems a bit disingenuous to me. They’ll need to decide what will make up the basket that backs each Libra coin, and that’s a monetary policy. They also note they’ll have “The ability to customize the Libra coin contract using Move” which “allows the definition of this scheme without any modifications to the underlying protocol or the software that implements it. Additional functionality can be created, such as requiring multiple signatures to mint currency and creating limited-quantity keys to increase security”. There’s a few interesting cases bound up somewhere in there: what happens when the backing reserve loses value — eg, a country reneges on its bonds, or there’s a huge loss in value in one of the currencies, or one of the banks fails and can’t redeem its deposits? They’ve already covered what happens if the reserve gains value: the founders take it as profit. If that works out okay once it happens by accident, that opens up the option of “going off the fiat standard” and just having the coin be issued in its own right, rather than due to changes in a bank balance somewhere. It seems unlikely to me that the economists and MBAs that’ll be running the foundation eventually will be able to resist that temptation once it arises, and their shareholders may even consider them legally beholden to succumb to it.

The Move language doesn’t seem very interesting; it uses accounts rather than coins, will include a “standard library” for things like sha3 rather than having them as opcodes, and generally seems like an incremental simplification from where Ethereum is. Having a smallish group of validators means that upgrades to the language should be relatively easy to coordinate, so I’d expect it to seem cheap and powerful compared to Bitcoin script or Ethereum.

Like I said, I think the macroeconomic impact on bad central banks is probably pretty positive — it either forces them to match world best practices, or be obsoleted. For central banks that are in the basket, it’s not clear to me what the consequences are: if, say, Australians are holding Libra coins instead of AUD, and the Reserve Bank wants to stimulate the economy by printing money/dropping rates to make everyone feel richer, then it seems like there are two possibilities: if goods remain priced in AUD, despite people holding their spending money in Libra, then prices immediately seem cheaper, and people buy more stuff, and the Reserve Bank is happy; or, what seems more likely, goods become priced in Libra coin as well because that’s what people have in their accounts, and it’s stable and international and cool, and the Reserve Bank loses the ability to counteract recessions. But that assumes Libra is used a lot by people with first-world currencies, rather than the target audience of the unbanked. And it’s not clear that makes sense: it doesn’t pay interest (the founders collect that), it’s vulnerable to foreign currency shocks, and there’s maybe other drawbacks (reliability, privacy concerns, cost/speed, hassles of KYC/AML procedures). You could trivially get around this by having actual stable coins on the Libra platform, ie having an “AUD” coin instead of a Libracoin, but still on the Libra blockchain, with the stable coin backed by a single-currency reserve, rather than a basket reserve.

Good for Bitcoin? I don’t think Libra really competes with Bitcoin — Bitcoin’s a scarce store of value with peer-to-peer validation and permissionless ledger additions; Libra isn’t scarce, its decentralisation is limited to the association members which is in turn limited due to the technology in use, and it’s got permissions at every layer. It seems like, in a world where Bitcoin is wildly successful, that Libra could easily add Bitcoin to its reserve basket, and perhaps that could bridge the gap between the two feature sets: Bitcoin ensures that there’s no hidden inflation where central banks give free money to their cronies, while Libra gives access to Bitcoin as a store of value to billions of people. If Libra takes the fight for sounder-money to third-world governments, that perhaps just makes it easier for Bitcoin to be the next step after that. If Libra looks like the bigger immediate threat, being both new and having well known people to subpoena, while Bitcoin looks like old news that’s reasonably well understood, maybe that means good things for “permissionless innovation” in the Bitcoin space over the next little while. Will be interesting to see how India and Turkey and similar places react — places where the local currency looks precarious but isn’t already a basketcase. If they either don’t try to block Libra, or try but can’t, that’s a really good sign for people being better able to save and control their wealth globally in future, which is definitely good for Bitcoin, while if it does get blocked, that’s probably not a good sign for Libra’s mission.

Better than the alternatives? If you consider this as just an industry association trying to enter underserviced markets to make more moneys, does it make sense? “Decentralised consensus” is a useful organising principle to let the association keep each other honest, and in finance you probably want to keep a permanent audit trail anyway, and the “blockchain” they’ve specified doesn’t seem like it’s much more than that. So that point of view seems to work to me. Seems kind of a weird thing for Facebook to be leading, though.

So yeah; kind of interesting, but not for any of the reasons Bitcoin is interesting. Potential positives for adoption in the third-world; but just another payment method for the first-world. Lots of rent-seeking opportunities, but less harmful seeming than that of third-world central banks. The tech seems fine, but isn’t crazy interesting.

Taxes, nine years on

About nine years ago, during the last days of the first Rudd government, the Henry Tax review came out and I did a blog post about it. Their recommendations were:

  • tax free threshold of $25,000
  • marginal rate of 35% between $25,000 and $180,000
  • marginal rate of 45% above $180,000
  • drop the Medicare levy, low income tax offset, etc
  • introduce a standard deduction to simplify tax returns

(Given inflation, those numbers should probably be $30,000 and $220,000 today)

The only one of those recommendations the Rudd/Gillard governments managed to implement was the increase in the tax free threshold from $6000 to $18,200, accompanied by compensating marginal rate increases from 15% to 19% and 30% to 32.5%.

What we’ve got in the budget now is a step closer to the Henry review’s recommendations:

  • tax free threshold remains at $18,000
  • marginal rate of 19% up to $45,000 (in 2022) instead of $37,000
  • marginal rate of 30% up to $200,000 (in 2024) instead of 32.5% to $120,000 (in 2022) or $90,000 (nowish)
  • marginal rate of 37% dropped (in 2024)
  • top marginal rate of 45% retained
  • low income tax offset is retained and increased (and remains regressive, as the marginal tax rate under $66k is larger than the marginal tax rate over $67k due to the offset phasing out as income increases)
  • temporary low-and-middle income tax offset introduced to stage in the change to the 19% marginal rate
  • medicare levy retained at 2% rather than increased to 2.5%

Most of that’s from last year’s budget, which looks like it passed despite opposition from the ALP, the Greens and independents Tim Storer, Andrew Wilkie and Cathy McGowan. This year’s budget just changes the 19% bracket’s cutoff from $41,000 to $45,000, increases the LITO, and drops the 32.5% bracket to 30%.

That’s still a bit worse than the Henry review’s recommendations from almost a decade ago: the 19% marginal rate and the low-income tax offset should both be dropped, with the tax free threshold raised to compensate for both of those, and the medicare levy should be rolled into the remaining rates, increasing them to 32% and 47%. But still, it’ll be the first reduction in the number of tax brackets since 1990, which isn’t nothing.

Despite the Henry review having been a Labor initiative, Labor’s plan seems to be to do the opposite, and re-legislate the 37% tax rate back in so that we won’t have to have “a cleaner [..] pay the same tax rate as a CEO”. Shorten’s explicit example of a nurse on $40,000 and a doctor on $200,000 paying the same rate doesn’t actually work; the nurse’s marginal rate drops to 19% even under existing law before the doctor’s marginal rate drops from 45% to 30%. In any case, comparing marginal rates at wildly different incomes is absurd; the Henry report addressed this concern directly, noting that a large tax free threshold and a flat marginal rate already achieve progressivity, so that, eg, a cleaner on $50,000 pa pays $6630 (13.3%) tax while the CEO on $150,000 pays $36,630 (24.4%) tax, despite both being on the same 30% marginal rate. This doesn’t seem to just be election sloganeering by Shorten, but an ongoing lack of understanding; O’Neill made a similar claim in the parliamentary debate last year, saying: “Let’s be absolutely clear here: stages 2 and 3 of the government’s tax plan will flatten out Australia’s personal income tax system, and that structural change to the personal income tax system is eroding its progressivity.”
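
If you want to check the cleaner/CEO figures, a quick bracket calculator reproduces them; this is just a sketch using the post-2024 brackets as described in this post, with the tax free threshold rounded to $18,000 and the Medicare levy and offsets ignored:

```python
BRACKETS = [(18_000, 0.0), (45_000, 0.19), (200_000, 0.30), (float("inf"), 0.45)]

def tax(income):
    """Total tax under the (assumed, simplified) marginal brackets above."""
    total, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        total += rate * max(0.0, min(income, upper) - lower)
        lower = upper
    return total

for income in (50_000, 150_000):
    t = tax(income)
    print(f"${income:,}: ${t:,.0f} tax ({100 * t / income:.1f}%)")
# -> $50,000: $6,630 tax (13.3%)    $150,000: $36,630 tax (24.4%)
```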

The budget papers have an interesting justification for the changes: they keep the percentage of income tax revenue collected from the top 1%, 5%, 10% and 20% of taxpayers roughly stable. I think the numbers indicate that the top 1% of taxpayers and the bottom half of the top 20% of taxpayers currently pay around 16.7% and 16% of the government’s income tax revenue respectively; without the changes that would reverse to 15.6% and 16.1%, while with them it’s 17% and 15.5%, which seems fairer. On the other hand, the burden on the bottom 80% of taxpayers is slightly increased in both cases. I’m not really sure what the good answers are here — it really depends on how much more the top x% earn compared to the top y%, and that’s easier to look at just by looking at average and marginal rates anyway — but it seems like an interesting thing to think about.

I did a followup post a few years later, shortly before Gillard got ousted for the brief second Rudd government, looking at something like:

  • tax free threshold of $25,000 [$28,000 inflation adjusted]
  • marginal rate of 35% between $25,000 and $80,000 [$90,000]
  • marginal rate of 40% between $80,000 and $180,000 [$200,000]
  • marginal rate of 46.5% above $180,000
  • dropping Medicare levy, low income tax offset, etc

and noting it’d result in pretty similar government revenue based on the reported taxable income distribution. It’s more effort to get the numbers from the ATO and run them than I can be bothered with for now (and would be pretty speculative trying to apply them to the world of 2024), but tax brackets like

  • tax free threshold of $20,000
  • marginal rate of 20% up to $45,000
  • marginal rate of 32% up to $200,000
  • marginal rate of 47% above that
  • drop Medicare levy, low income tax offset, etc

would be very close to the post-2024 plan, if anyone could manage the politics of not special-casing the Medicare levy or the low-income offset (a rough sketch of the bracket arithmetic follows).
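As a sanity check on schedules like that, here’s a minimal Python sketch of how tax under a set of brackets gets computed; the brackets are just the ones listed above, with no offsets or levies included.

    # (threshold, marginal rate above that threshold); income below $20,000 is tax free
    BRACKETS = [(20_000, 0.20), (45_000, 0.32), (200_000, 0.47)]

    def tax(income, brackets=BRACKETS):
        """Apply each marginal rate to the slice of income between its threshold
        and the next one (the top rate applies without an upper limit)."""
        total = 0.0
        for i, (lo, rate) in enumerate(brackets):
            hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
            total += rate * max(0, min(income, hi) - lo)
        return total

    print(tax(60_000))   # 0.20*25,000 + 0.32*15,000 = 9,800.0
    print(tax(250_000))  # 5,000 + 49,600 + 23,500 = 78,100.0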

In the same post, I also thought about an unconditional $350 per fortnight payment as an optional alternative to the tax free threshold — so you get $350 a fortnight (tax free) direct into your bank account, but pay 35% from the first dollar you earn (other than the payment itself) all the way to $80k. That seemed like a fairly plausible way to start on a UBI to me — if you’re earning more than $25k per year, it doesn’t affect your total tax bill at all, but it’s a quarter of the minimum wage and about half the Newstart allowance, so it’s not trivial, and it doesn’t require any additional paperwork or convincing Centrelink you’re not a bludger. If you could afford to raise the tax free threshold to $30,000 and just have a 32% rate from there to $200,000 (which would mean everyone earning over $45,000 pays the same tax, while everyone earning less than that pays less tax), you could have a UBI of up to $370/fortnight, without any impact on anyone earning more than $30,000 a year, or any disincentive to work for anyone earning less than that. That still means fitting up to an extra $10,000 per year for all the people who don’t earn more than $30,000 a year into the budget, which still isn’t easy. Maybe an easy way to start would be to make it so you can only opt in if you’ve filed a tax return for the past three years and are 21 or over, which would exclude a lot of the people who’d otherwise be getting large payouts. Interactions with Newstart and the various pensions would also need a bunch of work.
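Here’s a minimal sketch of how that opt-in payment compares with the $30,000 threshold version, using the figures from the paragraph above (32% flat rate, $370 a fortnight, and ignoring the 47% top bracket for simplicity):

    FORTNIGHTS = 26
    UBI = 370     # dollars per fortnight, opt-in replacement for the tax free threshold
    RATE = 0.32

    def tax_with_threshold(income, threshold=30_000):
        # 32% above a $30,000 tax free threshold
        return RATE * max(0, income - threshold)

    def tax_with_ubi(income):
        # 32% from the first dollar, minus the fortnightly payments
        return RATE * income - UBI * FORTNIGHTS

    for income in (0, 20_000, 30_000, 45_000):
        print(income, tax_with_threshold(income), round(tax_with_ubi(income)))
    # 0      0.0    -9620   (a $9,620/year payment if you earn nothing)
    # 20000  0.0    -3220
    # 30000  0.0      -20   (break-even at roughly $30k)
    # 45000  4800.0  4780

Above roughly $30k the two schemes are essentially identical; below it, the payment is pure gain, which is where the extra budget cost comes from.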

I wish there was a political party that had a policy like that. But the ALP and Greens seem to be against fewer brackets on the general principle that anything that’s good for the rich is bad for Australia (and the Greens think a good starting point for a UBI is $18,200 per year, or even better would be $23,000 per year funded by a top tax bracket of 78% which is just absurd), while the LDP wants a flat 20% tax with a $40,000 tax free threshold and fewer transfer payments rather than more, and everyone else tends to want to only give welfare payments to people who prove they need it, rather than a universal scheme, again on principle, despite that making it harder for welfare recipients to work. The Libs come the closest, but their vision still barely gets one and a half of the four income tax recommendations from the Henry report implemented one and a half decades after the report came out. Which is better than nothing, or going in the wrong direction, but it’s hardly very inspiring.

Myths and disinformation

As Mike Burgess, Director-General of the Australian Signals Directorate — one of the roles that is a direct beneficiary of the Assistance and Access bill — points out, “there has been considerable inaccurate commentary on the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018”. His attempt to calm the waters follows the standard template of declaring everything opponents say to be based on myths; I guess that’s the “it’s all fake news!” defense. Let’s see how accurate that is.

#1: Your information is no longer safe

His first claim is that “if you are using a messaging app for a lawful purpose, the legislation does not affect you”. This isn’t true on two grounds.

The first is that the legislation doesn’t directly target users of messaging apps, but their providers. So if you write a messaging app, and only use it yourself for legal purposes, even in the best case you’re still affected, because the police can come and demand you make it so they can spy on other people who may be using it to discuss illegal activities. But the legislation isn’t restricted to “messaging apps”, and the term “messaging” never actually appears in the legislation. The law is actually much broader and covers any “designated communications provider”, which, amongst 14 other categories, includes anyone who “develops, supplies or updates software used, for use, or likely to be used in connection with (a) a listed carriage service; or (b) an electronic service that has one or more end-users in Australia”, then going on to note that “For the purposes of this Part, electronic service means … (b) a service that delivers material to persons having equipment appropriate for receiving that material, […]” and “For the purposes of subsection (1), service includes a website”. Run a website in Australia that someone else in Australia might look at? The law affects you.

But the second way it’s not true, is that you don’t have to be behaving unlawfully for the government to decide to snoop on your communications, they just have to think you are. That’s just normal policing, of course: you get a warrant to find out what’s going on, then if there really was something illegal, you present a case and get a guilty verdict. Well, that’s if you’re the police: the ASD is more about just getting information, not convicting anyone of an actual crime. As per their website, their mission is to “Inform” through “covertly accessing information not publicly available”, so while they’re also about “supporting military operations, law enforcement and criminal intelligence activity against cyber criminals” I guess it’s understandable they might not be on top of all the finer details of the process that you could pick up from an episode of Law&Order.

#2: Agencies get unfettered power

In any event, there are no protection measures in place against the nominated agencies misusing the new powers: there is no way for the website owners who are required to break the security of their websites (or messaging apps, or other software) to know the reason for the request, it is illegal to even tell others that there has been a request or to imply who the request came from, and even if it does become known, there are no statutory penalties for an agency issuing an unsupported notice.

One example of how this fails is the claim that “Nobody’s personal communications can be accessed under the Act without a warrant”. Perhaps if the website owner being asked to make such changes has good enough legal advice, that might be true; but nowhere in the act does it actually say you have to have a warrant before making these requests. Instead it says something much weaker: “A technical assistance notice or technical capability notice has no effect to the extent (if any) to which it would require a designated communications provider to do an act or thing for which a warrant or authorisation under any of the following laws is required: […]”. Which is actually almost the opposite: if you needed a warrant, the notice has no effect; but if you didn’t need a warrant, you have to comply with it.

#3: The security of the Internet is under threat

Mike writes “By their very nature, security and law enforcement investigations are highly targeted”. This is simply a lie: modern intelligence gathering often follows a “Big Data” approach, where as much data is collected as possible, and is then analysed after the fact. This was documented publicly by the Snowden leaks, and Australia in particular is known to participate in the “PRISM” program of dragnet surveillance at the Internet service provider level. That program has been previously addressed in parliament, with then Senator Xenophon asking if any emails might be excluded from the program, and then Foreign Minister Carr explaining that there were safeguards in place, but not answering the question asked.

Mike also points out the “systemic weakness” defense, but avoids mentioning any of the concerns about the ineffectiveness of that provision that were raised during the public consultation and senate review, or the fact that the proposals to address those flaws were abandoned in the rush to not look weak on national security over Christmas.

#4: Tech companies will be forced offshore

Companies are already considering whether to move offshore. They certainly aren’t “forced” to do so by the legislation, but they are strongly encouraged to do so by economic reality. This is simply the expected result of the destruction of trust this bill enabled; the PRISM revelations had a similar effect on compliant companies.

#5: The communications of Australians will be jeopardised

Mike claims the Act has built-in oversight mechanisms. As many of the responses to the public consultation noted, these oversight mechanisms are woefully limited. The Act gives IGIS no additional powers over any of the agencies (and IGIS only has power over the spy agencies, not the anti-corruption bodies or the police), though it does at least make it legal to inform them about notices. There is, per the IGIS website, no right to make a complaint to IGIS, nor any obligation on IGIS to investigate complaints about the intelligence and security services. The Commonwealth Ombudsman does not seem to be mentioned in the Act at all, so it does not seem like it would even be legal to inform them that you have received a notice under the Act, in order to complain about it being illegal.

The problem with this is that oversight of national security agencies is almost impossible: the only way we find out about activities like PRISM that do affect large swathes of Australian citizens, rather than proven threats to national security, is when a disastrous leak occurs; and even then, questions of what was actually going on are dismissed with platitudes that “there are procedures in place”. The public never has the opportunity to review those procedures in detail, of course.

#6: ASD will be able to spy on Australians

I think Mike is claiming this is a myth because, like, ASD cares about foreigners, why would they even want to spy on Australians? Which might be plausible, if we didn’t have a large migrant population, or ASD didn’t have alliances with foreign intelligence agencies that do want to spy on Australians. And maybe it’s true anyway; who knows? Though I notice he qualifies that as “everyday” Australians.

In any event, the question is whether they can, and the Act makes this easy: all they need is to convince one of the other interception agencies to issue the notice, and then communicate the results to them under the carve-out in Division 6 317ZF(3)(d)(ii) which allows the interception agency to pass on any info they obtain “in connection with the performance of functions, or the exercise of powers, by the Australian Signals Directorate”.

#7: The reputation of Australian tech companies will suffer

This is in fact a myth: the reputation of Australian tech companies is already suffering.

It is, at least, nice of Mike to have provided such a convenient list of headlines for why the Act is such a disaster, and why our “intelligence” agencies have been over-influenced by their own self-interest, rather than the national interest. The true danger of the act is not the usual grab-bag of “terrorists, pedophiles and other criminals” but rather law enforcement and security agencies who have to act with little or no public oversight gaining large powers over the remainder of the Commonwealth.

I still admire Mike for the ASD’s “long time listener, first time caller” tweet, but they’ve overreached here and come up with a true disaster of a policy, that should never have made it through Parliament.

Money Matters

I have a few things I need to write, but am still a bit too sick with the flu to put together something novel, so instead I’m going to counter-blog Rob Collins’ recent claim that Money doesn’t matter. Rob’s thoughts are similar to ones I’ve had before, but I think they’re ultimately badly mistaken.

There are three related, but very different, ways of thinking about money: as a store of value, as a medium of exchange, and as a unit of account. In normal times, dollars (or pounds or euros) work for all three things, so it’s easy to confuse them, but when you’re comparing different moneys some are better at one than another, and when a money starts failing, it will generally fail at each purpose at different rates.

Rob says “Money isn’t wealth” — but that’s wrong. In so far as money serves as a store of value, it is wealth. That’s why having a million dollars in your bank account makes you feel wealthy. The obvious failure mode for store of value is runaway inflation, and that quickly becomes a humanitarian disaster. Money can be one way to store value, but it isn’t the only way: you can store value by investing in artwork, buying property, building a company, or anything else that you expect to be able to sell at some later date. The main difference between those forms of investment versus money is that, ideally, monetary investments have low risk (perhaps the art you bought goes out of fashion and becomes worthless, or the company goes bankrupt, but your million dollars remains a million dollars), and low variance (you won’t make any huge profits, but you won’t make huge losses either). Unlike other assets, money also tends to be very fungible — if you earn $1000, you can spend $100 and have $900 left over; but if you have an artwork worth $1000 it’s a lot harder to sell one tenth of it.

Rob follows up by saying that money is “a thing you can exchange for other things”, which is true — money is a medium of exchange. Ideally it’s cheap and efficient, hard to counterfeit, and easy to verify. This is mostly a matter of technology: pretty gems are good at these things in some ways, coins and paper notes are good in others, cheques kind of work though they’re a bit too easy to counterfeit and a bit too hard to verify, and these days computer networks make credit card systems pretty effective. Ultimately a lot of modern systems have ended up as walled gardens though, and while they’re efficient, they aren’t cheap: whether you consider the 1% fees credit card companies charge, or the 2%-4% fees paypal charges, or the 30% fees from the Apple App Store or Google Play Stores, those are all a lot larger than how much you’d lose accepting a $50 note from someone directly. I have a lot of hope that Bitcoin’s Lightning Network will eventually have a huge impact here. Note that if money isn’t wealth (that is, if it doesn’t manage to be a good store of value even in the short term), then it’s not a good medium of exchange either: you can’t buy things with it because the people selling will have to immediately get rid of it or they’ll be making a loss; which is why currencies undergoing hyperinflation result in black markets where trade happens in stable currencies.

With modern technology and electronic derivatives, you could (in theory) probably avoid ever holding money. If you’re a potato farmer and someone wants to buy a potato from you, but you want to receive fertilizer for next season’s crop rather than paper money, the exchange could probably be fully automated by an online exchange so that you end up with an extra hundred grams of fertilizer in your next order, with all the details automatically worked out. If you did have such a system, you’d entirely avoid using money as a store of value (though you’d probably be using a credit account with your fertilizer supplier as a store of value), and you’d at least mostly avoid using money as a medium of exchange, but you’d probably still end up using money as a unit of account — that is, you’d still be listing the price of potatoes in dollars.

A widely accepted unit of account is pretty important — you need it in order to make contracts work, and it makes comparing different trades much easier. Compare the question “should I sell four apples for three oranges, or two apples for ten strawberries?” with “should I sell four apples for $5, or two apples for $3” and “should I buy three oranges for $5 or ten strawberries for $3?” While I suppose it’s theoretically possible to do finance and economics without a common unit of account, it would be pretty difficult.

This is a pretty key part and it’s where money matters a lot. If you have an employment contract saying you’ll be paid $5000 a month, then it’s pretty important what “$5000” is actually worth. If a few months down the track there’s a severe inflation event, and it’s suddenly worth significantly less, then you’ve just had a severe pay cut (eg, the Argentinian Peso dropped from 5c USD in April to 2.5c USD in September). If you’ve got a well managed currency, that usually means low but positive inflation, so you’ll instead get a 2%-5% pay cut every year — which is considered desirable by economists as it provides an automatic way to devote less resources to less valuable jobs, without managers having to deliberately fire people, or directly cut peoples’ pay. Of course, people tend to be as smart as economists, and many workers expect automatic pay rises in line with inflation anyway.

Rob’s next bit is basically summarising the concept of sticky prices: if there’s suddenly more money to go around, the economy goes weird because people aren’t able to fix prices to match the new reality quickly, causing shortages if there’s more money before there’s higher prices, or gluts (and probably a recession) if there’s less money and people can’t afford to buy all the stuff that’s around — this is what happened in the global financial crisis in 2008/9, though I don’t think there’s really a consensus on whether the blame for less money going around should be put on the government via the Federal Reserve, or the banks, or some other combination of actors.

To summarise so far: money does matter a lot. Having a common unit of account so you can give things meaningful prices is essential, and having a convenient store of value that you can use for large and small amounts, and being able to easily trade it for goods and services, is a really big deal. Screwing it up hurts people directly, and can be really massively harmful. You could probably use something different for medium of exchange than unit of account (eg, a lot of places accepting cryptocurrencies use the cryptocurrency as medium of exchange, but use regular dollars for both store of value and pricing); but without a store of value you don’t have a medium of exchange, and once you’ve got a unit of account, having it also work as a store of value is probably too convenient to skip.

But all that said, money is just a tool — generally money isn’t what anyone wants, people want the things they can get with money. Rob phrases that as “resources and productivity”, which is fine; I think the economics jargon would be “real GDP” — ie, the actual stuff that goes into GDP, as opposed to the dollar figure you put on it. Things start going wonky quickly though, in particular with the phrase “If, given the people currently in our country, and what they are being paid to do today, we have enough resources, and enough labour-and-productivity to …” — this starts mixing up nominal and real terms: people expect to be paid in dollars, but resources and labour are real units. If you’re talking about allocating real resources rather than dollars, you need to balance that against paying people in real resources rather than dollars as well, because that’s what they’re going to buy with their nominal dollars.

Why does that matter? Ultimately, because it’s very easy to get the maths wrong and not have your model of the national economy balanced: you allocate some resources here, pay some money there, then forget that the people you paid will use that money to reallocate some resources. If the error’s large enough and systemic enough, you’ll get runaway inflation and all the problems that go with it.

Rob has a specific example here: an unemployed (but skilled) builder, and a homeless family (who need a house built). Why not put the two together, magic up some money to prime the system and build a house? Voila the builder has a job, and the family has a home and everyone is presumably better off. But you can do the same thing without money: give the homeless family a loaded gun and introduce them to the builder: the builder has a job, and the family get a home, and with any luck the bullet doesn’t even get used! The key problem was that we didn’t inspect the magic sufficiently: the builder doesn’t want a job, or even money, he wants the rewards that the job and the money obtain. But where do those rewards come from? Maybe we think the family will contribute to the economy once they have a roof over their heads — if so, we could commit to that: forget the gun, the family goes to a bank, demonstrates they’ll be able to earn an income in future, and takes out a loan, then goes to the builder and pays for their house, and then they get jobs and pay off their mortgage. But if the house doesn’t let the family get jobs and pay for the home, the things the builder buys with his pay have to come from somewhere, and the only way that can happen is by making everyone else in the country a little bit poorer. Do that enough, and everyone who can will move to a different country that doesn’t have that problem.

Loans are a serious answer to the problem in general: if the family is going to be able to work and pay for the house eventually, the problem isn’t one of money, it’s one of risk: whoever currently owns the land, or the building supplies, or whatever, doesn’t want to take the risk they’ll never see anything for letting the house get built. But once you have someone with funds who is willing to take the risk, things can start happening without any change in government policies. Loaning directly to the family isn’t the only way; you could build a set of units on spec, and run a charity that finds disadvantaged families, sets them up, and maybe provides them with training or administrative support to help them get into the workforce, at which point they can pay you back and you can either turn a profit, or help the next disadvantaged family; or maybe both.

Rob then asks himself a bunch of questions, which I’ll answer too:

  • What about the foreign account deficit? (It doesn’t matter in the first place, unless perhaps you’re anti-immigrant, and don’t want foreigners buying property)
  • What about the fact that lots of land is already owned by someone? (There’s enough land in Australia outside of Sydney/Melbourne that this isn’t an issue; I don’t have any idea what it’s like in NZ, but see Tokyo for ways of housing people on very little land if it is a problem)
  • How do we fairly get the family the house they deserve? (They don’t deserve a house; if they want a nice house, they should work and save for it. If they’re going through hard times, and just need a roof over their heads, that’s easily and cheaply done, and doesn’t need a lot of space)
  • Won’t some people just ride on the coat-tails of others? (Yes, of course they will. That’s why you target the assistance to help them survive and get back on their feet, and if they want to get whatever it is they think they deserve, they can work for it, like everyone else)
  • Isn’t this going to require taking things other people have already earnt? (Generally, no: people almost always buy houses with loans, for instance, rather than being given them for free, or buying them outright; there might be a need to raise taxes, but not to fundamentally change them, though there might be other reasons why larger reform is worthwhile)

This brings us back to the claim Rob makes at the start of his blog: that the whole “government cannot pay for healthcare” thing is nonsense. It’s not nonsense: at the extreme, government can’t pay for enough healthcare for everyone to live to 120 while feeling like they’re 30. Even paying enough for everyone to have the best possible medical care isn’t feasible: even if NZ has a uniform health care system with 100% of its economy devoted to caring for the sick and disabled, there’s going to be a specialist facility somewhere overseas that does a better job. If there isn’t a uniform healthcare system (and there won’t be, even if only due to some doctors/nurses being individually more talented), there’ll also be better and worse places to go in NZ. The reason we have worrying fiscal crises in healthcare and aged support isn’t just a matter of money that can be changed with inflation, it’s that the real economic resources we’re expecting to have don’t align with the promises we’re already making. Those resources are usually expressed in dollar terms, but that’s because having a unit of account makes talking about these things easier: we don’t have to explicitly say “we’ll need x surgeons and y administrators and z MRI machines and w beds” but instead can just collect it all and say “we’ll need x billion dollars”, and leave out a whole mass of complexity, while still being reasonably accurate.

(Similar with “education” — there are limits to how well you can educate everyone, and there’s a trade off between how many resources you might want to put into educating people versus how many resources other people would prefer. In a democracy, that’s just something that’s going to get debated. As far as land goes, on the other hand, I don’t think there’s a fundamental limit to the government taking control over land it controls, though at least in Australia I believe that’s generally considered to be against the vibe of the constitution. If you want to fairly compensate land holders for taking their land, that goes back to budget negotiations and government priorities, and doesn’t seem very interesting in the abstract)

Probably the worst part of Rob’s blog is this though: “We get 10% less things done. Big deal.” Getting 10% less things done is a disaster: for comparison, the Great Recession in the US had a GDP drop of less than half that, at -4.2% between 2007Q4 and 2009Q2, and the Great Depression was supposedly about -15% between 1929 and 1932. Also, saying “we’d want 90% of folk not working” is pretty much saying “90% of folk have nothing of value to contribute to anyone else”, because if they did, they could do that, be paid for it, and voila, they’re working. That simply doesn’t seem plausible to me, and I think things would get pretty ugly if it ended up that way despite its implausibility.

(Aside: for someone who’s against carbs, “potato farmer” as the go to example seems an interesting choice… )

Buying in and selling out

I figured “Someday we’ll find it: the Bitcoin connection; the coders, exchanges, and me” was too long for a title. Anyhoo, since very late February I’ve been gainfully employed in the cryptocurrency space, as a developer on Bitcoin Core at Xapo (it always sounds pretentious to shorten that to “bitcoin core developer” to me).

I mentioned this to Rusty, whose immediate response (after “Congratulations”) was “Xapo is weird”. I asked if he could name a Bitcoin company that’s not weird — turns out that’s still an open research problem. A lot of Bitcoin is my kind of weird: open source, individualism, maths, intense arguments, economics, political philosophies somewhere between techno-libertarianism and anarcho-capitalism (“ancap”, which shouldn’t be confused with the safety rating), and a general “we’re going to make the world a better place with more freedom and cleverer technology” vibe of the thing. Xapo in particular is also my kind of weird. For one, it’s founded by Argentinians who have experience with the downsides of inflation (currently sitting at 20% pa, down from 40% and up from 10%), even if that pales in comparison to Venezuela, the world’s current socialist basket case suffering from hyperinflation; and Xapo’s CEO makes what I think are pretty good points about Bitcoin improving global well-being by removing a lot of discretion from monetary policy — as opposed to doing blockchains to make finance more financey, or helping criminals and terrorists out, or just generally getting rich quick. Relatedly, Xapo (seems to me to be) much more of a global company than many cryptocurrency places, which often seem very Silicon Valley focussed (or perhaps NYC, or wherever their respective HQ is); it might be a bit self-indulgent, but I really like being surrounded by people with oddly different cultures, and at least my general impression of a lot of Silicon Valley style tech companies these days is more along the lines of “dysfunctional monoculture” than anything positive. Xapo’s tech choices also seem to be fairly good, or at least in line with my preferences (python! using bitcoin core! microservices!). Xapo is also one of pretty few companies that’s got a strong Bitcoin focus, rather than trying to support every crazy new cryptocurrency or subtoken out there: I tend to think Bitcoin’s the only cryptocurrency that really has good technical and economic fundamentals; so I like “Bitcoin maximalism” in principle, though I guess I’m hard pressed to argue it’s optimal at the business level.

For anyone who follows Bitcoin politics, Xapo might seem a strange choice — Xapo not long ago was on the losing side of the S2X conflict, and why team up with a loser instead of the winners? I don’t take that view for a couple of reasons: I didn’t ever really think doubling the blocksize (the 2X part) was a fundamentally bad idea (not least, because segwit (the S part) already does that and more under some circumstances), but rather the problem was the implementation plan of doing it in just a few months, against the advice of all the most knowledgeable developers, and having an absolutely terrible response when problems with the implementation were found. But although that was probably unavoidable considering the mandate to activate S2X within just a few months, I think the majority of the blame is rightly put on the developers doing the shoddy work, and the solution is for companies to work with developers who can say “no” convincingly, or, preferably, can say “yes, and this is how” long enough in advance that solving the problem well is actually possible. So working with any (or at least most) of the S2X companies just seems like being part of the solution to me. And in any event, I want to live in a world where different viewpoints are welcome and disagreement is okay, and finding out that you’re wrong just means you learned something new, not that you get punished and ostracised.

Likewise, you could argue that anyone who wants to really use Bitcoin should own their private keys, rather than use something like Xapo as a wallet or even a vault, and that working on Xapo is kind-of opposed to the “be your own bank” philosophy at the heart of Bitcoin. My belief is that there’s still a use for banks with Bitcoin: safely storing valuables is hard even when they’re protected by maths instead of (or as well as) locks or guns; so it still makes sense for many people to want to outsource the work of maintaining private keys, and unless you’re an IT professional, it’s probably more sensible to do that with a company that looks kind of like a bank (ie, a custodial wallet like Xapo) rather than one that looks like a software vendor (bitcoin core, electrum, etc) or a hardware vendor (ledger or trezor, eg). In that case, the key benefit that Bitcoin offers is protection from government monetary policy, and, hopefully, better/cheaper access to or storage of your wealth, which isn’t nothing, even if it’s not fully autonomous control over your wealth.

For the moment, there’s plenty of things to work on at Xapo: I’ve been delaying writing this until I could answer the obvious “when segwit?” question (“now!”), but there’s still more bits to do there, and obviously there are lots of neat things to do improving the app, and even more non-development things to do like dealing with other financial institutions, compliance concerns, and what not. Mostly that’s stuff I help with, but not my focus: instead, the things I’m lucky enough to get to work on are the ones that will make a difference in months/years to come, rather than the next few weeks, which gives me an excuse to keep up to date with things like lightning and Schnorr signatures and work on open source bitcoin stuff in general. It’s pretty fantastic. The biggest risk as I see it is I end up doing too much work on getting some awesome new feature or project prototyped for Xapo and end up having to maintain it, downgrading this from dream job to just a motherforking fantastic one. I mean, aside from the bigger risks like cryptocurrency turns out to be a fad, or we all die from nuclear annihilation or whatever.

I don’t really think disclosure posts are particularly necessary — it’s better to assume everyone has undisclosed interests and biases and judge what they say and do on its own merits. But in the event they are a good idea: financially, I’ve got as yet unvested stock options in Xapo which I plan on exercising and hope will be worth something someday, and some Bitcoin which I’m holding onto and hope will still be worth something some day. I expect those to be highly correlated, so anything good for one will be good for the other. Technically, I think Bitcoin is fascinating, and I’ve put a lot of work into understanding it: I’ve looked through the code, I’ve talked with a bunch of the developers, I’ve looked at a bunch of the crypto, and I’ve even done a graduate diploma in economics over the last couple of years to have some confidence in my ability to judge the economics of it (though to be fair, that wasn’t the reason I had for enrolling initially), and I think it all makes pretty good sense. I can’t say the same about other cryptocurrencies: eg, Litecoin’s essentially the same software, but the economics of having a “digital silver” to Bitcoin’s “digital gold” doesn’t seem to make a lot of sense to me, and while Ethereum aims at a bunch of interesting problems and gets the attention it deserves as a result, I’m a long way from convinced it’s got the fundamentals right, and a lot of other cryptocurrency things seem to essentially be scams. Oh, perhaps I should also disclose that I don’t have access to private keys for $10 billion worth of Bitcoin; I’m happily on the open source technology side of things, not on the access to money side.

Of course, my opinions on any of that might change, and my financial interests might change to reflect my changed opinions. I don’t expect to update this blog post, and may or may not post about any new opinions I might form. Which is to say that this isn’t financial advice, I’m not a financial advisor, and if I were, I’m certainly not your financial advisor. If you still want financial advice on crypto, I think Wences’s is reasonable: take 1% of what you’re investing, stick it in Bitcoin, and ignore it for a decade. If Bitcoin goes crazy, great, you’ve doubled your money and can brag about getting in before Bitcoin went up two orders of magnitude; if it goes terrible, you’ve lost next to nothing.

One interesting note: the press is generally reporting Bitcoin as doing terribly this year, maintaining a value of around $7000-$9000 USD after hitting highs of up to $19000 USD mid December. That’s not fake news, but it’s a pretty short term view: for comparison, Wences’s advice linked just above from less than 12 months ago (when the price was about $2500 USD) says “I have seen a number of friends buy at “expensive” prices (say, $300+ per bitcoin)” — but that level of “expensive” is still 20 or 30 times cheaper than today. As a result, in spite of the “bad” news, I think every cryptocurrency company that’s been around for more than a few months is feeling pretty positive at the moment, and most of them are hiring, including Xapo. So if you want to work with me on Xapo’s backend team we’re looking for Python devs. But like every Bitcoin company, expect it to be a bit weird.

Bitcoin: ASICBoost – Plausible or not?

So the first question: is ASICBoost use plausible in the real world?

There are plenty of claims that it’s not:

  • “Much conspiracy around today. I don’t believe SegWit non-activation has anything to do with AsicBoost!” – Timo Hanke, one of the patent applicants, on twitter
  • “there’s absolutely nothing but baseless accusations flying around” – Emin Gun Sirer’s take, linked from the Bitmain statement
  • “no company would ever produce a chip that would have a switch in to hide that it’s actually an ASICboost chip.” – Sam Cole, formerly of KNCMiner, which went bankrupt in 2016 due to being unable to compete with Bitmain
  • “I believe their claim about not activating ASICBoost. It is very small money for them.” – Guy Corem of Spondoolies, who independently discovered ASICBoost
  • “No one is even using Asicboost.” – Roger Ver (/u/memorydealers) on reddit

A lot of these claims don’t actually match reality though: ASICBoost is implemented in Bitmain miners sold to the public, and since it defaults to off, a switch to hide it is not just possible but already shipping, contradicting Sam Cole’s take. There’s plenty of circumstantial evidence of ASICBoost-related transaction structuring in blocks, contradicting the basis on which Emin Gun Sirer dismisses the claims. The 15%-30% improvement claims that Guy Corem and Sam Cole cite are certainly large enough to be worth looking into — and Bitmain confirms having done so on testnet. Even Guy Corem’s claim that the savings only amount to $2,000,000 per year rather than $100,000,000 seems like a reason to expect ASICBoost to be in use, rather than evidence that the gain is so little that you wouldn’t bother.

If ASICBoost weren’t in use on mainnet it would probably be relatively straightforward to prove that: Bitmain could publish the benchmark results they got when testing on testnet, explain why that proved not to be worth doing on mainnet, and provide instructions for their customers on how to reproduce their results, for instance. Or Bitmain and others could support efforts to block ASICBoost from being used on mainnet, to ensure no one else uses it, for the greater good of the network — if, as they claim, they’re already not using it, this would come at no cost to them.

To me, much of the rhetoric that’s being passed around seems to be a much better match for what you would expect if ASICBoost were in use, than if it was not. In detail:

  • If ASICBoost were in use, and no one had any reason to hide it being used, then people would admit to using it, and would do so by using bits in the block version.
  • If ASICBoost were in use, but people had strong reasons to hide that fact, then people would claim not to be using it for a variety of reasons, but those explanations would not stand up to more than casual analysis.
  • If ASICBoost were not in use, and it was fairly easy to see there is no benefit to it, then people would be happy to share their reasoning for not using it in detail, and this reasoning would be able to be confirmed independently.
  • If ASICBoost were not in use, but the reasons why it is not useful require significant research efforts, then keeping the detailed reasoning private may act as a competitive advantage.

The first scenario can be easily verified, and does not match reality. Likewise the third scenario does not (at least in my opinion) match reality; as noted above, many of the explanations presented are superficial at best, contradict each other, or simply fall apart on even a cursory analysis. Unfortunately that rules out assuming good faith — either people are lying about whether they’re using ASICBoost, or they’re just dissembling about why they’re not using it. Working out which of those is most likely requires coming to our own conclusion on whether ASICBoost makes sense.

I think Jimmy Song had some good posts on that topic. His first, on Bitmain’s ASICBoost claims, finds some plausible examples of ASICBoost testing on testnet; however this was corrected in the comments as having been performed by Timo Hanke, rather than Bitmain. Having a look at other blocks’ version fields on testnet seems to indicate that there hasn’t been much other fiddling of version fields, so presumably whatever testing of ASICBoost was done by Bitmain, fiddling with the version field was not used; but that in turn implies that Bitmain must have been testing covert ASICBoost on testnet, assuming their claim to have tested it on testnet is true in the first place (they could quite reasonably have used a private testnet instead). Two later posts, on profitability and ASICBoost, and on Bitmain’s profitability in particular, go into more detail, mostly supporting Guy Corem’s analysis mentioned above. Perhaps interestingly, Jimmy Song also made a proposal to the bitcoin-dev list shortly after Greg’s original post revealing ASICBoost and prior to these posts; that proposal would have endorsed use of ASICBoost on mainnet, making it cheaper and compatible with segwit, but would also have made use of ASICBoost readily apparent to both other miners and patent holders.

It seems to me there are three different ways to look at the maths here, and because this is an economics question, each of them gives a different result (a short sketch of the third calculation follows the list):

  • Greg’s maths splits miners into two groups each with 50% of hashpower. One group, which is unable to use ASICBoost is assumed to be operating at almost zero profit, so their costs to mine bitcoins are only barely below the revenue they get from selling the bitcoin they mine. Using this assumption, the costs of running mining equipment are calculated by taking the number of bitcoin mined per year (365*24*6*12.5=657k), multiplying that by the price at the time ($1100), and halving the costs because each group only mines half the chain. This gives a cost of mining for the non-ASICBoost group of $361M per year. The other group, which uses ASICBoost, then gains a 30% advantage in costs, so only pays 70%, or $252M, a comparative saving of approximately $100M per annum. This saving is directly proportional to hashrate and ASICBoost advantage, so using Guy Corem’s figures of 13.2% hashrate and 15% advantage, this reduces from $95M to $66M, saving about $29M per annum.
  • Guy Corem’s maths estimates Bitmain’s figures directly: looking at the AntPool hashpower share, he estimates 500PH/s in hashpower (or 13.2%); he uses the specs of the AntMiner S9 to determine power usage (0.1 J/GH); he looks at electricity prices in China and estimates $0.03 per kWh; and he estimates the ASICBoost advantage to be 15%. This gives a total cost of 500M GH/s * 0.1 J/GH / 1000 W/kW * $0.03 per kWh * 24 * 365 which is $13.14 M per annum, so a 15% saving is just under $2M per annum. If you assume that the hashpower was 50% and ASICBoost gave a 30% advantage instead, this equates to about 1900 PH/s, and gives a benefit of just under $15M per annum. In order to get the $100M figure to match Greg’s result, you would also need to increase electricity costs by a factor of six, from 3c per kWH to 20c per kWH.
  • The approach I prefer is to compare what your hashpower would be keeping costs constant and work out the difference in revenue: for example, if you’re spending $13M per annum in electricity, what is your profit with ASICBoost versus without (assuming that the difficulty retargets appropriately, but no one else changes their mining behaviour). Following this line of thought, if you have 500PH/s with ASICBoost giving you a 30% boost, then without ASICBoost, you have 384 PH/s (500/1.3). If that was 13.2% of hashpower, then the remaining 86.8% of hashpower is 3288 PH/s, so when you stop using ASICBoost and a retarget occurs, total hashpower is now 3672 PH/s (384+3288), and your percentage is now 10.5%. Because mining revenue is simply proportional to hashpower, this amounts to a loss of 2.7% of the total bitcoin reward, or just under $20M per year. If you match Greg’s assumptions (50% hashpower, 30% benefit) that leads to an estimate of $47M per annum; if you match Guy Corem’s assumptions (13.2% hashpower, 15% benefit) it leads to an estimate of just under $11M per annum.
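To make that last calculation explicit, here’s a minimal Python sketch of the revenue-share approach, using the April figures of roughly 657,000 BTC mined per year at $1100 (the helper name is just for illustration):

    ANNUAL_BTC = 365 * 24 * 6 * 12.5   # ~657,000 BTC mined per year
    PRICE = 1100                       # USD per BTC, April figure

    def annual_revenue_loss(share, advantage):
        """Revenue lost if a miner with `share` of hashpower stops using ASICBoost,
        assuming difficulty retargets and no one else changes their behaviour."""
        unboosted = share / (1 + advantage)    # effective hashpower without the boost
        others = 1 - share
        new_share = unboosted / (unboosted + others)
        return (share - new_share) * ANNUAL_BTC * PRICE

    print(annual_revenue_loss(0.132, 0.30))  # ~$20M (the 500 PH/s example above)
    print(annual_revenue_loss(0.132, 0.15))  # ~$11M (Guy Corem's assumptions)
    print(annual_revenue_loss(0.50, 0.30))   # ~$47M (Greg's assumptions)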

So like I said, that’s three different answers in each of two scenarios: Guy’s low end assumption of 13.2% hashpower and a 15% advantage to ASICBoost gives figures of $29M/$2M/$11M; while Greg’s high end assumptions of 50% hashpower and 30% advantage give figures of $100M/$15M/$47M. The differences in assumptions there is obviously pretty important.

I don’t find the assumptions behind Greg’s maths realistic: in essence, it assumes that mining is so competitive that it is barely profitable even in the short term. However, if that were the case, then nobody would be able to invest in new mining hardware, because they would not recoup their investment. In addition, even if at some point mining were not profitable, increases in the price of bitcoin would change that, and the price of bitcoin has been increasing over recent months. Beyond that, it also assumes electricity prices do not vary between miners — if only the marginal miner is not profitable, it may be that some miners have lower costs and therefore are profitable; and indeed this is likely the case, because electricity prices vary over time due to both seasonal and economic factors. The method Greg uses is useful for establishing an upper limit, however: the only way ASICBoost could offer more savings than Greg’s estimate would be if every block mined produced less revenue than it cost in electricity, and miners were making a loss on every block. (This doesn’t mean $100M is an upper limit however — that estimate was current in April, but the price of bitcoin has more than doubled since then, so the current upper bound via Greg’s maths would be about $236M per year)

A downside to Guy’s method from the point of view of outside analysis is that it requires more information: you need to know the efficiency of the miners being used and the cost of electricity, and any error in those estimates will be reflected in your final figure. In particular, the cost of electricity needs to be a “whole lifecycle” cost — if it costs 3c/kWh to supply electricity, but you also need to spend an additional 5c/kWh in cooling in order to keep your data-centre operating, then you need to use a figure of 8c/kWh to get useful results. This likely provides a good lower bound estimate however: using ASICBoost will save you energy, and if you forget to account for cooling or some other important factor, then your estimate will be too low; but that will still serve as a loose lower bound. This estimate also changes over time however; while it doesn’t depend on price, it does depend on deployed hashpower — since total hashrate has risen from around 3700 PH/s in April to around 6200 PH/s today, if Bitmain’s hashrate has risen proportionally, it has gone from 500 PH/s to 837 PH/s, and an ASICBoost advantage of 15% means power cost savings have gone from $2M to $3.3M per year; or if Bitmain has instead maintained control of 50% of hashrate at 30% advantage, the savings have gone from $15M to $25M per year.

The key difference between my method and both Greg’s and Guy’s is that they implicitly assume that consuming more electricity is viable, and costs simply increase proportionally; whereas my method assumes that this is not viable, and instead that sufficient mining hardware has been deployed that power consumption is already constrained by some other factor. This might be due to reaching the limit of what the power company can supply, or the rating of the wiring in the data centre, or it might be due to the cooling capacity, or fire risk, or some other factor. For an operation spanning multiple data centres this may be the case for some locations but not others — older data centres may be maxed out, while newer data centres are still being populated and may have excess capacity, for example. If setting up new data centres is not too difficult, it might also be true in the short term, but not true in the longer term — that is, having each miner use more power due to disabling ASICBoost might require shutting some miners down initially, but they may be able to be shifted to other sites over the course of a few weeks or months, and restarted there, though this would require taking into account additional hosting costs beyond electricity and cooling. As such, I think this is a fairly reasonable way to produce a plausible estimate, and it’s the one I’ll be using. Note that it depends on the bitcoin price, so the estimates this method produces have also risen since April, going from $11M to $24M per annum (13.2% hash, 15% advantage) or from $47M to $103M (50% hash, 30% advantage).

The way ASICBoost works is by allowing you to save a few steps: normally when trying to generate a proof of work, you have to do essentially six steps:

  1. A = Expand( Chunk1 )
  2. B = Compress( A, 0 )
  3. C = Expand( Chunk2 )
  4. D = Compress( C, B )
  5. E = Expand( D )
  6. F = Compress( E )

The expected process is to do steps (1,2) once, then do steps (3,4,5,6) about four billion (or more) times, until you get a useful answer. You do this process in parallel across many different chips. ASICBoost changes this process by observing that step (3) is independent of steps (1,2) — so if you find a variety of Chunk1s (call them Chunk1-A, Chunk1-B, Chunk1-C and Chunk1-D) that are each compatible with a common Chunk2, you can share the work of step (3) between them. In that case, you do steps (1,2) four times, once for each different Chunk1, then do step (3) four billion (or more) times, and do steps (4,5,6) 16 billion (or more) times, to get four times the work, while saving 12 billion (or more) iterations of step (3). Depending on the number of Chunk1’s you set yourself up to find, and the relative weight of the Expand versus Compress steps, the saving comes to (n-1)/n / 2 / (1+c/e), where n is the number of different Chunk1’s you have and c/e is the cost of a Compress step relative to an Expand step. If you take the weight of Expand and Compress steps as about equal, it simplifies to 25%*(n-1)/n, and with n=4, this is 18.75%. As such, an ASICBoost advantage of about 20% seems reasonably practical to me. At 50% hash and 20% advantage, my estimates for ASICBoost’s value are $33M in April, and $72M today.
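As a quick sketch of that formula, taking Expand and Compress as equal cost by default:

    def asicboost_saving(n, c_over_e=1.0):
        """Fraction of per-nonce work saved with n precomputed Chunk1 midstates,
        ie (n-1)/n / 2 / (1 + c/e)."""
        return (n - 1) / n / 2 / (1 + c_over_e)

    for n in (2, 4, 8):
        print(n, f"{asicboost_saving(n):.2%}")
    # 2 12.50%
    # 4 18.75%
    # 8 21.88%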

So as to the question of whether you’d use ASICBoost, I think the answer is a clear yes: even the lower end estimate has risen from $2M to $3.3M per year, and since Bitmain have acknowledged that their AntMiners already support ASICBoost in hardware, the only additional cost is finding collisions, which may not be completely trivial, but is not difficult and is easily automated.

If the benefit is only in this range, however, this does not provide a plausible explanation for opposing segwit: having the Bitcoin industry come to a consensus about how to move forward would likely increase the bitcoin price substantially, definitely increasing Bitmain’s mining revenue — even a 2% increase in price would cover their additional costs. However, as above, I believe this is only a lower bound, and a more reasonable estimate is on the order of $11M-$47M as of April or $24M-$103M as of today. This is a much more serious range, and would require an 11%-25% increase in price to not be an outright loss; and a far more attractive proposition would be to find a compromise position that both allows the industry to move forward (increasing the price) and allows ASICBoost to remain operational (maintaining the cost savings / revenue boost).

 

It’s possible to take a different approach to analysing the cost-effectiveness of mining given how much you need to pay in electricity costs. If you have access to a lot of power at a flat rate, can deal with other hosting issues, can expand (or reduce) your mining infrastructure substantially, and have some degree of influence in how much hashpower other miners can deploy, then you can derive a formula for what proportion of hashpower is most profitable for you to control.

In particular, if your costs are determined by an electricity (and cooling, etc) price, E, in dollars per kWh and performance, r, in Joules per gigahash, then given your hashrate, h in terahash/second, your power usage in watts is (h*1e3*r), and you run this for 600 seconds on average between each block (h*r*6e5 Ws), which you divide by 3.6M to convert to kWh (h*r/6), then multiply by your electricity cost to get a dollar figure (h*r*E/6). Your revenue depends on the hashrate of everyone else, which we’ll call g, and on average you receive (p*R*h/(h+g)) every 600 seconds, where p is the price of Bitcoin in dollars and R is the reward (subsidy and fees) you receive from a block. Your profit is just the difference, namely h*(p*R/(h+g) – r*E/6). Assuming you’re able to manufacture and deploy hashrate relatively easily, at least in comparison to everyone else, you can optimise your profit by varying h while the other variables (bitcoin price p, block reward R, miner performance r, electricity cost E, and external hashpower g) remain constant (ie, set the derivative of that formula with respect to h to zero and simplify), which gives a result of 6gpR/Er = (g+h)^2.

This is solvable for h (square root both sides and subtract g), but if we assume Bitmain is clever and well funded enough to have already essentially optimised their profits, we can get a better sense of what this means. Since g+h is just the total bitcoin hashrate, if we call that t, and divide both sides by t, we get 6gpR/Ert = t, or g/t = (Ert)/(6pR), which tells us what proportion of hashrate the rest of the network can have (g/t) if Bitmain has optimised its profits; or, alternatively, we can work out h/t = 1-g/t = 1-(Ert)/(6pR), which tells us what proportion of hashrate Bitmain will have if it has optimised its profits. Plugging in E=$0.03 per kWh, r=0.1 J/GH, t=6e6 TH/s, p=$2400/BTC, R=12.5 BTC gives a figure of 0.9 – so given the current state of the network, and Guy Corem’s cost estimate, Bitmain would optimise its day to day profits by controlling 90% of mining hashrate. I’m not convinced $0.03 is an entirely reasonable figure, though — my inclination is to suspect something like $0.08 per kWh is more reasonable; but even so, that only reduces Bitmain’s optimal control to around 73%.
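Here’s a minimal sketch of that optimisation result, plugging in the same numbers:

    def optimal_share(E, r=0.1, t=6e6, p=2400, R=12.5):
        """Profit-maximising fraction of total hashrate for a miner who can expand
        freely: h/t = 1 - E*r*t/(6*p*R), from setting d(profit)/dh to zero.
        E in $/kWh, r in J/GH, t in TH/s, p in $/BTC, R in BTC per block."""
        return 1 - (E * r * t) / (6 * p * R)

    print(optimal_share(0.03))  # ~0.9  -> ~90% of hashrate at $0.03/kWh
    print(optimal_share(0.08))  # ~0.73 -> ~73% at $0.08/kWh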

Because of that incentive structure, if Bitmain’s current hashrate is lower than that amount, then lowering manufacturing costs for own-use miners by 15% (per Sam Cole’s estimates) and lowering ongoing costs by 15%-30% by using ASICBoost could have a compounding effect by making it easier to quickly expand. (It’s not clear to me that manufacturing a line of ASICBoost-only miners to reduce manufacturing costs by 15% necessarily makes sense. For one thing, this would come at a cost of not being able to mine with them while they are state of the art, then sell them on to customers once a more efficient model has been developed, which seems like it might be a good way to manage inventory. For another, it vastly increases the impact of ASICBoost not being available: rather than simply increasing electricity costs by 15%-30%, it would mean reducing output to 10%-25% of what it was, likely rendering the hardware immediately obsolete)

Using the same formula, it’s possible to work out a ratio of bitcoin price (p) to hashrate (t) that makes it suboptimal for a manufacturer to control a hashrate majority (at least just due to normal mining income): h/t < 0.5, 1-Ert/6pR < 0.5, so t > 3pR/Er. Plugging in p=2400, R=12.5, E=0.08, r=0.1, this gives a total hash rate of 11.25M TH/s, almost double the current hash rate. This hashrate target would obviously increase as the bitcoin price increases, halve if the block reward halves (if, eg, a fall in the inflation subsidy is not compensated by a corresponding increase in fee income), increase if the efficiency of mining hardware increases, and decrease if the cost of electricity increases. For a simpler formula, assuming the best hosting price is $0.08 per kWh, and while the Antminer S9’s efficiency at 0.1 J/GH is state of the art, and the block reward is 12.5 BTC, the global hashrate in TH/s should be at least around 5000 times the price (ie 3R/Er = 4687.5, near enough to 5000).
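A quick sketch of that threshold, under the same assumptions:

    def breakeven_hashrate(p, R=12.5, E=0.08, r=0.1):
        """Total network hashrate (TH/s) above which it's better for a manufacturer
        paying E per kWh to let others hold the majority: t > 3*p*R/(E*r)."""
        return 3 * p * R / (E * r)

    print(breakeven_hashrate(2400))         # ~11,250,000 TH/s (11.25M TH/s)
    print(breakeven_hashrate(2400) / 2400)  # ~4687.5 TH/s per dollar of price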

Note that this target also sets a limit on the range at which mining can be profitable: the optimal share 1-Ert/6pR falls linearly as E rises, so if it’s just barely better to allow other people to control >50% of miners when your cost of electricity is E, then for someone else whose cost of electricity is 2*E or more, optimal profit is when other people control 100% of hashrate, that is, you don’t mine at all. Thus if the best large scale hosting globally costs $0.08/kWh, then either mining is not profitable anywhere that hosting costs $0.16/kWh or more, or there’s strong centralisation pressure for a mining hardware manufacturer with access to the cheapest electricity to control more than 50% of hashrate. Likewise, if Bitmain really can do hosting at $0.03/kWh, then either they’re incentivised to try to control over 50% of hashpower, or mining is unprofitable at $0.06/kWh and above.

If Bitmain (or any mining ASIC manufacturer) is supplying the majority of new hashrate, they actually have a fairly straightforward way of achieving that goal: if they dedicate 50%-70% of each batch of ASICs they build to their own use, and sell the rest, with the retail price of the sold miners sufficient to cover the manufacturing cost of the entire batch, then cashflow will mostly take care of itself. At $1200 retail price and $500 manufacturing costs (per Jimmy Song’s numbers), that strategy would imply targeting control of up to about 58% of total hashpower. The above formula would imply that’s the profit-maximising target at the current total hashrate and price if your average hosting cost is about $0.13 per kWh. (Those figures obviously rely heavily on the accuracy of the estimated manufacturing costs of mining hardware; at $400 per unit and $1200 retail, that would be 67% of hashpower, and about $0.10 per kWh)
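And a quick sketch of that batch arithmetic (helper names are mine; the retail and manufacturing figures are Jimmy Song’s as quoted, and the implied hosting cost just plugs the resulting share back into the earlier formula):

# If the units sold at retail have to cover the manufacturing cost of the whole
# batch, the manufacturer can keep 1 - manufacturing/retail of the batch itself.

def self_use_share(retail, manufacturing):
    return 1 - manufacturing / retail

# Electricity price at which that share is profit-maximising, from
# share = 1 - E*r*t/(6*p*R), rearranged for E.
def implied_hosting_cost(share, r, t, p, R):
    return (1 - share) * 6 * p * R / (r * t)

for cost in (500, 400):
    s = self_use_share(1200, cost)
    E = implied_hosting_cost(s, r=0.1, t=6e6, p=2400, R=12.5)
    print(f"${cost} manufacturing: keep {s:.0%} of batch, hosting ~${E:.3f}/kWh")
# -> keep 58%, ~$0.125/kWh; keep 67%, ~$0.100/kWh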

Strategies like the above are also why this analysis doesn’t apply to miners who buy their hardware from a vendor rather than building their own: because every time they increase their own hash rate (h), the external hashrate (g) also increases as a direct result, it is not valid to assume that g is constant when optimising h, so the partial derivative and optimisation are in turn invalid, and the final result is not applicable.


Bitmain’s mining pool, AntPool, obviously doesn’t directly account for 58% or more of total hashpower, though currently they’re the pool with the most hashpower, at about 20%. As I understand it, Bitmain is also known to control at least BTC.com and ConnectBTC, which add another 7.6%. The other “Emergent Consensus” supporting pools (Bitcoin.com, BTC.top, ViaBTC) account for about 22% of hashpower, however, which brings the total to just under 50%, roughly the right ballpark — and an additional 8% or 9% could easily be pointed at other public pools like slush or f2pool. Whether the “emergent consensus” pools are aligned due to common ownership and contractual obligations or simply similar interests is debatable, though. ViaBTC is funded by Bitmain, and Canoe was built and sold by Bitmain, which means strong contractual ties might exist; however, Jihan Wu, Bitmain’s co-founder, has disclaimed equity ties to BTC.top. Bitcoin.com is owned by Roger Ver, but I haven’t come across anything implying a business relationship between Bitmain and Bitcoin.com beyond supplier and customer. However, John McAfee’s apparently forthcoming MGT mining pool is both partnered with Bitmain and advised by Roger Ver, so the existence of tighter ties is at least plausible.

It seems likely to me that Bitmain is actually behaving more altruistically than is economically rational according to the analysis above: while Bitcoin.com, BTC.top, ViaBTC and Canoe seem to have strong ties to Bitmain, and Bitmain likely has a high level of influence over them — whether due to contracts, business relationships, or simply loyalty and friendship — this nevertheless implies less control over the hashpower than direct ownership and management, and likely less profit. This could be due to a number of factors: perhaps Bitmain really is already sufficiently profitable from mining that they’re focusing on building their business in other ways; perhaps they feel the risks of centralised mining power are too high (and would ultimately be a risk to their long term profits) and are doing their best to ensure that mining power is decentralised while still trying to maximise their return to their investors; or perhaps the rate of expansion implied by this analysis requires more investment than they can cover from their cashflow, and additional hashpower is funded by new investors who are simply assigned ownership of a new mining pool — which may help Bitmain’s investors assure themselves they aren’t being duped by a pyramid scheme, and gives more of an appearance of decentralisation.

It therefore seems to me there could be a variety of ways in which Bitmain may have influence over a majority of hashpower:

  • Direct ownership and control, that is being obscured in order to avoid an economic backlash that might result from people realising over 50% of hashpower is controlled by one group
  • Contractual control despite independent ownership, such that customers of Bitmain are committed to follow Bitmain’s lead when signalling blocks in order to maintain access to their existing hardware, or to be able to purchase additional hardware (an account on reddit appearing to belong to the GBMiners pool has suggested this is the case)
  • Contractual control due to offering essential ongoing services, eg support for physical hosting, or some form of mining pool services — maintaining the infrastructure for covert ASICBoost may be technically complex enough that Bitmain’s customers cannot maintain it themselves, but Bitmain could relatively easily supply it as an ongoing service to their top customers.
  • Contractual influence via leasing arrangements rather than sale of hardware — if hardware is leased to customers, or financing is provided, Bitmain could retain some control of the hardware until the leasing or financing term is complete, despite not having ownership
  • Coordinated investment resulting in cartel-like behaviour — even if there is no contractual relationship where Bitmain controls some of its customers in some manner, it may be that forming a cartel of a few top miners allows those miners to increase profits; in that case rather than a single firm having control of over 50% of hashrate, a single cartel does. While this is technically different, it does not seem likely to be an improvement in practice. If such a cartel exists, its members will not have any reason to compete against each other until it has maximised its profits, with control of more than 70% of the hashrate.


So, conclusions:

  • ASICBoost is worth using if you are able to. Bitmain is able to.
  • Nothing I’ve seen suggests Bitmain is economically clueless; so since ASICBoost is worth doing, and Bitmain is able to use it on mainnet, Bitmain are using it on mainnet.
  • Independently of ASICBoost, Bitmain’s most profitable course of action seems to be to control somewhere in the range of 50%-80% of the global hashrate at current prices and overall level of mining.
  • The distribution of hashrate between mining pools aligned with Bitmain in various ways makes it plausible, though not certain, that this may already be the case in some form.
  • If all this hashrate is benefiting from ASICBoost, then my estimate is that the value of ASICBoost is currently about $72M per annum.
  • Avoiding dominant mining manufacturers tending towards supermajority control of hashrate requires either a high global hashrate or a relatively low price — the hashrate in TH/s should be about 5000 times the price in dollars.
  • The current price is about $2400 USD/BTC, so the corresponding hashrate to prevent centralisation at that price point is 12M TH/s. Conversely, the current hashrate is about 6M TH/s, so the maximum price that doesn’t cause untenable centralisation pressure is $1200 USD/BTC.

Bitcoin: ASICBoost and segwit2x – Background

I’ve been trying to make heads or tails of what the heck is going on in Bitcoin for a while now. I’m not sure I’ve actually made that much progress, but I’ve at least got some thoughts that seem coherent now.

First, this post is background for people playing along at home who aren’t familiar with the issues or jargon: Bitcoin is a currency based on an electronic ledger that essentially tracks how much Bitcoin exists, and how someone can be authorised to transfer it to someone else; that ledger is currently about 100GB in size, growing at a rate of about a gigabyte a week. The ledger is updated by miners, who compete by doing otherwise pointless work running cryptographic hashes (and in so doing obtain a “proof of work”), and in return receive a reward (denominated in bitcoin) made up of fees paid by people transacting and an inflation subsidy. Different miners are competing in an essentially zero-sum game, because fees and inflation are essentially a fixed amount that is (roughly) divided up amongst miners according to how much work they do — so while you get more reward for doing more work, it comes at a cost of other miners receiving less reward.

Because the ledger only grows by (about) a gigabyte each week (or a megabyte per block, which is roughly every ten minutes), there is a limit on how many transactions can be included each week (ie, supply is limited), which both increases fees and limits adoption — so for quite a while now, people in the bitcoin ecosystem with a focus on growth have wanted to work out ways to increase the transaction rate. Initial proposals in mid 2015 suggested allowing miners to regularly determine the limit with no official upper bound (nominally “BIP100”, though never actually formally submitted as a proposal), or to increase by a factor of eight within six months, then double every two years after that, until reaching almost 8200 times the current size by 2036 (BIP101), or to increase at a rate of about 17% per annum (BIP103, suggested on the mailing list but never formally proposed). These proposals had two major risks: locking in a lot of growth that may turn out to be unnecessary or actively harmful, and requiring what is called a “hard fork”, which would render the existing bitcoin software unable to track the ledger after the change took effect, with the possible outcome that two ledgers would coexist, which would in turn cause a range of problems. To reduce the former risk, a minimal compromise proposal was made to “kick the can down the road” and just double the ledger growth rate, then figure out a more permanent solution later (BIP102) (or to double it three times — to 2MB, 4MB then 8MB — over four years, per Adam Back). A few months later, some of the devs figured out a way to more or less achieve this that also doesn’t require a hard fork, and comes with a host of other benefits, and proposed an update called “segregated witness” at the December 2015 Scaling Bitcoin conference.
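To give a rough sense of how those growth schedules compare, here’s a small sketch (assumptions are mine: BIP101 modelled as 8MB from 2016 doubling every two years through 2036, BIP103 approximated as a flat 17% per annum from the current 1MB limit):

# Rough comparison of the proposed block size schedules described above
# (assumptions: BIP101 starts at 8MB in 2016 and doubles every two years
# until 2036; BIP103 is approximated as ~17% per year from a 1MB base).
bip101 = {year: 8 * 2 ** ((year - 2016) // 2) for year in range(2016, 2037, 2)}
bip103 = {year: round(1.17 ** (year - 2016), 1) for year in range(2016, 2037, 2)}

print(bip101[2036])  # 8192 MB -- roughly 8200x the current 1MB limit
print(bip103[2036])  # ~23 MB  -- a much more gradual increase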

And shortly after that things went completely off the rails, and have stayed that way since. Ultimately there seem to be two camps: one group is happy to deploy segregated witness, and is eager to make further improvements to Bitcoin based on that (this is my take on events), while the other group is not, perhaps due to some combination of being opposed to the segregated witness changes directly, wanting a more direct upgrade immediately, being afraid deploying segregated witness will block other changes, or wanting to take control of the bitcoin codebase/roadmap from the current developers (take this with a grain of salt: these aren’t opinions I share or even find particularly reasonable, so I can’t do them justice when describing them; see eg ViaBTC’s post for that side of the argument made directly)

Most recently, and presumably on the basis that the opposed group are mostly worried that deploying segregated witness will prevent or significantly delay a more direct increase in capacity, a bitcoin venture capitalist, Barry Silbert, organised an agreement amongst a number of companies, including many miners, to both activate segregated witness within the next month, and to do a hard fork capacity increase by the end of the year. This is the “segwit2x” project, named because it takes segregated witness (“segwit”) and then additionally doubles its capacity increase (“2x”). This agreement is not supported by any of the existing dev team, and is being developed by Jeff Garzik (who was behind BIP100 and BIP102 mentioned above) in a forked codebase renamed “btc1”, so if successful, this may also satisfy members of the opposed group motivated by a desire to take control of the bitcoin codebase and roadmap, despite that not being an explicit part of the agreement itself.

To me, the arguments presented for opposing segwit don’t really seem plausible. As far as future development goes, a roadmap was put out in December 2015 and endorsed by many developers that explicitly included a hard fork for increased capacity (“moderate block size increase proposals (such as 2/4/8 …)”), among many other things, so the risk of no further progress happening seems contrary to the facts to me. The core bitcoin devs are extremely capable in my estimation, so replacing them seems a bad idea from the start; but even more than that, they take a notably hands off approach to dictating where Bitcoin will go in future — so, to my mind, a more sensible thing to try would be working with them to advance the bitcoin ecosystem in whatever direction you want, rather than trying to replace them outright. In that context, it seems particularly notable to me that in the eighteen months between the segregated witness proposal and the segwit2x agreement, there hasn’t been any serious attempt to propose a hard fork capacity increase that meets the core devs’ quality standards; for instance there has never been any code for BIP100, and of the various hard forking codebases put forward by advocates of the hard fork approach — Bitcoin XT, Bitcoin Classic, Bitcoin Unlimited, btc1, and Bitcoin ABC — none have been developed in a way that’s suitable for the changes to be reviewed and merged into core via a pull request in the normal fashion. Further, since one of the main criticisms of a hard fork is that deployment costs are higher when it is done in a short time frame (weeks or a few months versus a year or longer), that lack of engagement over the past 18 months followed by a desperate rush now seems particularly poor to me.

A different explanation for the opposition to segwit became public in April, however. ASICBoost is a patent-pending optimisation to the way Bitcoin miners do the work that entitles them to extend the ledger (for which they receive the rewards described earlier), and while there are a few ways of making use of ASICBoost, perhaps the most effective way turns out to be incompatible with segwit. There are three main alternatives to the covert, segwit-incompatible approach, all of which have serious downsides. The first, overt ASICBoost via modifying the block version, reveals that you’re using ASICBoost, which would either (a) encourage other miners to also use the optimisation, reducing your profits, (b) give the patent holder cause to charge you royalties or cause other problems (assuming the patent is eventually granted and deemed valid), or (c) encourage the bitcoin community at large to change the ecosystem rules so that the optimisation no longer works. The second, mining empty blocks via ASICBoost, means you don’t gain any fee income, reducing your revenue and hence profit. And the third, rolling the extranonce to find a collision rather than combining partial transaction trees, increases the preparation work by a factor of ten or so, which is probably enough to outweigh the savings from the optimisation in the first place.

If ASICBoost were being used by a significant number of miners, and segregated witness prevents its continued use in practice, then we suddenly have a very plausible explanation for much of the apparent madness: the loss of the optimisation could significantly increase some miners’ costs or reduce their revenue, reducing profit either way (a high end estimate of $100,000,000 per year was given in the original explanation), which would justify significant investment in blocking that change. Further, an honest explanation of the problem would not be feasible, because this would be just as bad as doing the optimisation overtly — it would increase competition, alert the potential patent owners, and might cause the optimisation to be deliberately disabled — all of which would also negatively affect profits. As a result, there would be substantial opposition to segwit, but the reasons presented in public for this opposition would be false, and it would not be surprising if the people presenting these reasons put only half-hearted effort into providing evidence — their purpose is simply to prevent or at least delay segwit, rather than to actually inform or build a new consensus. On this line of thinking, the emphasis on lack of communication from core devs, or the desire for a hard fork block size increase, isn’t the actual goal, so the lack of effort being put into resolving them over the past 18 months by the people complaining about them is no longer surprising.

With that background, I think there are two important questions remaining:

  1. Is it plausible that preventing ASICBoost would actually cost people millions in profit, or is that just an intriguing hypothetical that doesn’t turn out to have much to do with reality?
  2. If preserving ASICBoost is a plausible motivation, what will happen with segwit2x, given that by enabling segregated witness, it does nothing to preserve ASICBoost?

Well, stay tuned…