Bitcoin in 2021

I wrote a post at the start of last year thinking about my general priorities for Bitcoin and I’m still pretty happy with that approach — certainly “store of value” as a foundation feels like it’s held up!

I think over the past year we’ve seen a lot of people starting to hold a Bitcoin balance, and I expect that to continue — which is a win, but following last year’s logic it also means we’ll want to start paying more attention to the later parts of the funnel as well: if we (for instance) double the number of people holding Bitcoin, we also want to double the number of people doing self-custody, and double the number of people transacting over lightning; and ideally we’d want that in addition to whatever growth in self-custody and layer 2 transactions we’d already been aiming for if Bitcoin adoption had remained flat.

That said, I’m not sure I’m in a growth mindset for Bitcoin this year, rather than a consolidation one: consider the BTC price at the start of the past few years: 2016: ~$450, 2017: $1000, 2018: $13,000, 2019: $3700, 2020: $8000, 2021: $30,000. Has there been an 8x increase in security and robustness during 2016, 2017 and 2018 to match the 8x price increase from Jan 2016 to Jan 2019? Yeah, that’s probably fair. Has there been another 8x increase in security and robustness during 2019 and 2020 to match the 8x price increase there? Maybe. What if you’re thinking of a price target of $200,000 or $300,000 sometime soon — doesn’t that require yet another 8x increase in security and robustness? Where’s that going to come from? And those 8x factors are multiplicative: if you want something like $250k by December 2021, that’s not a “three-eights-are-24” times increase in robustness over six years (2016 to 2022), it’s an “eight-cubed-is-512” times increase in robustness! And your mileage may vary, but I don’t really think Bitcoin’s already 500x more robust than it was five years ago.
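For concreteness, here’s that back-of-the-envelope arithmetic in Python, using the rounded year-start prices quoted above and the hypothetical $250k target:

```python
# Year-start BTC prices as quoted above (approximate, USD).
price = {2016: 450, 2017: 1_000, 2018: 13_000, 2019: 3_700, 2020: 8_000, 2021: 30_000}

step1 = price[2019] / price[2016]   # ~8.2x over 2016, 2017 and 2018
step2 = price[2021] / price[2019]   # ~8.1x over 2019 and 2020
step3 = 250_000 / price[2021]       # ~8.3x more to reach a $250k target

print(f"{step1 * step2 * step3:.0f}x")  # ~556x, i.e. "eight-cubed-is-512" territory
```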

So as excited as I am about taproot and the possibilities that opens up (PTLCs and eventually eltoo on lightning, scriptless scripts and discreet log contracts, privacy preserving proof of reserves, cheap multisig — the list might not be infinite but at least seems computationally intractable), I’m now even more of the view than I was this time last year that it’s probably more important to work on things that reinforce the existing foundations than on neat new ideas to change them.

There are already a bunch of areas where Bitcoin’s approach to security and robustness has improved technically over the past few years: we’ve got more people doing reviews (eg, via the PR review club, or getting introduced to Bitcoin via the Chaincode Residency etc), we’ve got deeper and more diverse continuous integration testing (thanks both to more integrations being enabled via github, and travis becoming unreliable enough to force looking at other approaches), fuzz testing has improved a lot and become a bit more broadly used, and I think static analysis of the codebase has improved a bit. There have been a bunch of improvements in code standards (eg using safe pointers, locking annotations, spans instead of raw pointers) too, I think it’s fair to say. I haven’t done an analysis here, just going from gut feel and recollection.
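As a concrete illustration of what fuzz testing buys: Bitcoin Core’s real fuzz harnesses are C++ libFuzzer targets, but the shape of the idea transfers to a toy Python sketch easily enough. The sketch below uses atheris (a coverage-guided Python fuzzer) against a deliberately simplified stand-in for Bitcoin’s CompactSize integer decoding; the parser is invented for the example, and the point is that any exception other than the expected ValueError is a bug the fuzzer has surfaced:

```python
import sys
import atheris  # coverage-guided Python fuzzer (pip install atheris)

def parse_compactsize(data: bytes) -> int:
    """Toy version of Bitcoin's CompactSize integer decoding, for illustration only."""
    if not data:
        raise ValueError("empty input")
    first = data[0]
    if first < 0xfd:
        return first
    width = {0xfd: 2, 0xfe: 4, 0xff: 8}[first]
    if len(data) < 1 + width:
        raise ValueError("truncated encoding")
    return int.from_bytes(data[1:1 + width], "little")

def TestOneInput(data: bytes) -> None:
    # ValueError is the documented failure mode of the toy parser; anything
    # else escaping here (IndexError, etc.) is a bug worth reporting.
    try:
        parse_compactsize(data)
    except ValueError:
        pass

atheris.instrument_all()
atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
```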

With a focus on robustness, to me, the areas to prioritise in the short term are probably:

  1. Modularisation — eg, so that we can better leverage process separation to reduce security impacts, and better use fuzz testing to catch bugs in edge cases. There’s already work to split the gui and wallet into separate processes, though while that’s merged, it’s not part of the standard build yet. Having the p2p-network-facing layer also be a separate process might be another good win (there’s a toy sketch of this idea just after the list). While it’s a tempting goal, I think libconsensus is still a ways off — p2p, mempool management, and validation rules are currently pretty tightly coupled — but there are steps we can take towards that goal that will be improvements on their own, I think.
  2. The P2P network — This is the obvious way to attack Bitcoin since by its nature everyone has access to it. There are multiple levels to this: passively monitoring the p2p network may allow you to violate users’ privacy expectations, while actively isolating users onto independent networks can break Bitcoin’s fundamental assumptions (you can’t extend the longest chain if you can’t communicate with any of the people who have the longest chain). There are also plenty of potential problems that someone could cause in between those extremes that could, eg, break assumptions that L2 systems like lightning make. Third-party (potentially centralised) alternatives as backups for the p2p network may also be valuable support here — things like Blockstream Satellite, or block relay over ham radio, or headers over DNS: those can mend splits in the p2p network that the p2p layer itself can’t automatically fix. And efficiency improvements like erlay or block-relay-only connections can allow a higher degree of connectivity, making attacks harder.
  3. CI, static analysis, reproducible builds — Over the past year, travis seems like it’s gone from having the occasional annoying problem to being pretty much unusable for open source projects. CI is an important part of both development and review; having it break makes both quite a lot harder. What we’ve got at this point seems pretty good, but it’s new and not really time-tested yet, so I’d guess a year of smoothing out the rough edges is probably needed. I think there’s other “CI”-ish stuff that could be improved, like more automated IBD testing (eg, I think bitcoinperf is about 3 months out of date). Static analysis serves a similar goal to tests in a different way; and while we’ve already got a lot of the low hanging fruit of this nature integrated into CI via linters and compiler options, I suspect there’s still some useful automation that could happen here. Finally, nailing down the last mile to ensure that people are running the software the devs are testing is always valuable, and I think the nixos work is showing some promise there.
  4. Third-party validation — We’ve had a few third-party monitoring tools arise lately — various sites monitoring feerate and mempool sizes, forkmonitor checking for stale blocks (and double-spends), or, at a stretch, optech’s reviews of wallet behaviour and segwit support. There’s probably a lot of room for more of this.
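Here’s the toy sketch promised in item 1. It’s not Bitcoin Core code (the real multiprocess work uses a proper IPC layer); it’s just the minimal shape of the idea in Python: an untrusted, network-facing process parses raw bytes, only structured results cross the boundary, and a parser bug is contained in the child rather than compromising the main process:

```python
import multiprocessing as mp

def net_process(conn):
    """Untrusted side: parse raw bytes, pass only structured results back."""
    while True:
        raw = conn.recv_bytes()
        if raw == b"quit":
            break
        try:
            msg_type, payload = raw.split(b":", 1)
            conn.send((msg_type.decode("ascii"), len(payload)))
        except (ValueError, UnicodeDecodeError):
            conn.send(("malformed", 0))  # bad input is contained to this process

if __name__ == "__main__":
    parent_end, child_end = mp.Pipe()
    proc = mp.Process(target=net_process, args=(child_end,))
    proc.start()
    parent_end.send_bytes(b"inv:deadbeef")
    print(parent_end.recv())  # ('inv', 8): the parent never touches the raw bytes
    parent_end.send_bytes(b"quit")
    proc.join()
```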

I’d love to list formal verification at the consensus layer as a priority, but I think there’s too much yak-shaving needed first: it would probably need all the refactoring to get to libconsensus first, then would likely need that separated into its own process, which you could only then start defining a formal spec for, which in turn would give you something you could start doing formal verification against. I suspect we’ll want to be at that point within another cycle or two though.

I’m not concerned about mining — eventually there might be a risk that the subsidy is too small and there’s not enough fee income, but that’s not going to happen while the price doubles faster than the pre-scheduled halvings. There’s certainly centralisation risks, whether in ASIC manufacture, in hardware ownership/control, or at the pool level, but my sense of things is that’s not getting worse, and is not the biggest immediate risk. Maybe I’m wrong; if so, hopefully the people who think so are working on preventing problems from arising, rather than taking advantage of them.

There are other levels of robustness and security beyond just “keep the network working”, if you consider the question of “how to prevent my coins from being lost/stolen?” more broadly. The phishing attacks and potential for physical attacks resulting from the Ledger leak are an easy example of a problem of this sort, but exchange hacks/failures in general, malware swapping addresses so your funds go to an attacker instead of the intended recipient, and lost access to keys are also pretty bad. I think descriptors, miniscript and taproot multisig probably provide for a good path forward to help prevent losing access to keys; and it’s possible that progress on BIP322 (signing messages against a Bitcoin address) may provide a path to avoiding address swapping attacks.
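For a sense of what the descriptor/multisig point looks like in practice, here’s a sketch with placeholder keys. The taproot variant uses tr()/multi_a syntax that was still being standardised at the time of writing, so treat both lines as illustrative rather than copy-pasteable:

```python
# 2-of-3 multisig as an output descriptor; xpubA/xpubB/xpubC stand in for real
# extended public keys held by three separate signers (hardware wallets, say).
multisig_desc = "wsh(sortedmulti(2,xpubA/0/*,xpubB/0/*,xpubC/0/*))"

# The same policy under taproot: a key-path spend for the everyday cooperative
# case, with the multisig as a script-path fallback. internal_key is another
# placeholder; the tr()/multi_a descriptor syntax post-dates this post.
taproot_desc = "tr(internal_key,multi_a(2,xpubA/0/*,xpubB/0/*,xpubC/0/*))"
```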

Technical solutions are, in some sense, all you can hope for if you’re doing self-custody; but where a bank/custodian is involved (good) regulation might be useful too: requirements to keep customer data protected or destroyed, third-party audits to ensure the best-practices procedures you’re claiming to follow are actually being followed, etc. If custodians store funds in taproot addresses, it may be feasible to do privacy preserving (zero-knowledge) proofs of solvency, eg, making it harder for fly-by-night folks to run ponzi schemes or otherwise steal their customers’ funds.

Obviously where possible these sorts of standards should be implemented via audited open source code rather than each company incurring extensive implementation costs separately. But one other thing to think about is whether regulations of this nature could be set up as industry standards (“we comply with the industry standard, competitor X doesn’t”) rather than necessarily coming from a government regulator — for one, it certainly seems questionable whether government regulators have the background to pick good best practices for cryptocurrency systems to follow. Though perhaps it would be better to have something oriented towards “consumer rights” than “industry” per se, to avoid it just being a vector for regulatory capture.

I think there’s been good progress on stabilising Bitcoin development — in 2015 through 2017 we were in a phase where people were seriously thinking of replacing Bitcoin’s developers — devs were opposing a quick blocksize increase, so the obvious solution was to replace them with people who weren’t opposed. If you think of Bitcoin as an experimental, payments-oriented tech startup, that’s perhaps not a bad idea; but if you think of it as a store of value it’s awful: you don’t get a reliable system by replacing experts because they think your plan is wrong-headed, and you don’t get a good store of value without a reliable system. But whatever grudges might show up now and then on twitter, that seems to be pretty thoroughly in the past, and there now seems to be much broader support for funding devs, and much better consensus on what development should happen (though perhaps only because people who disagree have moved to different projects, and new disagreements haven’t yet cropped up).

But while that might be near enough the 64x improvement to support today’s valuation, I think we probably need a lot more to be able to support continued growth in adoption.

Hopefully this is buried enough to not accidentally become a lede, but I’m particularly optimistic about an as yet unannounced approach that DCI has been exploring, which (if I’ve understood correctly) aims to provide long term funding for a moderate sized team of senior devs and researchers to focus on keeping Bitcoin stable and secure — that is, auditing code, developing tools to find and prevent bugs, and doing targeted research to help the white hats stay ahead in the security arms race. I’m not sure it will get off the ground or pass the test of time, and if it does, it will probably need to be replicated by other groups to avoid becoming worryingly centralising, but I think it’s a promising approach for supporting the next 8x improvement in security and robustness, and perhaps even some of the one after that.

I’ve also chatted briefly with Jeremy Rubin who has some interesting funding ideas for Judica — the idea being (again, if I haven’t misunderstood) to try to bridge the charitable/patronage model of a lot of funding of open source Bitcoin dev, with the angel funding approach that can generate more funds upfront by having a realistic possibility of ending up with a profitable business and thus a return on the initial funding down the road.

That seems much more blue-sky to me, but I think we’ll need to continue exploring out-there ideas in order to avoid centralisation by development-capture: that is, if we just expand on what we’re doing now, we may end up where only a few companies (or individuals) have their quarterly bottom line directly affected by development funding, and are thus shouldering the majority of the burden while the rest of the economy more-or-less freeloads off them; someone might then see an opportunity to exploit development control and decide to buy them all out. A mild example of this might be Red Hat’s purchase of CentOS (via an inverse-acquihire, I suppose you could call it), and CentOS’s recent strategy change that reduces its competition with Red Hat’s RHEL product.

(There are also a lot of interesting funding experiments in the DeFi/ethereum space in general, though so far I don’t think they feed back well into the “ongoing funding of robustness and security development work” goal I’m talking about here)

There are probably three “attacks” that I’m worried about at present, all related to the improvements above.

One is that the “modularisation” goal above implies a lot of code being moved around, with the aim of not really changing any behaviour. But because the code that’s being changed is complicated, it’s easy to change behaviour by accident, potentially introducing irritating bugs or even vulnerabilities. And because reviewers aren’t expecting to see behaviour changes, it can be hard to catch these problems: it’s perhaps a similar problem to semi-autonomous vehicles or security screening — most of the time everything is fine so it’s hard to ensure you maintain full attention to deal with the rare times when things aren’t fine. And while we have plenty of automated checks that catch wide classes of error, they’re still far from perfect. To me this seems like a serious avenue for both accidental bugs to slip through, and a risk area for deliberate vulnerabilities to be inserted by attackers willing to put in the time to establish themselves as Bitcoin contributors. But even with those risks, modularisation still seems a worthwhile goal, so the question is how best to minimise the risks. Unfortunately, beyond what we’re already doing, I don’t have good ideas how to do that. I’ve been trying to include “is this change really a benefit?” as a review question to limit churn, but it hasn’t felt very effective so far.

Another potential attack is against code review — it’s an important part of keeping Bitcoin correct and secure, and it’s one that doesn’t really scale that well. It doesn’t scale for a few reasons — a simple one is that a single person can only read so much code a day, but another factor is that any patch can have subtle impacts that only arise because of interactions with other code that’s not changing; being aware of all the potential subtle interactions in the codebase is very hard, and even if you’re aware of the potential impacts, it can take time to realise what they are. Having more changes is thus one problem, but dividing review amongst more people is also a problem: it lowers the chance that a patch with a subtle bug will be reviewed by someone able to realise that the bug even exists. Similarly, having development proceed quickly and efficiently is not always a win here: it reduces the time available to realise there’s a problem before the change is merged and people move on to thinking about the next thing. Modularisation helps here at least: it substantially reduces the chance of interactions with entirely different parts of the project, though of course it doesn’t eliminate them. CI also helps, by automating review of classes of potential issues. I think we already do pretty well here with consensus code: there is a lot of review, and things progress slowly; but I do worry about other areas. For example, I was pretty surprised to see PR#20624 get proposed on a Friday and merged on Monday (during the lead up to Christmas no less); that’s the sort of change that I could easily see introducing subtle bugs that could have serious effects on p2p connectivity, and I don’t think it’s the sort of huge improvement that justifies a merge-first-review-later approach.

The final thing I worry about is the risk that attackers might try subtler ways of “firing the devs” than happened last time. After all, if you can replace all the people who would’ve objected to what you want to do, there’s no need to sneak it in and hope no one notices in review, you can just do it; and even if you don’t get rid of everyone who would object, you at least lower the chances that your patch will get a thorough review by whoever remains. There are a variety of ways you can do that. One is finding ways of making contributing unpleasant enough that your targets just leave on their own: constant arguments about things that don’t really matter, slowing down progress so it feels like you’re just wasting time, and personal attacks in the media (or on social media), for instance. Another is the cancel-culture approach of trying to make them a pariah so no one else will have anything to do with them. Or there’s the potential for court cases (cf Angela Walch’s ideas on fiduciary duties for developers) or more direct attempts at violence.

I don’t think there’s a direct answer to this — even if all of the above fail, you could still get people to leave by offering them bunches of money and something interesting to do instead, for example. Instead, I think the best defense is more cultural: that is, having a large group of contributors, with strong support for common goals (eg decentralisation, robustness, fixed supply, not losing people’s funds, not undoing transactions), that’s also diverse enough that they’re not all vulnerable to the same attacks.

One of the risks of funding most development in much the same way is that it encourages conformity rather than diversity — an obvious rule for getting sponsored is “don’t bite the hand that feeds you” — eg, BitMEX’s Developer Grant Agreement includes “Not undertaking activities that are likely to bring the reputation of … the Grantor into disrepute”. And I don’t mean to criticise that: it’s a natural consequence of what a grant is. But if everyone working on Bitcoin is directly incentivised to follow that rule, what happens when you need a whistleblower to call out bad behaviour?

Of course, perhaps this is already fine, because there are enough devs who’ll happily quit their jobs if needed, or enough devs who have already hit their FU-money threshold and aren’t beholden to anyone?

To me though, I think it’s a bit of a red flag that LukeDashjr hasn’t gotten one of these funding gigs — I know he’s applied for a couple, and he should superficially be trivially qualified: he’s a long time contributor, he’s been influential in calling out problems with BIP16, in making segwit deployment feasible, in avoiding some of the possible disasters that could have resulted from the UASF activation of segwit, and in working out how to activate taproot, and he’s one of the people who’s good at spotting subtle interactions that risk bugs and vulnerabilities of the sort I talked about above. On the other hand he’s known for having some weird ideas and can be difficult to work with and maybe his expectations are unrealistic. What’s that add up to? Maybe he’s a test case for this exact attack on Bitcoin. Or maybe he’s just had a run of bad luck. Or maybe he just needs to sell himself better, or adopt a more business-friendly attitude — and I guess that’s the attitude to adopt if you want to solve the problem yourself rather than rely on someone else to help.

But… if we all did that, aren’t we hitting that exact “conformity” problem; and doesn’t that more or less leave everyone vulnerable to the “pariah” attack, exploitable by someone pushing your buttons until you overreact at something that’s otherwise innocuous, then tarring you as the sort of person that’s hard to work with, and repeating that process until that sticks, and no one wants to work with you?

While I certainly (and tautologically) like working with people who I like working with, I’m not sure there’s a need for devs to exclusively work with people they find pleasant, especially if the cost is missing things in review, or risking something of a vulnerable monoculture. On the other hand, I tend to think of patience as a virtue, and thus that people who test my patience are doing me a service in much the same way exams in school do — they show you where you’re at and what you need to work on — so it might also be that I’m overly tolerant of annoying people. And I did also list “making working on Bitcoin unenjoyable” as another potential attack vector. So I don’t know that there’s an easy answer. Maybe promoting Luke’s github sponsors page is the thing to do?

Anyway, conclusion.

Despite my initial thoughts above that taproot might be less of a priority this year in order to focus on robustness rather than growth, I think the “let wallets do more multisig so users’ funds are less likely to be lost” idea is still a killer feature, so that’s still #1 for me. I think trying to help with making p2p and mempool code be more resilient, more encapsulated and more testable might be #2, though I’m not sure how to mitigate the code churn risk that creates. I don’t think I’m going to work much on CI/tests/static analysis, but I do think it’s important so will try to do more review to help that stuff move forward.

Otherwise, I’d like to get the anyprevout patches brought up to date and testable. In so far as that enables eltoo, which then allows better reliability of lightning channels, that’s kind-of a fit for the robustness theme (and robustness in general, I think, is what’s holding lightning back, and thus fits in with the “keep lightning growing at the same rate as Bitcoin, or better” goal as well). It’s hard to rate that as highly as robustness improvements at the base Bitcoin layer though, I think.

There are plenty of other neat technical things too; but I think this year might be one of those ones where you have to keep reminding yourself of a few fundamentals to avoid getting swept up in the excitement, so keeping the above as foundations is probably a good idea.

Otherwise, I’m hoping I’ll be able to continue supporting other people’s dev funding efforts — whether blue sky, or just keeping on with what’s working so far. I’m also hoping to do a bit more writing — my resolution last year was meant to be to blog more, and that didn’t really work out, so why not double down on it? Probably a good start (aside from this post) would be writing a response to the Productivity Commission Right to Repair issues paper; I imagine there’ll probably be some more crypto related issues papers to respond to over this year too…

If for whatever reason you’re reading this looking for suggestions you might want to do rather than what I’m thinking about, here are some that come to my mind:

  • Money: consider supporting or hiring Luke, or otherwise supporting (or, if it’s in your wheelhouse, doing) Bitcoin dev work, or supporting MIT DCI, or funding/setting up something independent from but equally as good as MIT DCI or Chaincode (in increasing order of how much money we’re talking). If you’re a bank affected by the recent OCC letter on payments, making a serious investment in lightning dev might be smart.
  • Bitcoin code: help improve internal test coverage, static analysis, and/or build reproducibility; setup and maintain external tests; review code and find bugs in PRs before they get merged. Otherwise there’s a million interesting features to work on, so do that.
  • Lightning: get PTLCs working (using taproot on signet or ecdsa-based), anyprevout/eltoo, improve spam prevention. Otherwise, implement and fine-tune everything already on lightning’s TODO list.
  • Other projects: do more testing on signet in general, test taproot integration on signet (particularly for robustness features like multisig), monitor blockchain and mempool activity for oddities to help detect and prevent potential attacks asap.

(Finally, just in case it’s not already obvious or clear: these are what I think are priorities today, there’s not meant to be any implication that anything outside of these ideas shouldn’t be being worked on)

5 Comments

  1. Neha says:

    This is a very good post. Thanks for writing it, AJ!

  2. 0xB10C says:

    > Finally, nailing down the last mile to ensure that people are running the software the devs are testing is always valuable, and I think the nixos work is showing some promise there.

    What do you mean by “the nixos work” here?

  3. 0xB10C says:

    Thanks for writing this up. Enjoyed reading this earlier this year and re-reading it now!

  4. aj says:

    I get guix and nixos confused

  5. Yancy says:

    Thanks for the writeup. I’ve also been thinking about how bitcoin would benefit from a formal spec, especially at the consensus layer. I’m familiar with the mantra that the C++ implementation is the standard; still, I feel like if we had an actual specification, that would allow more innovation in the area of formal proofs, and maybe even other implementations (for example Rust).

    Please let me know if you have any suggestions for getting involved with a spec. I’m also very interested in seeing provably correct code for libconsensus and learning more about formal verification. I have a bit of background in compiler design and programming languages.
