Bitcoin Fees vs Supply and Demand

Continuing from my previous post on historical Bitcoin fees… Obviously history is fun and all, but it’s safe to say that working out what’s going on now is usually far more interesting and useful. But what’s going on now is… complicated.

First, as was established in the previous post, most transactions are still paying 0.1 mBTC in fees (or 0.1 mBTC per kilobyte, rounded up to the next kilobyte).

fpb-10k-txns

Again, as established in the previous post, that’s a fairly naive approach: miners will fill blocks with the smallest transactions that pay the highest fees, so if you pay 0.1 mBTC for a small transaction, that will go in quickly, but if you pay 0.1 mBTC for a large transaction, it might not be included in the blockchain at all.

It’s essentially like going to a petrol station and trying to pay a flat $30 to fill up, rather than per litre (or per gallon); if you’re riding a scooter, you’re probably overpaying; if you’re driving an SUV, nobody will want anything to do with you. Pay per litre, however, and you’ll happily get your tank filled, no matter what gets you around.

But back in the bitcoin world, while miners have been using the per-byte approach since around July 2012, as far as I can tell users didn’t really even have the option of calculating fees the same way until early 2015, with the release of Bitcoin Core 0.10.0. Further, that release didn’t just change the default fees to be per-byte rather than (essentially) per-transaction; it also dynamically adjusted the per-byte rate based on market conditions — providing an estimate of what fee is likely to be necessary to get a confirmation within a few blocks (under an hour), or within ten or twenty blocks (two to four hours).
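If you do run a node, those estimates are what Bitcoin Core’s estimatefee RPC reports; here’s a minimal sketch of querying it from Python (assuming a local bitcoind with bitcoin-cli on the path; the helper name is mine):

import subprocess

def estimated_fee_btc_per_kb(blocks):
    # Ask a local Bitcoin Core (0.10.0 or later) node what fee rate it
    # estimates is needed to get confirmed within the given number of
    # blocks. Returns BTC per kilobyte, or -1 if the node doesn't yet
    # have enough data to make an estimate.
    out = subprocess.check_output(["bitcoin-cli", "estimatefee", str(blocks)])
    return float(out.decode())

print(estimated_fee_btc_per_kb(2))    # confirmation within ~20 minutes
print(estimated_fee_btc_per_kb(20))   # confirmation within a few hours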

There are a few sites around that make these estimates available without having to run Bitcoin Core yourself, such as bitcoinfees.21.co, or bitcoinfees.github.io. The latter has a nice graph of recent fee rates:

bitcoinfees-github

You can see from this graph that the estimated fee rates vary over time, both in the peak fee to get a transaction confirmed as quickly as possible, and in how much cheaper it might be if you’re willing to wait.

Of course, that just indicates what you “should” be paying, not what people actually are paying. But since the blockchain is a public ledger, it’s at least possible to sift through the historical record. Rusty already did this, of course, but I think there’s a bit more to discover. There are three ways in which I’m doing things differently to Rusty’s approach: (a) I’m using quantiles instead of an average, (b) I’m separating out transactions that pay a flat 0.1 mBTC, (c) I’m analysing a few different transaction sizes separately.

To go into that in a little more detail:

  • Looking at just the average values doesn’t seem very enlightening to me, because it can be massively distorted by a few large values. Instead, I think looking at the median value, or better still at a few percentiles, is likely to work better. In particular I’ve chosen to work with “sextiles”, ie the five cut points you get when splitting each day’s transactions into sixths, which gives me the median (50%), the tertiles (33% and 66%), and two additional points showing slightly more extreme values (16.7% and 83.3%).
  • Transactions whose fees don’t reflect market conditions at all aren’t really interesting to analyse — if there are enough 0.1 mBTC, 200-byte transactions to fill a block, then a revenue-maximising miner won’t mine any 400-byte transactions that only pay 0.1 mBTC, because they could fit two 200-byte transactions in the same space and get 0.2 mBTC; and similarly for transactions of any size larger than 200 bytes. There’s really nothing more to it than that. Further, because there are a lot of transactions that are essentially paying a flat 0.1 mBTC fee, they make it fairly hard to see what the remaining transactions are doing — but at least it’s easy to separate them out.
  • Because the 0.10 release essentially made two changes at once (namely, switching from a hardcoded default fee to a fee that varies with market conditions, and calculating the fee based on a per-byte rate rather than an essentially per-transaction rate) it can be hard to see which of these effects is in play. By examining the effect on transactions of a particular size, we can distinguish them, however: using a per-transaction fee will result in different transaction sizes paying different per-byte rates, while using a per-byte fee will result in transactions of different sizes harmonising at a particular rate. Similarly, using fee estimation will result in the fees for a particular transaction size varying over time; whereas the average fee rate might vary over time simply due to using per-transaction fees while the average size of transactions varies. I’ve chosen four sizes: 220-230 bytes, which is the size of a transaction spending a single, standard, pay-to-public-key-hash (P2PKH) input (with a compressed public key) to two P2PKH outputs; 370-380 bytes, which matches a transaction spending two P2PKH inputs to two P2PKH outputs; 520-530 bytes, which matches a transaction spending three P2PKH inputs to two P2PKH outputs; and 870-1130 bytes, which catches transactions around 1kB.

The following set of graphs takes this approach, with each transaction size presented as a separate graph. Each graph breaks the relevant transactions into sixths and plots the sextiles separating them; each sextile is then smoothed over a two-week period to make it a bit easier to see.
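For the record, the construction is straightforward; here’s a rough sketch with pandas, assuming a hypothetical CSV with one row per transaction giving the date it was mined and its fee rate (the real data comes from bitcoin-iterate):

import pandas as pd

# Hypothetical input: one row per transaction, with the date it was
# mined and its fee rate in mBTC per kilobyte.
txns = pd.read_csv("feerates.csv", parse_dates=["date"])

# The five sextiles (16.7%, 33%, 50%, 66%, 83.3%) of each day's rates...
sextiles = [1/6, 2/6, 3/6, 4/6, 5/6]
daily = txns.groupby("date")["feerate"].quantile(sextiles).unstack()

# ...then smoothed over two weeks to match the graphs.
smoothed = daily.rolling(window=14, min_periods=1).mean()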

fpb-by-sizes

We can make a few observations from this (click the graph to see it at full size):

  • We can see that prior to June 2015, fees were fairly reliably set at 0.1 mBTC per kilobyte or part thereof — so 220B transactions paid 0.45 mBTC/kB, 370B transactions paid 0.27 mBTC/kB, 520B transactions paid 0.19 mBTC/kB, and transactions slightly under 1kB paid 0.1 mBTC/kB while transactions slightly over 1kB paid 0.2 mBTC/kB (the 50% median line in between 0.1 mBTC/kB and 0.2 mBTC/kB is likely an artifact of the smoothing). These fees didn’t take transaction size into account, and did not vary depending on market conditions — so they did not reflect changes in demand, how full blocks were, the price of Bitcoin in USD, the hashpower used to secure the blockchain, or any similar factors that might be relevant.
  • We can very clearly see that there was a dramatic response to market conditions in late June 2015 — and not coincidentally this was when the “stress tests” or “flood attack” occurred.
  • It’s also pretty apparent the market response here wasn’t actually very rational or consistent — eg 220B transactions spiked to paying over 0.8 mBTC/kB, while 1000B transactions only spiked to a little over 0.4 mBTC/kB — barely as much as 220B transactions were paying prior to the stress attack. Furthermore, even while some transactions were paying significantly higher fees, transactions paying standard fees were still going through largely unhindered, making it questionable whether paying higher fees actually achieved anything.
  • However, looking more closely at the transactions with a size of around 1000 bytes, we can also see there was a brief period in early July (possibly a very brief period that’s been smeared out due to averaging) where all of the sextiles were above the 0.1 mBTC/kB line — indicating that there were some standard fee paying transactions that were being hindered. That is to say that it’s very likely that during that period, any wallet that (a) wasn’t programmed to calculate fees dynamically, and (b) was used to build a transaction about 1kB in size, would have produced a transaction that would not actually get included in the blockchain. While it doesn’t meet the definition laid out by Jeff Garzik, I think it’s fair to call this a “fee event”, in that it’s an event, precipitated by fee rates, that likely caused detectable failure of bitcoin software to work as intended.
  • On the other hand, it is interesting to notice that a similar event has not recurred since, even during later stress attacks or the Black Friday and Christmas shopping rushes.

As foreshadowed, we can redo those graphs with transactions paying one of the standard fees (ie exactly 0.1 mBTC, 0.01 mBTC, 0.2 mBTC, 0.5 mBTC, 1 mBTC, or 10 mBTC) removed:

fpb-by-sizes-nonstd
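For reference, the filtering itself is trivial; here’s a sketch, where the example transactions are just made-up (fee in satoshis, size in bytes) pairs:

# The flat "standard" fee levels being excluded, in satoshis
# (0.01, 0.1, 0.2, 0.5, 1 and 10 mBTC respectively).
STANDARD_FEES = {1_000, 10_000, 20_000, 50_000, 100_000, 1_000_000}

def pays_standard_fee(fee_satoshi):
    """True if a transaction pays exactly one of the flat default fees."""
    return fee_satoshi in STANDARD_FEES

example_txns = [(10_000, 226), (13_100, 373), (4_520, 226)]
market_txns = [tx for tx in example_txns if not pays_standard_fee(tx[0])]
# leaves the 13100 and 4520 satoshi transactions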

As before, we can make a few observations from these graphs:

  • First, they’re very messy! That is, even amongst the transactions that pay variable fees, there’s no obvious consensus on what the right fee to pay is, and some users are paying substantially more than others.
  • In early February, which matches the release of Bitcoin Core 0.10.0, there was a dramatic decline in the lowest fees paid — which is what you would predict if a moderate number of users started calculating fees rather than using the defaults, and found that paying very low fees still resulted in reasonable confirmation times. That is to say, wallets that dynamically calculate fees have substantially cheaper transactions.
  • However, those fees did not stay low, but have instead risen over time — roughly linearly. The blue dotted trend line is provided as a rough guide; it rises from 0 mBTC/kB on 1st March 2015 to 0.27 mBTC/kB on 1st March 2016. That is, market-driven fees have roughly risen to the same per-byte cost as a 2-input, 2-output transaction paying a flat 0.1 mBTC.

At this point, it’s probably a good idea to check that we’re not looking at just a handful of transactions when we remove those paying standard 0.1 mBTC fees. Graphing the number of transactions per day of each type (ie, total transactions, 220 byte transactions (1-input, 2-output), 370 byte transactions (2-input, 2-output), 520 byte transactions (3-input, 2-output), and 1kB transactions) shows that they all increased over the course of the year, and that there are far more small transactions than large ones. Note that the top-left graph has a linear y-axis, while the other graphs use a logarithmic y-axis — so that each vertical step indicates a ten-times increase in the number of transactions per day. No smoothing/averaging has been applied.

fpb-number-txns

We can see from this that by and large the number of transactions of each type has been increasing, and that the proportion of transactions paying something other than the standard fees has been increasing. However it’s also worth noting that the proportion of 3-input transactions using non-standard fees actually decreased in November — which likely indicates that many users (or the maintainers of wallet software used by many users) had simply increased the default fee temporarily while concerned about the stress test, and reverted to the defaults when the concern subsided, rather than using a wallet that estimates fees dynamically. In any event, by November 2015 there are at least about a thousand transactions per day at each size, even after excluding standard fees.

If we focus on the sextiles that roughly converge to the trend line we used earlier, we can, in fact, make a very interesting observation: after November 2015 there is significant harmonisation of fee levels across different transaction sizes, and that harmonisation remains fairly steady even as the fee level changes dynamically over time:

fpb-fee-market

Observations this time?

  • After November 2015, a large bunch of transactions of different sizes were calculating fees on a per-byte basis, and tracking a common fee-per-byte level, which has both increased and decreased since then. That is to say, a significant number of transactions are using market-based fees!
  • The current market rate is slightly lower than what a 0.1 mBTC, 2-input, 2-output transaction is paying (ie, 0.27 mBTC/kB).
  • The recently observed market rates correspond roughly to the 12-minute or 20-minute fee rates in the bitcoinfees graph provided earlier. That is, paying higher rates than the observed market rates is unlikely to result in quicker confirmation.
  • There are many transactions paying significantly higher rates (eg, 1-input 2-output transactions paying a flat 0.1 mBTC).
  • There are also many transactions paying lower rates (eg, 3-input 2-output transactions paying a flat 0.1 mBTC) that can expect delayed confirmation.

Along with the trend line, I’ve added four grey, horizontal guide lines on those graphs; one at each of the standard fee rates for the transaction sizes we’re considering (0.1 mBTC/kB for 1000 byte transactions, 0.19 mBTC/kB for 520 byte transactions, 0.27 mBTC/kB for 370 byte transactions, and 0.45 mBTC/kB for 220 byte transactions).
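Those guide rates are just the flat 0.1 mBTC fee divided by each transaction size; a quick sketch of the arithmetic:

# Per-kB rate implied by a flat 0.1 mBTC (10,000 satoshi) fee at each
# of the transaction sizes we're considering.
FLAT_FEE = 10_000        # satoshis
SAT_PER_MBTC = 100_000

for size_bytes in (1000, 520, 370, 220):
    rate = FLAT_FEE / (size_bytes / 1000)     # satoshis per kB
    print("%4d bytes: %.2f mBTC/kB" % (size_bytes, rate / SAT_PER_MBTC))
# prints 0.10, 0.19, 0.27 and 0.45 mBTC/kB respectively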

An interesting thing to observe is that when the market rate goes above any of the grey guide lines, transactions of the corresponding size that just pay the standard 0.1 mBTC fee become less profitable to mine than transactions paying the market rate. In a very particular sense this will induce a “fee event” of the type mentioned earlier. That is, with the fee rate above 0.1 mBTC/kB, transactions of around 1000 bytes that pay 0.1 mBTC will generally suffer delays. Following the graph, for the transactions we’re looking at there have already been two such events — a fee event in July 2015, where 1000 byte transactions paying standard fees began getting delayed regularly because market fees began exceeding 0.1 mBTC/kB (ie, the 0.1 mBTC fee divided by a 1 kB transaction size); and following that a second fee event during November impacting 3-input, 2-output transactions, due to market fees exceeding 0.19 mBTC/kB (ie, 0.1 mBTC divided by 0.52 kB). Per the graph, a few of the trend lines are lingering around 0.27 mBTC/kB, indicating a third fee event is approaching, where 370 byte transactions (ie 2-input, 2-output) paying standard fees will start to suffer delayed confirmations.

However the grey lines can also be considered as providing “resistance” to fee increases — for the market rate to go above 0.27 mBTC/kB, there must be more transactions attempting to pay the market rate than there were 2-input, 2-output transactions paying 0.1 mBTC. And there were a lot of those — tens of thousands — which means market fees will only be able to increase with higher adoption of software that calculates fees using dynamic estimates.

It’s not clear to me why fees harmonised so effectively as of November; my best guess is that it’s just the result of gradually increasing adoption, accentuated by my choice of quantiles to look at, along with averaging those results over a fortnight. At any rate, most of the interesting activity seems to have taken place around August:

  • Bitcoin Core 0.11.0 came out in July with some minor fee estimation improvements.
  • Electrum came out with dynamic fees in 2.4.1 in August.
  • Copay (by bitpay) added dynamic fees in 1.1.3 in August.
  • Mycelium added per-byte fees in 2.5.8 in December.

Of course, many wallets still don’t do per-byte, dynamic fees as far as I can tell:

  • Blockchain.info seems to just default to 0.1 mBTC; the API seems to require a minimum fee of 0.1 mBTC
  • coinbase.com pays 0.3 mBTC per transaction (from what I’ve seen, they tend to use 3-input, 3-output transactions, which presumably means about 600 bytes per transaction for a rate of perhaps 0.5 mBTC/kB)
  • Airbitz seems to choose a fee based on transaction amount rather than transaction size
  • myTrezor seems to have a default 0.1 mBTC fee, which can optionally be raised to 0.5 mBTC
  • bitcoinj does not do per-byte fees, or calculate fees dynamically (although an app based on bitcoinj might do so)

Summary

  • Many wallets still don’t calculate fees dynamically, or even calculate fees at a per-byte level.
  • A significant number of wallets are dynamically calculating fees, at a per-byte granularity.
  • Wallets that dynamically calculate fees pay substantially lower fees than those that don’t.
  • Paying higher than dynamically calculated market rates generally will not get your transaction confirmed any quicker.
  • Market-driven fees have risen to about the same fee level that wallets used for 2-input, 2-output transactions at the start of 2015.
  • Market-driven fees will only be able to rise further with increased adoption of wallets that support market-driven fees.
  • There have already been two fee events for wallets that don’t do market-based fees and pay a flat 0.1 mBTC. For those wallets, since about July 2015, fees have been high enough to cause transactions near 1000 bytes to have delayed confirmations; and since about November 2015, fees have been high enough to cause transactions above 520 bytes (ie, 3-input, 2-output) to be delayed. A third fee event is very close, affecting transactions above 370 bytes (ie, 2-input, 2-output).

Bitcoin Fees in History

Prior to Christmas, Rusty did an interesting post on bitcoin fees which I thought warranted more investigation. My first go involved some python parsing of bitcoin-cli results, which was slow and, as it turned out, inaccurate — bitcoin-cli returns figures denominated in bitcoin with 8 digits after the decimal point, and python happily rounds that off, making me think a bunch of transactions that paid 0.0001 BTC in fees were paying 0.00009999 BTC in fees. Embarrassing. Anyway, switching to bitcoin-iterate and working in satoshis instead of bitcoin, just as Rusty did, was a massive improvement.

From a miner’s perspective (ie, the people who run the computers that make bitcoin secure), fees are largely irrelevant — they’re receiving around $11000 USD every ten minutes in inflation subsidy, versus around $80 USD in fees. If that dropped to zero, it really wouldn’t make a difference. However, in around six months the inflation subsidy will halve to 12.5 BTC; which, if the value of bitcoin doesn’t rise enough to compensate, may mean miners will start looking to turn fee income into real money — earning $5500 in subsidy plus $800 from fees could be a plausible scenario, for example (though even that doesn’t seem likely any time soon).

Even so, miners don’t ignore fees entirely even now — they use fees to choose how to fill up about 95% of each block (with the other 5% filled up more or less according to how old the bitcoins being spent are). In theory, that’s the economically rational thing to do, and if the theory pans out, miners will keep doing that when they start trying to get real income from fees rather than relying almost entirely on the inflation subsidy. There’s one caveat though: since different transactions are different sizes, fees are divided by the transaction size to give the fee-per-kilobyte before being compared. If you graph the fee paid by each kB in a block you thus get a fairly standard sort of result — here’s a graph of a block from a year ago, with the first 50kB (the priority area) highlighted:

block

You can see a clear overarching trend where the fee rate starts off high and gradually decreases, with two exceptions: first, the first 50kB (shaded in green) has much lower fees due to mining by priority; and second, there are frequent short spikes of high fees, which are likely produced by high fee transactions that spend the coins mined in the preceding transaction — ie, if they had been put any earlier in the block, they would have been invalid. Equally, compared to the priority of the first 50kB of transactions, the remaining almost 700kB contributes very little in terms of priority.
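In other words, apart from the priority area, filling a block is essentially a greedy selection on fee per byte. Here’s a minimal sketch of the idea (ignoring the priority ordering and the real consensus rules):

def fill_block(mempool, max_size=1_000_000, priority_space=50_000):
    """Greedily fill a block with the highest fee-rate transactions.

    mempool: iterable of (fee_satoshi, size_bytes) pairs.
    priority_space: bytes set aside for the ~5% priority area.
    """
    space = max_size - priority_space
    block, total_fees = [], 0
    # Sort by fee per byte, highest first: miners compare fee/size,
    # not the raw fee.
    for fee, size in sorted(mempool, key=lambda t: t[0] / t[1], reverse=True):
        if size <= space:
            block.append((fee, size))
            space -= size
            total_fees += fee
    return block, total_fees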

But, as it turns out, bitcoin wallet software pretty much just tends to pick a particular fee and use it for all transactions, no matter the size:

block-raw-fee

From the left hand graph you can see that, a year ago, wallet software was mostly paying about 10000 satoshi in fees, with a significant minority paying 50000 satoshi in fees — but since those were at the end of the block, which was ordered by satoshis per byte, those transactions were much bigger, so their fee/kB was lower. This seems to be due to some shady maths: while the straightforward way of doing things would be to have a per-byte fee and multiply that by the transaction’s size in bytes, eg 10 satoshis/byte * 233 bytes giving a 2330 satoshi fee, things are done in kilobytes instead, and a rounding mistake occurs: rather than calculating 10000 satoshis/kilobyte * 0.233 kilobytes, the 0.233 is rounded up to 1kB first, and the result is just 10000 satoshi. The second graph reverses the maths to work out what the fee per kilobyte (or part thereof) figure would have been under that formula, and for this particular block pretty much all the transactions look exactly how you’d expect if that formula was used.
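In code, the two calculations look like this (using the 233-byte example above):

import math

RATE_PER_KB = 10_000    # satoshis per kilobyte

def legacy_fee(size_bytes, rate_per_kb=RATE_PER_KB):
    # Fee per kilobyte "or part thereof": the size is rounded *up* to a
    # whole number of kilobytes before being multiplied by the rate.
    return rate_per_kb * math.ceil(size_bytes / 1000)

def per_byte_fee(size_bytes, rate_per_kb=RATE_PER_KB):
    # The straightforward per-byte calculation, for comparison.
    return rate_per_kb * size_bytes // 1000

print(legacy_fee(233))      # 10000 satoshi
print(per_byte_fee(233))    # 2330 satoshi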

As a reality check, 1 BTC was trading at about $210 USD at that time, so 10000 satoshi was worth about 2.1c at the time; the most expensive transaction in that block, which goes off the scale I’ve used, spent 240000 satoshi in fees, which cost about 50c.

Based on this understanding, we can look back through time to see how this has evolved — and in particular, if this formula and a few common fee levels explain most transactions. And it turns out that they do:

stdfees

The first graph is essentially the raw data — how many transactions at each fee level went through per day; but it’s not very helpful because bitcoin’s grown substantially. Hence the second graph, which uses the smoothed data and presents the values in percentage terms, stacked one on top of the other. That way the coloured areas let you do a rough visual comparison of the proportion of transactions at each “standard” fee level.

In fact, you can break up that graph into a handful of phases where there is a fairly clear and sudden state change between each phase, while the distribution of fees used for transactions during that phase stays relatively stable:

stdfee-epochs

That is:

  1. in the first phase, up until about July 2011, fees were just getting introduced and most people paid nothing; fees began at 1,000,000 satoshi (0.01 BTC) (v 0.3.21) before settling on a fee level of 50000 satoshi per transaction (0.3.23).
  2. in the second phase, up until about May 2012, maybe 40% of transactions paid 50000 satoshi per transaction, and almost everyone else didn’t pay anything
  3. in the third phase, up until about November 2012, close to 80% of transactions paid 50000 satoshi per transaction, with free transactions falling to about 20%.
  4. in the fourth phase, up until July 2013, free transactions continue to drop, however fee paying transactions split about half and half between paying 50000 satoshi and 100000 satoshi. It looks to me like there was an option somewhere to double the default fee in order to get confirmed faster (which also explains the 20000 satoshi fees in future phases)
  5. in the fifth phase, up until November 2013, the 100k satoshi fees started dropping off, and 10k satoshi fees started taking over (v 0.8.3)
  6. in the sixth phase, the year up to November 2014, transactions paying fees of 50k and 100k and free transactions pretty much disappeared, leaving 75% of transactions paying 10k satoshi, and maybe 15% or 20% of transactions paying double that at 20k satoshi.
  7. in the seventh phase, up until July 2015, pretty much everyone using standard fees had settled on 10k satoshi, but an increasing number of transactions started using non-standard fees, presumably variably chosen based on market conditions (v 0.10.0)
  8. in the eighth phase, up until now, things go a bit haywire. What I think happened is the “stress tests” in July and September caused the number of transactions with variable fees to spike substantially, which caused some delays and a lot of panic, and that in turn caused people to switch from 10k to higher fees (including 20k), as well as adopt variable fee estimation policies. However over time, it looks like the proportion of 10k transactions has crept back up, presumably as people remove the higher fees they’d set by hand during the stress tests.

Okay, apparently that was part one. The next part will take a closer look at the behaviour of transactions paying non-standard fees over the past year, in particular to see if there’s any responsiveness to market conditions — ie prices rising when there’s contention, or dropping when there’s not.

Lightning network thoughts

I’ve been intrigued by micropayments for, like, ever, so I’ve been following Rusty’s experiments with bitcoin with interest. Bitcoin itself, of course, has a roughly 10 minute delay, and a fee of effectively about 3c per transaction (or $3.50 if you count inflation/mining rewards) so isn’t really suitable for true microtransactions; but pettycoin was going to be faster and cheaper until it got torpedoed by sidechains, and more recently the lightning network offers the prospect of small payments that are effectively instant, and have fees that scale linearly with the amount (so if a $10 transaction costs 3c like in bitcoin, a 10c transaction will only cost 0.03c).

(Why do I think that’s cool? I’d like to be able to charge anyone who emails me 1c, and make $130/month just from the spam I get. Or you could have a 10c signup fee for webservice trials to limit spam but not have to tie everything to your facebook account or undergo turing trials. You could have an open wifi access point that people don’t have to register against, and just bill them per MB. You could maybe do the same with tor nodes. Or you could set up bittorrent so that in order to receive a block I pay maybe 0.2c/MB to whoever sent it to me, and I charge 0.2c/MB to anyone who wants a block from me — leechers paying while seeders earn a profit would be fascinating. It’d mean you could set up a webstore to sell apps or books without having to sell your soul to a corporate giant like Apple, Google, Paypal, Amazon, Visa or Mastercard. I’m sure there are other fun ideas.)

A bit over a year ago I critiqued sky-high predictions of bitcoin valuations on the basis that “I think you’d start hitting practical limitations trying to put 75% of the world’s transactions through a single ledger (ie hitting bandwidth, storage and processing constraints)” — which is currently playing out as “OMG the block size is too small” debates. But the cool thing about lightning is that it lets you avoid that problem entirely; hundreds, thousands or millions of transactions over weeks or years can be summarised in just a handful of transactions on the blockchain.

(How does lightning do that? It sets up a mesh network of “channels” between everyone, and provides a way of determining a route via those channels between any two people. Each individual channel is between two people, and each channel is funded with a particular amount of bitcoin, which is split between the two people in whatever way. When you route a payment across a channel, the amount of that payment’s bitcoins moves from one side of the channel to the other, in the direction of the payment. The amount of bitcoins in a channel doesn’t change, but when you receive a payment, the amount of bitcoins on your side of your channels does. When you simply forward a payment, you get more money in one channel, and less in another by the same amount (or less a small handling fee). Some bitcoin-based crypto-magic ensues to ensure you can’t steal money, and that the original payer gets a “receipt”. The end result is that the only bitcoin transactions that need to happen are to open a channel, close a channel, or change the total amount of bitcoin in a channel. Rusty gave a pretty good interview with the “Let’s talk bitcoin” podcast if the handwaving here wasn’t enough background)
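To make the bookkeeping concrete, here’s a toy sketch of channels and forwarding that ignores all the crypto and every real protocol detail; the names and numbers are purely illustrative:

class Channel:
    """A toy two-party channel: total funds are fixed, only the split moves."""
    def __init__(self, a, b, a_funds, b_funds):
        self.balance = {a: a_funds, b: b_funds}

    def pay(self, frm, to, amount):
        # Shift `amount` from one side of the channel to the other;
        # the total in the channel never changes.
        if self.balance[frm] < amount:
            raise ValueError("insufficient funds on this side of the channel")
        self.balance[frm] -= amount
        self.balance[to] += amount

def route_payment(path, channels, amount, fee_rate=0.01):
    # Send `amount` from path[0] to path[-1], where channels[i] links
    # path[i] and path[i+1]. Each intermediate node keeps fee_rate of
    # what it forwards, so the recipient gets slightly less than the
    # sender paid.
    for i, chan in enumerate(channels):
        chan.pay(path[i], path[i + 1], amount)
        if i < len(channels) - 1:
            amount *= (1 - fee_rate)   # the forwarder keeps its cut
    return amount                      # what the final recipient receives

# e.g. Alice buys a $5 coffee from Emma via Bob, who keeps 1%:
ab = Channel("alice", "bob", 200, 0)
be = Channel("bob", "emma", 600, 0)
received = route_payment(["alice", "bob", "emma"], [ab, be], 5.00)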

Of course, this doesn’t work very well if you’re only spending money: it doesn’t take long for all the bitcoins on your lightning channels to end up on the other side, and at that point you can’t spend any more. If you only receive money over lightning, the reverse happens, and you’re still stuck just as quickly. It’s still marginally better than raw bitcoin, in that you have two bitcoin transactions to open and close a channel worth, say, $200, rather than forty bitcoin transactions, one for each $5 you spend on coffee. But that’s only a fairly minor improvement.

You could handwave that away by saying “oh, but once lightning takes off, you’ll get your salary paid in lightning anyway, and you’ll pay your rent in lightning, and it’ll all be circular, just money flowing around, lubricating the economy”. But I think that’s unrealistic in two ways: first, it won’t be that way to start with, and if things don’t work when lightning is only useful for a few things, it will never take off; and second, money doesn’t flow around the economy completely fluidly, it accumulates in some places (capitalism! profits!) and drains away from others. So it seems useful to have some way of making degenerate scenarios actually work — like someone who only uses lightning to spend money, or someone who receives money by lightning but only wants to spend cold hard cash.

One way you can do that is if you imagine there’s someone on the lightning network who’ll act as an exchange — who’ll send you some bitcoin over lightning if you send them some cash from your bank account, or who’ll deposit some cash in your bank account when you send them bitcoins over lightning. That seems like a pretty simple and realistic scenario to me, and it makes a pretty big improvement.

I did a simulation to see just how well that actually works out, with “Alice” as a coffee consumer who does nothing with lightning but buy $5 espressos from “Emma”, refilling her lightning wallet by exchanging cash with “Xavier”, who runs an exchange converting dollars (or gold or shares etc) into lightning funds. Bob, Carol and Dave run lightning nodes and take a 1% cut of any transactions they forward. I uploaded a video to youtube that I think helps visualise the payment flows and channel states (there’s no sound):

It starts off with Alice and Xavier putting $200 in channels in the network; Bob, Carol and Dave putting in $600 each, and Emma just waiting for cash to arrive. The statistics box in the top right tracks how much each player has on the lightning network (“ln”), how much profit they’ve made (“pf”), and how many coffees Alice has ordered from Emma. About 3000 coffees later, it ends up with Alice having spent about $15,750 in real money on coffee ($5.05/coffee), Emma having about $15,350 in her bank account from making Alice’s coffees ($4.92/coffee), and Bob, Carol and Dave having collectively made about $400 profit on their $1800 investment (about 22%, or the $0.13/coffee difference between what Alice paid and Emma received). At that point, though, Bob, Carol and Dave have pretty much all the funds in the lightning network, and since they only forward transactions but never initiate them, the simulation grinds to a halt.

You could imagine a few ways of keeping the simulation going: Xavier could refresh his channels with another $200 via a blockchain transaction, for instance. Or Bob, Carol and Dave could buy coffees from Emma with their profits. Or Bob, Carol and Dave could cash some of their profits out via Xavier. Or maybe they buy some furniture from Alice. Basically, whatever happens, you end up relying on “other economic activity” happening either within lightning itself, or in bitcoin, or in regular cash.

But grinding to a halt after earning 22% and spending/receiving $15k isn’t actually too bad even as it is. So as a first pass, it seems like a pretty promising indicator that lightning might be feasible economically, as well as technically.

One somewhat interesting effect is that the profits don’t get distributed particularly evenly — Bob, Carol and Dave each invest $600 initially, but make $155.50 (25.9%), $184.70 (30.7%) and $52.20 (8.7%) respectively. I think that’s mostly a result of how I chose to route payments — it optimises the route to choose channels with the most funds in order to avoid payments getting stuck, and Dave just ends up handling less transaction volume. Having a better routing algorithm (that optimises based on minimum fees, and relies on channel fees increasing when they become unbalanced) might improve things here. Or it might not, and maybe Dave needs to quote lower fees in general or establish a channel with Xavier in order to bring his profits up to match Bob and Carol.

FUD from the Apache Foundation

At Bradley Kuhn’s talk at linux.conf.au this year, I was surprised and disappointed to see a slide quoting some FUD (in the traditional Fear-Uncertainty-Doubt model, a la the Microsoft Halloween documents from back in the day) about the GPL and the SFLC’s enforcement thereof. Here’s the quote:

This is not just a theoretical concern. As aggressively as the BSA protects the interests of its commercial members, [GPL enforcers] protect the GPL license in high-profile lawsuits against large corporations. [FSF] writes about their expansion of “active license enforcement”. So the cost of compliance with copyleft code can be even greater than the use of proprietary software, since an organization risks being forced to make the source code for their proprietary product public and available for anyone to use, free of charge. […]

The Apache Advantage

However, not all open source licenses are copyleft license. A subset of open source licenses, generally called “permissive” licenses, are much more friendly for corporate use.

The quote/slide is available at about 20m into Bradley’s talk. A quick google reveals the source of this as a page from openoffice.org which is, indeed, an Apache project. The revision history for that page is available via subversion.

The elisions in Bradley’s quote changed “the Software Freedom Law Centre” (Bradley’s employer) to “GPL enforcers”, simplified the reference to the FSF, and dropped off a couple of sentences of qualification:

To mitigate this risk requires more employee education, more approval cycles, more internal audits and more worries. This is the increased cost of compliance when copyleft software is brought into an organization. This is not necessarily a bad thing. It is just the reality of using open source software under these licenses, and must be weighed in considered as one cost-driver among many.

I don’t really think any of that changes Bradley’s point: the Apache Foundation is really saying that the GPL and the SFLC is worse than the BSA and proprietary licenses.

After getting home from LCA, I thought it was worth writing to the Apache Foundation about this. I tried twice, on 22nd January and again on 1st February. I didn’t receive any response.

From: Anthony Towns

I was at Bradley Kuhn’s talk at linux.conf.au 2015 last week, and was struck by a quote he attributed to the Apache Software Foundation which compared the SFLC’s efforts to enforce GPL compliance with the BSA’s campaigns on software piracy, and then went on to call the SFLC worse. The remarks and slide can be found at approximately the 20 minute mark in the recording on youtube:

www.youtube.com/watch?v=-ItFjEG3LaA#t=19m52s

Doing a google search for the quote, I found a hit on the Apache OpenOffice.org website:

http://www.openoffice.org/why/why_compliance.html

which although it’s a (somewhat major) project rather than the apache site itself, doesn’t give any indication that it’s authored or authorised by someone other than the Apache Foundation.

I couldn’t find any indication via web.archive.org that that page predated Apache’s curation of the OpenOffice.org project (I wondered if it might have been something Oracle would write, rather than the Apache Foundation). Doing some more searching, I found a svn log that seems to indicate it’s primarily authored by Rob Weir with minor edits by Andrea Pescetti (who I understand is the VP for Apache OpenOffice):

http://svn.apache.org/viewvc/openoffice/ooo-site/trunk/content/why/why_compliance.mdtext?view=log

Is this really an accurate representation of the Apache Foundation’s current stance on copyleft licenses, the GPL and the SFLC’s enforcement efforts?

Apparently we now live in a world where Microsoft happily releases GPL-licensed software, while the Apache Foundation happily spreads FUD against it.

Bitcoincerns

Bitcoincerns — as in Bitcoin concerns! Get it? Hahaha.

Despite having an interest in ecash, I haven’t invested in any bitcoins. I haven’t thought about it in any depth, but my intuition says I don’t really trust it. I’m not really sure why, so I thought I’d write about it to see if I could come up with some answers.

The first thing about bitcoin that bothered me when I first heard about it was the concept of burning CPU cycles for cash — ie, setup a bitcoin miner, get bitcoins, …, profit. The idea of making money by running calculations that don’t provide any benefit to anyone is actually kind of offensive IMO. That’s one of the reasons I didn’t like Microsoft’s Hashcash back in the day. I think that’s not actually correct, though, and that the calculations being run by miners are actually useful in that they ensure the validity of bitcoin transfers.

I’m not particularly bothered by the deflationary expectations people have of bitcoin. The “wild success” cases I’ve seen for bitcoin estimate their value by hand-wavy arguments where you take a crazy big number, divide it by the 20M max bitcoins that are available, and end up with a crazy big number per bitcoin. Here’s the argument I’d make: someday many transactions will take place purely online using bitcoin, let’s say 75% of all transactions in the world by value. Gross World Product (GDP globally) is $40T, so 75% of that is $30T per year. With bitcoin, each coin can participate in a transaction every ten minutes, so that’s up to about 52,000 transactions a year, and there are up to 20M bitcoins. So if each bitcoin is active 100% of the time, you’d end up with a GWP of 1.04T bitcoins per year, and an exchange rate of $28 per bitcoin, growing with world GDP. If, despite accounting for 75% of all transactions, each bitcoin is only active once an hour, multiply that figure by six for $168 per bitcoin.
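For the sake of checking my own arithmetic, that argument is just (with all the figures assumed above, not measured):

gwp_usd = 40e12          # gross world product, USD per year
share = 0.75             # fraction of world transactions done in bitcoin
coins = 20e6             # (roughly) the maximum number of bitcoins
txns_per_coin = 52_000   # about one transaction per ten minutes, all year

capacity = coins * txns_per_coin     # bitcoin-transactions per year (~1.04T)
price = gwp_usd * share / capacity   # ~ $28.8 per bitcoin
print(round(price), round(price * 6))
# roughly the $28 and $168 figures above (the differences are just rounding)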

That assumes bitcoins are used entirely as a medium of exchange, rather than hoarded as a store of value. If bitcoins got so expensive that they can only just represent a single Vietnamese Dong, then 21,107 “satoshi” would be worth $1 USD, and a single bitcoin would be worth $4737 USD. You’d then only need 739k bitcoins each participating in a transaction once an hour to take care of 75% of the world’s transactions, with the remaining 19M bitcoins acting as a value store worth about $91B. In the grand scheme of things, that’s not really very much money. I think if you made bitcoins much more expensive than that you’d start cutting into the proportion of the world’s transactions that you can actually account for, which would start forcing you to use other cryptocurrencies for microtransactions, eg.

Ultimately, I think you’d start hitting practical limitations trying to put 75% of the world’s transactions through a single ledger (ie hitting bandwidth, storage and processing constraints), and for bitcoin, that would mean having alternate ledgers which is equivalent to alternate currencies. That would involve some tradeoffs — for bitcoin-like cryptocurrencies you’d have to account for how volatile alternative currencies are, and how amenable the blockchains are to compromise, but, provided there are trusted online exchanges to convert one cryptocurrency into another, that’s probably about it. Alternate cryptocurrencies place additional constraints on the maximum value of bitcoin itself, by reducing the maximum amount of GWP happening in bitcoin versus other currencies.

It’s not clear to me how much value bitcoin has as a value store. Compared to precious metals, it’s much easier to transport, much easier to access, and much less expensive to store and secure. On the other hand, it’s much easier to destroy or steal. It’s currently also very volatile. As a store of value, the only things that would make it better or worse than an alternative cryptocurrency are (a) how volatile it is, (b) how easy it is to exchange for other goods (liquidity), and (c) how secure the blockchain/algorithms/etc are. Of those, volatility seems like the biggest sticking point. I don’t think it’s unrealistic to imagine wanting to store, say, $1T in cryptocurrency (rather than gold bullion, say), but with only 20M bitcoins, that would mean each bitcoin was worth at least $50,000. Given a current price of about $500, that’s a long way away — and since there are a lot of things that could happen in the meantime, I think high volatility at present is a pretty plausible outcome.

I’m not sure if it’s possible or not, but I have to wonder if a bitcoin based cryptocurrency designed to be resistant to volatility would be implementable. I’m thinking (a) a funded exchange guaranteeing a minimum exchange rate for the currency, and (b) a maximum number of coins and coin generation rate for miners that makes that exchange plausible. The exchange for, let’s call it “bitbullion”, should self-fund to some extent by selling new bitbullion at a price of 10% above guidance, and buying at a price of 10% below guidance (and adjusting guidance up or down slightly any time it buys or sells, purely in order to stay solvent).

I don’t know what the crypto underlying the bitcoin blockchain actually is. I’m surprised it’s held up long enough to get to where bitcoin already is, frankly. There’s nominally $6B worth of bitcoins out there, so it would seem like you could make a reasonable profit if you could hack the algorithm. If there were hundreds of billions or trillions of dollars worth of value stored in cryptocurrency, that would be an even greater risk: being able to steal $1B would tempt a lot of people, being able to destroy $100B, especially if you could pick your target, would tempt a bunch more.

So in any event, the economic/deflation concerns seem assailable to me. The volatility not so much, but I’m not looking to replace my bank at the moment, so that doesn’t bother me either.

I’m very skeptical about the origins of bitcoin. The fact that it’s the first successful cryptocurrency, and also the first definitively non-anonymous one, is pretty intriguing in my book. Previous cryptocurrencies like Chaum’s ecash focussed on allowing Alice to pay Bob $1 without there being a record of anything other than Alice is $1 poorer, and Bob is $1 richer. Bitcoin does exactly the opposite, providing nothing more than a globally verifiable record of who paid whom how much at what time. That seems like a dream come true for law enforcement — you don’t even have to get a warrant to review the transactions for an account, because everyone’s accounts are already completely public. Of course, you still have to find some way to associate a bitcoin wallet id with an actual person, but I suspect that’s a challenge with any possible cryptocurrency. I’m not quite sure what the status of the digicash/ecash patents is/was, but they were due to expire sometime around now (give or take a few years), I think.

The second thing that strikes me as odd about bitcoin is how easily it’s avoided being regulated to death. I had expected the SEC to decide that bitcoins are a commodity with no real difference to a share certificate, and that as a consequence they can only be traded using regulated exchanges by financial professionals, or similar. Even if bitcoins still count as new enough to have gotten only a knee-jerk regulatory response rather than a considered one (which, at $500 a pop and with significant mainstream media coverage, I doubt), I would have expected something more along the lines of “bitcoin trading is likely to come under regulation XYZ, operating or using an unregulated exchange is likely to be a crime, contact a lawyer” rather than “we’re looking into it”. That makes it seem like bitcoin has influential friends who aren’t being very vocal in public, and conspiracy theories involving NSA and CIA/FBI folks suggesting that leaving bitcoin alone for now might help fight crime seem more plausible than ones involving Gates or Soros or someone secretly creating a new financial world order.

The other aspect is that it seems like there are only really four plausible creators of bitcoin: one or more super smart academic types, a private startup of some sort, an intelligence agency, or a criminal outfit. It seems unlikely to me that a criminal outfit would create a cryptocurrency with a strong audit trail, but I guess you never know. It seems massively unlikely that a legitimate private company would still be secret, rather than cashing out. Likewise it seems unlikely that people who’d just done it because it seemed like an interesting idea would manage to remain anonymous still; though that said, cryptogeeks are weird like that.

If it was created by an intelligence agency, then its life to date makes some sense: advertise it as anonymous online cash that’s great for illegal stuff like buying drugs and can’t be tracked, sucker in a bunch of criminals to using it, then catch them, confiscate the money, and follow the audit trail to catch more folks. If that’s only worked for silk road folks, that’s probably pretty small-time. If bitcoin was successfully marketed as “anonymous, secure cryptocurrency” to organised crime or terrorists, and that gave you another angle to attack some of those networks, you could be on to something. It doesn’t seem like it would be difficult to either break into MtGox and other trading sites to gain an initial mapping between bitcoins and real identities, or to analyse the blockchain comprehensively enough to see through most attempts at bitcoin laundering.

Not that I actually have a problem with any of that. And honestly, if secret government agencies lean on other secret government agencies in order to create an effective and efficient online currency to fight crime, that’s probably a win-win as far as I’m concerned. One concern I guess I do have, though, is that if you assume a bunch of law-enforcement cryptonerds built bitcoin, they might also have a way of “turning it off” — perhaps a real compromise in the crypto that means they can easily create forks of the blockchain and make bitcoins useless, or just enough processor power that they can break it by bruteforce, or even just some partial results in how to break bitcoin that would destroy confidence in it, and destroy the value of any bitcoins. It’d be fairly risky to know of such a flaw, and trust that it wouldn’t be uncovered by the public crypto research community, though.

All that said, if you ignore the criminal and megalomaniacal ideas for bitcoin, and assume the crypto’s sound, it’s pretty interesting. At the moment, a satoshi is worth 5/10,000ths of a cent, which would be awesome for microtransactions if the transaction fee wasn’t at 5c. Hmm, looks like dogecoin probably has the right settings for microtransactions to work. Maybe I should have another go at the pay-per-byte wireless capping I was thinking of that one time… Apart from microtransactions, some of the conditional/multiparty transaction possibilities are probably pretty interesting too.

BeanBag — Easy access to REST APIs in Python

I’ve been doing a bit of playing around with REST APIs lately, both at work and for my own amusement. One of the things that was frustrating me a bit was that actually accessing the APIs was pretty baroque — you’d have to construct urls manually with string operations, manually encode any URL parameters or POST data, then pass that to a requests call with params to specify auth and SSL validation options and possibly cookies, and then parse whatever response you get to work out if there’s an error and to get at any data. Not a great look, especially compared to XML-RPC support in python, which is what REST APIs are meant to obsolete. Compare, eg:

import xmlrpclib

server = xmlrpclib.Server("http://foo/XML-RPC")
print server.some.function(1,2,3,{"foo": "bar"})

with:

import requests

base_url = "https://api.github.com"
resp = requests.get(base_url + "/repos/django/django")
if resp.ok:
    res = resp.json()
else:
    raise Exception(resp.json())

That’s not to say the python way is bad or anything — it’s certainly easier than trying to do it in shell, or with urllib2 or whatever. But I like using python because it makes the difference between pseudocode and real code small, and in this case, the xmlrpc approach is much closer to the pseudocode I’d write than the requests code.

So I had a look around to see if there were any nice libraries to make REST API access easy from the client side. I ended up getting kind of distracted by reading through various arguments that the sorts of things generally called REST APIs aren’t actually “REST” at all according to the original definition of the term, which was to describe the architecture of the web as a whole. One article that gives a reasonably brief overview is this take on REST maturity levels. Otherwise doing a search for the ridiculous acronym “HATEOAS” probably works. I did some stream-of-consciousness posts on Google-Plus as well, see here, here and here.

The end result was I wrote something myself, which I called beanbag. I even got to do it mostly on work time and release it under the GPL. I think it’s pretty cool:

import beanbag

github = beanbag.BeanBag("https://api.github.com")
x = github.repos.django.django()
print x["name"]

As per the README in the source, you can throw in a session object to do various sorts of authentication, including Kerberos and OAuth 1.0a. I’ve tried it with github, twitter, and xero’s public APIs with decent success. It also seems to work with Magento and some of Red Hat’s internal tools without any hassle.
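For instance, something along these lines should work for basic auth; the exact keyword argument and the query-parameter behaviour are from my memory of the README, so treat this as a sketch rather than a reference:

import requests
import beanbag

sess = requests.Session()
sess.auth = ("username", "api-token")   # or a kerberos/OAuth1 auth object

github = beanbag.BeanBag("https://api.github.com", session=sess)
repo = github.repos.django.django()                        # GET /repos/django/django
issues = github.repos.django.django.issues(state="open")   # GET with query params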

Parental Leave

Two posts in one month! Woah!

A couple of weeks ago there was a flurry of stuff about the Liberal party’s Parental Leave policy (viz: 26 weeks at 100% of your wage, paid out of the general tax pool rather than by your employer, up to $150k), mostly due to a coalition backbencher coming out against it in the press (I’m sorry, I mean, due to “an internal revolt”, against a policy “detested by many in the Coalition”). Anyway, I haven’t had much cause to give it any thought beforehand — it’s been a policy since the 2010 election I think; but it seems like it might have some interesting consequences, beyond just being more money to a particular interest group.

In particular, one of the things that doesn’t seem to me to get enough play in the whole “women are underpaid” part of the ongoing feminist, women-in-the-workforce revolution, is how much both the physical demands of pregnancy and being a primary caregiver justifiably diminish the contributions someone can make in a career. That shouldn’t count just the direct factors (being physically unable to work for a few weeks around birth, and taking a year or five off from working to take care of one or more toddlers, eg), but the less direct ones like being less able to commit to being available for multi-year projects or similar. There’s also probably some impact from the cross-over between training for your career and the best years to get pregnant — if you’re not going to get pregnant, you just finish school, start working, get more experience, and get paid more in accordance with your skills and experience (in theory, etc). If you are going to get pregnant, you finish school, start working, get some experience, drop out of the workforce, watch your skills/experience become out of date, then have to work out how to start again, at a correspondingly lower wage — or just choose a relatively low skill industry in the first place, and accept the lower pay that goes along with that.

I don’t think either the baby bonus or the current Australian parental leave scheme has any effect on that, but I wonder if the Liberal’s Parental Leave scheme might.

There are three directions in which it might make a difference, I think.

One is for women going back to work. Currently, unless your employer is more generous, you have a baby, take 16 weeks of maternity leave, and get given the minimum wage by the government. If that turns out to work for you, it’s a relatively easy decision to decide to continue being a stay at home mum, and drop out of the workforce for a while: all you lose is the minimum wage, so it’s not a much further step down. On the other hand, after spending half a year at your full wage, taking care of your new child full-time, it seems a much easier decision to go back to work than to be a full-time mum; otherwise you’ll have to deal with a potentially much lower family income at a time when you really could choose to go back to work. Of course, it might work out that daycare is too expensive, or that the cut in income is worth the benefits of a stay at home mum, but I’d expect to see a notable pickup in new mothers returning to the workforce around six months after giving birth anyway. That in turn ought to keep women’s skills more current, and correspondingly lift wages.

Another is for employers dealing with hiring women who might end up having kids. Dealing with the prospect of a likely six-month unpaid sabbatical seems a lot easier than dealing with a valued employee quitting the workforce entirely on its own, but it seems to me like having, essentially, nationally guaranteed salary insurance in the event of pregnancy would make it workable for the employee to simply quit, and just look for a new job in six month’s time. And dealing with the prospect of an employee quitting seems like something employers should expect to have to deal with whoever they hire anyway. Women in their 20s and 30s would still have the disadvantage that they’d be more likely to “quit” or “take a sabbatical” than men of the same age and skillset, but I’m not actually sure it would be much more likely in that age bracket. So I think there’s a good chance there’d be a notable improvement here too, perhaps even to the point of practical equality.

Finally, and possibly most interestingly, there’s the impact on women’s expectations themselves. One is that if you expect to be a mum “real soon now”, you might not be pushing too hard on your career, on the basis that you’re about to give it up (even if only temporarily) anyway. So, not worrying about pushing for pay rises, not looking for a better job, etc. It might turn out to be a mistake, if you end up not finding the right guy, or not being able to get pregnant, or something else, but it’s not a bad decision if you meet your expectations: all that effort on your career pays off for just a few weeks and then you’re on minimum wage and staying home all day. But with a payment based on your salary, the effort put into your career at least gives you six months’ worth of return during motherhood, so it becomes at least a plausible investment whether or not you actually become a mum “real soon now”.

According to the 2010 tax return stats I used for my previous post, the gender gap is pretty significant: there are almost 20% fewer women working (4 million versus 5 million), and the average working woman’s income is more than 25% less than the average working man’s ($52,600 versus $71,500). I’m sure there are better ways to do the statistics, etc, but just on those figures, if the female portion of the workforce was as skilled and valued as the male portion, you’d get a $77 billion increase in GDP — if you take 34% as the proportion of that that the government takes, it would be a $26 billion improvement to the budget bottom line. That, of course, assumes that women would end up no more or less likely to work part time jobs than men currently are; that seems unlikely to me — I suspect the best that you’d get is that fathers would become more likely to work part-time and mothers less likely, until they hit about the same level. But that would result in a lower increase in GDP. Based on the above arguments, there would be an increase in the number of women in the workforce as well, though that would get into confusing tradeoffs pretty quickly — how many families would decide that a working mum and stay at home dad made more sense than a stay at home mum and working dad, or a two income family; how many jobs would be daycare jobs (counted as GDP) in place of formerly stay at home mums (not counted as GDP, despite providing similar value, but not taxed either), etc.
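Roughly, the arithmetic behind those numbers is something like the following (my reconstruction; the $77 billion figure presumably comes from slightly more precise inputs):

women_working, men_working = 4e6, 5e6   # workers
women_avg, men_avg = 52_600, 71_500     # average incomes, AUD per year

# If the existing female workforce earned the male average:
gdp_gain = women_working * (men_avg - women_avg)
print(gdp_gain / 1e9)           # ~ $76B (the ~$77B above)
print(0.34 * gdp_gain / 1e9)    # ~ $26B to the budget bottom line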

I’m somewhat surprised I haven’t seen any support for the coalition’s plans along these lines anywhere. Not entirely surprised, because it’s the sort of argument that you’d make from the left — either as a feminist, anti-traditional-values, anti-stay-at-home-mum plot for a new progressive genderblind society; or from a pure technocratic economic point-of-view; and I don’t think I’ve yet seen anyone with lefty views say anything that might be construed as being supportive of Tony Abbott… But I would’ve thought someone on the right (Bolt or Albrechtsen, or Australia’s leading libertarian and centre-right blog, or the Liberal party’s policy paper) might have canvassed some of the other possible pros to the idea rather than just worrying about the benefits to the recipients, and how it gets paid for. In particular, the argument for any sort of payment like this shouldn’t be about whether it’s needed/wanted by the recipient, but how it benefits the rest of society. Anyway.

Messing with taxes

It’s been too long since I did an economics blog post…

Way back when, I wrote fairly approvingly of the recommendations to simplify the income tax system. The idea being to more or less keep charging everyone the same tax rate, but to simplify the formulae from five different tax rates, a medicare levy, and a gradually disappearing low-income tax offset, to just a couple of different rates (one kicking in at $25k pa, one at $180k pa). The government’s adopted that in a half-hearted way — raising the tax free threshold to $18,200 instead of $25,000, and reducing but not eliminating the low-income tax offset. There’s still the medicare levy with its weird phase-in procedure, and there’s still four different tax rates. And there’s still all sorts of other deductions etc to keep people busy around tax time.

Anyway, yesterday I finally found out that the ATO publishes some statistics on how many people there are at various taxable income levels — table 9 of the 2009-2010 stats is the best I found, anyway. With that information, you can actually mess around with the tax rules and see what effect it would have on government revenue. Or at least what it would have if there’d been no growth since 2010.

Anyway, by my calculations, the 2011-2012 tax rates would have resulted in about $120.7B of revenue for the government, which roughly matches what they actually reported receiving in that table ($120.3B). I think putting the $400M difference (or about $50 per taxpayer) down to deductions for dependants and similar that I’ve ignored seems reasonable enough. So going from there, if you followed the Henry review’s recommendations, dropping the Medicare levy (but not the Medicare surcharge) and low income tax offset, the government would end up with $117.41B instead, so about $3.3B less. The actual changes between 2011-2012 and 2012-2013 (reducing the LITO and upping the tax free threshold) result in $118.26B, which would have only been $2.4B less. Given there’s apparently a $12B fudge-factor between prediction and reality anyway, it seems a shame they didn’t follow the full recommendations and actually make things simpler.
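For anyone wanting to play along at home, the calculation is basically just: for each income bin in the ATO table, work out the tax payable at the bin’s income and multiply by the number of taxpayers in it. A minimal sketch, with a made-up distribution standing in for table 9, and ignoring the Medicare levy, LITO and the various offsets you’d need to include to reproduce the numbers above:

# (midpoint taxable income, number of taxpayers) -- illustrative values only,
# not the real table 9 data
income_bins = [(22000, 1.2e6), (45000, 3.5e6), (70000, 2.8e6),
               (100000, 1.5e6), (250000, 0.3e6)]

# A bracket schedule as (threshold, marginal rate) pairs; these are the
# 2012-13 bracket rates, without the Medicare levy or LITO.
rates_2012_13 = [(18200, 0.19), (37000, 0.325), (80000, 0.37), (180000, 0.45)]

def tax(income, schedule):
    total = 0.0
    for i, (threshold, rate) in enumerate(schedule):
        upper = schedule[i+1][0] if i+1 < len(schedule) else income
        if income > threshold:
            total += rate * (min(income, upper) - threshold)
    return total

def revenue(schedule):
    return sum(n * tax(mid, schedule) for mid, n in income_bins)

print "estimated revenue: $%.1fB" % (revenue(rates_2012_13) / 1e9)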

Anyway, upping the Medicare levy by 0.5% seems to be the latest idea. By my count doing that and keeping the 2012-2013 rates otherwise the same would result in $120.90B, ie back to the same revenue as the 2011-2012 rates (though biased a little more to folks on taxable incomes of $30k plus, I think).

Personally, I think that’s a bit nuts — the medicare levy should just be incorporated into the overall tax rates and otherwise dropped, IMO, not tweaked around. And it’s not actually very hard to come up with a variation on the Henry review’s rates that simplifies the tax levels, doesn’t increase tax on any individual by very much, and does increase revenue. My proposal would be: drop the medicare levy and low income tax offset entirely (though not the medicare levy surcharge or the deductions for dependants etc), and set the rates as: 35% for earnings above $25k, 40% for earnings above $80k, and 46.5% for earnings above $180k. That would provide government revenue of $120.92B, which is close enough to the same as with the proposed NDIS levy increase. It would cap the additional tax any individual pays at $2000 compared to the 2012-13 rates, ie it wouldn’t increase the top marginal rate. It would decrease the tax burden on people with taxable incomes below $33,000 pa — the biggest winners would be people earning $25,000, who’d go from paying $1200 tax per year to nothing. The “middle class” earners between $35,000 and $80,000 would pay an extra $400-$500; higher income earners between $80,000 and $180,000 get hit for between $500 and $2000; and anyone above $180,000 pays an extra $2000. Everyone earning over about $34,000 but under about $400,000 would pay more tax than if the Medicare levy were just bumped for the NDIS, while everyone earning between $18,000 and $34,000 would be better off.
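To make the proposed schedule concrete, here’s roughly how it works out at a few sample incomes:

# The proposed rates: 35% over $25k, 40% over $80k, 46.5% over $180k,
# with the Medicare levy and LITO dropped entirely.
def proposed_tax(income):
    t = 0.35 * max(0, min(income, 80000) - 25000)
    t += 0.40 * max(0, min(income, 180000) - 80000)
    t += 0.465 * max(0, income - 180000)
    return t

for income in (25000, 35000, 60000, 80000, 120000, 200000):
    print income, proposed_tax(income)

# eg, $25,000 pays nothing, and $80,000 pays 35% of $55,000, ie $19,250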

On a dollar basis, the comparison looks like:

Translating that to a percentage of income, it looks like:

Not pleasant, I’m sure, for the dual-$80k families in western Sydney who are just scraping by and all, but I don’t think it’s crazy unreasonable.

But the real win comes when you look at the marginal rates:

Rather than seven levels of marginal rate, there are just three; and none of them are regressive — ie, you stop having cases like someone earning $30,000 paying 21% on their additional dollar of income while someone earning $22,000 pays 29% on their extra dollar. At least philosophically and theoretically that’s a good thing. In practice, I’m not sure how much of a difference it makes:

There are spikes at both the $80,000 and $35,000 points, which involve 8% and 15% increases in the nominal tax rates respectively; I think that’s mostly due to people transferring passive income to family members who either don’t work or have lower paid jobs — if you earn a $90,000 salary, it’s better to assign the $20,000 rental income from your units to your kid at uni and pay 15% or 30% tax on it than to assign it to yourself and pay 38%, especially if you then just deposit it back into your family trust fund either way. The more interesting spikes are those around the $20,000 and $30,000 points, but I don’t really understand why those are shaped the way they are, and couldn’t really be sure that they’d smooth out given a simpler set of marginal rates.

Anyway, I kind-of thought it was interesting that it’s not actually very hard to come up with a dramatically simpler set of tax rates that’s not overly punitive and still gives about the same additional revenue as the mooted levy increase.

(As a postscript, I found it particularly interesting to see just how hard it is to get meaningful revenue increases by tweaking the top tax rate; there’s only about $33B being taxed at that rate, so you have to bump the rate by three percentage points just to shift overall revenue by $1B, which means being either pretty punitive or pretty generous before you’re making any useful difference at all. I chose to leave it matching the current income level and rate; longer term, I think making the levels something like $25k, $100k, $200k, and getting the percentages to rounder figures (35%, 40%, 45%?) would probably be sensible. If I really had my druthers, I’d rather see the rate be 35% from $0-$100,000, and have the government distribute, say, $350 per fortnight to everyone — if you earn over $26k, that’s just the equivalent of a tax free threshold of $26k, since $350 a fortnight is $9,100 a year, ie 35% of $26,000; if you don’t, it’s a helpful welfare payment, whether you’re a student, disabled, temporarily unemployed, retired, or something else like just wanting to take some time off to do charity work or build a business.)
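A quick sketch of that last equivalence, ie why a flat 35% plus a $350 a fortnight payment behaves exactly like a $26k tax free threshold for anyone earning above $26k (only covering the 35% range, and ignoring any higher brackets):

# 35% from the first dollar, with $350/fortnight handed back to everyone
def flat_with_payment(income):
    return 0.35 * income - 350 * 26     # $9,100 a year, ie 35% of $26,000

# 35% above a $26k tax free threshold
def threshold_35(income):
    return 0.35 * max(0, income - 26000)

for income in (0, 15000, 26000, 60000, 100000):
    print income, flat_with_payment(income), threshold_35(income)

# the two columns match from $26,000 up; below that the first goes negative,
# ie it becomes a welfare payment rather than a tax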

On Employment

Okay, so it turns out an interesting, demanding, and rewarding job isn’t as compatible as I’d naively hoped with all the cool things I’d like to be doing as hobbies (like, you know, blogging more than once a year, or anything substantial at all…) Thinking it’s time to see if there’s any truth in the whole fitness fanatic thing of regular exercise helping…

Bits

Yikes. Been a while. I can’t think of anything intelligent to blog, so some linky tidbits instead:

  • rpm 4.10 includes “~” as a special versioning character, just like dpkg has for ages. Holds a special place in my heart since I did the original patch for dpkg a bit over 11 years ago now. (And looking at that bug history now, it appears it was accepted for my birthday a year later, awww). “ls” from coreutils also supports it (they borrowed the code from dpkg, based on a copyright disclaimer request I’ve finally gotten around to replying to), though I don’t think it’s actually documented.

  • Read a couple of interesting takes on the Facebook IPO: one from a “blue-collar hedge fund manager” (yeah, riiight), who blamed NASDAQ for not handling trades properly on opening day, then essentially forcing traders to sell their stock immediately to be compensated for trades not executed properly; and one from Nanex via ZeroHedge with an animation showing that NASDAQ was not actually making a market in Facebook stock for an extended period (claiming offers to buy for $43 and offers to sell for $42.99, but not executing them and clearing them out), screwing up the other exchanges they connect to and the high-frequency algorithmic traders that use them. To me, Google’s reverse-auction IPO that tried to ensure there wasn’t a day-one stock price “pop” just seems better all the time…

  • Like I needed more reasons not to be impressed by US politics.

  • SpaceX has been doing a pretty amazing demo: correctly handled launch abort, quick turn around on fix; successful launch; successful delivery of payload to the ISS. Get it back down again, and they’ve really got something. Also impressive: per Wikipedia at least, “As of May 2012, SpaceX has operated on total funding of approximately one billion dollars in its first ten years of operation”, 80% of which has come from payments by customers (“progress payments on long-term launch contracts and development contracts”). That is, about the same amount as what Facebook paid for Instagram and its 13 employees…

  • Via Andrew Pollock’s g+ feed, Leap Motion looks nifty.

  • I rode down to linux.conf.au in Ballarat this year — about 5000 kilometres of awesomeness. Here’s a photo from the way I took back, that I call “Hay, Hell and Booligal”:

Owning the new now

Things are different now. That’s certain.

Or at least that’s what one of the marketing sites for my new employer has to say.

Back in March I started at Red Hat’s Brisbane office working in release engineering (or the “Release Configuration Management” team). Short summary: it’s been pretty fun so far.

Googling just now for something to link that provides some sort of context, I came upon a video with my boss (John Flanagan) and one of my colleagues (Jesse Keating) — neither of whom I’ve actually met yet — giving a talk to the ACM chapter at Northeastern University in Boston. (It’s an hour long, and doesn’t expect much background knowledge of Linux; but doesn’t go into anything in any great depth either)

My aim in deciding to go job hunting late last year was to get a large change of scenery and get to work with people who understood what I was doing — it eventually gets a bit old being a black box where computer problems go in, solutions come out, and you can only explain what happens in between with loose analogies before seeing eyes glaze over. Corporate environment, Fedora laptop, enterprise customers, and a zillion internal tools that are at least new to me, certainly counts as a pretty neat change of scenery; and I think I’ve now got about five layers of technical people between me and anyone who doesn’t have enough technical background to understand what I do on the customer side. Also, money appears in my bank account every couple of weeks, without having to send anyone a bill! It’s like magic!

The hiring process was a bit odd — mostly, I gather, because while I applied for an advertised position, the one I ended up getting was something that had been wanted for a while, but hadn’t actually had a request open. So I did a bunch of interviews for the job I was applying for, then got redirected to the other position, and did a few interviews for that without either me or the interviewers having a terribly clear idea what the position would involve. (I guess it didn’t really help that my first interview, which was to be with my boss’s boss, got rearranged because he couldn’t make it in due to water over the roads, and then Brisbane flooded; that the whole point of the position is that they didn’t have anyone working in that role closer than the Czech Republic is probably also a factor…)

As it’s turned out, that’s been a pretty accurate reflection of the role: I’ve mostly been setting my own priorities, which mostly means balancing between teaching myself how things work, helping out the rest of my team, and working with the bits of Red Hat that are local, or at least operate in compatible timezones. Happily, that seems to be working out fairly okay. (And at least the way I’ve been doing it isn’t much different to doing open source in general: “gah, this program is doing something odd. okay, find the source, see what it’s doing and why, and either (a) do something different to get what you want, or (b) fix the code. oh, and also, you now understand that program”)

And that leads into the main culture shock I had on arriving: what most surprised me was actually the lack of differences compared to being involved in Debian — which admittedly might have been helped by a certain article hitting LWN just in time for my first day. “Ah, so that list is the equivalent of debian-devel. Good to know.” There’s a decent number of names that pop up that are familiar from Debian too, which is nice. Other comfortingly familiar first day activities were subscribing to more specific mailing lists, joining various IRC channels, getting my accounts setup and setting up my laptop. (Fedora was suggested, “not Debian” was recommended ;)

Not that everything’s the same — there’s rpm/yum versus dpkg/apt obviously, and there’s a whole morass of things to worry about working for a public company. But a lot of it fits into either “different to Debian, but not very” and “well, duh, Red Hat’s a for-profit, you have to do something like this, and that’s not a bad way of doing it”.

Hmm, not sure what else I can really talk about without at least running it by someone else to make sure it’s okay to talk about in public. I think there’s only a couple of things I’ve done so far that have gone via Fedora and are thus easy — the first was a quick python script to make publishing fedora torrents easier, and the other was a quick patch to the fedora buildsystem software to help support analytics. Not especially thrilling, though. I think Dennis is planning on throwing me into more Fedora stuff fairly soon, so hopefully that might change.

Pro-Linux bias at linux.conf.au

Reading through some of the comments from last year’s Linux Australia Survey, a couple struck me as interesting. One’s on Java:

linux.conf.au seems to have a bias against Java. Since Java is an open source language and has a massive open source infrastructure, this has not made a lot of sense to me. It seems that Python, Perl, PHP, Ruby are somehow superior from an open source perspective even though they are a couple of orders of magnitude less popular than Java in the real world. This bias has not changed since openjdk and I’m guessing is in the DNA of the selectors and committee members. Hence *LUG* has lost a lot of appeal to me and my team. It would be good if there was an inclusive open source conference out there…

and the other’s more general:

I appreciate LCA’s advocacy of open source, but I feel that a decoupling needs to be made in the mindshare between the terms “open source” and “Linux”. Unfortunately, for people involved in open source operating systems that aren’t Linux, we may feel slightly disenfranchised by what appears to be a hijacking of the term “open source” (as in “the only open source OS is linux” perception).

My impression is that the bias is mostly just self-selection; people don’t think Java talks will get accepted, so don’t submit Java talks. I guess it’s possible that there’s a natural disconnect too: linux.conf.au likes to have deep technical talks, and maybe there’s not much overlap between what’s already there and what people with deep technical knowledge of Java stuff find interesting, so they just go to other conferences.

That said, it seems like it’d be pretty easy to propose either a mini-conference for Java, or BSD, or non-traditional platforms in general (covering free software development for, say, BSD, JVM, MacOS and Windows) and see what happens. Especially given Greg Lehey’s on the Ballarat organising team (from what I’ve been told), interesting BSD-related content seems like it’d have a good chance of getting in…

Silly testcase hacks

Martin Pool linked to an old post by Evan Miller on how writing tests could be more pleasant if you could just do the setup and teardown parts once, and (essentially) rely on backtracking to make sure it happens for every test. He uses a functional language for his example, and it’s pretty interesting.

But it is overly indented, and hey, I like my procedural code, so what about trying the same thing in Python? Here’s my go at it. The code under test was the simplest thing I could think of — a primality checker:

def is_prime(n):
    if n == 1: return False
    i = 2
    while i*i <= n:
        if n % i == 0: return False
        i += 1
    return True

My test function then tests a dozen numbers which I know to be prime or not, returning True if is_prime got the right answer and False otherwise. It makes use of a magic "branch" function to work out which number to test:

def prime_test(branch):
    if branch(True, False):
        n = branch(2,3,5,7,1231231)
        return is_prime(n)
    else:
        n = branch(1,4,6,8,9,10,12312312)
        return not is_prime(n)

In order to get all the tests run, we need a loop, so the test harness looks like:

for id, result in run_tests(prime_test):
    print id, result

(Counting up successes, and just printing the ids of failures, would make more sense, probably.) In any event, the output looks like:

[True, 2] True
[True, 3] True
[True, 5] True
[True, 7] True
[True, 1231231] True
[False, 1] True
[False, 4] True
[False, 6] True
[False, 8] True
[False, 9] True
[False, 10] True
[False, 12312312] True

Obviously all the magic happens in run_tests, which needs to work out how many test cases there'll end up being, and provide the magic branch function which will give the right values. Using Python's generators to keep some state makes that reasonably straightforward, if a bit head-twisting:

def run_tests(test_fn):
    def branch(*options):
        # First time we've hit this branch point on this run: start at option 0.
        if len(idx) == state[0]:
            idx.append(0)
        n = idx[state[0]]
        # Remember the deepest branch point that still has untried options.
        if n+1 < len(options):
            state[1] = state[0]
        state[0] += 1
        vals.append(options[n])
        return options[n]

    idx = []
    while True:
        state, vals = [0, None], []
        res = test_fn(branch)
        yield (vals, res)
        # If no branch point has untried options left, we've covered everything.
        if state[1] is None: break
        # Otherwise advance the deepest one and forget the choices made after it.
        idx[state[1]] += 1
        idx[state[1]+1:] = []

This is purely a coding optimisation -- any setup and teardown in prime_test is performed each time, there's no caching. I don't think there'd be much difficulty writing the same thing in C or similar either -- there's no real use being made of laziness or similar here -- I'm just passing a function that happens to have state around rather than a struct that happens to include a function pointer.
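As a quick illustration of how the branch points compose, here's a second (made-up) test with two branch calls; run_tests backtracks through the cross product, so it yields 3 x 2 = 6 cases:

def add_commutes(branch):
    a = branch(1, 2, 3)
    b = branch(10, 20)
    return a + b == b + a

for ids, result in run_tests(add_commutes):
    print ids, result

# enumerates [1,10], [1,20], [2,10], [2,20], [3,10], [3,20]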

Anyway, kinda nifty, I think!

(Oh, this is also inspired by some of the stuff Clinton was doing with abusing fork() to get full coverage of failure cases for code that uses malloc() and similar, by using LD_PRELOAD)

Silly hacks

One thing that keeps me procrastinating about the programs I’d like to write is doing up a user interface for them. It just seems like so much hassle writing GUI code or HTML, and if I just write for the command line, no one else will use it. Of course, most of the reason I don’t mind writing for the command line is that interaction is so easy, and much of that is thanks to the wonders of “printf”. But why not have a printf for GUIs? So I (kinda) made one:

f, n = guif.guif("%t %(edit)250e \n %(button)b",
                 "Enter some text", "", "Press me!")

In theory, you can specify widget sizes using something like “%10,12t” to get a text box with a width of 10 and a height of 12, but it doesn’t seem to actually work at the moment, and might be pixel based instead of character based, which I’m not sure is a win. I was figuring you could say “%-t” for left aligned, and “%+t” for right aligned; and I guess you could do “%^t” for top and “%_t” for bottom alignment. I’ve currently just got it doing a bunch of rows laid out separately — you’d have to specify explicit widths to get things lined up; but the logical thing to do would be to use “\t” to automatically align things. It also doesn’t handle literals inside the format string, so you can’t say “Enter some text: %e\n%b”.

At the moment the two objects it returns are the actual frame (f), and a dictionary of named elements (n) in case you want to reference them later (to pull out values, or to make buttons actually do something, etc). That probably should be merged into a single object though.
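Just to make the format-string idea concrete, here’s a rough sketch of the kind of parsing involved (not what guif actually does internally, just a hypothetical illustration of the approach):

import re

# one specifier: optional (name), optional size, then a widget type letter
# (t = text label, e = edit box, b = button)
SPEC = re.compile(r'%(?:\((?P<name>\w+)\))?(?P<size>\d+(?:,\d+)?)?(?P<type>[teb])')

def parse_spec(fmt):
    # returns one list per row of (widget type, name, size) tuples
    return [[(m.group('type'), m.group('name'), m.group('size'))
             for m in SPEC.finditer(row)]
            for row in fmt.split('\n')]

print parse_spec("%t %(edit)250e \n %(button)b")
# [[('t', None, None), ('e', 'edit', '250')], [('b', 'button', None)]]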

I guess what I’d like to be able to write is a complete program that creates and displays a simple gui with little more than:

#!/usr/bin/env python
import guif, wx

f = guif.guif("Enter some text: %(edit)250e \n %(done)b", 
    "", "Done!",
    stopon = ("done", wx.EVT_BUTTON))

print "Hey, you entered %s!" % f.edit.GetValue()
f.Close()

I figure that should be little enough effort over the command line equivalent to be pleasant:

#!/usr/bin/env python
import sys

print "Enter some text:",
x = sys.stdin.readline()
print "Hey, you entered %s!" % x.strip()

NBN Business Plan

We will be releasing a 50-page document that summarises the NBN Co business case, Ms Gillard said.

So the 50 page NBN Co business case summary came out yesterday. It runs to 36 pages, including the table of contents.

According to the document, they’re going to be wholesale providers to retail ISPs/telcos, and be offering a uniform wholesale price across the country (6.3). There’ll be three methods of delivery — fibre, wireless and satellite, though I didn’t notice any indication of whether people would pay more for content over satellite than over fibre. They’re apparently expecting to undercut the wholesale prices for connectivity offered today (6.3.1). They’ve pulled some “market expectation” data from Alcatel/Lucent which has a trend line of exponential increase in consumer bandwidth expectations up to 100Mb/s in 2015 or so, and 1Gb/s around 2020 for fixed broadband — and a factor of 100 less for wireless broadband (6.3.2, chart 1). Contrary to that expectation, their own “conservative” projections A1 and A2 (6.3.2, exhibit 2) have about 50Mb/s predicted for 2015, and 100Mb/s for 2020 — with A2 projecting no growth in demand whatsoever after 2020, and A1 hitting 1Gb/s a full 20 years later than the Alcatel/Lucent expectations.

Even that little growth in demand is apparently sufficient to ensure the NBN Co’s returns will “exceed the long term government bond rate”. To me, that seems like they’re assuming that the market rates for bandwidth in 2015 or 2020 (or beyond) will be comparable to rates today — rather than exponentially cheaper. In particular, while the plan goes on to project significant increases in demand for data usage (GB/month) in addition to speed (Mb/s), there’s no indication of how the demand for data and speed gets translated into profits over the fifteen-year timespan they’re looking at. By my recollection, 15 years ago data prices in .au were about 20c/MB, compared to maybe 40c/GB ($60/mo for 150GB on Internode’s small easy plan) today.
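As a rough check on how far those prices have moved (using the figures above, and 1000MB to the GB to keep it simple):

old_price_per_gb = 0.20 * 1000     # 20c/MB, fifteen years ago: about $200/GB
new_price_per_gb = 60.0 / 150      # $60/month for 150GB: about $0.40/GB

factor = old_price_per_gb / new_price_per_gb
annual_decline = 1 - (1.0 / factor) ** (1.0 / 15)
print "%.0fx cheaper overall, about %.0f%% cheaper each year" % (factor, annual_decline * 100)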

Given NBN Co will be a near-monopoly provider of bandwidth, and has to do cross-subsidisation for rural coverage (and possibly wireless and satellite coverage as well), trying to inflate the cost per GB seems likely to me: getting wires connected up to houses is hard (which is why NBN Co is budgeting almost $10B in payments to Telstra to avoid it where possible), and competing with wires with wireless is hard too (see the 100x difference in speed mentioned earlier), so you’re going to end up paying NBN Co whatever they want you to pay them.

However they plan on managing it, they’re expecting to be issuing dividends from 2020 (6.7), that will “repay the government’s entire investment by 2034”. That investment is supposedly $27.1B, which (spread over the 14 years from 2020 to 2034) would mean at least about $2B per year in profits. For comparison, Telstra’s current profits (across all divisions, and known as they are for their generous pricing) are just under $4B per year. I don’t think inflation helps there, either; and there’s also the other $20B or so of debt financing they’re planning on, which they’ll have to pay back along with the 12-25% risk premium they’re expecting to have to pay (6.8, chart 5).

I’m not quite sure I follow the “risk premium” analysis — for them to default on the debt financing, as far as I can see, NBN Co would have to go bankrupt, which would require selling their assets, which would be all that fibre and access to ducts and whatnot: effectively meaning NBN Co would be privatised, with first dibs going to all the creditors. I doubt the government would accept that, so it seems to me more likely that they’d bail out NBN Co first, and there’s therefore very, very little risk in buying NBN Co debt compared to buying Australian government debt, but a 12-25% upside thrown in anyway.

As a potential shareholder, this all seems pretty nice; as a likely customer, I’m not really terribly optimistic.