Passions

(not a post about Spike’s favourite soap opera)

As an INTP, I generally try to make “rational” decisions — which is to say, ones I can rationalise and explain and logically support. That in turn is something I can rationalise, explain and support: I’m fairly good at logic, and I’ve been taught lots of ways of analysing problems that people have discovered over centuries, which helps make better decisions. But the counter-argument to that is that it’s still easy to make mistakes, and mistaken logic can lead you to all sorts of bad ideas; a lot of deep, rational thought went into eugenics, for example. So for me, I like to keep a handle on that at least partly by trying to keep things aligned with my emotional response. If something doesn’t feel right, that’s a good time to look back through the logic, because there’s probably a mistake. If you can’t find a mistake, and that doesn’t make you feel better, it’s a good time to be cautious in other ways; if you do feel better, that probably means you’ve got a better understanding than you did before and definitely means you’ll be able to act on the ideas more effectively; and if you do find a mistake, well, you get a chance to fix it.

Going the other way — rationalising whatever you already feel — isn’t so effective to my mind; it’s often easier to make an apparently logical argument that’s actually wrong, than to work out how it’s wrong. That can be useful as a defence mechanism to rebuff challenges to what you want to do, but you can generally rationalise anything without much effort so it’s not actually adding information or improving your decision, and if you infer from the fact you’ve come up with a rationalisation that your decision is the only rational one to make, you can end up with a closed mind to better alternatives, leaving you with a worse decision than if you’d just continued going with your gut feeling. Personally, I tend to take that as a fun game: take a completely subjective and illogical response to something (eg, “orange is the best colour”), then “logically” and “conclusively” prove it’s the only justifiable response. At the very least, it’s a good way to keep a little humility about the value of a good argument.

Logically investigating something (“I like orange. Hmm, is there some way to tell what the best colour is?”) you had a gut feeling about is a different matter entirely, of course — and as you think about it, the analysis might lead you to find something different to what you expected (“that’s odd, I think I just proved chartreuse is the best colour”), which might in turn lead you to investigate different definitions for your terms (“perhaps orange is best in some ways, and chartreuse is better in others”), if it doesn’t lead you to change your opinions (“oh. my. god. this chartreuse cape is to die for!”).

Of course, if you’re not naturally comfortable with coming up with logical arguments, or trained enough to do them well, you’ve probably got other, more personally appropriate, ways of coming up with decisions anyway, and maybe none of this applies. But hey, that’s not my problem.

The other advantage of keeping your feelings in accord with your thoughts is that it tends to be more motivating — “passionate” tends to be a decent description both of someone pretty emotional and of someone pretty motivated and active, and there’s no point to making good, rational, decisions without acting on them. In some respects, the more intense the emotion the better; it’s easy to want to quit doing something difficult that’s not immediately rewarding, no matter how logically you’ve convinced yourself that it’s a good idea, but it’s a lot harder to shake off broiling rage, true love, or abject terror, eg.

The trick, then, if you’ve found something that inspires that sort of emotion, is to make sure it’s working in the same direction as the goals you’ve carefully and logically examined. That can be really easy: if your primary emotion is that you care deeply about helping people, seeing someone who’s had a bad day or week or year get a break and maybe break a smile is a good way to keep yourself working in a charity or a hospital, if that happens to be what you want to do. But it’s often not — maybe you’re overwhelmed by anger at the stupid bureaucratic nonsense that’s getting in the way of your hospital helping people, or maybe you’d like to help out in a soup kitchen but you’re terrified of violence in the area.

But, at least sometimes, those can be harnessed too. “Use your anger” isn’t exactly “use the force”, but it still gets some 40,000 hits on google offering useful advice. My feeling (which I wish I’d had earlier than today, but oh well) is that there are probably similar ways to grab most of those emotions and turn them into allies, rather than just trying to figure out ways of making them go away.

  • frustration, anger, hate: Figure out exactly what it is that’s the deserving object of your ire, and find ways to harm it. There’s lots of entirely reasonable ways to hurt things: a death blow, divide and conquer, death by a thousand cuts, subversion and betrayal; and most entirely reasonable ways of contributing to the good of society can be rephrased into something that’s more acceptable to anger. Annoyed by ignorance on the Internet? Deal it a death blow by creating a site like Wikipedia or snopes; create a debating forum so ignorant people are fighting each other instead of you, and maybe learning something as a result; contribute to Wikipedia or snopes or just help your friends avoid spreading urban myths;  find a group of people who seem particularly ignorant, join them, become well-educated in their customs, befriend them, and then help them get access to all the knowledge they’ve been missing.
  • worry, fear, terror: Be thorough. If you’re worried anyway, you’re going to naturally be thinking of every single way every single thing can go wrong, so take advantage of that and do something about each of those things you think of. Maybe it seems more rational to ignore your worries, and just charge ahead (and maybe it is), but there’s an equally good chance that will just make you worry more and make you less effective, whereas if you actually nervously go around making sure everything is absolutely perfect, you’re at least spending your time contributing to your goals. And every little thing you do fix up is one less thing to worry about, so it’s possible you might end up naturally less worried anyway. Probably not, of course…
  • affection, appreciation, love: Dedicate your work, do it in appreciation, or in honour, and make it something that’s worthy of the object of your admiration, whether that be a person or an idea. It’s always tempting to cut corners or strive for something other than your absolute best, but much less so when what you’re doing is making a devout offering to something or someone you care about.
  • pride, arrogance, narcissism: You think you’re the best, so do something that demonstrates it. Repeat.
  • greed, envy, lust: Be a free-market capitalist — get more of what you want by doing more of what other people want.
  • embarrassment, guilt, shame: Accept, apologise, and then do something worthwhile to atone?
  • indifference, apathy, sloth: No idea. (Is this an emotion, or the absence of emotion? If the latter, find an actual emotion? If you can’t, just try to avoid watching Dexter for lifestyle tips?)

Anyway, there’s my thought for the day. YMMV. The following quote may or may not add support to the thesis presented:

Peace is a lie, there is only passion.
Through passion, I gain strength.
Through strength, I gain power.
Through power, I gain victory.
Through victory, my chains are broken.
The Force shall free me.

The Sith Code

Social meeja

Ben posts about the point of twitter:

As far as I can tell, Twitter is a flakier, crappier knockoff of Facebook, that has even less monetization potential than Facebook.

Meanwhile, identi.ca is an open source knock-off of twitter, FriendFeed is a knockoff of Facebook, LinkedIn is knocking-off a little bit of both of them, everything’s getting comments and tags and automatic recommendations, and everyone and their lolcat is starting up their own social network of some sort or another. It’s all very confusing. Anyway, as a snapshot in time, the social media thingies I’m on at the moment:

  • blogging — almost finished my fifth year of irregular blogging; now with comments enabled, and using email-address based gravatars so people can have an identity while commenting. Still like it.
  • microblogging — have accounts on twitter and identi.ca under the userid “ajtowns”, with the twitter feed syndicated into the sidebar of my blog. Mostly used for trivial techy comments or link sharing — things too banal or already known, or just not fleshed out enough, to be worth blogging. The two accounts are hooked up via ping.fm, so they mostly get the same content. I’ve got both because twitter’s popular in general, and identi.ca’s the “alternative” version for the open source crowd, and the easiest way to follow other people’s comments via those services is if you’re signed up.
  • google reader — in practice I mostly just use this to read other blogs, but I occasionally use the “share” button that shares entries with friends in my (fairly minimal) google contact list. Not really convenient, since it’s a nuisance to share a random webpage that you get to by following links, so I’m probably switching to tweeting interesting tech things instead.
  • facebook — good for connecting up with non-tech friends and procrastination. Some (but not all) of my techy friends link this with their microblog accounts, so I get the same updates there and here, and any responses/comments they get then get further split. Don’t much like that, hope someone will fix it. Also the only way I ever know anyone’s birthday.
  • youtube — also procrastination
  • linkedin — really good for figuring out who some tech person is, haven’t tried
  • stackoverflow — seems to be better than IRC and forums for getting useful answers to programming questions atm, and answering programming questions is kinda fun too

Having one or more social media accounts seems to be (becoming) a significant part of the way business networking gets done now too — with an @reply/comment and a friend/follow instead of  some idle chat and swapping business cards. Don’t know whether I think that’s a good thing or not, but it seems useful to be aware of anyway.

As far as contributing to the fads goes, I like to think I provide useful content to a few of these (blogging, microblogging, linkedin, stackoverflow), I pay my own way for blogging (they’re my thoughts and I’d like to keep them, thanks), and I’ve followed some ads on facebook (though I don’t think I’ve actually purchased anything as a result).

Anyway, that’s my take — whether my contribution is enough to justify my slice of the computing power associated with keeping those sites running, I don’t know, but they’re currently all of some value to me. I suspect my blog is the only one of them I’m willing to keep going at all costs (especially since I know at worst I can just move it to paper).

Funding the NBN

Simon Rumble posts some thoughts on the costings for the national broadband network. He offers some working, and requests corrections, so I thought I might redo the calculations.

First, at its most basic $42B divided by 7.4M households means infrastructure costs of about $5,700 per household. That seems like a lot of cash, when (non-Telstra) ADSL2+ upfront fees are only $130, with no contract. It’s covering businesses too though, and potentially is a qualitative change to the service compared to ADSL coverage (whether due to speed or reliability). For me, that’s way more than the government should be committing to this; if people really think fibre speeds are worth $6000 per household, let an ISP sell it to them privately. I’d completely support having local neighbourhoods able to vote to have fibre (or high speed wireless or similar) installed locally, paid for by an increase in everyone’s rates, eg.

But hey, the point isn’t to see whether it’s worth going ahead, it’s to see what it’ll end up costing when/if it does. Infrastructure development is apparently going to be funded by a bond issue, which means the government sells bits of paper with “$1000 treasury bond” written on it, with a “coupon” rate that’s currently around 5.75%, and a maturity date (up to around ten years away). The government will then pay $28.75 every six months to the bond holder, until the maturity date, at which point they’ll give them the full $1000. Over a ten year period, that’s a total of $1,575 paid out. The initial price is just whatever the government can get, which might be more than $1000 or less, but certainly won’t be $1,575. If I’m reading the RBA’s numbers right, the current price for a $1000 treasury bond with a coupon of 5.75% that matures in 2021 is about $1105.42.
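
For what it’s worth, here’s a rough sketch of where a price like $1105.42 comes from: it’s just the coupon payments and the final $1000 discounted back at whatever yield the market currently wants. The twelve-year term and the 4.6% yield below are my guesses, picked to roughly reproduce the quoted price, rather than figures from the RBA:

    # sketch: present value of a $1000 bond paying a 5.75% coupon semi-annually
    face, coupon_rate = 1000, 0.0575
    coupon = face * coupon_rate / 2          # $28.75 every six months
    periods = 24                             # guess: ~12 years to a 2021 maturity
    y = 0.046 / 2                            # guess: ~4.6% market yield, per half-year

    price = (sum(coupon / (1 + y) ** t for t in range(1, periods + 1))
             + face / (1 + y) ** periods)
    print(round(price, 2))                   # ~1105, close to the quoted $1105.42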

So what’s that mean? To get $42B now at that rate, you have to issue just under 38 million bonds. You then have to pay each person who holds one $57.50 per year, and then pay them $1000 in 2021. That’s $2.185B per year, and $38B in 2021. If you’re balancing the budget, you’re thus hoping to collect $300 per year from every household ($25 per month) over the eleven year period for the coupon costs; and you also have to come up with $38B from somewhere. If you’re going to do a Telstra again and just sell the infrastructure you built, then hopefully it’ll be worth $38B (or more) at that point  and you’re okay. If you’re hoping to have built a public asset, you’ll want to have collected roughly $3B per year more than that, so you’ll be able to pay off your bond holders and keep the infrastructure, which means about $700 per year for every household, or about $60 per month.
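
Or, as a sketch of that arithmetic (all the figures here are just the ones quoted above, nothing new):

    # per-household cost sketch, using the figures from the post
    raise_now, bond_price = 42e9, 1105.42    # $42B raised by selling $1105.42 bonds
    face, coupon_rate = 1000, 0.0575
    households, years = 7.4e6, 11            # households in range, roughly now to 2021

    bonds = raise_now / bond_price           # just under 38 million bonds
    coupons = bonds * face * coupon_rate     # ~$2.185B per year in coupon payments
    maturity = bonds * face                  # ~$38B due back in 2021

    sell_the_asset = coupons / households / 12                       # ~$25/month
    keep_the_asset = (coupons + maturity / years) / households / 12  # roughly the $60/month above
    print(round(sell_the_asset), round(keep_the_asset))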

This is averaged over every household in range of the NBN, including ones that don’t have computers or don’t want access to the internet. That’s likely inaccurate: if it’s paid for by people who use it, and only one in ten of the people who can, do, then that’s $250 to $600 per month instead. On the other hand, if it’s taken out of income tax/GST receipts, it’ll be a different group of people who end up paying for it, and working out how that’ll actually affect tax rates or consumption or other government projects is beyond my ken really.

Those are, presumably, just infrastructure costs, so additionally you presumably need to factor in maintenance fees, tech support, external bandwidth, and other costs too — as Simon pointed out in his post, effectively all your $60/month is getting you is the equivalent of the copper wires we already take for granted and pay Telstra about $20/month for (whether directly or not); running actual data over it is an add-on either way. Going on Internode’s charges, getting 40G of data a month would be an additional $55/month (if you want less than that, I’d presume you don’t care about the NBN anyway). It’s possibly slightly worse than that, in that the $20 that Telstra gets also covers routine network maintenance after the service is initially set up, while it’s not clear the $42B (and hence $25-$60/month) does. And of course, while $42B is a very Hitchhikers number, as a government project it might blow its budget and require additional financing later, so multiply it out for that reason as you see fit too.

So by my count, that means retail prices are something like $25-$60 (infrastructure) plus $? (infrastructure maintenance) plus $0-$540 (unused capacity costs borne by early adopters) plus $55 or more (data, service) plus $? (corruption, waste, budget blowouts, profit), which sums to a monthly retail broadband fee between $80 or more and $700 or more.

That sounds like it’s in the right ballpark for the scale they’re considering — about $80/month for low-end fibre sounds plausible if you don’t get forced to try to provide it outside of major population areas (ie, the new 90% of the population target, not the old 98%), but only if there’s a high takeup in the area, and it’s implemented with a lot of competence. If there’s low takeup, then you get to multiply the cost accordingly; and if there’s problems in the implementation, you get to add to the cost, and lower the adoption when people avoid it.

Of course, the more likely scenario is the budget doesn’t get balanced, and the final $38B is either rolled over into ongoing debt (“we need to come clean on $38B of bonds? let’s issue more bonds then and pay the old debts with the new debts! ponzi scheme? what’s that?”), taken out of taxpayers’ hides, or we have a round of inflation so that $38B is barely enough for a morning coffee. The other issue is that $42B worth of bonds would almost double the amount of debt Australia currently has on issue, which could easily affect the price we get for our debt — and that we’d have to issue more bonds than I’ve estimated above or have a higher coupon to get the same cash now, with corresponding increases in the prices needed to keep the budget balanced.

(And of course, if there really is a credit crisis, and people aren’t willing to loan money, issuing $42B of bonds wouldn’t be possible. If there’s just a credit crisis for private borrowers, this probably just makes it worse by giving even less reason for people to loan money to anyone who doesn’t have their own mint and tax agency)

WoBloMo, epilogue

Phew, so March is over in another hour or so, and this post will be my sixteenth of the month, thus kinda completing the woblomo challenge, even if it ended up pretty damn flaky after the 19th… But hey, it was kinda fun, at least from this end.

If you’re looking for actual interesting content, you might like to check out On Topology, by John Moeller who did a better (though still imperfect) job keeping consistent on his blog, with some interesting posts on hyperspheres amongst other things.

Internode quota redux

I guess I posted my previous post too soon, because just after midnight last night the usage that had disappeared magically reappeared. Traffic shaping had already started about six hours earlier, despite internode-quota-check telling me there was a few GB left, and I’d gotten the “over quota” email, so at least it’s all consistent now. Still all a bit odd though.

internode quota, day 2

So snarky comments notwithstanding, I’m sticking with my assessment of the quota behaviour as weird and confusing, but it’s nice to have a pretty picture tracking just how weird and confusing it is. Hopefully tomorrow Internode will be all “haha, it was all just a practical joke, April fools!” and give me another 40GB to play with. Since the first of the month is my usual rollover day, I’m quietly optimistic.

Munin

Okay, so I’m late to the party, but munin is great. I modified Mark Suter’s internode-quota-check to dump output in a form suitable for munin and ended up with some graphs. Today’s is a little confusing:

[graph: internode-day]

Somehow the blocks of downloads that almost used up the remainder of my quota yesterday just vanished! Awesome. Especially since there wasn’t actually any downloading happening during those blocks. Apparently there are rumours that the quota-exempt stuff is sometimes added to your usage as it happens, then deducted later, so maybe that’s what happened.

The red “variance” line is how much expected quota usage differs from actual usage — by the above I would’ve expected to have used 2.5GB more than I actually have so far. (The expected usage is just a constant rate that will exhaust the entire quota just in time for it to roll over)
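
In case anyone else is late to the party too: the munin side of this is pretty trivial, since a plugin is just an executable that prints name/value pairs (plus some labels when called with “config”). Something like the sketch below — though the field name and the “--remaining” flag are made up for illustration, not the actual internode-quota-check interface; the real thing just parses whatever Mark’s checker outputs:

    #!/usr/bin/env python
    # minimal munin plugin sketch; the "--remaining" flag is illustrative only,
    # not the actual internode-quota-check interface
    import subprocess, sys

    def remaining_gb():
        out = subprocess.check_output(["internode-quota-check", "--remaining"])
        return float(out.strip())

    if len(sys.argv) > 1 and sys.argv[1] == "config":
        print("graph_title Internode quota remaining")
        print("graph_vlabel GB")
        print("remaining.label quota remaining")
    else:
        print("remaining.value %s" % remaining_gb())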

Exponential Growth

On Wednesday the 25th, I was thinking about project growth. The day before I’d posed a question to the debian-vote list:

Over the next twelve months, what single development/activity/project is going to improve Debian’s value the most? By how much? How will you be involved?

There have only been a couple of replies so far, the first of which was from Russ Allbery, who took issue with the way I’d chosen to focus on growth. Which led to some interesting thoughts; or at least, I find them interesting.

The examples I gave talked about how much that would improve Debian from users’ perspective in percentage terms — this would make Debian three times as good for one out of every ten of our users, eg. That, in turn, implies an exponential rate of growth: if you can consistently improve at a given percentage over a given timeframe, whether that’s 100% a month or 1% a year, you’ll eventually be doing better than anything that can’t remain exponential, given enough time. On the other hand, if you can’t maintain exponential growth, you won’t be able to maintain any given percentage either — linear growth, eg, will give you percentages that drop rapidly: 100%, then 50%, then 33%, then 25%, 20%, 17%, 14%, 12.5%, 11%, 10%, etc.
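
Those percentages are just 1/n, by the way: each step of linear growth adds the same absolute amount, so measured against an ever-growing base the gain keeps shrinking. A quick check:

    # linear growth 1, 2, 3, ... gives period-over-period gains of 1/n
    print(["%.3g%%" % (100.0 / n) for n in range(1, 12)])
    # ['100%', '50%', '33.3%', '25%', '20%', '16.7%', '14.3%', '12.5%', '11.1%', '10%', '9.09%']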

The interesting thing is how that looks from a user’s perspective. If a user’s already got a working system — running Debian, or Ubuntu, or anything else — what’s the incentive to either upgrade or switch to another distribution? One is that ongoing support might disappear, so you can’t get security support, or Oracle will stop answering your calls because your OS is too old and they just can’t be bothered anymore. That’s a mostly negative approach though: you’re not expecting any benefit, so you just want to minimise the pain of upgrading. New features, behaviour changes, all that stuff is a cost, because you’re just looking to keep doing what you were doing before. The other reason to upgrade is exactly the opposite: that there are new features, or new ways of doing things that are a real benefit to you personally. Perhaps it’s a bunch of small things — a little less power usage, a few less errors here and there, less obnoxious popups, a faster boot, fewer typos around the place, some fixed bugs, some more documentation — that just add up to a more pleasant experience. Perhaps it’s one or two big things — you can replace your last Windows box, eg. Perhaps it’s something that pretty much only matters to you — you changed your name to an unpronounceable symbol, and finally your preferred font has a unicode glyph included that you can use as your real name in your email program.

But it seems to me that if you’re going to provide a new version of your software and you want users to be happy about upgrading, then there are two things to focus on: making the changeover completely unnoticeable, and making the upgrade give an appreciable benefit to a significant number of users. And if you’re going for the latter path, then that does require a percentage improvement: for x% of your users, their experience using the new software has to be y% more pleasant than using the old software. (And if you decide to go exclusively for the former path, there’s an easy solution: don’t change the software at all — you can’t notice changes that aren’t there)

And note that that’s actually the right answer, too: if you aren’t improving your users’ lives, you shouldn’t be releasing new versions. Upgrades come at a cost, even if they’re nominally free: some things break, you need to learn new things, and you often get forced to upgrade other things too. If you’re only making things 0.0001% better, it’s probably better to delay the upgrade until there’s a bunch of improvements that can be combined to actually make the cost worthwhile.

Getting back to the original question, it seems to me like it’s also fair to focus first on the things that are going to provide the biggest benefit; though obviously there’s plenty of room for debate over whether three small things are better than one big thing, or how to compare a short term benefit with a longer term benefit. But ultimately, I think it’s fair to say that when you multiply all the improvements all your contributors are working on, scaled by the proportion of users they affect and how much they affect those users, you want to end up with something similar to Moore’s law: that is your project becoming twice as “useful” every eighteen months. Maybe it’s a different time frame, maybe it’s three years or five years, but if it’s not in the ballpark, then you’re basically not doing your users any favours.
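
To put numbers on the “different time frame” option: doubling every m months works out to an annual rate of 2^(12/m) - 1, so:

    # converting doubling times into annual growth rates
    for months in (18, 36, 60):
        rate = 2 ** (12.0 / months) - 1
        print("doubling every %d months = about %.0f%% per year" % (months, rate * 100))
    # 18 months: ~59%/yr; 3 years: ~26%/yr; 5 years: ~15%/yr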

So how do you get there? There’s a few ways of getting that sort of growth. You can keep your userbase constant, and improve your quality. That has the benefit that you’ll likely get more users as well, so do better than you expect, but eventually you’ll hit a wall because your users will be near enough to completely happy that you just can’t make things that much better. Alternatively, you can maintain your quality and expand your userbase; which largely means finding new things to do, rather than doing the current things better. For open source development, that has the benefit that it can increase your contributor base too — if you have one contributor for every twenty users, then if your userbase increases by 100% every eighteen months, so do your contributors. And if each of those contributors is focussed on individually linear growth — improving the system enough that twenty new users will adopt it, for instance — that will rebound back into sustainable exponential growth.

Now, there are limits to that sustainability: eventually you run out of people (or just match the rate at which the population is growing, anyway), or hit the absolute limits of a quality experience. But that just means one of three things: your project should expand into other areas that still usefully contribute to humanity at a significant level, people should stop spending much time on your project and work on other projects that usefully contribute to peoples’ well-being, or the human race has pretty much hit the absolute limits of its potential. Those seem like big calls to me, and at least in the areas that interest me, there still seems like plenty of potential for big improvements.

So at the moment there are three responses to my original mail, from Russ, from Raphael Hertzog, and from (DPL candidate) Stefano Zacchiroli. And they’ve all pretty much avoided the “By how much?” part of the question. If it’s really fair to expect Debian to improve significantly (by 10%, 100%, 300%, whatever) over the course of a year — and as I’ve argued above, I think it is — not making estimates of how much benefit things will actually result in seems both a bad way to establish the project’s priorities, and somewhat disconnected from the usual philosophy of wanting to measure performance and results that we expect from scientific and engineering endeavours.

Elastic bands

So moving onto Monday the 23rd. Something I’ve been pondering blogging about for ages now is an analogy I came up with for the way Debian is organised. I’m not quite sure of the motivation, but it goes something like this: imagine all of the people in the organisation arranged in a circle. That circle represents everything the organisation does. Around the outside is an elastic band, holding all the people together — and that’s the organisation itself. When someone wants the organisation to do something new, they try to move to the new area, but have to stretch the elastic band to do so, which might be easy or it might be hard, depending on how rigid the elastic is, or how rigid the organisation is. If it doesn’t stretch much, when you try to extend the organisation to do new things, you’ll find instead that you’re pulling the people on the other side of the circle away from the things they’re interested in; and that they’re doing the same to you. The most obvious solution to that is often to pull harder, or to tell the other people to stop getting in your way — but a better solution is often to find some way to make the elastic more stretchy, ie to make the organisation more flexible, or to make the things it does, and the people in it, less tightly coupled.

Anyway, it seems a useful way of looking at conflicts to me: are you actually growing the organisation, or are you just spending all your effort getting other people to move in the direction you want, when they don’t actually care?

You could probably extend the analogy to cover forking too — stretching the band so far it snaps, then tying it back together again. I don’t know if that’s very helpful though… Also, there’s probably more than two degrees of freedom, so a hypersphere is probably technically a better model, but, well…

Linux Aus face to face

So, catching up on my WoBloMo posts. On the 21st I was in Melbourne for the Linux Australia council meeting. Saturday was mostly organisational stuff: basically getting an idea what each of the council members thought about the approach we’d take for the rest of the year. Stewart invited Andrew Cowie to give a presentation on corporate governance and related background from LA’s history. It was pretty similar stuff to what Andrew talked about when he was on the committee (from 2003 to 2006), basically that it’s important to have a split between oversight and executive roles (ie, making sure stuff is done properly and actually doing stuff), keeping your head around all the different sorts of strategies and objectives the organisation might pursue, and focussing on being a sustainable organisation, so dealing with people coming and going and prudent management of funds and resources. In some ways it’s a difficult issue for LA because we’re at the point where we have enough resources to want to do lots of cool stuff, without having the resources to handle it in a sustainable manner; we have money, but not enough to hire staff for an extended period; we have income from the conference, but it’s not very diversified and can be quite variable; we have volunteers, but they’re often already overloaded, etc. I don’t think we came up with any answers per se, so much as just kept an awareness of the questions; but compared to a few years ago, it seems like LA’s beginning to settle into something approaching a working compromise, which is good.

The other comparison that can be made to a few years ago is more of an absence. When Andrew was on the committee, at least in the year we had in common, he and Pia had a habit of butting heads, more or less on this topic. From where I sat, it was mostly entertaining: a real live dialectic, noble scholars jousting on the field of ideas wearing their philosophies’ favours on their arms — though I gather for both Andrew and Pia it was mostly just frustrating. Particularly since there was something of an impedance mismatch in their roles within the organisation, rather than being a debate between people with equal responsibilities that someone else gets to adjudicate. To attempt to paraphrase Pia’s line of thinking (and without the benefit of it having been reiterated just a few days ago), I’d say her view was that sustainability is very much a secondary issue compared to activity and actually getting things done; that Linux Australia is a volunteer community, so make use of that and get people to do things for free so you don’t have to worry about how much money you have, and reward that contribution with kudos and appreciation, and ultimately if your organisation is doing great things, people will find a way to keep it going one way or another anyway.

I’m personally more biassed towards Andrew’s focus than Pia’s — I’d rather work on the multiplier between effort and results, than increasing efforts with the same multiplier. But not completely so: there’s no point having huge results for very little effort if nobody’s putting in any effort, after all, and there’s no point having an organisation that can sustain itself forever, but that never actually does anyone any good. So for me, it seems valuable to keep the other side of the argument close to mind.

(We discussed a bunch more practical stuff on Sunday, but I couldn’t very well have posted about that the day before, which was the WoBloMo post I’m meant to be making up for here…)

Voted

So I’ve pre-poll voted in preparation for my trip to Melbourne for the LA face-to-face. Not a very exciting range of candidates: Anna Bligh for Labor who’s premier; Mary Carroll for LNP who’s apparently the state secretary of the party; Gary Kane for the Greens who’s running on an anti-developers platform, with light rail to solve local traffic problems; a Socialist Alliance guy who doesn’t actually live in the electorate; David Rendell who wants daylight savings; Derek Rosborough who’s a serial independent candidate and wants a review of water fluoridation; Matt Coates who’s part of the “Reclaim Queensland” bunch of independents; and Merilyn Haines and Greg Martin who I couldn’t find anything out about. Hrm, I suppose one of them might be the “sex party” candidate.

A day late for WoBloMo (unless you go by Hawaii time…) but I figured I shouldn’t blog ’til I’d actually voted…

Bubbles 2: Glubba glubba in the puddles

(Random topic courtesy of Dressy Bessy)

A couple more thoughts on Sunday’s post. In comments, Brendan Scott asks “Why would a trader extrapolate against their estimate v valuation?” But there’s actually a broader question — why would anyone trade at all? The initial scenario provided infinite supply at $500 per item, and gave a randomly chosen demand at $500 per item, which was then naturally fully satisfied. If they thought it was a good idea to buy more at more than $500, they should have bought upfront, and why would they want to sell something they just paid $500 for, for less than $500? Worse, the only difference between the traders is the input they get from the random generator: their strategies are explicitly the same. So no trader can potentially be smarter than another, and profit from their stupidity, they can only profit if the random number generator gives them a lucky number, and someone else an unlucky number. So I don’t think there’s a good answer to “why would they do this?” — participating in this market is fundamentally a mistake, the way most people view the world. For instance, it ends up with a 74% chance that you’ll have less money than you started with, and starts off with perfectly equal wealth amongst all the participants, and ends up with the wealthiest individuals having over $60,000 while the poorest have less than $10.

In a real market, as JD points out you have different people having different information — though of course you’d have to actually have something to have information about. If you decided the assets were batches of ten barrels of oil ready for delivery in twenty-four months time, different people would have better or worse estimates of the value, and depending on other changes in the economy, the underlying value would change too (maybe someone discovers a cheap oil replacement and it drops, maybe there’s a war and it rises).

An interesting theory that I’d never heard of until David Pennock posted about it the other day is the “Kelly Criterion”. Given an estimate of your odds of success, and how much you’ll make, it will tell you the optimal amount to risk to get the biggest advantage from compounding returns. The idea is if you’ve got an almost sure thing, and you only risk a few dollars on it, you won’t make much; but if you continually risk everything, even on sure things, you’ll eventually lose it all, and that’s no good either. The Kelly criterion makes that idea precise, telling you exactly how much of your resources you should commit, assuming you can come up with a reasonable estimate of your odds.
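
For the simplest case — a bet you either win or lose outright — the criterion reduces to a one-liner. This is the textbook special case, not anything specific to the setup in Kelly’s paper:

    # Kelly fraction for a simple binary bet: p is your chance of winning,
    # b is the net payout per dollar staked; stake this fraction of your bankroll
    def kelly_fraction(p, b):
        return (b * p - (1 - p)) / b

    print(kelly_fraction(0.6, 1.0))   # even-money bet you win 60% of the time: stake 0.2
    print(kelly_fraction(0.5, 1.0))   # a fair coin flip at even money: stake 0.0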

The original paper was from 1956 by John Kelly, apparently in collaboration with Claude Shannon, and as you might therefore expect, came from an information theoretic approach, rather than an economics one. The idea was that you have a secret channel that tells you exactly what to bet on, but unfortunately it’s not a clear channel, and sometimes you mishear what you’re being told and thus bet wrong. Fortunately you’re clever enough to figure out how often this is likely to happen, and thus you can work out when to follow the tips and how much to invest in them, which gives you the aforementioned Kelly criterion. But the signal you get doesn’t have to actually be from the future, it just has to be correct predictably often. If you want to apply that to your intuition, your astrologer, or a groundhog’s shadow, that’s fine, though the lower your odds of success, the lower the amount you’ll be encouraged to invest, and thus the lower your optimal returns will be.

But that’s only relevant if you’ve got an actual meaningful signal and a chance to actually profit, which isn’t what I gave my poor automated traders. If I had, an optimal system would’ve rewarded folks with the best signal, provided useful information for someone, and transferred physical wealth from people who wanted information to people who had it. Redoing the marketplace so that was actually possible would provide a much more interesting endgame.

Nevertheless, it’s interesting to me that even without any fundamentals at all, or any complicated trading techniques, you can pretty easily get behaviour that looks like a bunch of otherwise intelligent people bidding themselves into bubbles and then crashes. If you looked at those graphs as the price of oil or milk or similar, you’d naturally go looking for a cause for the price changes: but at heart, there actually wasn’t one in that case, it was just a combination of coin flips, that happened to be more or less likely, due to traders’ habits, and how much they could happen to afford at the time. You can only reasonably fix that by changing habits, and probably the only way to do that is to bankrupt folks with bad habits so they stop it…

Bubbles: the joy and the laughter

The efficient market hypothesis — that prices in a market immediately adjust to fully reflect new information as soon as it becomes available — is probably the primary foundation of the success of markets at allocating resources: eg, making the prices people are willing to pay at supermarkets influence what farmers produce and how much oil gets drilled to power trucks and trains to transport food around the country or the world. What’s interesting is that despite the copious evidence that it works in practice — food does make it to supermarkets both more reliably and more cheaply using free markets than alternatives — it’s clearly not actually true: prices do get completely out of whack with “reality” and you get bubbles, which end up forcing pretty painful corrections when they eventually burst. It seems like avoiding bubbles would be a win, but it’s not usually clear when they’re actually happening (after all, big price increases could actually be an accurate indication of reality, right up until a crash proves they weren’t), and in some sense it’s not really even clear why they happen in the first place.

I had a thought the other day that it might be interesting to simulate an asset market to test out an idea that I’d been pondering, to see if bubbles and crashes occurred or not. It didn’t take long to get a pretty serious price crash:

[graph: crash]

But after the crash, things bounced back okay and stayed pretty stable:

[graph: recovery]

That time series is twenty times as long as the first, so except for a bit of ups and downs, it looks like it might have actually stabilised at some sort of “fundamental” price. But sadly for our virtual traders, it turned out not:

[graph: instability]

That time period’s about six times as long as the previous, so about 120 times as long as the initial crash snapshot. And it’s not really a market that looks very pleasant to participate in either — and going on just a little further ends up with what appears to be a permanent price crash:

[graph: endgame]

The scale here is back to about the same as the initial recovery.

So what’s happening? As a pure simulation, all the behaviour here is implied by the initial setup. And that’s this: fifty simulated traders each start with $11,000, randomly choose an initial valuation for the assets available to trade, and based on that valuation purchase a number of those assets at $500 each. This is then followed by a series of rounds (the graphs above are from round 1 to 160,000), where each trader offers a buy and sell price for one asset. These prices are calculated by taking their current valuation for the assets, and adjusting it by two randomly calculated percentages r³, where r is between 1% and 51%, so they’re offering to buy for less than their valuation or sell for more than it. Those offers are collected, and if there’s any overlap (someone wants to buy for more than someone else wants to sell), a mid-point price is selected, and the trades are made at that price. If an individual trader is broke, their buy price is 0; if they don’t have any assets, their sell price is set to infinity.

Apart from the initial purchase, fundamentals have absolutely no influence on this market, or, more particularly the traders’ valuations of the worth of the assets being traded. There are no dividends, there’s no intrinsic worth to the assets, no monetary benefits or drawbacks to owning assets, no external influence after the initial setup, and it’s entirely zero-sum: any profits one trader makes come at an equal cost to some other trader.

So the traders try to out-interpret the market, by taking their previous estimated value and the price of the last trade, and extrapolating linearly. So if they thought it was worth $600, and the market thought it was worth $200 more at $800, they figure next round it will have another $200 increase and be worth $1000, and the same on the way down. If there weren’t any trades in the previous round, they’ll randomly change their valuation by between -5% and +5%. So when prices start going up, everyone keeps raising their prices for the next round, until nobody has enough money to buy anymore, and the prices reverse, until it’s down to just above zero, and then it’s back the other way. Medium-term stability only happens by good luck, long-term stability only happens when enough traders are broke that the ability of traders to randomly increase their valuations for even a few rounds in a row is heavily restricted.
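
For anyone who wants to poke at it, here’s a rough reconstruction of that setup. It’s not the actual code I ran, so details the description above doesn’t pin down (the spread of initial valuations, how many assets get bought up front, and exactly how overlapping offers get matched) are guesses:

    import random

    TRADERS, START_CASH, START_PRICE, ROUNDS = 50, 11000, 500, 160000

    class Trader:
        def __init__(self):
            self.valuation = random.uniform(250, 1000)   # guess: scattered around the $500 issue price
            self.cash = START_CASH
            # guess: spend about half your cash up front if you value them above $500
            self.assets = (self.cash // 2) // START_PRICE if self.valuation > START_PRICE else 0
            self.cash -= self.assets * START_PRICE

        def offer(self):
            def spread():
                r = random.uniform(0.01, 0.51)
                return r ** 3
            buy = min(self.cash, self.valuation * (1 - spread())) if self.cash > 0 else 0.0
            sell = self.valuation * (1 + spread()) if self.assets > 0 else float("inf")
            return buy, sell

    traders = [Trader() for _ in range(TRADERS)]
    prices = []

    for _ in range(ROUNDS):
        offers = [(t,) + t.offer() for t in traders]
        buyer = max(offers, key=lambda o: o[1])      # best (highest) buy offer
        seller = min(offers, key=lambda o: o[2])     # best (lowest) sell offer
        if buyer[0] is not seller[0] and buyer[1] >= seller[2]:
            price = (buyer[1] + seller[2]) / 2.0     # mid-point of the overlap
            buyer[0].cash -= price; buyer[0].assets += 1
            seller[0].cash += price; seller[0].assets -= 1
            prices.append(price)
            for t in traders:
                # extrapolate linearly: a $600 valuation and an $800 trade means
                # expecting $1000 next round (and the same on the way down)
                t.valuation = max(0.0, 2 * price - t.valuation)
        else:
            for t in traders:
                t.valuation *= random.uniform(0.95, 1.05)

    if prices:
        print(len(prices), min(prices), max(prices))  # trade count and price range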

Of course, that means every trader is explicitly refuting the efficient market hypothesis. If you do the opposite and assume it, ie have every trader immediately set their internal valuation to the market price, you get one round of trades setting a consensus price from trades based on each trader’s individual, private valuation, but then that’s it. No bubbles, no crashes, no periodic variation.

Which seems like it makes sense: if you’ve got a system that allows positive feedback loops on valuations, you’ll get prices rising or crashing; if the feedback’s limited and randomised, you’ll get both happening randomly. And “extrapolate the current trend” is definitely a positive feedback loop. I’d almost argue that all technical trading is positive-feedback: it assumes there are trends, then acts in ways that partially support the trend, and partially profit from the trend. If there’s an upward price trend, you buy, increasing demand, and supporting the upward price trend; if there’s a stabilising or downward price trend, you sell, dropping demand and supporting a downward price trend.

So I guess that’s my theory: bubbles and their corresponding crashes are a natural feature of markets that have a significant number of technical traders, that is, traders who act based on their predictions of how other traders will act, rather than any inherent value they see in the things they’re trading. That might or mightn’t be an obvious conclusion. It seems kind-of obvious, but given the amount of purely technical advice in sharemarket books — teaching you not how to evaluate companies, so much as how you should expect the market to behave based on the same information everyone else has about how the market has been behaving — maybe it’s not.

If you could go a step further and say that the only profit technical investors can expect is from the losses of other technical investors, that could make things really interesting. I suspect something like it is close to true, if you can assert that your fundamental investors never individually make a loss. You can do that by assuming they have a personal, intrinsic valuation of the asset at $x, and will never buy one for more than that, or sell one for less than that, but that valuation is naturally going to change over time due to external factors (you become richer, so you’re willing to pay more for things; your priorities change and you value the asset more or less; the asset changes to become more or less useful/pleasing), and I don’t really see how to factor that in.

Eee box

So I fell for Zazz’s “Thingy of the Day” last week and ordered an Asus Eee Box. It arrived today, and is pretty respectable — the “screws onto the back of your LCD” form-factor is pretty sweet, and having SD cards as the only removable media seems pretty decent too. Built in wireless, decent number of ports, and it all looks good. It came with Xandros installed (as opposed to XP), which I’ve now replaced with Debian, though haven’t finished setting it up yet. I’m planning on trying to make it replace azure (my co-located server) entirely, though I’m not sure it’ll actually have quite the capacity to handle that. And there’s a few things I’d possibly like to make it do in addition too, like act as a MythTV server for the PS3, provide IPv6 addresses for my home network, work with my laptop so I can have a triple-head display over ethernet since my DVI/VGA ports can’t go that far, etc.

Budgeting

For a variety of reasons — personal, business, linux australia, linux.conf.au — I’ve been poking at budgeting lately. I thought it might be worth a post on some of the ideas that I’ve found useful; if they turn out to be too obvious, well, bad luck. :)

The way to look at a budget, it seems to me, is as a way of balancing out priorities. For a personal budget, you want a roof over your head, you want to eat, have transport, spoil yourself, not work too hard, have a holiday now and then, and probably try to save up for retirement. A budget lets you translate those things into numbers, so you align your expectations: you’ll probably find, for example, that you can’t spend $500 a week partying, have an overseas holiday every six months, and be a multi-millionaire by the time you’re forty while only working twenty hours a week at minimum wage. But knowing what things you are trying to balance out gives you options: maybe you don’t have to be a multi-millionaire, maybe you’re happy to trade eating out and watching movies in order to have the holiday, and maybe if you’re cutting down your entertainment through the week anyway you can do more overtime and get paid a bit more.

If you don’t balance your budget that way, but instead leave some of the parts as assumptions — like “my salary is such-n-such, where does it go?” — you risk not being able to handle surprises. What if there’s a global financial crisis and you take a paycut to avoid losing your job? The more your budget can tell you “if such-n-such happens, then that means I can’t afford this anymore” and “if I want to afford that, I need to make sure such-n-such does happen”, the better off you are. And if you can take fairly conservative, pessimistic estimates — I won’t get a payrise or a bonus, I’ll probably lose $10,000 over the year from accidents or something, I’ll probably have to spend more on groceries than I expect, etc — and still keep the results roughly matching your goals (not working too hard, an annual holiday, not sleeping on the streets/starving to death, etc), then you’re likely to end up in a better situation than you planned. And even if everything does go wrong, you’ve already factored that in. And if you can’t come up with something like that, you at least know what sort of risk you’re running, and can start thinking about how you can minimise it in future.

For me, the two biggest tricks for getting my day-to-day expenses better managed were giving myself a weekly allowance for things like eating out, paying for petrol, buying alcohol, and such; and buying my groceries online once a fortnight with a fairly strict budget. The theory there is that they stop being variable expenses and become predictable, more than whether or not I’m spending less overall. If I find I’ve got $20 in my wallet to tide me over ’til next week, I’m not eating out; if I find I’ve collected a few hundred dollars because I haven’t been going out much over the past few weeks and a PS3 game catches my eye, I don’t have to feel guilty about splurging: it’s already been factored into my budget. And buying groceries online, while a little more expensive, means I can aim at a particular dollar figure, without having to keep track of amounts in my head, or end up in the last aisle with more than I want to spend, and having to wander around putting stuff back on the shelves. Having it delivered in a truck is also easier than carrying it in a backpack on a motorbike, too. Removing the hassle from regular expenses, and reducing the amount of guilt and worry that comes up about irregular expenses is a worthwhile win of its own.

Adding all your weekly, fortnightly, monthly and annual expenses like that up gives you a rough idea what your minimum disposable income is: how much you’re earning that just goes into keeping you pretty much exactly where you are. But you probably don’t want to just stay where you are: you probably want to move ahead at least a little. Maybe that means paying off debt, maybe it means owning a house so you don’t need someone’s permission to put in new powerpoints, maybe it means spending more on yourself each week, maybe it means having a million dollars just so you can brag about it. Which means earning more than you need to, so you can save it up and then either brag about it, or spend it all again while retired. I made a spreadsheet a while ago to see what that actually looked like — you set your current income and expenses, how much you think those will increase each year, and it tells you how much you’ll have in assets over the next twenty years, and how close you’ll be to being able to just live off the interest those assets earn. It doesn’t take inflation into account, does take (current) Australian tax brackets into account, but doesn’t deal with any of the differently taxed ways of building assets, like trading shares or property (capital gains), superannuation (tax breaks, co-contributions), first home savers, etc. But there are benefits in keeping things simple too.
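
For the curious, the guts of that spreadsheet is only a few lines of arithmetic. The figures below (salary, expenses, growth rates, and a flat effective tax rate standing in for the actual brackets) are placeholders rather than my numbers:

    # twenty-year projection sketch: no inflation, a flat tax rate instead of the
    # real brackets, and everything reinvested at a single rate of return
    salary, expenses = 70000.0, 40000.0
    salary_growth, expense_growth = 0.03, 0.03
    tax_rate, return_rate = 0.30, 0.06
    assets = 0.0

    for year in range(1, 21):
        investment_income = assets * return_rate
        after_tax = (salary + investment_income) * (1 - tax_rate)
        assets += after_tax - expenses
        # "living off the interest" means investment income alone covering expenses
        coverage = investment_income * (1 - tax_rate) / expenses
        print("year %2d: assets $%9.0f, interest covers %3.0f%% of expenses"
              % (year, assets, coverage * 100))
        salary *= 1 + salary_growth
        expenses *= 1 + expense_growth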

One assumption that’s worth pointing out is that it assumes your assets actually earn an income for you; which isn’t something you get from owning your own home unless you either sell it and move somewhere cheaper, or rent out part of it. A few years ago when I last looked, owning a home in the city was a lot more expensive in interest costs than renting — which is to say purchasers were paying a premium for the expected capital gains when they eventually sold. Getting into that sort of situation, as far as I can see, is probably a bad idea if you’re comfortable with the vagaries of renting: it’s effectively increasing your fortnightly expenses, and locking up a lot of your assets in something you can’t easily convert into anything else should you need it. Maybe that’s desirable if you’ve already got a lot of investments and want to have something stable, or you’re expecting rents to rise substantially or similar, of course. And those aren’t always the circumstances you’ll find yourself in; depending on where you live, renting can be more than the interest costs of a loan, in which case buying can be a good idea from your budget’s perspective. You’re reducing your regular costs, and are likely to be putting a decent amount of your savings somewhere you’re not going to easily fritter them away. And if you’re lucky when you do eventually sell, you’ll not only benefit from the savings in rent, but you’ll also sell for more than you paid, and be even better off. But unless you’re buying out in the country, you’re probably not that lucky these days.

Anyway, at over 1000 words, I guess that’s probably long enough. I wonder if it’s actually interesting to folks out there…

Obligatory Watchmen post

Summary: meh. Universe, cinematography, characters, drama, humour: 9 or 10 out of 10.  Plot: 3 or 4. It was fun to watch, but I didn’t end up caring about anything much that happened. Probably would’ve been more interesting if I was more invested in either the world or the characters — ie, if I’d read the comics — but over a three hour movie, it didn’t happen for me. The production values, though, were… visceral — surprising it’s MA15+ and not R, really.