RedCat

I don’t have anything interesting to say today, so instead I’m going to link to an oldish post on my junkcode wiki (rss feed). Namely redcat — a program that merges ed-format diffs, so they can be applied in a single pass. That makes a big difference when dealing with more than a few small patches to a single large file, most particularly when working with apt and pdiffs. There are no actual plans to improve apt along these lines as far as I know — at least, there wasn’t any response when I poked the apt list some time ago — but the post still demonstrates a couple of coding techniques folks might find interesting.
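For anyone who hasn’t poked at ed-format diffs before, here’s roughly what applying a single one looks like. This is just a naive illustration of the format (it ignores details like the escaping of literal “.” lines), not redcat itself:

import re

def apply_ed_diff(lines, diff):
    # Apply one ed-format diff (the output of "diff -e old new") to a list of
    # lines. diff -e emits its commands in descending line order, so applying
    # them top to bottom never invalidates the line numbers still to come.
    cmd_re = re.compile(r'^(\d+)(?:,(\d+))?([acd])$')
    script = iter(diff.splitlines())
    out = list(lines)
    for cmd in script:
        m = cmd_re.match(cmd)
        if not m:
            raise ValueError("not an ed command: %r" % cmd)
        start = int(m.group(1))
        end = int(m.group(2) or m.group(1))
        op = m.group(3)
        text = []
        if op in "ac":
            for t in script:
                if t == ".":
                    break
                text.append(t)
        if op == "a":
            out[start:start] = text        # append after line `start`
        elif op == "c":
            out[start-1:end] = text        # replace lines start..end
        else:
            out[start-1:end] = []          # delete lines start..end
    return out

The point of merging is that applying a hundred pdiffs this way means a hundred passes over a large Packages file, whereas merging the scripts first means just one.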

The Red Pill

ajtowns – Scamming my way onto “Team Samba” (“hey, I use it!”) was a good idea. Winners! #lca2010 #hackoff

Wellington Perl Mongers were awesome enough to run the Hackoff during LCA 2010. It consisted of a couple of hours of team hacking to decode craftily hidden eight character tokens. I’d seen Rusty carefully putting together the “Samba Team” earlier (“Here’s what’ll happen: I’ll put together a team so awesome that they won’t want me on it”), so when I wandered in with Biella to check out what was going on and found myself sticking around to see how it went, it seemed obvious which team I needed to latch onto.

The first four problems went down pretty well, albeit with a chunk of brute force rather than any elegance. The first question was some text munging of an HTML document, pretty much perfectly designed for a perl solution. Personally, I got stuck on trying to get the damn thing to display on my underpowered netbook, but that’s what teams are for, right? Problem two was pretty similar — it had an ascii punched tape that you had to treat as binary (holes are one, untouched is zero), translate to hex, translate to ascii, then actually read the resulting text rather than just plugging in the promising-looking eight characters from the end. I think I abused iprint and shell to get that one out — see: users can be contributors too! Number three was one for the spreadsheet mungers: a fixed width text database where you had to pull out various bits of information. We solved this one as a team: people worked on different queries however they liked; Andrew Bartlett and I imported it into openoffice.org and used its sorting facilities; other people did cleverer things. The fourth problem, which was the last one anyone got out before the organisers had to call time (and that after an extension), was an OpenOffice document inside a tarfile with various letters highlighted; putting the highlighted letters together gave you the answer. I think it was Jelmer who managed to pull the document apart and programmatically extract the answer from that, and with it the win.

The next question we got was somewhat horrific. It was an animated gif, consisting of 1300-odd frames of green text falling down the screen Matrix-style, with one or two characters randomly adorned with a blue or red square. The instructions on the first frame suggest choosing the “red pill” (the rabbit hole one) or the “blue pill”, with little pointers at the different coloured squares. So the answer seems to be to go through every frame, pull out the characters highlighted in red, and see what they end up saying.

We were kinda stymied by this, and after some back and forth (pulling the gif apart, converting to png, and basically getting nowhere) we ended up just assigning chunks of 1000 frames to different team members. We got a bunch of frames done, but were still a ways off an answer when it got shut down. Then it was off to the nearby pub for dinner and beers (and Tridge’s analysis of the “Harry Read Me” climate data stuff).

(We’d only looked at the red squares, trusting in fandom to not let us down; one of the other teams had actually looked at the blue highlighted letters though — turns out they were just DEADBEEF repeating. Nice.)

After heading back I spent a bit of time playing with my pygame project; I think I got from just having a triangular grid to having little moving circles on top of the grid, or something equally compelling. But that then inspired me to have another poke at that problem. So I pulled out pygame and the 1349 frames converted to PNM, and set about automatically extracting the relevant characters.

I made a few assumptions: that the characters would be laid out on a nice rectangular grid, that all the characters would look the same (every A is the same combination of pixels as every other A), and that the highlights would be in a predictable position relative to the character grid. I pulled up a frame in gimp to work out the pixel width of each character and the offsets for the frames, at which point it was just a simple matter of programming.

And rather than explicate that further here, I’ll point at my junkcode page for the explanation of that programming and the prettified source instead. All in all, pretty neat, I thought.
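For the impatient, the shape of it was something like the following (the cell size, offsets, colour threshold and filenames here are placeholders rather than the real measurements; the real code is on the junkcode page):

import pygame

CELL_W, CELL_H = 8, 12    # character cell size in pixels (made up for this sketch)
OFF_X, OFF_Y = 4, 6       # offset of the character grid within the frame (also made up)

def find_red_cell(frame):
    # scan for a reddish pixel and snap its position to the character grid
    w, h = frame.get_size()
    for y in range(h):
        for x in range(w):
            c = frame.get_at((x, y))
            if c[0] > 200 and c[1] < 80 and c[2] < 80:
                return (x - OFF_X) // CELL_W, (y - OFF_Y) // CELL_H
    return None

def cell_bitmap(frame, col, row):
    # reduce the cell to an on/off bitmap, so every "A" hashes to the same key
    x0, y0 = OFF_X + col*CELL_W, OFF_Y + row*CELL_H
    return tuple(frame.get_at((x0+dx, y0+dy))[1] > 100
                 for dy in range(CELL_H) for dx in range(CELL_W))

glyphs = {}   # bitmap -> character, labelled by hand the first time each shape turns up
answer = []
for n in range(1349):
    frame = pygame.image.load("frame%04d.pnm" % n)
    hit = find_red_cell(frame)
    if hit is None:
        continue
    bitmap = cell_bitmap(frame, hit[0], hit[1])
    if bitmap not in glyphs:
        print("frame %d has a glyph that needs labelling" % n)
        glyphs[bitmap] = "?"
    answer.append(glyphs[bitmap])
print("".join(answer))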

Anyway, submitting that (sitting on the pavement outside the venue; a security guard questioned me about stealing the conference wifi, and a group of party girls walked past and commented “He’s facebooking!” — sadly, they were right) then allowed us access to the final problem they’d prepared, which was a midi file disguised as a spreadsheet where the answer was encoded as errors in a repeating tune. Beyond listening to it for a bit, I didn’t make an attempt at that point: graphics and OCR were scary enough; audio? Please no. I see now that a couple of other teams actually got that one out too eventually. Kudos!

(Tridge and Jelmer had a go at solving it independently too; they did it in C, loading each PNM into a structure in memory directly, scanning that for the red square, copying it into a separate structure, writing that to a new file, and running gocr over the file; then doing the next frame. PNM is a particularly great format for doing that in C.)

PyGame

At this year’s linux.conf.au I decided it was high time I learnt how to program simple graphics. The use case I had in mind in particular was simulating/visualising resource transportation in a grid based real-time strategy game like Widelands, but really I’ve been a bit annoyed that I haven’t been able to do basic graphics on Linux at a similar level to what I used to be able to do with AMOS Basic on the Amiga or QuickBASIC on the PC back in the early ’90s. I used to be able to write a clone of QBasic’s “nibbles” in a couple of hours, with just “print” and “locate” and a bunch of logic; doing the same today would mean figuring out all the stuff required to set up ncurses, and well, augh.

Or doing it as a webapp, of course, and that tends to involve even more setup.

So with Carl and Keith in attendance I figured Cairo would be a good bet, but the documentation that’s out there for python-cairo didn’t seem very helpful as far as making it trivially easy (probably in part because cairo’s kinda multi-targetable and kinda powerful). Carl’s suggestion was python-gtk, which uses cairo as its drawing engine; I didn’t actually try that though — gtk is a bit more than I want to have to think about, in theory anyway. I thought about trying out nickle’s support for cairo, but it still seemed to involve a bit more setup work than I really wanted, and, ultimately, Python’s my play language of choice, and it seemed a bit overwrought switching languages just for pretty pictures.

Ultimately I went with PyGame, which seemed to have the right emphasis on “simple”, “works with python”, and “popular enough I’d heard of it before”. It’s possibly a bit kludgy in some respects — but it’s also pretty easy. I was a bit surprised that it doesn’t seem to actually use Cairo for its drawing, instead relying on SDL which seems to hit X natively. At some point, that might annoy me enough to revisit the topic, but then again, by that point it’ll probably be using Cairo one way or another anyway, the way things are going.

My first program took an example bouncing ball in a window and turned it into a bunch of lines whose end points bounce around the window. The code:

#!/usr/bin/env python
import pygame

size = (640,480)
colour = (40, 50, 0)
bgcolour = (255,255,255)

n_lines = 13

pygame.init()    # pygame wants initialising before set_mode will work
screen = pygame.display.set_mode(size)
pygame.display.set_caption("not a game")
running = 1

class BouncePoint:
    """One end point of a line, bouncing around inside a bounding box."""
    def __init__(self, start, velocity, colour, bbox):
        self.coord = tuple(start)
        self.speed = list(velocity)
        self.colour = colour
        self.bbox = bbox

    def update(self, time):
        # move for `time` seconds, then flip the speed on any axis that's out of bounds
        self.coord = tuple( a+b*time
                 for a,b in zip( self.coord, self.speed ) )
        for d in [0,1]:
            if self.coord[d] < 0:
                self.speed[d] = abs(self.speed[d])
            if self.coord[d] > self.bbox[d]:
                self.speed[d] = -abs(self.speed[d])

# each line is a pair of bouncing end points; cycle the colour as we go
pts = []
for i in range(n_lines):
    pts.append( (BouncePoint( (i*10,i*10), (30*i+45,50*i+12), colour, size),
                 BouncePoint( (i*50,i*50), (i*65+23,i*13+125), colour, size ))
    )
    colour = (colour[1], colour[2], colour[0])

clock = pygame.time.Clock()
clock.tick()

while running:
    event = pygame.event.poll()
    if event.type == pygame.QUIT:
        running = 0
    time = clock.tick()/1000.0          # seconds since the last frame
    for i in range(n_lines):
        pts[i][0].update(time)
        pts[i][1].update(time)

    screen.fill(bgcolour)
    for i in range(n_lines):
        pygame.draw.line(screen, pts[i][0].colour, pts[i][0].coord, pts[i][1].coord, i+1)
    pygame.display.flip()

Works adequately — I mean, it pegs the CPU, redraws the whole window every refresh, and flickers the cursor when you move it across the window; but it displays what I wanted to see, and doesn’t have too much excess code. Really, there’s just a call to “pygame.display.set_mode” to create a window, “pygame.event.poll” to watch for events like the window close button, and “pygame.display.flip” to let me write my code to redraw the entire screen from scratch without the display actually flickering due to partially complete updates being visible.

There’s also a bit of boilerplate around the class declarations for the lines; I forget whether that’s there because the example I copied from did it that way, or because what I wanted to end up with would need separate objects for most of the drawable items, but either way it seemed like a reasonable approach. Even if it is a major departure from what you might see in QuickBASIC in ’93.

I suspect I’m abusing modern CPU speeds and such to get away with that code — like having a personal bodybuilder on hand just to unscrew a stuck jar or something; but equally it runs fine on my netbook, and I can always add a “time.sleep(0.1)” here and there if I only actually want, say, ten frames per second.

(As far as I can tell, the right way to throttle graphics apps is by frames per second; and adding a sleep in there lets you do that directly — and if you make it sleep for “max(0, secs_per_frame - elapsed_time)”, you get it throttled on quick enough CPUs, and pegged out when your hardware isn’t as quick as your hopes. Maybe there are better criteria out there, but that seems much closer to perfect than I’d expected, given how easy I’m insisting this be.)
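For what it’s worth, that throttle is small enough to show here; a sketch with an arbitrary ten frames per second target, meant to sit around the body of the main loop above:

import time

FPS = 10                        # arbitrary target frame rate
secs_per_frame = 1.0/FPS

def throttle(frame_start):
    # sleep away whatever is left of this frame's time budget; on hardware
    # that's already too slow, elapsed exceeds the budget and we don't sleep
    elapsed = time.time() - frame_start
    time.sleep(max(0, secs_per_frame - elapsed))

Taking frame_start = time.time() at the top of the while loop and calling throttle(frame_start) at the bottom caps things at roughly FPS frames per second; alternatively, pygame’s clock.tick() can be given a framerate argument that imposes much the same delay for you.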

So yeah, PyGame: recommended. AAA+++ module, will do more braindead hacks with it.

WoBloMo 2

So with March coming up again, I thought I might have another go at some regular blogging. The World Blogging Month page doesn’t seem to have been updated (and David Pennock hasn’t blogged at Oddhead since October…) but I figure every other day still sounds more reasonable than every day, so I’ll stick with that, at least as the goal.

Hmm, looks like On Topology hasn’t been updated much either — last post July/August last year (depending on if you count test posts); and only a page of posts since last year’s WoBloMo.

As an aside, don’t “WoBloMo” and “NaBloPoMo” sound like something the Judoon might have come up with? Maybe that’s the problem: if you start one of these things and don’t finish, the Shadow Proclamation hunts you down and traps you in the body of a lolcat. Scary stuff.

Joyce on debts

From the ABC today:

Opposition finance spokesman Barnaby Joyce is courting controversy again, warning that Australia is getting to the point where it will not be able to repay its overseas debt.

That’s a big call, which I don’t think is actually true. But how close are we? In the news at the moment is Greece, which is currently on the verge of bankruptcy with a 300 billion euro debt. (And if nations defaulting wasn’t hard enough, this is harder because it’s not just Greece’s problem to deal with, but the EU’s too)

But compare that to Queensland’s debt, which according to the last budget was $57.7 billion AUD and is expected to continue to rise each year until 2017. That’s about 36.6B euros at current rates, or about 12% of Greece’s debt. Though given Greece has a population of about 11M and Queensland only 4.4M, on a per-capita basis that brings it up to about 30% of Greece’s debt.

I can’t actually seem to find any simple summary of what NSW’s government debt is like to compare, but NSW Treasury seems to be claiming (pdf) “net financial liabilities” of about $50B at the moment (with a population of about 7M to service that debt), which would put it at about 16% of a Greece-esque basket case. Victoria’s apparently recently jumped from only a few billion in total public sector debt to an expected $23B, for its 5.4M people to pay back (just under 10% of Greece-esque craziness).

The Federal Government, meanwhile, has lost its brief run of being a net creditor (2005-2008, RIP), and according to the budget is currently around $53B in debt, and expected to hit $188B in 2013. Over 22M people, that’s currently 5.6% of Greece, or 20% of Greece by 2013. Given Queensland’s debt’s expected to be $85.5B by then, that puts us folks in the Smart State with 66.8% of Greece’s burden in the next couple of years.
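(If you want to check my arithmetic, it’s roughly the following, using the figures quoted above and converting AUD at the rate implied by the Queensland numbers:)

aud_to_eur = 36.6/57.7              # implied by 57.7B AUD ~= 36.6B EUR above
greece_per_capita = 300.0/11.0      # billions of euros of debt per million people

def vs_greece(debt_aud_billions, pop_millions):
    # per-capita debt as a fraction of Greece's per-capita debt
    return debt_aud_billions*aud_to_eur/pop_millions / greece_per_capita

print("QLD now:      %.0f%%" % (100*vs_greece(57.7, 4.4)))
print("Federal now:  %.1f%%" % (100*vs_greece(53.0, 22.0)))
print("Federal 2013: %.0f%%" % (100*vs_greece(188.0, 22.0)))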

That isn’t quite as bad as it seems — we’ve got a higher per-capita GDP than Greece, and plenty of resources to dig out of the ground and sell to China, and there might still be some room to have those debts conveniently vanish depending on how the Global Financial Crisis continues to play out, and Rudd apparently thinks it will all be paid off by 2022 anyway. So yeah… Barnaby aside, we haven’t hit the point of no return, but a little less of the “just think of it as an investment” excuse for borrowing more might still be a good idea…

Oh, and wow. Also via the ABC:

Brendan Flynn, who analyses sovereign risk for Standard and Poor’s, gives the Federal Government the highest triple-A credit rating.

“With the triple-A rating, that’s indicative of the extremely strong ability to meet financial obligations and therefore in our opinion, very little chance of defaulting on debt,” Mr Flynn said.

That’d be the same AAA rating from the same firms that those CDS and CDO things were getting just a few years ago, right?

Python on Nokia/S60

I finally got a smartphone a week or two ago. I ended up getting a Nokia N86, based on a combination of form factor (I would’ve preferred a flip phone, but apparently not enough people do for there to be lots of different models, and I’m still a bit reluctant to go for touch interfaces), camera spec (optical zoom would’ve been a killer feature, but if you’re not Korean, apparently not until next year), and hackability. There seem to be a variety of smartphone flavours at the moment: there’s Windows CE (which I had on an old phone, and which sucked worse than I expected), there’s Blackberry (which is kinda closed and corporate, and tends to come with relatively crappy cameras for the price), there’s the iPhone (which has a mediocre camera, and no Python support at least as far as the app store is concerned), there’s Symbian (Nokia and Sony-Ericsson and maybe Samsung smartphones), and then there’s the Linux variants: Maemo (Nokia N900), WebOS (Palm Pre), and Android. Android would probably have been my preference there, but any of the Linuxes would work too, and I found out Symbian had decent Python support so that works too. And as it turned out the older Linux phones didn’t really match my feature list, and the newer ones, while pretty, are either as yet unavailable or kinda expensive (or both). So an N86 it was. Which has turned out to do pretty much everything I wanted so far.

Nokia’s phone roadmap is pretty confusing (which seems to be par for the course for the telco industry admittedly), supporting Maemo as well as different versions of Symbian everywhere (the most recent ones of which are apparently open sourced) as well as lower end “feature” phones. Apparently my Symbian version is “S60 3rd Edition”, and if you want to write apps the difference between that and “2nd edition” is a major one. S60 3rd ed is also, apparently, known as “Symbian OS 9.3”, the followup to which is “Symbian^1” which is also known as “S60 5th Edition” (4th Edition got skipped because 4 is unlucky for some). It’s on the latest Nokia smartphones — N97, 5800 XPressMusic, etc. “Symbian^2” and beyond will apparently be the open source versions, except that maybe it’ll be “Symbian^3” before any phones ship with it. There was also S90 which was more advanced than S60 at the time, but then got merged back in, so now it’s obsolete and S60 is better. Apparently the way phone OSes work is that you only upgrade when you get new hardware, so all the different versions ever are still floating around on old phones.

Anyway, add-ons for Symbian come packaged in “sis” files (“Symbian Installation Source” supposedly), which come signed in various different ways (some are signed for only a particular phone, in part to make it harder to write viruses, eg) and with different listed capabilities (so if your app doesn’t declare that it needs gps or wifi, the OS will block it from using them). When I first got the phone, the deal was you’d scour the web, and discover that you could download Python for S60 1.9.7 from garage.maemo.org — which is a SourceForge clone that presumably was originally for Maemo stuff, but is now for anything Nokia-ish. Of course, there was no Python 1.9.7, and this is really Python 2.5.4 with miscellaneous Symbian extensions. In order to get a Python REPL prompt, you need to install both the Python_1.9.7.sis runtime and the PythonScriptShell sis, of which there are a few with different capabilities. Of course, you can only do this after unpacking the PythonForS60_1.9.7.tar.gz, which you can’t do on the phone.

Anyway, get that done and look at the docs and you can actually do something, which is kinda cool. I’ve been just plugging it into my laptop as USB mass storage and copying py scripts across, then running them from the ScriptShell menu so far, which is a bit kludgy but at least usable. So far it seems like lots of little bits of the API aren’t quite implemented for Python — I haven’t found a way to change the top right softkey hint from “Back” or “Exit” to “Save” or “Hide”; but that might just be unfamiliarity. More concerning was that when I tried to use time.mktime to get a Unix timestamp, the interpreter just crashed entirely, so it seems like there are some bugs around too. But the fact that you seem to be able to get at pretty much all the phone features (camera, gps, gsm location, sms, etc, etc) from pretty simple python still makes it a win in my book.
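For a taste of what “pretty simple python” means here, this is the sort of thing that runs from the ScriptShell (going from memory of the PyS60 docs, so treat the exact calls as approximate):

import appuifw

name = appuifw.query(u"Who are you?", "text")   # pops up a text entry dialog
appuifw.note(u"Hello, %s" % name, "info")       # and a little info popup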

Shortly after I’d gotten that far, I did a random invocation of the “SW Update” app to see if there was any new stuff for me, and got informed “Python for S60 2.0” was available for download. Neato, I thought, and went looking to see what the deal was — but there’s almost nothing out there discussing it. I tried installing it anyway, but apparently the download got cut off, and SW Update isn’t smart enough to continue or start again in that event. But deleting the partial download and trying again worked, and eventually I got me some Python for S60 2.0, which seems to be the same 2.5.4 version 1.9.7 was. The advantage, in theory, should be that programs written in python can just have a sis file that says “I need Python” and the official version will be automatically downloaded. But my first go at making that happen seems to indicate that the dependency used to be on “Python for S60”, but maybe now needs to be on “Python runtime”. Which, of course, is hardcoded into the app (ensymble), and although that’s an open source Python app (and packaged for Debian at that), it’s distributed as a base64 encoded blob so you have to go right back to the source to change the appropriate seven characters. Assuming, of course, that I’m on the right track in my guess as to what the problem is.

As far as I can tell, there’s still not much in the way of any sort of official announcement as to what’s going on with PyS60 2.0, but it seems that they’re rolling it out to some handsets, and the N86 is just lucky on that score. It’s bizarre to me that the Nokia devteam aren’t doing any bragging about getting Python for S60 up to 2.0 and into the official distribution, but I get the impression there isn’t much communication going on in general. I haven’t been able to spot the source for 2.0 either, though I haven’t exactly looked very hard.

No Clean Feed?

So the government’s decided to go ahead with the “clean feed” filtering scheme. This doesn’t seem remotely surprising to me; they’d already committed to it in their election policies:

That is why Labor will:
Provide a mandatory ‘clean feed’ internet service for all homes, schools and public computers that are used by Australian children. Internet Service Providers (ISPs) will filter out content that is identified as prohibited by the Australian Communications and Media Authority (ACMA). The ACMA ‘blacklist’ will be made more comprehensive to ensure that children are protected from harmful and inappropriate online material.

Unless the tests somehow proved the idea utterly infeasible — slowing broadband down to modem speeds, or requiring wholesale replacement of routers and servers — there’s not really any way to weasel out of that promise. Perhaps there can be exceptions: “homes, schools and public computers” doesn’t necessarily include businesses or universities, so if you want to get the filter box removed from your uplink, there could be a legal way to arrange it, but that’s about it. But anything more than targeted exceptions would be breaking their election promises and giving up their electoral mandate.

Of course, that’s why we have two houses of parliament, and a tradition of not giving the government a majority in the upper house. And with reportedly both the Greens and the independent senators against the “clean feed” scheme, it’s all up to whether the Coalition sides with Family First and Labor to support the bill or not.

Historically, the Coalition’s been relatively good about it, as far as I can see, mostly by deferring to bureaucrats in the ABA and subsequently ACMA.

It first came up, to my knowledge, in the ABA’s “Investigation into the content of on-line services”, which started in July and whose final report is dated 30th June 1996 (and actually straddles the defeat of the Keating government and the election of the Howard government). The “clean feed” at the time was the “Refused Access List” which was “suggested as a mechanism for restricting the availability in Australia of material that would be refused classification under the existing classification guidelines” and was “based on the possibility of identifying Universal Resource Locators host addresses and the names of newsgroups, containing Objectionable material”.

The ABA ended up recommending that “the proposal for a Refused Access List to facilitate the comprehensive blocking of Objectionable material should not proceed” as well as setting up a task force for investigating ways to limit accessibility further, and setting up an “e-mail hotline” for reports of objectionable material. The report also reacts fairly negatively to knee-jerk responses about censorship:

The RAL scheme as proposed in the issues paper drew much criticism from submitters who mistakenly considered that its intention was the censorship of classes of content which were not currently subject to any restriction. This misapprehension led other submitters to strongly oppose the RAL concept on free speech grounds, believing the proposal would result in unnecessary and additional censorship of the Internet. Some submitters expressed fear that an RAL would restrict access to a wide range of content, including political speech and hence could be subject to abuse by governments.

Since then, there have been various industry codes of practice worked out, with procedures for filing complaints about objectionable content and getting those complaints investigated and (when appropriate) forwarded on to the police, support for and promotion of filtering products, and advice like talking to your kids about their internet experiences.

On the other hand, they don’t get much credit for it, eg:

  • @stephendevice: Fortunately we have an opposition who won’t stand for this shocking and misguided assault on our freedoms. Oh … damn. #nocleanfeed
  • @renailemay: OK, so now the Greens and Xenophon oppose the filter, all we need to block it is the goddamn OPPOSITION #nocleanfeed #openinternet
  • @Tucniak: Liberals no help to Australian public: Opposition leader @tonyabbottmhr avoids taking a #nocleanfeed [position] http://bit.ly/4W0FKB

Conceptually, it seems to me like it’d be pretty easy to get internet freedom as a core component of coalition policy — “liberal”, “liberty” and “freedom” are words that fit easily together, promoting unrestricted tubes with voluntary filters at the end-points and police enforcement doesn’t contradict any of their policies; and expecting people to take care of their own families and just coming down hard on people actually distributing child porn seems like a reasonable fit for regional Australia. But political parties are only ever going to do things if it actually helps them win elections, and whether all the online sturm and drang will even make a blip on the coalition’s radar as things stand seems really dubious.

It would be pretty easy to actually make such a blip appear though — send a letter to your local Liberal/National member/candidate and senators expressing your support of Tony Smith’s scepticism of the trial, accompanied by your views on how the Internet should be regulated, along with an actual donation as thanks for their opposition to bad legislation and you’ll likely grab their attention pretty quickly.

Making a similar donation to the Greens in support of their opposition would also be obviously appropriate (even if their opposition to the clean feed doesn’t entirely square with their nomination of Clive Hamilton in the recent Higgins by-election).

Of course, all that depends on whether this issue actually matters enough to anyone to warrant more than some bitching online.

(Personally I’ve just made a small donation to both the Libs and the Greens. I wonder how many people have ever had reason to say that before…)

Speedy stimulation

(Continued from yesterday, and referring to the MV=PQ equation)

Presumably most people care most about increasing Q (how much useful stuff people end up producing — the more the better) and keeping P roughly constant (if prices increase a lot, you can’t buy anything unless you’ve got lots of savings; if prices decrease a lot, you can’t get any money to pay back any borrowings you made, and if you can’t predict future prices you’ll have to forego pleasures now just in case necessities cost a lot tomorrow). M doesn’t really matter much in its own right by contrast, and V effectively just measures how long people keep money around before giving it to someone else.

There’s presumably a limit to how large Q can be — 20,000 years ago no matter how much money was around, how quickly it changed hands, or how much/little people charged for things, Q wouldn’t be a high enough value to include a dozen iPhones. That limit has to be set both by how many people are around who want things from other people and are willing to do things for other people, and by what they’re actually capable of doing. But if there’s any significant unemployment, or even any useful new technology that hasn’t been widely distributed, Q probably isn’t maxed out. Which in turn is to say that people could be doing more for each other, but aren’t because they don’t have cash to pay for it.

In turn, the way to fix that is to find ways to give these groups of people enough money to get started — so that Alice has enough money to buy something from Bob, who then can buy something from Carol, who can buy something from Dave, who can buy something from Alice, rinse and repeat. That might require giving Alice a chunk of money up front (eg a $900 stimulus payment), or Alice getting a short term loan that she can pay back when Dave pays her, or there being a way for all those transactions to require less money up front — eg, each of them paying a $900 total bill in $100 installments.

Of course, except for when Alice gets a free cheque, the above assumes that she has some immediate expectation of future work, either to pay back the loan, or to pay off each upcoming installment. And that expectation will only be there if Dave expects to have money to spend, which will only happen if Dave expects work, and so on. And each step in that will have a slight discount — if Alice is 100% confident of being able to pay Bob, Bob may be only 80% confident of getting paid, since he might justifiably worry that Alice’s confidence is misplaced. But the same discount leaves Carol with only 64% confidence of getting paid, and Dave at 51%, which means Alice should only have been 41% confident. So either someone in that chain needs to be irrationally overconfident, or the only stable solution is that everyone thinks there’s 0% chance of paying the bills and no one buys anything. That seems to me to be the best argument for stupid ideas like paying people to dig holes then fill them back up — it means Alice can decide that even if there aren’t good enough odds that Dave (or someone like him) will pay her, she can always get a lame government job to pay for what she buys from Bob.
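The chain of discounts there is just repeated multiplication, of course:

confidence = 1.0
for name in ["Bob", "Carol", "Dave", "Alice"]:
    confidence *= 0.8                 # each party discounts the next by 20%
    print("%s: %.0f%%" % (name, 100*confidence))

which prints 80%, 64%, 51% and 41% respectively.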

It seems to me that something like the MV=PQ formula probably applies on a personal level as well as a societal level — if you want to increase your supply of whatever you value (which is Q, at least in so far as anyone can help you with it for money), you either need to hope it magically becomes cheaper (P decreases, not so unlikely if you’re into tech), that you get more money (M increases), or you don’t let what money you have sit around as much (V increases). If you’re already living hand-to-mouth and don’t have any savings, you probably can’t do much about V — if you want more stuff, you just have to work harder or hope something magic happens. But finding ways to make your money move more quickly around the economy (and find its way back to you) probably is a good way to do things. I’m not really sure what that implies though: certainly it means not having your savings on call; probably it means finding alternative ways to let money come back to you (if it can only come back via your boss, guess who gets first pickings of whatever friends it brought back with it); and maybe it implies something about who you give it to or what you spend it on.

Of course, the other aspect of that equation is that it only really says things about instantaneous changes — money could change hands very slowly (which is roughly the same as saying people have more savings), but if that’s compensated by a lot of money in circulation, it doesn’t necessarily imply any difference in prices or GDP compared to today. But when you’re already at some point, in the immediate future the variables tend to be constrained to change at different rates: usually M won’t change very quickly (making money appear or disappear is relatively difficult and heavily regulated), V is heavily dependent on and constrained by public sentiment, and P has a tendency to be a smooth exponential, either with a low or zero exponent if you’re lucky, or a high exponent in hyperinflation.

So, if you get a sudden large drop in the supply of money for trade (which can happen if banks suddenly all significantly reduce their lending), then either the public has to respond quickly by spending their money faster, or you get a resulting drop in production, which is to say, a recession, until either the monetary supply can re-expand, or the central bank fails in its mission to ensure a stable inflation rate, and prices head downwards — ie wages drop, house prices drop, food and petrol prices drop, etc.

Further, if people spot the signs of a recession/price deflation as a sign the economy is in trouble, they might decide to save more, spend less, or otherwise lower V; and that’s exactly as bad as decreasing the money supply was in the first place. And then there’s a good chance that the attempts to increase the money supply — low interest rates, bank bailouts and guarantees, government spending — will lead people to expect the potential for sudden price increases, which may in turn cause them to save more, just in case.

In so far as that’s an accurate description of the global financial crisis to date (which it may or may not be), it implies the only way to avoid a recession is to avoid the banks not lending in the first place (the only other options are consumer sentiment exactly counterbalancing it, or significant deflation, neither of which are really plausible). In a “functioning” market, banks shouldn’t all suddenly act the same way, without it being in the obvious interests of their customers. That is, the only reason for no one to lend is that no one wants to borrow, which in turn means either everyone already has the money they need saved and are now confident in spending it (V goes up, M goes down correspondingly), or everyone’s decided they do want to take a holiday, work less, and buy less.

But if that’s not the case, then while it’s reasonable for a few banks to stop lending for one reason or another, others (either pre-existing or new) should have picked up the slack and the profits that went with it.

Perhaps that’s the real conclusion: some countries’ banking sectors were full of banks that were too heavily invested in bad bets and not open to new players, while others were luckier. As far as I can see, that means the solution is to either ensure that at least some of your existing banks aren’t making bad bets, or make it possible for new banks to appear fairly quickly when existing banks fail.

Multiplying money

The financial crisis seems to have devolved into a debate about whether stimulus packages are a good idea or not, or possibly whether debt-financed ones are.

Really, the stimulus debate seems pretty irrelevant to me — the treasury estimates put Australia’s economic growth as declining by 1.3% without the stimulus package, which barely makes a change in comparison to other countries — only moving us a couple of places down. And those other countries almost uniformly had stimulus packages of their own anyway. So if stimulus doesn’t explain most of the difference between 1% growth and 5% decline, something else must, but the question of “what?” doesn’t seem to get all that much attention.

Personally my best guess is it’s some combination of either a less unstable financial system (ie, less of a loss in down times, less of a gain in boom times) or a less affected mix of products/services in the economy (ie, we’re selling things to China who’s still buying, rather than selling to the US who isn’t), and if the government has anything to do with it, I guess less debt and waste might have made some difference; but otherwise, I’m with Treasury Secretary Ken Henry: who the heck knows?

Anyway, apart from the details of exactly where it’s spent, the debate about stimulus spending seems to be about “multipliers”, where spending a dollar one way has a larger effect on the economy than spending it a different way. Richard Posner writes a good introduction in his article How I Became Keynesian:

And here is the tricky part: the increase in income brought about by an investment is greater the higher the percentage of income that is spent rather than saved. Spending increases the incomes of the people who are on the receiving end of the spending. […] If everyone spends 90 cents of an additional dollar that he receives, then a $1 increase in a person’s income generates $9 of additional consumption ($.90 + $.81 [.9 x $.90] + $.729 [.9 x $.81], etc. = $9), all of which is income to the suppliers of consumer goods. If only 70 cents of an additional $1 in income is spent, […] the total increase in consumption as a result of the successive waves of spending is only $1.54, and so the investment that got the cycle going will have been much less productive. In the first example, the investment multiplier–the effect of investment on income–was 10. In the second example it is only 2.5. The difference is caused by the difference in the propensity to consume income rather than save it.

In essence the conclusion is that the less “thrifty” people are (the more they spend money immediately, rather than saving it for a later day), the more productive people are, which is to say people create, use and enjoy more stuff, and the happier everyone is. So transactions with larger multipliers — ie, money that will be quickly respent — are better, and transactions with lower multipliers — money that will just be sat on — are worse.

Mathematically, that’s all fair enough — if you increase the multiplier, more stuff happens and people are happier. But changing one transaction (spending millions on schools, eg) only changes one step, it doesn’t necessarily change the whole economy. And the “multiplier” for saved money isn’t really zero either — that money usually gets invested, which is to say given to someone else to spend on the basis that they’ll give you more money back later. The reason investment has a lower multiplier than consumption is that while people might spend their fortnightly pay immediately, they’re not likely to be quite as spendthrift with a large loan.

To me though, the “speed” aspect there is what’s crucial, not the “thrift” versus “saving” dichotomy — and that in turn seems better understood via the exchange equation: MV = PQ, where M is the total amount of money in circulation ($), V is the average rate at which a dollar changes hands (Hz), P is the price of value ($/value) and Q is the rate at which value is produced. That’s also pretty much a tautology: pick a period — a day, a minute, a second — that’s short enough that no one has time to spend the money they receive; then the amount of money in circulation, M, is just the sum of all the transactions, V is the inverse of the period (1/day, 1/hour, 1Hz), Q’ is the number of goods that changed hands in that time (so Q=VQ’), and the average price of goods is just M/Q’, or MV/Q.
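To make the tautology concrete, here’s a toy check with made-up transactions, following the definitions above:

# made-up transactions over a one-day period: (quantity of goods, price per good)
transactions = [(3, 10.0), (5, 4.0), (2, 25.0)]
period = 1.0                             # days; short enough that no dollar is spent twice

M = sum(q*p for q, p in transactions)    # money that changed hands in the period
V = 1.0/period                           # each of those dollars moved once per period
Qdash = sum(q for q, p in transactions)  # goods that changed hands in the period (Q')
Q = V*Qdash                              # rate at which goods change hands
P = M/Qdash                              # average price per good, ie MV/Q

print(M*V, P*Q)                          # both sides come out the same, by construction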

A popular corollary of that relationship is that, assuming you hold V and Q constant, then inflation (overall changes in price) is exactly proportional to changes in the money supply — so in so far as central banks can control the money supply, they can also control inflation. Of course V and Q aren’t constant — V changes if you can access your money more quickly, which is definitely happening, and Q changes if people are more or less productive which also happens.

That has some simple implications for government spending — for instance, higher velocity is better, so stimulus programs that don’t actually get money out promptly are probably not such a great idea. Likewise, it’s probably true that focussing stimulus payments on the poor is more likely to be effective because the money will get passed on quicker (whether spent or used to pay off debt). It may also mean lots of small stimulus payments are better than a small number of larger ones, since, again, they’ll be more likely to be spent quickly.

On the other hand, given the Reserve Bank largely has control over the money supply (M) and modifies it in order to keep prices (P) stable, that only leaves two free variables for the government to try to influence — overall velocity, which isn’t really easy to manipulate, and GDP which is even harder. So outside of the Reserve Bank’s domain, as far as I can see there’s not actually a lot of either blame or credit left to be assigned. So maybe that’s why pundits and governments are focussed on the stimulus stuff, minor though that actually seems to be in the overall scheme of things.

However when you don’t ignore the Reserve Bank (or the US Fed), there’s a bit more to say… So: to be continued.

Wikis and Junkcode

A brief update mostly linking to other things.

I’ve been thinking it’d be fun to try developing code in a wiki environment — ie, web based, reasonably pretty, simple markup, potentially collaborative, and with text/links as first class elements; basically my idea of what LitProg2.0 might look like. As such, I’ve been poking at some hackable wikis — particularly ikiwiki and sputnik. Anyway, as the previous link explains I decided sputnik was the way to go, so now I have a junkcode wiki. A bit of hacking also means I have an RSS feed of new junkcode roughly as I add it, which hackers reading this might find interesting to follow. (There’s a few problems with the RSS generation — it’s not terribly efficient, but that’s probably okay with feedburner as a middleman; and sputnik seems to miscorrect for the TZ, giving timestamps 10 hours earlier than reality, or so. But hey, it has pretty colours!)

Reality/Fantasy

Good grief. It was bad enough discovering that the characters in Shadowunit maintain in-universe livejournals (and respond to comments), and not much worse to discover the title character from Castle maintaining a twitter account. Watching him solve an in-universe crime pushed things a bit, but the latest seems to be that the “Nikki Heat” novels are going to be published in the real universe. Which I guess just leaves us with Nathan Fillion’s latest tweeted question:

Query: if there were an actual Richard Castle book signing, and I were there, would you wish it signed Richard Castle, or Nathan Fillion?

My fandom’s fourth wall appears to have collapsed.

Solving hard Maths Olympiad problems

Apparently it’s IMO season, and in honour of such, Terry Tao reposted one of the questions on his blog as a “polymath” project — something to be solved collaboratively over the web, rather than by individual effort. Two hundred comments or so later, a complete proof was achieved, though how much that benefited from the polymath aspect might be debatable. Anyway, it’s a pretty neat problem:

Problem 6. Let a1, a2, …, an be distinct positive integers and let M be a set of n-1 positive integers not containing s = a1 + a2 +…+ an. A grasshopper is to jump along the real axis, starting at the point 0 and making n jumps to the right with lengths a1, a2, …, an in some order. Prove that the order can be chosen in such a way that the grasshopper never lands on any point in M.

Depending on your interest in such things, you might have fun trying to puzzle it out on your own. If so, you might or might not want to read the comment thread above for ideas, but be cautious about the comments to Terry’s followup post which include at least one complete proof.

Anyway, since I’m obviously not a l33t enough maths dude to come up with a proof myself, I thought it might be fun to see about converting the proof into a program (ie, “find a sequence of hops the grasshopper can take without landing on any point in M”). Something along the lines of this blog post from not too long ago. Happily it’s a constructive proof, so that’s an entirely reasonable challenge.

First, I’m going to generalise the problem slightly to make it a little easier to deal with: we’ll allow the grasshopper to start at any point, z, on the number line; and we’ll allow M to contain between zero and n-1 values, instead of exactly n-1 values, and we’ll sort the two sequences from least to greatest. (That turns out to not be any more general in practice, but will make the notation easier)

As such, we’ll define a function steps(a,m,z=0) that takes the two sorted lists of numbers, and the starting position (defaulting to 0), and returns a list of steps that when taken misses everything in m. We’ll resolve it recursively, starting with the base case of m being empty, in which case we can just take the steps in order, without worrying about hitting a mine:

def steps(a,m,z=0):
    assert(sorted(a)==a and sorted(m)==m)
    assert(len(m) < len(a))
    if m == []: return a

The ideal first step to take would be the longest possible — that’s the most likely to pass a bunch of values in m, and generally gets us moving as fast as possible. The key insight from the proof (IMO) is taking this approach, and looking at how that interacts with the various possible arrangements of the first few values of m. There are two important issues: one, the most obvious, is whether one of the values of m is hit immediately by taking the largest possible step; and two, whether the largest possible step actually passes any of the values of m. We continue on by establishing these possibilities:

    a_max = a[-1]
    m_min = m[0]

    a_max_in_m = (z+a_max) in m   # NB: an O(n) or O(lg(n)) step
    a_max_passed_any_m = (m_min < z+a_max)

The most complicated case turns out to be when a_max_in_m and a_max_passed_any_m are both True. Since we know there are at least two distinct values in m (m_min and z+a_max), we can approach that by considering two jumps together, a_other followed by a_max. This won’t hit (z+a_max), and there are at most n-2 other values in m, while there are n-1 values for a_other. So some pair of (z+a_other, z+a_other+a_max) has neither value in m, and avoids at least two members of m. That gives us an inductive step, though we have to check every element of a to find it:

    if a_max_in_m and a_max_passed_any_m:
        for i,a_other in enumerate(a):
            if z+a_other not in m and z+a_other+a_max not in m:
                z_new = z+a_other+a_max
                a_remainder = a[:i]+a[i+1:-1]
                m_remainder = [m_i for m_i in m if m_i > z_new]
                return [a_other,a_max]+steps(a_remainder, m_remainder, z_new)
        assert(False) # unreachable

Having dealt with that, we have the nice result that removing the smallest member of m ensures that there’s no longer anything getting in the way of taking a_max as the next step. By doing that, we can recurse into a simpler problem, and manipulate the answer we get back to come up with a solution:

    simpler = steps(a[:-1], m[1:], z+a_max)

Now we know this series of steps avoids every value of m with the possible exception of m_min, but we need to work out whether it does hit m_min, and if so where. So let’s do that:

    sofar = z+a_max
    for i, s in enumerate(simpler):
        if sofar >= m_min:
            break
        sofar += s

At this point we’ve either safely passed m_min, safely reached the end of our path, or hit m_min. Avoiding m_min is great: if we managed that, we’re done.

    if sofar != m_min:
        return [a_max]+simpler

But if we didn’t, we’re still not too badly off. Writing s for the candidate path [a_max]+simpler, we have a sequence of steps, s[:i], that ends on m_min, followed by steps s[i:] that safely avoid all the other values in m. But we can rearrange those steps to avoid all the mines entirely by taking step s[i] first, then steps s[1:i] and step s[0], and finally steps s[i+1:]. Since our first step, s[0], was the largest possible step, we know z+sum(s[i]+s[1:i]) is less than z+sum(s[0]+s[1:i])=m_min and thus that all those steps avoid any value in m, and we also know that z+sum(s[i]+s[1:i]+s[0])>m_min, because that is precisely z+sum(s[:i+1]) which is a step past m_min, and known to avoid all the other values of m. (Note the i here indexes into s, so it’s one more than the i the code below uses to index into simpler.) And at that point we’re really done:

    return [simpler[i]] + simpler[0:i] + [a_max] + simpler[i+1:]

So there you have it. (That actually turned out slightly neater than the original proof in some respects, since a couple of the cases ended up merged)
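If you paste the fragments above into a single file, a quick way to convince yourself it works is to throw random instances at it and check the resulting path directly:

import random

def check(n=8, tries=1000):
    for _ in range(tries):
        a = sorted(random.sample(range(1, 100), n))
        m = sorted(random.sample(range(1, sum(a)), n-1))  # mines; can't include sum(a)
        hops = steps(a, m)
        assert sorted(hops) == a          # every jump gets used exactly once
        pos = 0
        for hop in hops:
            pos += hop
            assert pos not in m           # and no mine is ever landed on
    print("ok")

check()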

Also somewhat interesting is where the assumptions of the problem make their way into the code. The fact that m doesn’t include the total sum is relied upon to ensure simpler[i] actually exists; that each value in a is distinct is relied upon to ensure a_max is actually larger than simpler[i]; and that there are fewer distinct values in m than distinct values in a is relied upon to apply the pigeon-hole principle in the first branch.

Anyway, pretty neat, I think!

Side project #1: Pageant

So as per my post from a week ago, here comes the description of my first little side project. But first a quick reiteration of the aim: I’m trying to get a feel for what it’s like actually doing a tech startup; so not charging for my time, but rather making something once that I can then sell repeatedly without having to do a lot more work. This is intended to make me more experienced rather than wealthy, so “success” means learning something, rather than making much money. As a consequence I’m aiming for business ideas that are in the bad-to-mediocre range, that will nevertheless involve some interesting/useful technology. That way if the business part goes badly, I don’t feel like I’ve screwed up a chance to make a bazillion dollars, or wasted my time doing something pointless.

So the first interesting-tech/mediocre-business idea I have is related to popcon. I like to think a comment I made once helped inspire popcon’s existence back in the day:

I think It’d be interesting to have a debian-survey style package that when installed, informs the `project’ (stats@debian.org?)  who’s using which packages. This would allow us to get a *much* better indication on which packages’s are in fact moderately stable and tested, and which are just gathering dust; and give us a better idea of what’s appropriate for inclusion in stable and/or unreleased.

Sadly that mail disappeared from the web (it was in the archives mentioned at the bottom of one of my initial posts to debian-devel regarding (what became) the testing suite, but disappeared after an upgrade/reinstall of www.debian.org) — but it was nominally in the public domain as of late July 1998, and lo and behold, popularity-contest appeared some three months later, doing everything I’d thought of and more. (For all I know, my comment played absolutely no part in Avery’s implementation, but I still like to think it did :)

Anyway, cool as popcon (and my original idea!) is, there’re interesting ways you could extend it, getting more information, and doing more with it. You could, for instance, survey more information about packages — what version’s installed would give you hints about how many people are pulling from backports, or mixing stable and unstable, or Debian and Ubuntu; or checking conffiles against their original md5sum might give you useful information about how often the default configuration is sufficient. Or you could analyse the information more thoroughly — eg, seeing if there are any unexpected correlations between people who use particular combinations of packages, or doing a netflix-like “I see you use package foo, many other people who use it also use bar, maybe that might be worth investigating.” (I once tried to do that sort of analysis on the popcon data, but all I ended up with was a pretty animated gif, that apparently crashed some people’s browsers… Red dots were systems, blue dots packages, with a package being installed on a system implying attraction, and uninstalled implying repulsion)

You could also gather completely different data — like information about the hardware, or things like the default language or timezone, or potentially even things from logs. That would let you answer questions like “do many people run Debian on HP hardware?” or “which IBM hardware is popular with Linux users?” which might influence future hardware development or purchases; or tell you surprising things about where Linux is actually being used; or give you some feedback on questions like “is the OOM killer a common occurrence?” or “is IPv6 adoption actually going anywhere?”

As well as just gathering data from otherwise passive users, you could also use the data collection as an opportunity to make introductions between users — having established you’re running Debian and have a particular Intel graphics card, you could be automatically given the address of a section of the Debian wiki that’s dedicated to issues with that card; with the idea being that you can see any helpful solutions other users have already come up with to problems you’re having, or leave your own tips for future users. The same principle potentially applies to other sorts of data: if you have an old version of wordpress installed, it might be reasonable to point you at some security alerts that apply to it, or having determined you’re running Debian on some HP server, you might get directed at some updated management software that enables some extra features.

Another interesting improvement I think you could make is to provide ways users can aggregate and anonymise their own data. Even in the age of social networks and ubiquitous transparency, managing privacy of this sort of data is important: it would be spectacularly bad to provide a website that told people exactly which machines were vulnerable to which security exploit, but that’s exactly what a list of which machines have which versions of which packages installed would provide. The popularity-contest software goes to some lengths to avoid that, by identifying data against a randomly generated UUID rather than an internet address, email or username; by not storing detailed information about package versions; and by restricting who has the ability to run any detailed analysis on the data. But you can go further than that by aggregating and filtering the data even before it makes its way to a centralised server — eg, rather than have each individual machine on a network report its statistics to Debian, you could have the information sent to a proxy server that aggregates all the packages into a single report (30 computers, 10 of which have apache, 15 of which have exim, …), thus removing certain correlations (do all the machines running apache also run exim? or do none of them?), and potentially filtering things like the UUID (which might reveal something about the random number generator, particularly given Debian’s recent issue with randomness…), popcon version (which gives an indication what version of Debian is in use, and in some cases how recently it’s been updated) or timestamp (that may give away that the machine has been down). And if you’re running a network that’s intended to be somewhat locked down, it might be more reasonable to have computers reporting to a machine that you control, rather than one just out there in the wild.
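To give a feel for the proxy idea, the aggregation step might look something like the following (all the names and the report format are invented, since there’s no actual pageant code yet):

def aggregate(reports):
    # reports: one dict per machine, eg {"uuid": "...", "packages": ["apache2", "exim4"]}
    counts = {}
    for report in reports:
        for pkg in set(report["packages"]):
            counts[pkg] = counts.get(pkg, 0) + 1
    # only the totals leave the network: no UUIDs, and no per-machine correlations
    return {"machines": len(reports), "packages": counts}

example = [
    {"uuid": "aaa", "packages": ["apache2", "exim4"]},
    {"uuid": "bbb", "packages": ["exim4"]},
    {"uuid": "ccc", "packages": ["apache2", "postfix"]},
]
print(aggregate(example))
# {'machines': 3, 'packages': {'apache2': 2, 'exim4': 2, 'postfix': 1}}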

So that, in very rough terms, is the spec for this project, which is currently going by the name “pageant” (ie, a popularity contest that takes itself a bit more seriously…). The technical goal is to provide a pageant client that people can run on their systems, which can report potentially arbitrary information to a central server and can receive and present relevant snippets of advice related to that information; a pageant proxy that can intermediate and filter pageant clients to provide a slightly higher level of anonymity/privacy; and a pageant server that can collect the data, provide relevant advice to clients, and analyse the data. I think it’s feasible to do an interesting job of that, that should go a little further than existing programs, and be usable by actual people, though I suspect the server side will have to be a bit beta-ish to be finished within a week or so.

The business goal, obviously, is to turn some of the hypothetical benefits touched on above into actual income, ideally without turning it into a vast NSA-like data hoarding corporate conspiracy. I figure there’s a few reasonable ways to approach that:

  • First, I figure that providing the same information other systems currently do at no charge makes sense: so getting basic stats on how many Debian users have nickle installed, or Ubuntu users have network-manager, or Fedora users have a Synaptics touchpad should be free.
  • Second, I figure providing further analysis for companies and researchers should probably be possible, and cost something: probably more depending on how complicated the analysis is. Possibly there could be an extra fee for the analysis to not be also made available to the public; that could be entertaining.
  • Third, I figure that it probably should be possible for companies to at least provide advice to users of their hardware through the system, and that at least in some cases, that probably should be for a fee. I’m not sure if there’s a line in there somewhere between necessary advice (security updates?), helpful tips (here’s some non-free drivers for that hardware?), and outright advertising (buying our hard drives will give you 200% better performance!) that might mean “advice” should vary between free, paid and blocked. One approach might be to say distros’ advice is free, and everyone else pays.
  • Fourth, I think it would be interesting to allow users to optionally pay a fee to register their hardware. This could have a couple of benefits: it provides a low-maintenance way to discourage ballot stuffing — it’s not at all difficult to hack up popcon to pretend you have thousands of servers running your favourite package to try to bias the statistics, but it’s somewhat harder to come up with even a few dollars thousands of times; and possibly more interestingly, it provides an easy means to link a small payment for “using Linux” with the software that’s being used — so distributing 80%-90% of those fees to the authors of the software that’s actually being used might be an efficient way of helping support free software development.

Anyway, that’s the project! My notes have a few other things in them worth mentioning — there’s a couple of not entirely little complications in a few of the above ideas, for one — but this is already long enough, and it’s not like I can’t blog again later. Even though there’s a few similar projects around (popcon and smolt in particular), I’m planning on taking a NIH approach and starting from scratch, on the basis that the current stuff is mostly pretty basic to reimplement, and getting an architecture I’m comfortable with is pretty important in making it appropriately generic. As always, helpful tips, questions and/or any general encouragement appreciated, either by email or the comment link…

On organising oneself

Matt writes on ideas, organization and overflow, and that he’s ending up with so many awesome ideas that even when he notes them down for future reference, he’s so busy that he ends up independently reinventing them before finding time to actually make them happen. (Spoiler: he likes the approach of Stephen Covey’s Habit Three and has some ideas for improving it, which he’s probably already noting down for future reference… I kid, I kid)

I have what I guess is a similar collection of interesting little ideas that I think would be worth seeing through, with many of them not really seeing the light of day. If I had any sense, I’d be keeping them all in a wiki, but in reality I tend to just use a collection of TODO files in my home directory. Fortunately, that’s not really the problem for me: I’m mostly able to either track down or reinvent the nuggets of ideas, and I don’t usually forget ideas entirely, however long I’ve put them aside.

What I have been having difficulty with is actually getting them finished — I’ve got a bunch of neat ideas to the point where I think I see how to finish them — ie, right up to the point where the next bit is Hard Work. And apparently I’m no longer at the point where I like doing complicated coding gymnastics just to prove I can. And beyond that, the overall motivation, ie that it’d be kinda cool when finished, just isn’t enough to actually get stuff done.

(And the worst part about trying to motivate yourself to do anything hard, is that when you don’t succeed n times, that becomes an additional reason not to succeed at attempt n+1. But pfft, we’ve addressed that, right?)

(Of course, the worst part for you, dear reader, is having to skim through all this filler I’m writing while trying to avoid getting to the actual point…)

One aspect to my motivations these days is that I’m interested in business and entrepreneurship — and hey, I figure it’d be a lot easier to deal with having lots of Fantastic Ideas if you happened to have a profitable business with a bunch of employees you could tell to implement them for you. (I’m sure there are other complications of some sort in that plan, but hey) And having decided to pass on the exciting-sounding BootUpCamp next month in favour of the inaugural (and much more local) kernel.conf.au, and I guess the forthcoming Brisbane BarCamp now too, I’m left lacking an awesome opportunity to hone said interests.

Being the sort of person who likes looking for mass-kills when considering stones and birds, I had an idea. If I want to try starting some businesses just for practice, and have a bunch of kinda cool ideas I want to finish, why not turn them into mediocre business ideas, build them, and see how it goes? It adds an extra reason to actually get the cool ideas implemented (it’s a business necessity!), makes them slightly more challenging (they can’t just be useful, but have to be at least potentially profitable), and means that even having the business part fail completely is still an objective win (because there’s a kinda cool creation that’s at least functional, if not finished), as well as being a learning experience (which of course is also a win!). And if the business part happens to be successful, well, there’s all those wins, plus some extra cash!

Great in theory! In practice, the abject terror it inspires is something of a drawback — though also possibly motivating in its own way. Anyway, I figure sometime in the near future I’m going to try running a handful of projects through roughly the following formula:

  1. Pick a neat idea I’ve been putting off, that shouldn’t take too long to actually get up and running (say a week or two).
  2. Work out how it improves the world — who would be better off, and why.
  3. Work out a plausible way of charging some-or-all of those people before they get some-or-all of that benefit.
  4. Blog about that, on the basis that (a) it gives me a huge incentive to actually finish in a timely manner, a la WoBloMo, and (b) if I’m really lucky someone comments or emails with an even better business model.
  5. Write the software and whatnot.
  6. Set up as simple a charging mechanism as possible.
  7. Publish both.
  8. Take a breath.
  9. Go back to step 1.

My initial idea was to commit myself to doing one of those each week for about a month starting yesterday, but while that might be plausible for the coding part (or might not be, either), it’s a bit too daunting for the business part. So I’m thinking I’ll just be starting “soon”, and aiming to get each project “launched” within a week or two, and seeing how that goes. There’s a few other daunting bits too, like setting things up to automatically deal with payments, and there’s a few aspects to some of the ideas that require confusing things like setting up websites… But hey, learning experience!

Anyway, that’s the theory. Whether it ends up bearing any resemblance to practice, I guess we’ll see…

Passions

(not a post about Spike’s favourite soap opera)

As an INTP, I generally try to make “rational” decisions — which is to say, ones I can rationalise and explain and logically support. That in turn is something I can rationalise, explain and support: I’m fairly good at logic, and I’ve been taught lots of ways of analysing problems that people have discovered over centuries, which helps make better decisions. But the counter-argument to that is that it’s still easy to make mistakes, and mistaken logic can lead you to all sorts of bad ideas; a lot of deep, rational thought went into eugenics, for example. So for me, I like to keep a handle on that at least partly by trying to keep things aligned with my emotional response. If something doesn’t feel right, that’s a good time to look back through the logic, because there’s probably a mistake. If you can’t find a mistake, and that doesn’t make you feel better, it’s a good time to be cautious in other ways; if you do feel better, that probably means you’ve got a better understanding than you did before and definitely means you’ll be able to act on the ideas more effectively; and if you do find a mistake, well, you get a chance to fix it.

Going the other way — rationalising whatever you already feel — isn’t so effective to my mind; it’s often easier to make an apparently logical argument that’s actually wrong than to work out how it’s wrong. That can be useful as a defence mechanism to rebuff challenges to what you want to do, but you can generally rationalise anything without much effort, so it’s not actually adding information or improving your decision; and if you infer from the fact you’ve come up with a rationalisation that your decision is the only rational one to make, you can end up with a closed mind to better alternatives, leaving you with a worse decision than if you’d just continued going with your gut feeling. Personally, I tend to treat that as a fun game: take a completely subjective and illogical response to something (eg, “orange is the best colour”), then “logically” and “conclusively” prove it’s the only justifiable response. At the very least, it’s a good way to keep a little humility about the value of a good argument.

Logically investigating something you had a gut feeling about (“I like orange. Hmm, is there some way to tell what the best colour is?”) is a different matter entirely, of course. As you think it through, the analysis might lead you to find something different to what you expected (“that’s odd, I think I just proved chartreuse is the best colour”), which might lead you to investigate different definitions for your terms (“perhaps orange is best in some ways, and chartreuse is better in others”), if it doesn’t simply lead you to change your opinions (“oh. my. god. this chartreuse cape is to die for!”).

Of course, if you’re not naturally comfortable with coming up with logical arguments, or trained enough to do them well, you’ve probably got other, more personally appropriate, ways of coming up with decisions anyway, and maybe none of this applies. But hey, that’s not my problem.

The other advantage of keeping your feelings in accord with your thoughts is that it tends to be more motivating — “passionate” is a decent description both of someone pretty emotional and of someone pretty motivated and active, and there’s no point making good, rational decisions without acting on them. In some respects, the more intense the emotion the better; it’s easy to want to quit doing something difficult that’s not immediately rewarding, no matter how logically you’ve convinced yourself that it’s a good idea, but it’s a lot harder to shake off broiling rage, true love, or abject terror.

The trick, then, is if you’ve found something that inspires that sort of emotion, to make sure it’s working in the same direction as the goals you’ve carefully and logically examined. That can be really easy: if your primary emotion is that you care deeply about helping people, seeing someone who’s had a bad day or week or year get a break and maybe break a smile is a good way to keep yourself working in a charity or a hospital, if that happens to be what you want to do. But it’s often not — maybe you’re overwhelmed by anger at the stupid bureaucratic nonsense that’s getting in the way of your hospital helping people, or maybe you’d like to help out in a soup kitchen but you’re terrified of violence in the area.

But, at least sometimes, those can be harnessed too. “Use your anger” isn’t exactly “use the force”, but it still gets some 40,000 hits on google offering useful advice. My feeling (which I wish I’d had earlier than today, but oh well) is there’s probably similar ways to grab most of those emotions, and turn them into allies, rather than just trying to figure out ways of making them go away.

  • frustration, anger, hate: Figure out exactly what it is that’s the deserving object of your ire, and find ways to harm it. There’s lots of entirely reasonable ways to hurt things: a death blow, divide and conquer, death by a thousand cuts, subversion and betrayal; and most entirely reasonable ways of contributing to the good of society can be rephrased into something that’s more acceptable to anger. Annoyed by ignorance on the Internet? Deal it a death blow by creating a site like Wikipedia or snopes; create a debating forum so ignorant people are fighting each other instead of you, and maybe learning something as a result; contribute to Wikipedia or snopes or just help your friends avoid spreading urban myths; find a group of people who seem particularly ignorant, join them, become well-educated in their customs, befriend them, and then help them get access to all the knowledge they’ve been missing.
  • worry, fear, terror: Be thorough. If you’re worried anyway, you’re going to naturally be thinking of every single way every single thing can go wrong, so take advantage of that and do something about each of those things you think of. Maybe it seems more rational to ignore your worries, and just charge ahead (and maybe it is), but there’s an equally good chance that will just make you worry more and make you less effective, whereas if you actually nervously go around making sure everything is absolutely perfect, you’re at least spending your time contributing to your goals. And every little thing you do fix up is one less thing to worry about, so it’s possible you might end up naturally less worried anyway. Probably not, of course…
  • affection, appreciation, love: Dedicate your work, do it in appreciation, or in honour, and make it something that’s worthy of the object of your admiration, whether that be a person or an idea. It’s always tempting to cut corners or strive for something other than your absolute best, but much less so when what you’re doing is making a devout offering to something or someone you care about.
  • pride, arrogance, narcissism: You think you’re the best, so do something that demonstrates it. Repeat.
  • greed, envy, lust: Be a free-market capitalist — get more of what you want by doing more of what other people want.
  • embarrassment, guilt, shame: Accept, apologise, and then do something worthwhile to atone?
  • indifference, apathy, sloth: No idea. (Is this an emotion, or the absence of emotion? If the latter, find an actual emotion? If you can’t, just try to avoid watching Dexter for lifestyle tips?)

Anyway, there’s my thought for the day. YMMV. The following quote may or may not add support to the thesis presented:

Peace is a lie, there is only passion.
Through passion, I gain strength.
Through strength, I gain power.
Through power, I gain victory.
Through victory, my chains are broken.
The Force shall free me.

The Sith Code