What are the parallels between music criticism and software testing?

Regular readers of this blog don’t really need to be told that I’m a very keen music fan and amateur rock critic. Writing about a small club-based scene, I’ve come to know quite a few band members over the years. I’ve even had people suggest I should quit working in the IT industry and become a full-time music writer. But while being on the fringes of the music scene can be a great experience, I’m not convinced I want to jump ship and join the circus.

Even so, I can see a lot of parallels between music criticism and my professional career as a software tester.

Not that I’m suggesting testing and reviewing are exactly the same. To start with, music is inherently more subjective than software. But just as it can be a judgement call whether or not a piece of software is fit for purpose, it’s never completely subjective whether a record or performance is good, bad or indifferent. There are those who claim all opinions are equally valid when it comes to reviews, and that there is no such thing as an objectively good or bad record. If you believe that, you clearly haven’t heard Lou Reed’s appalling collaboration with Metallica. It seems to me that both testing and reviewing are things many people can attempt, and just about anyone can do badly, but they take skill and experience to do well. You only have to look at the reviews on websites where anyone can post without moderation to realise there are bad reviewers out there, just as there are bad testers.

To review a record or concert requires both an understanding of what the artist is trying to achieve and an honest assessment of how well they’ve succeeded in achieving it. That in turn requires the equivalent of domain knowledge. Just as a lot of indie-pop reviewers come horribly unstuck attempting to review progressive rock or metal releases, ask me to review a dubstep or free-jazz record and I wouldn’t know where to start. But just as testers from different backgrounds will approach things from different angles and uncover different bugs, a reviewer with deep specialist knowledge of a specific genre will have a quite different perspective from one whose taste is far broader. Something that’s meant to have crossover appeal benefits from both viewpoints.

Then there is the issue of speaking truth to power, which can require both courage and diplomacy. Egos even bigger than those of developers go with the territory. When an artist has poured their heart and soul into making a record, they don’t always appreciate being told how their work could have been better. Much like the way developers don’t always appreciate being told the code they’ve slaved over is riddled with bugs that they really ought to have picked up in their own unit testing. And if you’ve ever had the misfortune to work in a dysfunctionally political environment where project managers surround themselves with yes-men and tend to shoot the messenger who bears bad news, then you’ll recognise those over-zealous fans who sometimes try to vilify anyone attempting constructive criticism.

It’s true that there are a lot of rock critics out there who exhibit exactly the same sort of adversarial behaviour that gives some testers a bad name. Yes, writing and reading excoriating reviews of mediocre records can occasionally be cathartic, but informed and honest constructive criticism is far more valuable in the long run. Just as software testing is a vital part of making sure software is fit for purpose, constructive criticism has a role in making music better.

Perhaps it’s my tester’s ability to see patterns, but what I hope the above goes to show is that sometimes what you do in your “day job” and an apparently unrelated activity you do in your spare time can have more in common than you think. Certainly there are transferable skills, especially those softer ones which are much in demand.

This entry was posted in Testing & Software.