Moments after I mentioned on Twitter that I was seeing very little spam in my inbox compared to a couple of years ago, my inbox started getting flooded with “Undeliverable mail” return messages because some *&$% spammer has been spoofing one of my addresses. Suffice to say that if you get spam from Russian ladies using the address firstname.lastname@example.org, it’s not from me, and there’s nothing I can do to stop it.
“Someone please take my bottomless bowl of popcorn? I’ve eaten so much I think I’m going to be sick” – Bitcoin-hating Charlie Stross on the collapse of Mt Gox. The whole thing reads like the plot of Stross’ novel “Halting State”.
A bug in the NHS Choices system sent users to a malware site. As reported in The Guardian:
The “internal coding error” sent users to the mistyped URL, of which a third-party appears to have taken advantage, registering the mistyped domain name to serve adverts and malware to unknowingly redirected visitors from the NHS Choices website since Sunday evening.
Things like that make me wonder how on earth that bug could have been missed in testing, even though it’s not easy to answer that question without some knowledge of the architecture of the site. I would assume from the URL that it’s some form of translation functionality, and I’d have thought somebody ought to have noticed the feature wasn’t working properly and investigated a little more deeply.
What I would like to know is how the Czech malware operator managed to find the bug when NHS’s own testing didn’t.
FUNCTEST stands for Functional Test. As opposed to FUNKTEST, where you raise a bug if the drummer doesn’t have a good enough sense of rhythm.
Today I came across and logged a bug, which on closer examination turned out to be a result of an ambiguously-worded line in the specification rather than a simple coding error.
I mentioned this on Twitter during a mid-morning coffee break, and got two contrasting responses.
The first was that the written specification is just the starting point of a conversation between the Business Analysts, Developers and Testers over exactly what the system should look like, and constant communication will resolve any ambiguities as the development proceeds.
The other was that a developer should not be expected to question things in an environment where even the smallest changes require signing off from multiple people with different conflicting agendas. In such circumstances it’s easy to see why a developer might make guesses rather than ask questions.
My reaction to that is that if you’re trying to develop software in an organisation as bureaucratic as the one in the second case, you run the risk of ending up with software that’s every bit as dysfunctional as the organisation itself.
I’ve worked on projects like that in previous lives, with great long specifications written in great detail for the benefit of the developers who were supposed to implement the thing, but completely failed to give the business stakeholders any real impression of the actual functionality. But the stakeholders went and signed it off anyway, perhaps because they wouldn’t admit, maybe even to themselves, that they didn’t really understand the thing. Needless to say that project went horribly pear-shaped and turned into a nightmare death march as the development team were buried under a mountain of change requests.
Are there still organisations that develop software like that?
While I’m still in Waterfall-land, fortunately my current project is nothing like that. In the end, I got given the task of rewriting that bit of the specification to remove those ambiguities.
Testers and software engineers love to argue over whether a bug is a coding error or a missed requirement. But when it causes this much damage to people’s lives, then such hair-splitting doesn’t really matter.
I always find that the frequency of clichéd buzzwords is a very useful metric in determining the credibility of any random article on the internet. This is true both for business-speak like “leverage” and for the ideological call-signs used in political and cultural blogging.
Poe’s Law famously states that it’s impossible to create a parody of a fundamentalist or extremist site that won’t be mistaken for the real thing.
Does the same thing apply to management speak? I think it does…
Welcome to OVERBLUE, bridging the gap between strategy and execution.
OVERBLUE™ is an Operational Excellence Management System (or OEMS) that implements the principles of SPHIDA’s PROACTIVE THINKING to introduce a new paradigm in how people align business processes with corporate goals and how they get work done.
Reading text like this I am really unsure as to whether this site is for real, or whether the whole thing is a very clever parody.
A must have tool for any information worker and collaborative team. It ensures a flawless execution and empowers an organization to achieve the desired performance levels.
It does read as if someone fed the contents of a few Management Buzzword Bingo cards into a Markov Chain Generator, doesn’t it?
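For what it’s worth, the joke isn’t far off: a word-level Markov chain generator really is only a dozen lines of Python. Here’s a minimal sketch, using an invented buzzword corpus in place of the real site copy:

```python
import random

# Toy training text, invented here for illustration (not from the real site).
corpus = (
    "leverage synergies to empower stakeholders and drive alignment "
    "empower teams to leverage paradigms and drive execution "
    "align business processes with corporate goals to empower flawless execution"
).split()

# Build a table mapping each word to the words observed to follow it.
chain = {}
for current, following in zip(corpus, corpus[1:]):
    chain.setdefault(current, []).append(following)

def babble(start, length=8, seed=0):
    """Walk the chain from `start`, picking a random successor each step."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        successors = chain.get(words[-1])
        if not successors:
            break  # dead end: no word ever followed this one
        words.append(rng.choice(successors))
    return " ".join(words)

print(babble("leverage"))
```

Feed it a few Buzzword Bingo cards instead of English prose and the output becomes statistically indistinguishable from an OEMS vendor’s home page.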
I’ve read most of the site, and I find myself with absolutely no idea as to precisely what this seemingly-magical software actually does.
Today’s process improvement methodologies and BPM systems are limited because they are designed to deal with simple, easy to automate processes. But simple processes account for only about 10% of any organization’s process portfolio.
… the OVERBLUE™ software can be seamlessly used to design, align and execute activities and business processes across 100% of the process spectrum.
Since the link came from the writer Charlie Stross, the whole thing does sound like something from his Laundry novels, akin to CASE NIGHTMARE GREEN. Is the CEO of this company called Ellis Billington?
A cautionary tale from Christina at A Mommy Story about The Furby Who Became Evil.
Mira’s Furby was suddenly possessed by a new personality who was mean. It growled at her, it snapped at her with an angry voice if she tried to pet it, and it made retching noises when she tried to feed it, as if the iPad foods weren’t good enough for it. Occasionally it showed little flames in its eyes.
WTF happened? Did we feed it after midnight?
It was now a Furby demon. And Mira was scared of it. She backed away with tears in her eyes, her five-year-old mind unable to comprehend what had happened to her cheery dance pal, saying she wanted her nice Furby back, and she didn’t want to play with it anymore.
All of which makes me wonder what a tester can learn from this.
How was this product tested? How much did the testers know about the underlying programming? Is the “Evil Furby” that upset little Mira actually a bug, or was it “performing to spec”? And if it’s in the spec, what were they thinking when they specified behaviour that makes five-year-olds cry?