Yet another reason why the so-called “Internet of Things” is a terrible idea. From BBC News
An official watchdog in Germany has told parents to destroy a talking doll called Cayla because its smart technology can reveal personal data.
The warning was issued by the Federal Network Agency (Bundesnetzagentur), which oversees telecommunications.
Researchers say hackers can use an unsecured Bluetooth device embedded in the toy to listen and talk to the child playing with it.
In the not-so-distant past something like this would have been a plot device in a science-fiction novel. Nowadays it’s the sort of thing that makes writers of near-future science fiction throw up their hands in despair.
GitLab’s postmortem of the database outage of January 31 which resulted in significant loss of production data pulls no punches, and ought to be essential reading for anyone involved in software development. It has a lot in common with Vivarail’s report into the Kenilworth fire.
One element in the chain of events that led to the database crash raises eyebrows: an attempted hard-delete of the user account of a GitLab employee who had been maliciously flagged for abuse by a troll. It boggles the mind that a system would do such a thing without any human intervention. That’s either a serious coding error or some dangerously naive requirements analysis.
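The safer pattern here is well established: a destructive, irreversible action triggered by an automated signal should only ever queue for human review, never execute on its own. A minimal sketch of that idea (all names and types are hypothetical, not GitLab’s actual code):

```python
from dataclasses import dataclass, field
from enum import Enum


class Action(Enum):
    SOFT_DELETE = "soft_delete"   # reversible: hide the account
    HARD_DELETE = "hard_delete"   # irreversible: purge the data


@dataclass
class AbuseReport:
    account_id: int
    reporter_id: int


@dataclass
class ModerationQueue:
    """Accounts flagged for abuse wait here for a human decision."""
    pending: list = field(default_factory=list)

    def flag(self, report: AbuseReport) -> Action:
        # An automated flag only ever soft-deletes; the irreversible
        # step is deferred until a human moderator confirms it.
        self.pending.append(report.account_id)
        return Action.SOFT_DELETE

    def confirm(self, account_id: int, moderator_id: int) -> Action:
        # Only an explicit moderator decision escalates to hard delete.
        self.pending.remove(account_id)
        return Action.HARD_DELETE


queue = ModerationQueue()
action = queue.flag(AbuseReport(account_id=42, reporter_id=7))
assert action is Action.SOFT_DELETE               # troll report: reversible only
assert queue.confirm(42, moderator_id=1) is Action.HARD_DELETE
```

The point of the design is that a malicious report can never, by itself, destroy data: the worst a troll can achieve is a reversible suspension.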
And this is especially damning.
Why was the backup procedure not tested on a regular basis? – Because there was no ownership, as a result nobody was responsible for testing this procedure.
When some important part of a complex system hasn’t been tested thoroughly enough, it’s easy to blame the testers. But the blame usually lies higher up the project management chain.
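One concrete remedy is to make the restore, rather than the backup, the thing that gets exercised: a scheduled job that actually restores the most recent backup and sanity-checks the result. A minimal sketch, with all filenames and checks hypothetical (a real job would load the dump into a throwaway database and query it):

```python
import gzip
import tempfile
from pathlib import Path


def verify_backup(backup_path: Path) -> bool:
    """Restore a backup to a scratch location and sanity-check it.

    A backup that has never been restored is not a backup; it is a hope.
    """
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / "restored.sql"
        # "Restore" here is just decompression; the principle is that
        # the verification path is the same path a real recovery takes.
        restored.write_bytes(gzip.decompress(backup_path.read_bytes()))
        data = restored.read_bytes()
        # Minimal sanity checks: non-empty and structurally plausible.
        return len(data) > 0 and data.startswith(b"-- PostgreSQL dump")


# Simulate a nightly run against the latest backup file.
with tempfile.TemporaryDirectory() as d:
    backup = Path(d) / "nightly.sql.gz"
    backup.write_bytes(gzip.compress(b"-- PostgreSQL dump\nCREATE TABLE users (id int);"))
    assert verify_backup(backup)
```

Crucially, a job like this also needs an owner whose name is attached to the failure alert; otherwise it decays into exactly the unowned procedure the postmortem describes.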
Michael Nygard has a good blog post stating that QA Instability Implies Production Instability.
Invariably, when I see a lot of developer effort in production support I also find an unreliable QA environment. It is both unreliable in that it is frequently not available for testing, and unreliable in the sense that the system’s behavior in QA is not a good predictor of its behavior in production.
He describes a lot of the pitfalls in maintaining good environments, from test data getting overwritten to anonymisation of production data compromising data integrity. Knowing what needs to be done to build and support good test environments is an important tester skill.
In my experience, he’s dead right about the relationship between the stability of the test environment and the number of problems that escape into production. This is especially true when it comes to things like interfaces with third-party systems. There is a big difference between running an instance of the third-party system on one of your own servers and only having access to a system on a remote server where you can’t change the setup or configuration data. The number of bugs that escaped did indeed reflect this.
Worse still is when there’s no access to the third-party system at all, and the best you can do is write a crude emulation yourself. I still have nightmares about that one…
It was overshadowed by the much greater tragedy in France just a few days later, and doesn’t give us any stock villains for three-minute-hates. But the tragic train crash in Italy, following so quickly after the very similar crash in Germany, raises a lot of questions about rail safety.
On the RMWeb forum, which has a lot of knowledgeable people including many who work in the rail industry, the resulting discussion on signalling systems for single-track lines and how they might be improved includes positive words for the software testing profession.
The system itself would be cheap, but the testing needed to demonstrate that it’s safe (and idiot proof) to the appropriate regulatory authorities is going to be quite expensive. Proper software testers(*) aren’t cheap.
From what I can tell, the Italian system appears to be a variation on the Telegraph and Train Order system, without the use of either a physical single-line token or a virtual equivalent, a practice long since superseded in Britain. Such a system carries a far higher risk of human error leading to a fatal accident.
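In software terms, the single-line token is a mutex: only the driver physically holding the token for a section may enter it, so two conflicting movements cannot both be authorised, however badly the humans communicate. A toy sketch of that invariant (names are illustrative, not any real signalling system):

```python
from threading import Lock


class SingleLineSection:
    """Toy model of token-based single-line working: a section can
    only be entered by whoever currently holds its one token."""

    def __init__(self, name: str):
        self.name = name
        self._token = Lock()  # the single physical token

    def request_token(self) -> bool:
        # Non-blocking: a second train is simply refused entry.
        return self._token.acquire(blocking=False)

    def surrender_token(self):
        # Token handed back at the far end of the section.
        self._token.release()


section = SingleLineSection("Abermule–Newtown")
assert section.request_token()        # first train admitted
assert not section.request_token()    # opposing train refused: no head-on
section.surrender_token()
assert section.request_token()        # section free again
```

A telegraph-and-train-order system, by contrast, relies on the two ends agreeing verbally, with no artefact enforcing the exclusion; the Abermule disaster itself happened when the token procedure was circumvented by human error, which is why later systems interlock the token with the signals.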
Though there have been quite a few head-on collisions in Britain resulting from conflicting movements across junctions, including the Ladbroke Grove disaster, I can only think of two single-line collisions in the past century, at Abermule in 1921 and Cowden in 1994. That’s some safety record.