5 things I don’t like about the Forrester report on uTest

I like reading stuff distributed by uTest. They distribute interesting stuff, as a rule. I like uTest’s Software Testing Blog. That is why I read the Google in the Wild Case Study when uTest emailed a link to it today. The article turned out to be a total disappointment. Here are 5 things I especially didn’t like.

5. Vocabulary: defect. The article uses the term “per-defect cost”. We know better than to use the d-word in situations that don’t call for it:


4. Vocabulary: sure thing. I am not sure “sure” is the right word to use when talking about testing:

How do you ensure that the content you deliver to your customers meets their needs regardless of how and where they access your product?
Ensure that Google has appropriate test coverage for countless situations.
Traditional test cycles and traditional test outsourcing are simply not sufficient to ensure that …

3. No hard data. When you publish something for people who are not in the same context as you, those people expect some justification for what you say.

2. It is an advertisement. Research implies some application of the scientific method. An advertisement is happy merely to use buzzwords. There is no scent of science in the report.

1. Price. Forrester initially priced the article at $499. I know nothing is free. You paid with your time for reading this post, for example. But $499 for such shallow stuff is way too much for my liking.

If you have some bucks to spare on reading and a free hour, I suggest you buy Becoming a Technical Leader by Gerald M. Weinberg. I read it today; it’s great. Wait, you still have $489 left? Yeah, let all problems be that tough!


Why software has nothing in common with Schrodinger’s cat

“When I hear about Schrodinger’s cat, I reach for my gun.”

— Stephen Hawking

Metaphors are great; they help us learn something new by linking it with prior knowledge. I am a big fan of analogies myself. The driving metaphor, for instance, helped me a lot in figuring out the extreme programming approach. Still, one has to be aware of overextending metaphors when picking analogies. As Yossi Kreinin mentioned in his post on coding standards, one shouldn’t stretch a metaphor into areas where one is not minimally competent.

In her recent post, What Software Has in Common with Schrodinger’s Cat, Elisabeth Hendrickson draws some links between software and quantum mechanics:

Comic: “Coders think the unthinkable” — this one is hopefully a joke. (http://geekandpoke.typepad.com)

Schrodinger explained that in the moment before we look inside the box to discover the outcome, the cat is both alive and dead. There is no objectively measurable resolution to the experiment… yet. The system exists in both states. Once we peek (or by any other means determine the fate of the kitty), the probability wave collapses.

You see, in the moment we release software, before users* see it, the system exhibits the same properties as Schrodinger’s feline.

How often do you stumble upon cats that are “both alive and dead”? I dare say never. Maybe Austrian cats were odd 80 years ago? Like, weird zombie cats. Even popular publications reveal that Schrodinger actually proposed a paradox, a deliberately ridiculous case. In a NY Times article we read:

What’s often lost in the retelling is that Schrodinger didn’t believe the story for a minute. He was subtly ridiculing some of his mystical colleagues who liked to proclaim that conscious observers somehow conjure the real world into existence.

Another popular source states:

 The Austrian physicist Erwin Schrodinger, one of the founders of quantum mechanics, thought of a paradox to show that quantum mechanics doesn’t apply to larger, tangible things. … Applying the rules of quantum mechanics to this system would mean that the cat is neither alive nor dead until a human observer actually looks in the chamber. Schrodinger argued that this was nonsense and merely an example of applying quantum mechanics to situations in which it doesn’t apply.

The best explanation I was able to google is Schrodinger’s cat for a 6th grader:

Probabilities occur all the time in science, because we almost never know everything we need to make a completely accurate prediction. For example, if you want to make a trip of a hundred miles, you can not know ahead of time exactly how long it will take. You might run into a traffic jam. You can only give an estimated time. In quantum mechanics probabilities are different. They are not considered to result from our limited understanding of the universe, but to be fundamental. Of course Einstein thought this was mistaken, but most physicists do not agree with him.

You can check Schrodinger’s paper yourself; it’s easy to get a copy of the English version of his work.

Now, to software. I agree when Elisabeth says that

There is some probability that we have done well and our users will be delighted. There is another possibility: we may have missed the mark and released something that they hate.

The problem is the particular way she connects software with quantum mechanics. Such a link implies that the non-deterministic nature is fundamental to software: we can’t abate it in any way, and our knowledge can’t bend the chances in our favor.

Assume, as Elisabeth suggests, that software has the same features as Schrodinger’s cat. Forget for a moment that there are no blurred zombie cats. Now it doesn’t matter how often you run unit tests. It doesn’t matter how often you release builds. It doesn’t matter how often builds are deployed. The probability of failure is the same for every release. Know your users better? Forget it, it doesn’t help. Know your customer’s business better? Forget it, it doesn’t help. Learn from your past failures? Forget it, it doesn’t help. Learn from your peers? Forget it, it doesn’t help. You get the idea. With that, I can’t agree.
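The difference can be made concrete with a bit of arithmetic. A minimal sketch, with made-up numbers: if the failure probability were fundamental, as the quantum analogy implies, running more tests wouldn’t move it; in ordinary probability, each independent test run with detection probability `p_detect` (a hypothetical figure for illustration) shrinks the chance that a bug escapes.

```python
def escape_probability(p_detect: float, runs: int) -> float:
    """Chance that a bug survives `runs` independent test runs,
    each of which catches it with probability `p_detect`."""
    return (1 - p_detect) ** runs

# With a hypothetical 30% chance of catching the bug per run,
# the escape probability drops as runs accumulate:
for runs in (1, 5, 10):
    print(runs, escape_probability(0.3, runs))
```

This is the classical, “traffic jam” kind of probability from the sixth-grader explanation quoted above: it reflects our limited knowledge, and more effort (tests, releases, learning) reduces it. A quantum probability would not budge.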

O.K., I know that Elisabeth doesn’t mean all this. Her choice of analogy does, though. Metaphors are a mighty tool. Know your tools.
