This made me cogitate. Not that he had succeeded in proving that Exploratory Testing (ET) sucks! But it got me thinking: is repeatability (read: reusability) of a test really all that bad? I guess the "repeatable test" vs. "non-repeatable test" argument is just another face of the "scripted testing" vs. "exploratory testing" debate. In this post, I am not going to continue that debate and try to prove which one is better. Rather, I would like to think more about the repeatability aspect of a test.
Some questions to start with can be:
1) Can ALL the tests be made repeatable?
2) Should ALL the tests be made repeatable?
3) By making a test repeatable, are we not (perhaps inadvertently) making it too predictable?
4) Can repeatable tests discover NEW problems (defects/bugs) in the system?
5) Is blindly trying to make each and every test repeatable worth the effort and expense?
6) Why should we repeat just a fixed set of tests (test cases, test scripts, whatever) when we could spend the same time exploring, running many more tests that may uncover unknown defects?
7) When we say a test is repeatable, is the test really repeatable in every sense? Can anyone guarantee that the test runs in exactly the same way (environment, concurrently running processes and applications, exactly the same DLLs loaded at that moment, machine condition, configuration settings, and so forth) as it did the last time around?
8) Do repeatable tests guarantee reproducible defects/bugs?
9) A repeatable test can help make sure an earlier defect does not recur. Ah well! But for how long? Will these so-called repeatable tests not lose value over a period of time (after a number of iterations)?
10) Is software testing all about repeatability?
The argument in support of making your tests repeatable may hold true to some extent in certain cases, like regression testing or, for that matter, performance testing. But is there any point in trying hard to make ALL the tests repeatable? Here are a few interesting quotes from some honorable testing gurus/experts:
Highly repeatable testing can actually minimize the chance of discovering all the important problems, for the same reason that stepping in someone else’s footprints minimizes the chance of being blown up by a land mine.
- James Bach, Test Automation Snake Oil, 1996; see also Brian Marick's talk Classic Testing Mistakes
By repeating tests, we actually make sure we are avoiding other possible defects (remember the minefield analogy? Stepping in someone else's footprints means we avoid stepping on a live landmine), thus minimizing our chance of discovering new defects!
Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.
- Boris Beizer ("pesticide paradox"), Software Testing Techniques, Second Edition, 1990
The "pesticide paradox" compares software defects to pests. Imagine a farmer applying a certain pesticide to rid his crop of insects. There is every possibility that some insects will survive it. If he keeps applying the same pesticide, the insects eventually build up resistance and the pesticide no longer works! The "pesticide paradox" describes how a regression test suite gets less and less powerful as you use it over and over again. When a set of repeatable tests is run for a period of time, the tests tend to pass more often than they fail. A test is valuable when it fails and the failure uncovers a bug. Running a set of tests that seldom fail and have little chance of uncovering defects sounds like a bad idea, doesn't it?
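The pesticide paradox can be sketched in a few lines of code. Everything here is hypothetical and for illustration only: `buggy_abs` stands in for any function with a latent defect, the fixed inputs stand in for a frozen regression suite, and the randomized loop stands in for varied, exploratory-style testing.

```python
import random

# Hypothetical function under test: supposed to return the absolute value,
# but it harbors one latent defect the fixed test never reaches.
def buggy_abs(x):
    if x == -7:          # the "landmine": one specific input is mishandled
        return -7
    return x if x >= 0 else -x

# A "repeatable" regression test: the same inputs every run.
# It passed once, so it will pass forever -- the pesticide paradox.
def repeatable_test():
    return all(buggy_abs(x) == abs(x) for x in (-3, 0, 5))

# A varied, exploratory-style check: fresh inputs each run, so over time
# it can step off the well-trodden path and expose the hidden defect.
def varied_test(trials=1000, seed=None):
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randint(-10, 10)
        if buggy_abs(x) != abs(x):
            return x     # found a failing input
    return None          # no defect found this run
```

The repeatable test keeps passing no matter how many times it is rerun, while the varied one eventually stumbles onto the bad input. That is the trade-off in miniature: repetition buys consistency at the cost of discovery.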
Having said this, does this mean repeatable tests are a waste of time? Maybe not!
1) Think of performance tests that a tester may need to run over and over again, day in and day out.
2) Think of Benchmark tests.
3) Think of Build Verification Tests/Sanity Tests that need to be run every time you have a new build!
4) Think of Regression tests.
5) Think of a scenario where you need to run certain tests that verify some very critical functionalities of the application; these tests must be run periodically to make sure those functionalities continue to work without issues.
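Cases like the build verification and critical-functionality checks above are exactly where repeatability earns its keep. A minimal sketch, assuming a hypothetical application: the names `login` and `checkout_total` are invented stand-ins for whatever your product's must-work paths are.

```python
import unittest

# Hypothetical "critical path" functions of an application (illustrative only).
def login(user, password):
    return user == "admin" and password == "secret"

def checkout_total(prices, tax_rate=0.1):
    return round(sum(prices) * (1 + tax_rate), 2)

class CriticalPathChecks(unittest.TestCase):
    """Repeatable checks for must-work functionality, run on every build."""

    def test_valid_login(self):
        self.assertTrue(login("admin", "secret"))

    def test_invalid_login_rejected(self):
        self.assertFalse(login("admin", "wrong"))

    def test_checkout_total(self):
        self.assertEqual(checkout_total([10.0, 5.0]), 16.5)
```

You would run a suite like this with `python -m unittest` on every new build; the whole point here is that the same checks execute identically every time, so a red result reliably signals a regression rather than test variation.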
So it appears that having some repeatable tests can be helpful for your test project, alongside those tests that are non-repeatable. As always, it certainly seems to be context dependent and hinges on your testing mission/goal! What do you think? Do you think ALL (whatever that can mean!) tests MUST be repeatable, as if we are not software testers but some robotic human beings repeating an algorithm already decided in advance? Repeatable tests! Are they the need of the hour or a necessary evil? Your thoughts, please.
Reasons to Repeat Tests - By James Bach
Repeatability is Overrated - By Elisabeth Hendrickson