I am a regular reader of your blog and I find it very useful. My name is Pankaj Shinde and I am working as a Software Tester in a well-known company. I have a doubt regarding Regression Testing. Generally we say that Regression Testing means testing whether any added functionality has affected other functionalities or not. When functionality is added, we write separate test cases for it and execute them. Here we execute not only the test cases written for the new functionality but also the already-passed test cases, to check whether other functionalities have been affected or not.
I hope I am correct so far. Now my question is: what testing approach should be applied if we delete a functionality? Consider an example: a shopkeeper has a piece of software. After calculating the price of all the items purchased, pressing a button called "VAT" adds 12.5% tax to the price and returns the total price. The software was made that way because the Indian Government enforces that rule on shopkeepers.
Now if the government withdraws that VAT [Value Added Tax] rule, it is obvious that the shopkeeper will no longer require the VAT functionality. So he asks the software development company to remove that functionality. Now what approach will the testing team apply? Will Regression Testing come into the picture, or will Retesting be given importance?
This is a question I received via email from one of my blog readers today. Regression Testing is an often-misunderstood area of testing, and testers get confused while dealing with it. Some get confused between retesting and regression testing, some about the approach to follow while regression testing, some about the test cases (test areas) to cover while regression testing, and so on. Even I got confused in the early days of my testing career. And I probably still have more to learn before I can talk like an expert on regression testing. Keeping that in mind, the paragraphs that follow reflect my understanding of regression testing as of the time of posting this article.
What is Regression Testing?
Before we discuss the subject further, let's see how others (some well-respected industry experts/sources) describe and define regression testing:
1. Regression testing - Any repetition of tests (usually after software or data change) intended to show that the software’s behavior is unchanged except insofar as required by change to the software or data. [B. Beizer, 1990]
2. Regression testing - Testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine if the change has regressed other aspects of the program. [Myers, 1979]
3. Regression testing is any type of software testing which seeks to uncover regression bugs. Regression bugs occur whenever software functionality that previously worked as desired stops working or no longer works in the same way that was previously planned. Typically regression bugs occur as an unintended consequence of program changes. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. [Wikipedia]
4. Regression testing - Selective retesting of system or component to verify that modifications have not caused unintended effects and that system still complies with its specified requirements. [IEEE 610]
5. Regression testing - Rerunning test cases which a program has previously executed correctly, in order to detect errors spawned by changes or corrections made during software development and maintenance. Automated testing tools can be especially useful for this type of testing. [Anonymous]
Sometimes I get a feeling [thanks to the tester friend who sparked this line of thought in a recent debate on terminologies in software testing] that much of the confusion surrounding regression testing could have been avoided, to a certain extent, if it were not called what we call it today (regression testing)! The term "regression testing" seems like a misnomer. It might have been better called something like "anti-regression" or "progression" testing, because the intention behind performing regression testing is to verify that the system has not regressed to a worse state. "Regression" is defined in the Merriam-Webster Online Dictionary as "a trend or shift toward a lower or less perfect state". If that suggests anything, it is that regression testing is done with the intent of making sure that the system has NOT shifted toward a less perfect state. In that sense, maybe "anti-regression testing" would have been a better term to describe such tests. Having said that, I am NOT an authoritative figure in the testing field. Nobody is going to change the terminology based on my rambling. So I had better stop crying over spilt milk and accept the term testers have been habituated to calling it (regression testing) for years.
However, regression tests are executed whenever the software changes, whether as a result of bug fixes, new or changed functionality, environment changes, etc. Regression testing is not performed to show that the tests fail, but to show that the tests continue to pass as they were passing earlier. [I owe this understanding to Michael Bolton] Changes are an integral part of the software development process and are *almost* unavoidable. Here is a list of things that can result in a change to the code and hence necessitate execution of regression tests as a primary line of defense against such changes and the resulting unintentional introduction of defects:
Candidates for Regression Testing:
1. New Functionality.
2. Enhancement of existing Functionality.
3. Bug/Defect Fix.
4. Code Refactoring.
5. Removal/Deletion of existing Functionality.
Coming back to Pankaj's query regarding whether or not to include deletion/removal of features under a regression test strategy, I believe that we should. The scenario Pankaj brings up above (removal of the VAT calculation module from the invoicing software) also counts as a code change, in my opinion. When a programmer removes an existing module from the software, he is opening up the frozen code once again. And while removing that particular module, there is every chance that some of the dependent modules can get affected if proper analysis is not done. Hence, along with the regular retesting, regression testing also becomes a necessity in this scenario, to make sure none of those dependent modules has been (badly) affected by the removal of the particular (no-longer-wanted) module. Any thoughts?
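To make the retest-versus-regression-test distinction concrete, here is a minimal sketch of Pankaj's shopkeeper scenario. All the names (`Invoice`, `VAT_RATE`) are my own illustrative inventions, not from any real billing system: the retest checks the changed behavior itself (the total no longer adds VAT), while the regression checks re-run previously passing tests on the unchanged, dependent behavior.

```python
VAT_RATE = 0.125  # 12.5% VAT, as in Pankaj's example (no longer applied)


class Invoice:
    """Toy invoice; VAT handling has just been removed from total()."""

    def __init__(self):
        self.items = []

    def add_item(self, price):
        self.items.append(price)

    def subtotal(self):
        return sum(self.items)

    def total(self):
        # After the VAT rule was withdrawn, the VAT step was removed:
        # the total is now simply the subtotal.
        return self.subtotal()


inv = Invoice()
inv.add_item(100.0)
inv.add_item(60.0)

# Retesting: verify the changed behavior itself.
assert inv.total() == 160.0  # previously this would have been 160 * 1.125

# Regression testing: re-run previously passing tests on dependent,
# unchanged functionality to confirm it still works as before.
assert inv.subtotal() == 160.0
inv.add_item(40.0)
assert inv.subtotal() == 200.0
print("retest and regression checks passed")
```

The point of the sketch is that both kinds of tests run against the same build: the retest would have failed before the removal, while the regression tests were passing before and must keep passing after.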
Making the choice of tests to include in your regression test suite can be tricky. A tester obviously cannot run/execute all the tests pertaining to every module related to the module where a code change has taken place. That would probably be too time-consuming a process. However, while selecting tests to include in the regression test suite, knowledge of bug fixes and how they affect the whole system can be useful. Areas that are known to be more error-prone can be included in your regression test coverage plan. Areas that have undergone many recent refactorings/code changes should be included. Areas that are highly important from the end user and business point of view should be covered. And of course the core areas, which cover the fundamental functionalities of the software application, must get high priority. Apart from these, a tester can use his past experience to select tests for the regression test suite.
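The selection criteria above can be thought of as a simple risk-scoring exercise. Here is a toy sketch of that idea; the weights, field names, and test names are all hypothetical, and in practice a tester's judgment matters far more than any formula:

```python
def regression_priority(test):
    """Score a test case; higher scores are picked first for the suite."""
    score = 0
    score += 3 * test.get("past_failures", 0)        # error-prone areas
    score += 2 * test.get("recent_code_changes", 0)  # recently changed areas
    score += 5 if test.get("core_functionality") else 0
    score += 4 if test.get("business_critical") else 0
    return score


tests = [
    {"name": "login_basic", "core_functionality": True},
    {"name": "vat_dependent_discount", "recent_code_changes": 3,
     "business_critical": True},
    {"name": "help_page_layout"},
]

# Order candidates for the regression suite, highest risk first.
suite = sorted(tests, key=regression_priority, reverse=True)
print([t["name"] for t in suite])
# → ['vat_dependent_discount', 'login_basic', 'help_page_layout']
```

Even a crude heuristic like this makes the trade-off explicit: you are consciously spending limited test-execution time on the areas most likely to have regressed.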
Regression testing is not *only* about having a battery of automated test scripts and running them against all future builds of the software. A minor code change can break a whole range of tests in your regression test suite. While the change was intentional and meant to enhance a particular feature, your regression tests start failing. But this does not mean that the system/application has regressed to a less perfect state. And as a tester you end up spending more time fixing (maintaining) your tests (scripts) so that they are adjusted for the intended changes. This is one aspect that makes automated regression testing quite challenging. All you can do is probably run an ROI [Return On Investment] analysis and come up with your own strategy to deal with the regression testing challenge. Having a suite of regression tests (automation scripts) is a good thing. But I have seen that other strategies, like exploratory testing, can also help when tackling regression testing. Analyze your context, the frequency of code changes, the impact of such code changes on other modules, the impact of regression defects on your business, and things like that, and finally choose a strategy that suits your context best and gets your job done. How do you approach regression testing? What are the key points you take into consideration while choosing tests to include in your regression test suite? Are you one of those testers who believe that regression testing should be 100% [whatever that means!] automated? Our opinions may vary. But at any rate, I would like to hear your ideas and opinions. Feel free to voice your thoughts by commenting.
Wish you all Merry Christmas and a Happy and Prosperous New Year. Happy Testing…
Related Article: How important is Regression Testing?