Regression Testing Revisited - Thanks to this Interesting Question!

I am a regular reader of your blog and I find it very useful. My name is Pankaj Shinde and I am working as a Software Tester in a well-known company. I have a doubt regarding Regression Testing. Generally, we say that Regression Testing means testing whether any added functionality is affecting other functionalities or not. When functionality is added, we write separate test cases for it and execute them. Here we execute not only the test cases written for the new functionality but also the already-passed test cases, to check whether other functionalities got affected or not.

I hope I am correct so far. Now my question is: what testing approach should be applied if we delete a functionality? Consider an example: a shopkeeper has billing software. After calculating the price of all the items purchased, when a button called "VAT" is pressed it adds 12.5% tax to the price and returns the total price. The software was made that way because the Indian Government enforces that rule on shopkeepers.

Now if the government withdraws that VAT [Value Added Tax] rule, it is obvious that the shopkeeper will no longer require the VAT functionality. So he asks the software development company to remove that functionality. Now what approach will the testing team apply? Will Regression Testing come into the picture, or will Retesting be given importance?

This is a question I received via email from one of my blog readers today. Regression Testing is an often-misunderstood area of testing, and testers get confused while dealing with it. Some get confused between retesting and regression testing, some about the approach to follow while regression testing, some about the test cases (test areas) to cover while regression testing, and so on. Even I got confused in the early days of my testing career, and I probably still have more to learn before I can talk like an expert on regression testing. Keeping that in mind, the paragraphs that follow are my understanding of regression testing as of the time of posting this article.

What is Regression Testing?
Before we discuss the subject further, let's see how others (some well-respected industry experts/sources) describe and define regression testing:

1. Regression testing - Any repetition of tests (usually after software or data change) intended to show that the software’s behavior is unchanged except insofar as required by change to the software or data. [B. Beizer, 1990]
2. Regression testing - Testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine if the change has regressed other aspects of the program. [Myers, 1979]
3. Regression testing is any type of software testing which seeks to uncover regression bugs. Regression bugs occur whenever software functionality that previously worked as desired stops working or no longer works in the same way that was previously planned. Typically regression bugs occur as an unintended consequence of program changes. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged.
4. Regression testing - Selective retesting of system or component to verify that modifications have not caused unintended effects and that system still complies with its specified requirements. [IEEE 610]
5. Regression testing - Rerunning test cases which a program has previously executed correctly, in order to detect errors spawned by changes or corrections made during software development and maintenance. Automated testing tools can be especially useful for this type of testing. [Anonymous]

Sometimes I get a feeling [thanks to the tester friend who sparked this line of thought in a recent debate on terminologies in software testing] that probably all the confusion surrounding regression testing could have been avoided, to a certain extent, if it were not called what we call it today! The term "regression testing" seems like a misnomer. It would be better if it were called something like "anti-regression" or "progression" testing, because the intention behind performing regression testing is to verify that the system has not regressed to a worse state. "Regression" is defined in the Merriam-Webster Online Dictionary as "a trend or shift toward a lower or less perfect state". If that suggests anything, regression testing is done with the intent of making sure that the system has NOT shifted towards a less perfect state. That way, maybe "anti-regression testing" would have been a better term to describe such tests. Having said that, I am NOT an authoritative figure in the testing field. Nobody is going to change the terminology based on my rambling. So I'd better stop crying over spilt milk and accept the term testers have been habituated to using (regression testing) for years.

In any case, regression tests are executed whenever the software changes, whether as a result of bug fixes, new or changed functionality, environment changes, and so on. Regression testing is not performed to show that the tests fail, but to show that the tests continue to pass as they were passing earlier. [I owe this understanding to Michael Bolton] Changes are an integral part of the software development process and are *almost* unavoidable. Here is a list of things that can result in a change in the code and hence necessitate the execution of regression tests as a primary line of defense against such changes and the resulting unintentional introduction of defects:

Candidates for Regression Testing:
1. New Functionality.
2. Enhancement of existing Functionality.
3. Bug/Defect Fix.
4. Code Refactoring.
5. Removal/Deletion of existing Functionality.

Coming back to Pankaj’s query regarding whether or not to include deletion/removal of features under a regression test strategy, I believe that we should. The scenario Pankaj brings up above (removal of the VAT calculation module from the invoice software) also counts as a code change, in my opinion. When a programmer removes an existing module from the software, he is opening up the frozen code once again. And while removing that particular module, there is every chance that some of the dependent modules will get affected if proper analysis is not done. Hence, along with the regular retesting, regression testing also becomes a necessity in this scenario, to make sure that none of those dependent modules has been (badly) affected by the removal of the particular (no-longer-wanted) module. Any thoughts?
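To make the retesting-versus-regression split concrete, here is a minimal sketch in Python. The function names, prices, and structure are my own illustration, not from Pankaj's actual software; assume the VAT branch (subtotal plus 12.5%) has just been removed:

```python
def subtotal(item_prices):
    """Dependent module: still needed after the VAT removal."""
    return sum(item_prices)

def grand_total(item_prices):
    """The VAT branch (subtotal * 12.5%) has been removed;
    the total is now just the subtotal."""
    return subtotal(item_prices)

# Retesting: verify the change itself -- 12.5% VAT is no longer added,
# so the old expectation of 180.0 (160 + 12.5% of 160) must be retired.
assert grand_total([100.0, 60.0]) == 160.0

# Regression testing: re-run previously passing tests on the dependent
# module to confirm the removal did not (badly) affect it.
assert subtotal([100.0, 60.0]) == 160.0
assert grand_total([]) == 0
print("retest and regression checks passed")
```

The point of the sketch is the split: the first assertion retests the changed behaviour itself, while the remaining ones guard the dependent module against unintended effects of the removal.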

Making the choice of tests to include in your regression test suite can be tricky. A tester obviously cannot run all the tests pertaining to every module related to the one where a code change has taken place; that would probably be too time-consuming. However, while selecting tests for the regression suite, knowledge of bug fixes and how they affect the whole system can be useful. Areas known to be more error-prone can be included in your regression test coverage plan. Areas that have undergone many recent refactorings/code changes should be included. Areas that are highly important from the end-user and business point of view should be covered. And of course the core areas, which cover the fundamental functionality of the software application, must get high priority. Apart from these, a tester can use his past experience to select tests for the regression test suite.
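The selection criteria above can be sketched as a simple risk score. This is a toy illustration; the weights, flags, and test names are my own assumptions, not an established formula:

```python
def regression_priority(test):
    """Score a candidate test by the risk factors discussed above."""
    score = 0
    if test.get("error_prone"):
        score += 3  # area known to be error-prone
    if test.get("recently_changed"):
        score += 3  # recent refactoring / code changes
    if test.get("business_critical"):
        score += 2  # important to end users and the business
    if test.get("core_functionality"):
        score += 2  # fundamental functionality of the application
    return score

candidates = [
    {"name": "invoice_totals", "core_functionality": True, "business_critical": True},
    {"name": "vat_dependents", "recently_changed": True, "error_prone": True},
    {"name": "report_styling"},
]

# Run the highest-risk tests first (or only the top N if time is short).
suite = sorted(candidates, key=regression_priority, reverse=True)
print([t["name"] for t in suite])
# → ['vat_dependents', 'invoice_totals', 'report_styling']
```

The exact weights matter less than the habit of ranking: with a score in hand, a time-boxed regression cycle can simply cut the list off at whatever depth the schedule allows.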

Regression testing is not *only* about having a battery of automated test scripts and running them against all future builds of the software. A minor code change can break a whole range of tests in your regression suite. The change was intentional and was meant to enhance a particular feature, yet your regression tests start failing. But this does not mean that the system/application has regressed to a less perfect state. And as a tester you end up spending more time fixing (maintaining) your tests (scripts) so that they are adjusted for the intended changes. This is one aspect that makes automated regression testing quite challenging. All you can do is probably run an ROI [Return On Investment] analysis and come up with your own strategy to deal with the regression testing challenge. Having a suite of regression tests (automation scripts) is a good thing. But I have seen that other strategies, like exploratory testing, can also help while tackling regression testing. Analyze your context, the frequency of code changes, the impact of such code changes on other modules, the impact of regression defects on your business, and things like that, and finally choose a strategy that suits your context best and does the job.

How do you approach regression testing? What are the key points you take into consideration while choosing tests to include in your regression test suite? Are you one of those testers who believe that regression testing should be 100% [what does that mean!] automated? Our opinions may vary. But at any rate, I would like to hear your ideas and opinions. Feel free to voice your thoughts by commenting.
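The ROI analysis mentioned above can be as simple as comparing the cumulative cost of manual execution against the cost of building and maintaining the scripts. A back-of-the-envelope sketch; all numbers are invented for illustration:

```python
def automation_roi(build_count, manual_cost_per_run,
                   automation_cost, maintenance_per_run):
    """Hours saved (positive) or lost (negative) by automating
    a regression suite instead of running it manually."""
    manual_total = build_count * manual_cost_per_run
    automated_total = automation_cost + build_count * maintenance_per_run
    return manual_total - automated_total

# Few builds: the up-front scripting cost is not yet recovered.
print(automation_roi(5, 8, 40, 1))   # → -5
# Many builds: automation clearly pays off.
print(automation_roi(20, 8, 40, 1))  # → 100
```

The interesting variable in practice is `maintenance_per_run`: frequent intentional changes that break the scripts push it up, which is exactly the challenge described in the paragraph above.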

Wish you all Merry Christmas and a Happy and Prosperous New Year. Happy Testing…

Related Article: How important is Regression Testing?

About Debasis Pradhan

Debasis has over a decade's worth of exclusive experience in the fields of Software Quality Assurance, Software Development and Testing. He writes here to share some of his interesting experiences with fellow testers.


  1. Hi, Debasis...

    We've identified two distinct meanings of "regression testing".

    1) Any test that you repeat, to see if it gives the same result as it did before.

    2) Any test to make sure that quality hasn't worsened.

    So you might run a test again--but that test might not show that quality has worsened, even if it has. You might run a test to make sure that quality hasn't worsened, but that might not be a test that you've run before.

    It would seem to me that a regression testing approach should address both issues--the problem of breakage, and the problem of not knowing enough about the system. This would indicate a mix of repeated tests, focused on the areas that are changing, and typically performed by the developers as they refactor the code; and new tests to explore around the changes--some performed by developers, others by more exploratory or investigative testers.

  2. @ Michael Bolton,

    Wow! That was the exact word that came out of my mouth when I read your comment. This is a really fascinating dimension that you have added to my understanding of Regression Testing. "The problem of not knowing enough about the system" is an area that is often missed in most Regression Test Plans. Repeating earlier tests is given so much importance that testers often forget to include the other basic motive of regression testing (i.e. to make sure that quality hasn't worsened) in their regression test strategy. Finding the right balance between the two aspects of regression testing might be the key to successful regression test planning. As you rightly suggested, adopting an exploratory approach can help us catch some bugs that might have been introduced as a result of recent code changes.

    Thanks so much for sharing your views and your insightful ideas.


  3. Echoing Michael,

    Here goes my definition of regression testing -

    "Use of previously executed test cases to check if they still pass".

    Unlike others, I would not claim that by executing a few *identified* regression test cases, I have verified that the software has not regressed or that some working feature is still working ...

    I can never say for sure that "what was working earlier" is working now too ... that too with a narrow set of "scripted" tests.

    What I can surely say (to a reasonable extent) is whether those tests pass or not.

    And how about this ...

    "You can never REPEAT a test in its entirety". Can you?


  4. @ Shrini,

    "You can never REPEAT a test in its entirety". Can you?

    I understand and completely agree with this phrase. This is a very important and quite interesting point regarding regression testing. As testers, we might be shooting ourselves in the foot if we don't realize this and go on believing that by re-executing a set of earlier tests we can assure that the system has NOT regressed to a worse state. As you rightly pointed out, when we say re-executing, we might not be executing the earlier tests EXACTLY as we executed them the last time (in their entirety). That makes it even more difficult and risky to assume that we can ascertain the system has not gone back to a bad state simply by re-executing a few tests. Hence it is safer to state only whether the tests pass or not.

    Thanks for sharing your views. I will be quoting you ("You can never REPEAT a test in its entirety") in future, of course, with proper credit to you. Thanks.


  5. How about this ...?

    If you are testing a feature for the second time, you are probably doing regression testing.


    If a feature is getting tested for the second time, it is being "regression tested".

    Any testing other than first-time testing can be termed "regression testing".

    But how and where do you draw the line for first-time testing?

    Even when a feature comes to testers for the first time, it is being tested for the second time, as the developer might have done a round of testing (unit testing) on it.

    So all the tester does is "regression testing" - unless he is the developer of that piece of functionality himself.

    I would like to have some real debate about this highly misleading term "Regression Testing" - I am not sure what it means....

    If we cannot ensure whether regression happened or not, what is the point in calling some testing "regression testing"?


