How important is regression testing?

In a single sentence, regression testing is the retesting of a previously tested program after modification, to ensure that no faults have been introduced or uncovered as a result of the changes made. Regression testing helps ensure that changes to the code do not break other, previously working parts of the software. It is important to do regression testing frequently, because the code as a whole can easily "regress" to a lower level of quality after a change. Regression testing is necessary even when a change appears to work correctly and is believed not to affect the rest of the software.
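To make the definition concrete, here is a minimal sketch of what a regression check looks like in practice (the function and test names are illustrative, not from any real project): automated assertions that pin down previously working behaviour, re-run after every change.

```python
# Hypothetical example: a pricing function that a developer later "fixes".
# The regression tests below capture the behaviour that already worked,
# so a "minor change" cannot silently break it.

def apply_discount(price, percent):
    """Return the price after a percentage discount, rounded to cents."""
    return round(price * (1 - percent / 100.0), 2)

def test_no_discount():
    # Behaviour that worked before any change must keep working.
    assert apply_discount(100.0, 0) == 100.0

def test_half_off():
    assert apply_discount(80.0, 50) == 40.0

def test_rounding():
    assert apply_discount(9.99, 10) == 8.99

if __name__ == "__main__":
    test_no_discount()
    test_half_off()
    test_rounding()
    print("all regression checks passed")
```

The point is not the arithmetic; it is that these checks run unchanged against every new build, so any module they touch is retested whether or not anyone remembered to mention it changed.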

But definitions apart, have you ever wondered how important regression testing really is? To be honest, until recently I had been doing regression testing more as a routine procedure. Then one (bitter, of course) experience changed everything. While testing a product, I received a mail from my project lead asking me to test a build. He also sent a list of the changes made in it: some were major bug fixes, others were new features and enhancements. And I was asked to submit my report in three days! So, keeping the recent changes in mind, I had to shape my strategy for testing the build, along with the dependencies between the affected modules and their related modules. Accordingly, I designed my test plan (I am not talking about a test plan document here; I was going to test the build exploratorily, as I had only three days). I tested it and submitted my report in due time, with some really interesting and, of course, major bugs.

Jeez! Instead of getting a word of encouragement for my smart work (or so I thought of it), I got another mail, this time from my manager, stating that somehow I had missed a very important bug! How could that happen, considering the detailed and rigorous testing that had been done on the build? So instead of blaming others, I sat down to look for the loophole(s) in my own test plan and strategy. And to my surprise, there were none!

So I asked my manager for more details about the bug I had apparently missed. And there it was: the bug was in a remote module that was in no possible way related to any of the modules where I had looked for problems. The reason was simple yet horrible: I had not been told that there were changes in that module too! When the manager confronted the developer concerned, his reply was, "I only made some minor changes in the code, so I didn't think it was important to mention." Strange! But I am not blaming the developer here. To him the changes might have seemed minor and unimportant. As a tester, though, I should have looked for unintentional errors in this module too, since it was an important area of the application.

So I think these were my mistakes:
1. First, I should not have believed that the developer had made changes only in the listed areas, because the areas he listed were simply the ones he thought important. At most, I could have used the list as a reference while planning my testing strategy.

2. While planning my strategy, I should have included the other major and important areas, even if no apparent changes had been made in them. I didn't, because of the time constraint, but that is no excuse. Given the limited time, I should have tried to buy more time from my manager, and if I couldn't, I should have informed him in advance that I would be unable to cover those other risky areas.

3. You might say that I should have done more rigorous regression testing. I did, to my satisfaction. But I missed that important bug because of my tunnel vision on the affected areas (as described by the development lead). I didn't anticipate changes in other possible areas, and in the process I missed the bug.

4. At that time we didn't have any change control method in place on our project. If there had been one, every change made in the code would have had to be approved by the change control board, and I would have known each and every module that could possibly have been affected by the changes.

After this experience, I realized the importance of regression testing in any testing process, be it scripted or exploratory. Some of you might have faced similar situations. If so, do share your experience with others by leaving a comment. And if you can think of any other possible reasons why I missed that important bug, please share those with me too. Waiting for your valuable comments…


About Debasis Pradhan

Debasis has over a decade's worth of experience in the fields of Software Quality Assurance, Software Development, and Testing. He writes here to share some of his interesting experiences with fellow testers.


  1. I want to share my experience related to this. It happened the same way: during regression testing, one module was left untested (and the QA team was not informed of changes to it), and it was directly affecting the clients. The QA team was informed and held responsible for this.
    If any bug is missed during regression testing and it reaches production, the QA team is held responsible.
    So, can anyone comment on this: is QA alone held responsible if a bug reaches production?


  2. The lessons-learned part is good, but given these lessons, I believe a face-to-face discussion with the development team will always mitigate these kinds of issues. We can share experiences like these with the development team and ask for more information that they might have thought unnecessary.

  3. @ Vinayak
    Thanks for your ideas, Vinayak. But that kind of face-to-face discussion with the development team is practically impossible in most cases (imagine if your development team is working from another site). But if you really can do it, then it is probably the best way to get a clear picture of the changes made in the application.

  4. There is no way to pass a test without knowing what material to study. Similarly, without proper communication you could never have caught that bug deliberately. You might have caught it by accident, and in many cases that is how I have caught the harder-to-find bugs.

    My approach is to follow a high-level matrix of functional paths that exercise as many different parts of the application as possible given the time constraints. There are two reasons for this. First, it is harder for the different parts of the code to work together correctly than to work correctly in isolation. Second, I assume the programmers have unit-tested with plain-vanilla cases, and it is up to me to break the application, no matter how exotic the case.

    Once I am satisfied that the macro view is working, I drill into the micro view, meaning individual parts, tasks, modules, and databases, looking for flaws.

    This approach has worked for me for more than 10 years of testing on all kinds of apps and platforms.

  5. Here are some ideas. We had to forge for ourselves some home-made tools that showed the links among every module in the large software system. To do proper regression testing, we then had to test every module that called, or was called by, the module whose code had changed.

    We also had to run a 'diff' that showed the code changes highlighted. Debasis' unknown code changes would have been obvious if this process had been followed.
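    As an aside, with a modern version-control tool the same 'diff' idea needs no home-made tooling: the repository itself names every touched module, with no reliance on a developer's verbal list. A hypothetical sketch using git (the file names are purely illustrative):

    ```shell
    # Sketch: build a throwaway repo with a "baseline" and a "new" build,
    # then let the diff reveal every changed module.
    set -e
    repo=$(mktemp -d)
    cd "$repo"
    git init -q
    git config user.email "tester@example.com"
    git config user.name "Tester"
    echo "calculate_total()" > billing.py
    echo "send_report()"     > reporting.py
    git add . && git commit -qm "baseline build"
    # The developer's "minor" change, in a module nobody mentioned:
    echo "send_report_v2()" > reporting.py
    git add . && git commit -qm "new build"
    # The diff lists every touched file, so nothing stays unknown:
    git diff --stat HEAD~1 HEAD
    ```

    Running `git diff --name-only HEAD~1 HEAD` here would name `reporting.py`, flagging exactly the kind of unannounced change that bit Debasis.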

    Also, ClearCase has a build process that kicks out, i.e. shows, any differences between the 'gold file' (the original software) and the new software. So the software developer is forced to analyze: "Is this the GOOD change? Does the gold file now have to be changed, or is this truly a bug that should NOT have affected the gold file?"

    Perturbations throughout the system caused by one bug are difficult to find. DOORS, however, purports to identify the effect of each change throughout the system. Maybe so, maybe not.

    Just a few suggestions. I've been a software engineer since 1962, when my 1st computer was a UNIVAC 1004 with 80 bytes (not kb, not MB, not GB, but BYTES). That and a plugboard did billing, Accounts Receivable and General Ledger MAGIC!!!!!

  6. I find this quite satisfactory: test for the 'best case', then do the stress and complexity testing.
    That means a high volume of 'transactions', all of the same kind. Then follow that with a suite of tests that stress the system with highly complex conditions (i.e., logical 'ANDs' and 'ORs').
    Then, break it. But be sure to document WHERE and under what conditions it broke.
    So often test engineers, under the time pressure of getting the product out the door, just 'touch' the function: "OK, it did what the requirement said it was supposed to do: PASS." That's best-case testing, necessary but not sufficient.

    Drilling down is good, and while you're at it, use the old ';;;;;;;;;' trick: if you get a confirmation for any action involving numbers assigned to that semicolon input, you know it's a bug, a phony confirmation.
    This really happened to us.

