Friends, I found a very interesting and thought-provoking article by Helen Joyce. So I thought of sharing it with all of you. Just take a look...
The natural instinct of a software developer is to demonstrate that their application works. They are genuine problem-solvers, sharp and go-getting, who can knock out a prototype application in a matter of weeks or even days. A tester's focus is different. It is to demonstrate an application's weaknesses. It is to find test cases or configurations that would give unexpected results, or to show the software breaking.
When my fellow testers at Red Gate were asked to find words to describe themselves, they used tenacious, uncompromising, and thorough – all personality traits well suited to a determined software tester.
Consequently, testing a piece of software can take twice as long as it took to develop. In some situations there is even a danger that the test team becomes a bottleneck in the drive to meet a product release date, as developers sit back and wait for results from the testers before progressing with their coding.
So, what are the processes involved in testing software? And why is it worth all of the effort? Hopefully, this short article will provide at least some of the answers.
Why does testing take so long?
To the uninitiated, it remains a mystery why software deadlines are often delayed. While at university, I remember knocking out a simple application within a few days of starting a programming course, and wondering how on earth it took professional developers so long to write and release commercial applications.
Of course, I now know that there is a world of difference between developing a rough prototype and developing a commercially viable application. It doesn't take long at all, sometimes only days, to get an application to the stage where it starts to do what it is supposed to do. However, creating an application that is user-friendly, meets customer requirements, copes under stress and is scalable and robust – that involves a huge amount of joint development and testing effort.
The developer-tester dynamic needs careful coordination and management. Full coordination is a difficult, if not impossible, task, and at some point it's inevitable that the testers will be waiting for critical bug fixes, or that the developers will be waiting for results from the test team before progress can be made.
What are the testers up to?
Although many test teams use test tools or scripts to automate testing activities, there's a lot about testing that is simply labour-intensive. Here are just some of the activities involved:
Planning and developing test cases – writing test plans and documentation, prioritizing the testing based on assessing the risks, setting up test data, organising test teams.
Setting up the test environment – an application will be tested using multiple combinations of hardware and software and under different conditions. Also, setting up the prerequisites for the test cases themselves.
Writing test harnesses and scripts – developing test applications to call the API directly in order to automate the test cases. Writing scripts to simulate user interactions.
Planning, writing and running load tests – non-functional tests to monitor an application's scalability and performance. Looking at how an application behaves under the stress of a large number of users.
Writing bug reports – communicating the exact steps required to reproduce unexpected behaviour on a particular configuration. Reporting to development team with test results.
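To make the test-harness activity concrete, here is a minimal sketch in Python's unittest framework. The `divide` function is a hypothetical stand-in for whatever API the product exposes; a real harness would import the product's own modules instead:

```python
import unittest

def divide(a, b):
    # Hypothetical stand-in for the product API under test.
    if b == 0:
        raise ZeroDivisionError("division by zero")
    return a / b

class DivideTests(unittest.TestCase):
    def test_typical_case(self):
        # The developer's instinct: show it works for sensible input.
        self.assertEqual(divide(10, 4), 2.5)

    def test_beyond_sensible(self):
        # The tester's instinct: probe the edges. A zero divisor
        # should fail loudly, not return a silent wrong answer.
        with self.assertRaises(ZeroDivisionError):
            divide(1, 0)
```

Run with `python -m unittest` against the module; once cases like these exist, they can be replayed against every build at no extra cost.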
As David Atkinson noted in a recent interview, much testing involves going "beyond sensible" – making sure that the software does not break under extreme conditions. It's not uncommon to find one bug for every ten lines of a developer's code, so subjective decisions need to be made about what should be fixed and with what priority.
When a bug is discovered there is often a time-consuming (and curious!) investigation to track down and fix the defect. On other occasions the fix is trivial. It has happened on a number of occasions that I've submitted a bug report detailing the steps required to reproduce an issue, attached screenshots and log files, and even investigated which configurations it affects, then received confirmation within a few minutes that the developer has fixed it!
However, even after the fix, there is more work to be done. On the next build, regression tests (many of which should be automated) are run to check that the fix hasn't broken anything else, and the test scripts are updated to include this case next time.
Some of the above test activities lend themselves to a degree of automation, but many of them are labour-intensive. Planning and developing test cases is manual work. Writing test harnesses and automated test scripts, which iterate through test cases, requires planning and development time for the test team.
Even when automated tools can do some of the donkey work, there's often a price to pay. For example, using automated tools for load testing is pretty much the only way to repeat the same load test between builds to investigate the performance impact of code changes. However, it still takes time for the tester to plan the load tests, write the scripts, set up the test environment, run the tests and analyze the results.
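As a hedged illustration of that workflow, the sketch below simulates a small load test in plain Python: a number of concurrent "users" each issue several requests, and per-request latencies are collected for analysis between builds. The `simulated_request` function is hypothetical; in practice it would be an HTTP call or database query against the system under test:

```python
import statistics
import threading
import time

def simulated_request():
    # Hypothetical stand-in for one user action against the system
    # under test (e.g. an HTTP call or database query).
    time.sleep(0.01)

def run_load_test(users=20, requests_per_user=5):
    """Simulate concurrent users and collect per-request latencies."""
    latencies = []
    lock = threading.Lock()

    def user_session():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            simulated_request()
            elapsed = time.perf_counter() - start
            with lock:  # guard the shared list across threads
                latencies.append(elapsed)

    threads = [threading.Thread(target=user_session) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    return {
        "requests": len(latencies),
        "median_s": statistics.median(latencies),
        "max_s": max(latencies),
    }
```

Because the same script can be replayed against each build, the median and worst-case figures it reports can be compared over time to spot performance regressions; the planning, environment setup, and analysis around it remain manual work.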
Other times, automation can be counter-productive. Under time pressure it is rarely worthwhile to use automated tools for functional testing of the user interface, because maintaining the scripts between builds is so fiddly.
Even though automation takes time to set up and maintain, it offers huge advantages in terms of what can be tested and how frequently, and it also allows testers to innovate and be creative. For testers, coding scripts for automation is worthwhile and easily justifiable to their managers. In fact, only basic programming skills are required to manipulate scripts (recorded automatically by test tools, for example) in order to encompass a wider range of test cases. With greater skill, testers can produce extensive automated tests that can then be scaled up to test not only the basic business requirements but the whole application infrastructure.
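As a small sketch of widening a recorded script, consider a script that checks one hard-coded value: with only basic programming skill it can be reworked into a data-driven loop over a table of cases. The `parse_price` function here is hypothetical, standing in for whatever behaviour the recorded script exercised:

```python
def parse_price(text):
    # Hypothetical function under test: parse "£12.50" into pence.
    amount = text.strip().lstrip("£")
    pounds, _, pence = amount.partition(".")
    return int(pounds) * 100 + int(pence or 0)

# A recorded script might assert a single value:
#   assert parse_price("£12.50") == 1250
# Reworked into a table, the same script covers many cases:
CASES = [
    ("£12.50", 1250),
    ("£0.99", 99),
    ("£7", 700),
]

def run_cases():
    """Return a list of (input, expected, actual) for failing cases."""
    failures = []
    for text, expected in CASES:
        actual = parse_price(text)
        if actual != expected:
            failures.append((text, expected, actual))
    return failures
```

Adding a new case is now a one-line change to the table rather than a copy of the whole script, which is exactly why the maintenance cost of such automation is easy to justify.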
Tips for test teams
The best advice I can offer testers is to share your knowledge with the team. One simple but hugely beneficial exercise is to look over another tester's shoulder, as he or she demonstrates an issue they have discovered, and how they investigated it further to realise the extent of the defect. Another top tip for test team leaders is to set up a bug hunt. The idea is that testers work in pairs, or small teams, for an hour to do exploratory testing together. For example, a particular piece of functionality is tested and the team that finds the most defects, or most interesting defects, is rewarded. Donuts are always appreciated!
What's the ROI?
Test teams bring with them considerable expense, and it is one that many companies tend to try to avoid. Aside from the labour costs, there are also hardware and software requirements to be considered, as well as the cost of the effort required to fix the bugs that testers find.
However, in order to appreciate the ROI of thorough testing, one only has to consider the impact on a business of delivering substandard products. No software application can claim to be bug-free, but it's a given that untested applications will contain more defects than tested ones. Shoddy software will damage future sales and company reputation, increase the workload of application support staff, lower employee morale, and so forth.
The exact return on investment in testing may not be easy to measure, but as long as the test team focuses on the riskiest areas of the application and executes tests covering all of the business requirements, its presence is invaluable.
There's often a misconception that testing is less challenging than writing code, and is therefore an activity that carries less stature within an organization. A large proportion of Computer Science graduates will become software developers, and possibly wouldn't even consider applying for a job as a software tester.
In reality, testing can be an extremely creative activity. You still get to develop complicated software, only the objective is different. Where developers have to stick to a fairly rigid brief for what they are coding, testers have more freedom in deciding what to build and how, creating tools that increase productivity or verify quality. It's also extremely satisfying to break a developer friend's code.