
Wednesday 11 April 2012

Visualizing the testing effort

There has been some downtime with updating the blog, but that's about to change. The previous three months have been quite intense and I have learned quite a bit about software testing in a testing team. As some people already know, I work as a legacy system tester. For the past two months I have been learning the product and domain in a client team (Yay! Go team Raid!) with a bunch of expert testers. The team has three testers (four when I was there), two test automation specialists and a Scrum master / manual tester. I was there to reinforce the manual testing and to learn as much as I could along the way.

While I was there my goal was to hone my exploratory testing skills to perfection, especially in scenario and flow testing. The features I was working with were part of a complex piece of basic functionality that had been growing since the first product was released. This addition to that functionality pool was a much needed feature and affected basically every part of the functionality.

As the testing was being done we had to forgo automation, as we needed fast and tangible results: severe bugs found as quickly as possible and large coverage of the core functionality. This resulted, however, in a pile of bugs and lots of rework. At that point we had no stable ground, as bugs were being fixed one by one and builds were coming two or three times a day. Result: instability.

The decision to do all testing manually resulted in a HUGE amount of re-testing. We could have decided to write a basic set of tests to be run automatically and thus decrease the amount of checking done manually. In hindsight this would have been a great thing, but it would have decreased the coverage achieved by manual testing.

Finding the golden road between manual testing and automated testing is probably the hardest thing to achieve. Managers tend to veer towards automated testing, as the tests are re-runnable and documented. Developers have the same tendency, as a script presents the coverage of the testing done more easily. Is it possible to combine the efficiency of manual testing with the coverage visualization of automated testing?

Visualization


When we do automated testing, its coverage is measured in different ways to meet the needs of visualization. We have the percentage of code covered by low level testing, and we have some number of tests run through different scenarios via the GUI. These high level tests are usually run through APIs or engines that replicate the use of the GUI. Both measures are interpreted in different ways, and each has its own pitfalls when the results are visualized.

Code coverage tells you how much of the code is covered (DUH!), but more importantly it tells you how much is NOT covered. We can make assumptions based on the coverage, but as it doesn't measure the user using the product but the code using the code, it can be deceptive. It is, however, a way to build confidence in the product and to lessen the risk of regression (and we all hate regression, right?).

(Image from tickbytick.co.uk)
It is possible to take this to the next level by measuring branch and module coverage. This eases the visualization, as you can point out the components within the flow that have not been tested. By putting the "untested" parts into context we can see whether the missing coverage in a component is critical or trivial. Instead of relying on numbers (what does 49.21% branch coverage tell you?), we can visualize the coverage and make decisions about its sufficiency more easily and more accurately.
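
To make that concrete, here is a small sketch of what such a visualization could start from: instead of one overall number, per-component branch coverage is shown next to the criticality of the component. All component names, figures and criticality labels below are invented for the example.

    # A rough sketch: per-component branch coverage shown as text bars,
    # next to each component's criticality, instead of one overall number.
    # Every component name, figure and criticality label is made up.

    # component -> (covered branches, total branches, criticality)
    coverage = {
        "login":        (118, 120, "critical"),
        "user profile": ( 40,  95, "trivial"),
        "billing":      ( 55, 140, "critical"),
        "help pages":   (  5,  60, "trivial"),
    }

    def bar(covered, total, width=30):
        """Draw a simple text bar so the gaps are visible at a glance."""
        filled = int(width * covered / total) if total else 0
        return "#" * filled + "-" * (width - filled)

    for name, (covered, total, criticality) in coverage.items():
        pct = 100.0 * covered / total
        flag = "  <-- look here" if criticality == "critical" and pct < 80 else ""
        print(f"{name:<12} [{bar(covered, total)}] {pct:5.1f}% ({criticality}){flag}")

Put on a radiator, a view like this answers the "is 49.21% enough?" question far better than the number alone.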

After the low level coverage we move to high level testing: system testing, acceptance testing, you name it (literally). When automating testing on the user experience level, we need to choose whether to make tests easy to create or to make them realistic. Easy tests mean using APIs and direct calls to the back end to replicate a situation in the GUI. This is a great way to give false confidence in the GUI: as the test engine cannot see the client, the GUI may be unusable but still pass all tests. The buttons, bars, boxes, whatever may be unreachable, unusable or plain ugly, and the machine doesn't see that as it "simulates" the use. This kind of testing is comparable to code coverage. We know what has been tested, but there is no knowing whether the critical parts have been tested (or whether they work in an actual use case).

When doing high level testing and taking it closer to the user, we lose some of the speed we rely on when automating testing. By artificially clicking the elements that the test machine (program or script) sees, we can be fairly sure that the user sees them too. This approach comes with a load of script and screenshot updating, but when done modularly it can solve lots of problems in client-side test automation. When visualizing this kind of GUI automation we can draw flows of which forms and dialogs have been browsed and used. This tells the story of the different scenarios that have been tested.
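
As a rough illustration of the modular approach, the sketch below uses Selenium WebDriver to drive a real browser and records which dialogs the test went through, so the flow can be drawn afterwards. The page names, locators and URL are placeholders, not anything from an actual product.

    # A sketch of a modular GUI test that records the flow of dialogs it
    # visits. Selenium WebDriver is one way to click real GUI elements;
    # every URL, locator and page name here is a placeholder.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    flow_log = []  # which forms/dialogs the test actually went through

    class LoginPage:
        def __init__(self, driver):
            self.driver = driver
            flow_log.append("LoginPage")

        def log_in(self, user, password):
            self.driver.find_element(By.ID, "username").send_keys(user)
            self.driver.find_element(By.ID, "password").send_keys(password)
            self.driver.find_element(By.ID, "login-button").click()
            return MainDialog(self.driver)

    class MainDialog:
        def __init__(self, driver):
            self.driver = driver
            flow_log.append("MainDialog")

        def open_settings(self):
            self.driver.find_element(By.LINK_TEXT, "Settings").click()
            flow_log.append("SettingsDialog")

    driver = webdriver.Firefox()
    driver.get("http://example.test/login")  # placeholder URL
    LoginPage(driver).log_in("tester", "secret").open_settings()
    driver.quit()

    print(" -> ".join(flow_log))  # LoginPage -> MainDialog -> SettingsDialog

When each dialog is wrapped in its own small module like this, a changed dialog means updating one class instead of every script, and the flow log is the raw material for drawing the tested flows.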

So we have possibilities to create a much better visualization of the coverage we have. By displaying these results on a radiator we can spread the information and, more importantly, raise questions about the results. "Why is that area uncovered by tests?" "Why is that part covered so vastly when it's only a supporting function?" From that point on we can make the test automation work for us instead of the other way round.

So how can we visualize manual testing by using what we already have in automated testing? Visualizing code coverage is quite futile, as it requires sophisticated debugging/sniffing tools that would have to run at the same time as the testing, and it may not give any value to the testing and/or the visualization of manual testing. There may be some benefit in special cases, like prototype testing when the code has not yet been built into a DLL, but this may have security problems and would require a strong business case to be considered.

A more reasonable approach is to visualize the scenario coverage of manual testing. If we visualize the testing we do manually, it should be as easy as possible. If a larger team does the testing and records the coverage, the way of doing it should be standardized, or at least agreed upon. Visualization through mind maps is a great and easy way to do this, either with individual testers or with testing teams. As I wrote before on mind maps, taking the visualization a step further by adding the whole team's effort onto a single mind map can visualize the total coverage. Using radiators to spread the information, the coverage can be real time and dynamic.
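
A hedged sketch of what merging the team's maps could look like: Freeplane and FreeMind store mind maps as XML files made of nested node elements, so combining several testers' maps into one coverage count is not much code. The file names and the idea of counting node texts are my assumptions for the example.

    # A sketch of merging several testers' mind maps into one coverage view.
    # Assumes Freeplane/FreeMind-style .mm files, i.e. XML with nested
    # <node TEXT="..."> elements; file names and node texts are made up.
    import xml.etree.ElementTree as ET
    from collections import Counter

    def node_texts(path):
        """Yield the text of every node in one tester's mind map."""
        for node in ET.parse(path).iter("node"):
            text = node.get("TEXT")
            if text:
                yield text

    team_maps = ["pekka.mm", "tester2.mm", "tester3.mm"]  # one map per tester
    coverage = Counter()
    for mm in team_maps:
        coverage.update(node_texts(mm))

    # Areas touched by many testers rise to the top; areas nobody has
    # touched never appear at all, which is exactly the question to ask.
    for area, testers in coverage.most_common():
        print(f"{area}: touched by {testers} tester(s)")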

If using manual labor to create coverage charts sounds daunting, it is also possible to record the testing with a tool and have it visualize the coverage. By recording the manual testing, say with a screen recorder (Adobe Captivate, Blueberry FlashBack recorder), we capture what we have tested and can run it through a picture recognition program. It would then list the steps in a scenario (with possible side steps) and draw a graph of the testing coverage. I don't know of any tools that do this yet, but a simple Python application might do the job (I may have to return to this eventually, but this should suffice for the moment).
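
I haven't written that application, but the last step of it could be as simple as the sketch below: given the sequence of screens that the picture recognition has identified from a recording, build a graph of the tested flow. The screen names and the recognition step are assumptions, not an existing tool.

    # A minimal sketch of the last step of that hypothetical application:
    # turn a recognized sequence of screens into a drawable flow graph.
    from collections import defaultdict

    # Pretend output of the picture recognition step: screens in the order seen.
    recognized_screens = ["Login", "Main", "Search", "Results", "Main", "Settings", "Main"]

    # Count the transitions between screens; this is the tested flow.
    transitions = defaultdict(int)
    for current, nxt in zip(recognized_screens, recognized_screens[1:]):
        transitions[(current, nxt)] += 1

    # Emit Graphviz DOT so the flow can be drawn as a picture.
    print("digraph tested_flow {")
    for (src, dst), count in sorted(transitions.items()):
        print(f'    "{src}" -> "{dst}" [label="{count}"];')
    print("}")

Fed to Graphviz, the output draws the scenario and its side steps as a graph, ready for the radiator.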

As most of the information about testing lies in the tester's brain, that information needs to be pulled out and visualized. Using radiators and simplifications, we can provide the information the managers need to make decisions about the product. If we can make them veer away from number-driven towards information-driven decision making, the product's quality will (or might) increase.

Anyway, I have gotten some good ideas from the client team to improve my work as a tester, and hopefully I can start pushing these ideas onto others so that visualization comes naturally in all aspects of testing. By being able to visualize quality in both manual and automated testing, both become less ambiguous and more important to decision making.

2 comments:

Arnaud said...

Nice post, thank you.
It seems we work the same way, using mind maps to organise our work and visualise test activities. It's good to see "mind-mapped" testers :).
On my side, I have forked Freeplane to adapt it to my own needs and vision of the job. I hope to get enough time soon to share my main materials and experience with the community.

Pekka Marjamäki said...

Hi, Arnaud! Thanks for the comment. I'd like to see what other people have been doing with mind maps and visualization.

BR, Peksi