Monday, 11 June 2012

Making a requirements pie

This is a summary of the talk by Ainars Galvans at Nordic Testing Days 2012 and how I viewed the subject. The topic was “Care for quality, not for bugs! Do you know the difference?”.

I met Ainars the day before the conference at the hotel and we discussed bugs and quality. We agree on the overall quality picture, but I disagree with some points he made. So here is what happened, how I saw it, and some thoughts on challenging his claims.

“The best tester isn’t the one who finds the most bugs or embarrasses the most programmers. The best tester is the one who gets the most bugs fixed.” –Cem Kaner

See the difference?
True and true. Ainars had great thoughts about increasing the value of the bugs. By focusing on the bugs whose story is compelling to the developer (and to the triage of people deciding which to fix), you will get more out of the bugs that you find. When I think of myself as a tester, I tend to gather as much information about the product as I can and thus uncover as many bugs as possible. I know that by doing so I learn about the product, the risks, the critical areas, and the things that the people making decisions value the most. When Ainars said “don’t report the bugs that aren’t getting fixed”, I was shocked. Am I doing it the wrong way? I may have a huge amount of rejected bugs, but it’s my way of learning. What if I uncover some minor-looking manifestation of a bug that turns out to be a showstopper? I don’t know if I don’t ask and communicate.

Usually when you start communicating with the developer about a bug that looks like a showstopper, he will immediately see that it should be fixed. If he can pinpoint the problem, you won’t even need a bug report, as long as the problem is sufficiently communicated onwards. So this is where I disagree with Ainars: I will write bug reports, but I get the important ones fixed.

Ainars had great thoughts about prioritizing testing. He said that he’s not going to spend his time testing something he KNOWS is going to pass, so he tests something he doesn’t know will pass. He saves time doing so and can focus on the important areas of testing. So you should always do the most important testing at every moment.

How do you know what is important? Ask! I ask if there is some area that is valued by the product owner and I focus on that. Then I ask what the next valuable thing is and focus on that. By the time I get some testing done, I can start to formulate my own view of the importance of different things. So Ainars had a good thing going on there, as he knew the product so well that he was able to guide his own testing prioritization using his knowledge of the product — knowledge acquired by learning and testing the product.

Picking up requirements.
Ainars also introduced a cool way to do exploratory testing. He called it “Requirements driven exploratory testing”. The requirements were almost like test cases, but more like missions or topics, which were then tested. They were measured using traffic lights: Green (good enough testing AND good enough quality), Yellow (not started) and Red (not good enough testing OR not good enough quality). I challenged the concept, as the testing team was focusing only on documented requirements — and I still do! There is a need to also test things that the client takes for granted and things that are discussed but not documented. I also pointed out that requirements are usually ambiguous, and that “requirements are not items to be gathered”. They’re not like apples that you can put into your basket and bake a testing pie out of. Perhaps Ainars can clarify that; from the presentation I did not get an answer.
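The traffic-light scheme combines two separate judgements (was the testing good enough, and was the quality good enough) into one status per requirement. As a minimal sketch of that logic — the function name and structure are my own illustration, not Ainars’s actual process or tooling — it could look like this:

```python
# Sketch of the traffic-light status logic described above.
# Names here are hypothetical, chosen just to illustrate the rules.

def requirement_status(testing_started, testing_good_enough, quality_good_enough):
    """Map two judgements onto one traffic light.

    Green:  good enough testing AND good enough quality.
    Yellow: testing not started yet.
    Red:    not good enough testing OR not good enough quality.
    """
    if not testing_started:
        return "Yellow"
    if testing_good_enough and quality_good_enough:
        return "Green"
    return "Red"

print(requirement_status(True, True, True))    # Green
print(requirement_status(False, False, False)) # Yellow
print(requirement_status(True, True, False))   # Red
```

Note that Yellow has to be checked first: a requirement that hasn’t been tested at all would otherwise also match the Red condition.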

In summary, I got a lot out of the talk, as it provoked me to think about how I do my testing, and I also got a chance to challenge the speaker. I feel that Ainars has a lot to say about this, and I will be checking his blog as often as I can. I hope he has a way to address my concerns about the requirements and the “not-important” bugs.
