
Thursday, 18 June 2015

Bug handling workshop

I am running a thing called "Testing Tuesday" at the office. The concept is simple: sanctify Tuesdays to software testing. This comes in the form of helping project teams test, solving their testing-related problems, and promoting testing in every possible way. And to top it all off, an hour-long workshop on some testing-related topic. I will do a proper write-up on the subject later, but I wanted to share the coolest thing that happened during the 6th (out of 7) Testing Tuesday. The topic was "Bug handling" and the results were really awesome!

A week before this workshop we had a testing-oracle-related workshop, which I then promoted on Twitter. I had classified three bugs and mentioned them in my tweet. I then had a tweet exchange with Michael Bolton about classification.



That discussion made me want to redefine my Bug handling workshop, since I saw that the people I work with, me included, might have quite different approaches to handling the observations we make and receive about the products we work with. So after talking to Michael on Skype I decided to do the following:

Have people define a bug handling process from the very beginning to the very end. Then plot it out, draw diagrams, etc. to explain it. Then focus on the difficult parts and try to enhance the process.

So we started by defining where "bug handling" starts. I opened by saying that it starts from the moment there is code, but I was corrected: bug handling, or observation handling, starts with the first indication or deliverable of work. That might be the requirements documentation, the project plan, or whatever tool is used to run the project. It can be unwritten requirements. It can even be an idea! From the very beginning we start testing and observing the subject, and it is those observations that might require handling.

Based on the purpose and the need, we define the way we report, write things down, take notes, etc. If we are talking about testing ideas, the observations could be statements voiced out about the idea or its repercussions. When testing software, an observation may be something you see, hear, or feel that you write down or record. A bug report is a description of your observation, which is then used in various ways to help understand the observation.
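To make that concrete, here is a minimal sketch (in Python, with field names I made up for this post) of what an observation record might carry before anyone has decided whether it is a bug:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Observation:
    """One thing somebody noticed -- not yet a "bug", just an observation."""
    subject: str                # what was observed: an idea, a document, running software
    description: str            # what was seen, heard, or felt
    reporter: str               # who made the observation
    recorded_at: datetime = field(default_factory=datetime.now)

# An observation about a requirements document, long before any code exists.
obs = Observation(
    subject="requirements document v0.2",
    description="Section 3 contradicts the login flow described in section 1",
    reporter="Peksi",
)
```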

[Image: magnifying glass icon, source: https://commons.wikimedia.org/wiki/File:Magnifying_glass_icon_mgx2.svg]


"Observation is the active acquisition of information from a primary source. In living beings, observation employs the senses. In science, observation can also involve the recording of data via the use of instruments. The term may also refer to any data collected during the scientific activity." - Wikipedia (Observation)

It is these observations that we then start to analyze. That can be done in many ways. An observation written up as a bug report can be inspected for its validity. Analysis might require communication with the stakeholders, tools, classification algorithms, etc. These are the actions we employ to analyze the observation. It can be a snap decision or a statistical analysis. Whatever is done during the analysis, there is an outcome. The outcome might be trashing the bug report, invalidating the observation, classifying the issue, pigeonholing an inference, describing a behavior in a more concise way, etc. Analysis creates something out of the observation.
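Continuing the sketch above, those outcomes could be roughly enumerated like this; the names and the snap-decision rule are mine, not something we formalized in the workshop:

```python
from enum import Enum, auto

class AnalysisOutcome(Enum):
    """Possible outcomes of analysis -- trashing the report is an outcome too."""
    TRASHED = auto()       # the bug report is discarded
    INVALIDATED = auto()   # the observation itself is judged invalid
    CLASSIFIED = auto()    # filed under some category: bug, enhancement, question, ...
    CLARIFIED = auto()     # the behavior was described in a more concise way

def analyze(obs: Observation) -> AnalysisOutcome:
    # A snap decision; a real team might use statistics, tools, or a discussion
    # with stakeholders here. The point is that analysis always yields an outcome.
    if not obs.description.strip():
        return AnalysisOutcome.TRASHED
    return AnalysisOutcome.CLASSIFIED
```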

Based on the analysis, there might be an action to deal with the observation. It can be a change in code, adding something to a document, building a new tool, fixing a leaking pipe, redefining an argument, etc. There might not be any action towards the original subject of observation, but perhaps towards the process with which we test and challenge. There might even be actions to make the observation or the analysis different: a process improvement, learning a new skill, etc. Actions might spawn sub-processes and further actions. In the end, however, there is a follow-up on the actions.

The follow-up usually happens after the action. The follow-up depends on the observation in the sense that there might be a need to reconstruct the situation in which the observation happened. There might be a need to refer to an earlier version of the subject under test. There might even have been a shift in the subject based on the analysis. And the action itself dictates the follow-up, its magnitude and its nature. The follow-up might require regression testing around a bug finding, another round of reviews, rerunning the test automation suite, rethinking, etc.

These four basic actions became the guiding principle in all our testing processes.

Observation - Analysis - Action - Follow-up
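If you like seeing a process as code, the loop might be sketched like this, building on the earlier snippets; a real process loops back and branches far more than this linear pass suggests:

```python
def handle(obs: Observation) -> None:
    """One linear pass through the four steps. In reality each step can
    loop back, branch, or spawn sub-processes of its own."""
    outcome = analyze(obs)                      # Analysis: always yields an outcome
    if outcome is AnalysisOutcome.CLASSIFIED:   # Action: only if the analysis calls for one
        print("Action: change code, update a document, build a tool, ...")
        print("Follow-up: retest, re-review, rerun the automation suite")
    else:
        print(f"No action towards the subject ({outcome.name.lower()})")

handle(obs)  # using the example observation from the first sketch
```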

But it wasn't enough. Every single one of these actions required supporting activities. Observation requires note-taking, testing skills, tools, etc. Analysis requires processes, practices, domain knowledge, etc. During our workshop discussion I picked up some key words that were used, and I then generated two clouds: the core activities and the supporting activities. The core activities are not enough on their own; the context states what kind of supporting activities are needed to make the core valuable. The supporting activities are of no value without the core, but the core loses value without any supporting activities.

Here are some of the things we came up with.

[Two word clouds: the core activities and the supporting activities.]
Like I mentioned, all this is useless without the context. Every scenario requires a context that states the most useful way to approach "bug" handling. Pair review requires different supporting activities than beta testing, but they both have the same core activities. The tools that are used might differ: you can use post-its, JIRA, QC, email, Surveypal, etc. to communicate your observations. During the analysis those observations might be enveloped by a tool to create virtual stickers and notes. There might even be a template that is used to report an observation. Those observations might be classified, prioritized, trashed, or whatever.

Based on the analysis, at some point in time, something might be done. When I say might, I mean that it is possible that an observation is lost and never acted upon. You can call losing an observation an "action", but that is a philosophical point; let us assume that every observation has an action. The action might require communication, changing something, tools, practices, processes, people, etc. Those actions then have follow-ups. A follow-up can be enveloped in the same tool the observation entered through. It can even have a process of its own.
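A crude way to picture that context dependence, with pairings I made up purely for illustration:

```python
# A made-up mapping from context to supporting activities and tools.
# The core activities (observe, analyze, act, follow up) stay the same;
# only the support around them changes with the context.
SUPPORT_BY_CONTEXT = {
    "pair review":  {"tools": ["post-its"],      "support": ["note taking", "discussion"]},
    "beta testing": {"tools": ["JIRA", "email"], "support": ["triage", "prioritization"]},
    "survey":       {"tools": ["Surveypal"],     "support": ["classification", "statistics"]},
}

def support_for(context: str) -> dict:
    # Deliberately no default: an unknown context should force you to stop and think.
    return SUPPORT_BY_CONTEXT[context]
```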


To conclude, there is no best practice for handling observations. Not every observation is a bug. Not every bug needs to be handled the same way. The most valuable thing I got out of my workshop was "mind the context!" Think of the value of your process to stakeholders. Think of the needs that have to be fulfilled. Think of the feedback loop. Think of the people involved in the different tasks.

That is all today.

- Peksi

