
Tuesday, 23 June 2015

What happens when a wannabe rockstar does a lightning talk?

"Are you f***ing ready to rock?!"
The crowd is wild. The stage is lit. The announcer gets on the stage.

"Ladies and gentlemen! Are you ready for the coolest rockstar in the world?" The crowd roars its approval.

"Are you f***ing ready to rock?" The crowd goes wild...

Ok. That's not how it happened for me. That is what I wanted to happen when I did my lightning talk at the Nordic Testing Days 2015. Actually, it went something like this:

Helena Jeret-Mäe gets on the stage. "Next we have Pekka Marjamäki on the topic of Testing Tuesday," she says. Then I get up from the floor and walk to the front of a room full of people. I have my hat full of badges, my Superman t-shirt, suspenders hanging down: all cool and ready to rock the place.

"OK, people!" I start my performance. I divide the room into two groups. "The group on this side shouts 'TES' and the group on this side shouts 'TING'. Ready? Tes-ting! Tes-ting!"

The crowd starts shouting. They shout "testing". "Louder!" I shout. The room roars for a minute or so! Then, at its peak, I silence the crowd and start my talk on Testing Tuesday.

Testing Tuesday doesn't sound that weird. It is a concept my colleague Petri Sirkkala from Solita and I came up with. It spawned from a need to teach testing at my company. I will explain how it works in detail (accompanied with a video, perhaps) in a later post. At the Nordic Testing Days 2015 I briefly introduced the concept. It runs for seven weeks, every Tuesday, each week with a workshop of its own, alongside helping our colleagues with their testing problems. The most recent blog post is a write-up of the 6th Testing Tuesday workshop. Apart from actually helping people test, strange things happen during the Testing Tuesdays, e.g. us two testing dudes walking around the office shouting "Testing Tuesday" and playing Sex Pistols from an old cassette player, or posting testing problems on a whiteboard in the hallway.

The main goal of Testing Tuesday is to promote testing and to sow seeds of interest in people who aren't yet that much into testing. The second goal is to help people with their testing-related challenges. The third is to have fun.

I think my objective at the lightning talk was to convey the energy and enthusiasm we pour into Testing Tuesday: the attitude to fight against poor practices, the bravery to stand up and challenge, the eagerness to improve. Honestly, I can't remember what I actually said during the talk; I had so much fun. I do believe that people got the key points out of my talk.

Be open about your passion towards testing.
Share knowledge and help others.
Be brave and have fun.

"We want more! We want more!" The crowd shouted after my talk... At least I wanted it to. Alas, it didn't...

- Peksi

Thursday, 18 June 2015

Bug handling workshop

I am running a thing called "Testing Tuesday" at the office. The concept is simple: sanctify Tuesdays to software testing. This comes in the form of helping project teams solve their testing-related problems and promoting testing in every possible way. And to top it all off, an hour-long workshop on some testing-related topic. I will do a proper write-up on the subject later, but I wanted to share the coolest thing that happened during the 6th (out of 7) Testing Tuesday. The topic was "Bug handling" and the results were really awesome!

A week before this workshop we had a workshop on testing oracles, which I then promoted on Twitter. I had classified three bugs and mentioned them in my tweet. I then had a tweet exchange with Michael Bolton about classification.



That discussion made me want to redefine my bug handling workshop, since I saw that the people I work with, me included, might have quite different approaches to handling the observations we make and receive about the product we work on. So after talking to Michael on Skype I decided to do the following:

Have people define a bug handling process from the very beginning to the very end. Then plot it out, draw diagrams, etc. to explain it. Then focus on the difficult parts and try to enhance the process.

So we started by defining where "bug handling" starts. I said it starts from the moment there is code, but I was corrected. Bug handling, or observation handling, starts with the first indication or deliverable of work. That might be the requirements documentation, the project plan, or whatever tool is used to run the project. It can be unwritten requirements. It can even be an idea! From the very beginning we start testing and observing the subject. It is those observations that might require handling.

Based on the purpose and the need, we define the way we report, write things down, take notes, etc. If we are talking about testing ideas, the observations could be statements voiced about the idea or its repercussions. When testing software, an observation may be something you see, hear, or feel, which you write down or record. A bug report is a description of your observation, which is then used in various ways to help understand the observation.

https://commons.wikimedia.org/wiki/File:Magnifying_glass_icon_mgx2.svg


"Observation is the active acquisition of information from a primary source. In living beings, observation employs the senses. In science, observation can also involve the recording of data via the use of instruments. The term may also refer to any data collected during the scientific activity." - Wikipedia (Observation)

It is these observations that we then start to analyze. This can be done in many ways. An observation in the form of a bug report can be inspected for its validity. Analysis might require communication with stakeholders, tools, classification algorithms, etc. It is these actions that we employ to analyze the observation. It can be a snap decision or statistical analysis. Whatever is done during the analysis, there is an outcome. The outcome might be trashing the bug report, invalidating the observation, classifying the issue, pigeonholing an inference, describing a behavior more concisely, etc. Analysis creates something out of the observation.

Based on the analysis, there might be an action to deal with the observation. It can be a change in code, adding something to a document, building a new tool, fixing a leaking pipe, redefining an argument, etc. There might not be any action towards the original subject of observation, but perhaps towards the process with which we test and challenge. There might even be actions to make the observation or the analysis itself different: a process improvement, learning a new skill, etc. Actions might involve sub-processes and further actions. In the end, however, there is a follow-up on the actions.

The follow-up usually happens after the action. It depends on the observation in the sense that there might be a need to reconstruct the situation in which the observation happened. There might be a need to refer to an earlier version of the subject under test. There might even have been a shift in the subject based on the analysis. And the action itself dictates the magnitude and nature of the follow-up. The follow-up might require regression testing after a bug fix, another round of reviews, rerunning the test automation suite, rethinking, etc.

These four basic actions became the guiding principle in all our testing processes.

Observation - Analysis - Action - Follow-up

But it wasn't enough. Every single one of these actions required supporting activities. Observation requires note taking, testing skills, tools, etc. Analysis requires processes, practices, domain knowledge, etc. During our workshop discussion I picked up some of the key words that were used and generated two clouds: the core activities and the supporting activities. The core activities are not enough on their own. The context states what kind of supporting activities are needed to make the core valuable. The supporting activities are of no value without the core, but the core loses value without any supporting activities.
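To make the four-step cycle concrete, here is how I might sketch it in Python, with the supporting activities passed in from the outside. This is purely my own illustration, written after the fact; the names and the wiring are mine, not something we produced in the workshop.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Observation:
    """Anything noticed about the subject under test: seen, heard, or felt."""
    description: str
    notes: List[str] = field(default_factory=list)


def handle(observation: Observation,
           analyze: Callable[[Observation], str],
           act: Callable[[str], str],
           follow_up: Callable[[str], None]) -> None:
    """The core cycle: Observation - Analysis - Action - Follow-up.

    Supporting activities (note taking, tools, domain knowledge) are passed
    in as callables, because the context decides what they should be while
    the core cycle stays the same.
    """
    outcome = analyze(observation)  # e.g. classify, prioritize, or trash
    action = act(outcome)           # e.g. fix code, update a document
    follow_up(action)               # e.g. regression test, re-review


# One hypothetical context: snap-decision analysis, print-based follow-up.
handle(
    Observation("Login button overlaps the footer on small screens"),
    analyze=lambda obs: f"valid UI bug: {obs.description}",
    act=lambda outcome: f"fix scheduled for '{outcome}'",
    follow_up=lambda action: print(f"follow-up: retest after {action}"),
)
```

The design choice mirrors the point above: the core cycle is fixed, while the supporting activities are parameters, because the context decides what they should be.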

Here are some of the things we came up with.




Like I mentioned, all this is useless without the context. Every scenario requires a context that states the most useful way to approach "bug" handling. Pair review requires different supporting activities than beta testing, but they both have the core activities. The tools that are used might differ: you can use post-its, JIRA, QC, email, Surveypal, etc. to communicate your observations. During the analysis those observations might be enveloped by a tool that creates virtual stickers and notes. There might even be a template that is used to report an observation. Those observations might be classified, prioritized, trashed, or whatever.

Based on the analysis, at some point in time, something might be done. When I say might, I mean that it is possible that an observation is lost and never acted upon. You can call losing an observation an "action" too, but that is a philosophical point. Let us assume that every observation has an action. The action might require communication, changing something, tools, practices, processes, people, etc. Those actions then have follow-ups. A follow-up can be enveloped in the same tool the observation entered through. It can even have a process of its own.
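Continuing the sketch from above (the wiring is again invented for illustration), the same core cycle can be dressed with different supporting activities per context, say post-its for a pair review and JIRA for beta testing:

```python
# Building on the sketch above: same core cycle, different supporting
# activities for two contexts. All names are invented.

def postit_analyze(obs: Observation) -> str:
    # Pair review context: a snap decision scribbled on a post-it.
    return f"post-it note: {obs.description}"

def jira_analyze(obs: Observation) -> str:
    # Beta testing context: the observation is classified and prioritized.
    return f"JIRA ticket, priority medium: {obs.description}"

for analyze in (postit_analyze, jira_analyze):
    handle(
        Observation("CSV export drops the header row"),
        analyze=analyze,
        act=lambda outcome: f"change requested ({outcome})",
        follow_up=lambda action: print(f"follow-up: verify {action}"),
    )
```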


To conclude, there is no best practice for handling observations. Not every observation is a bug. Not every bug needs to be handled the same way. The most valuable thing I got out of my workshop was "mind the context!" Think of the value of your process to stakeholders. Think of the needs that have to be fulfilled. Think of the feedback loop. Think of the people involved in the different tasks.

That is all for today.

- Peksi


Sunday, 7 June 2015

First thoughts on people and bravery – Nordic Testing Days 2015

Nordic Testing Days 2015. Three days of tutorials, tracks, and workshops. Three days of people. Three days of awesomeness. One might think it is a cliché, but a conference is nothing without the people.

First thing on day one, at breakfast, I saw Kristoffer Nordström, my Swedish friend with a knack for Python. After that moment I knew the conference couldn't be anything but pure awesomeness! Then I met Guna Petrova and Helena Jeret-Mäe at the registration. Those women (among the other organizers) are the beating heart of the conference. Then everywhere I went, new and old faces. There was so much energy in the air I could breathe in testing and conquer the world with it.

That is how it feels to attend a conference: you feel everyone's energy and are empowered by it. Santosh Tuppad (whom I met during a coffee break on the first day) said the same thing. The people around you give you energy. If the people just happen to think alike, they can give you much more! I have no idea what the scientific basis for that is, but I think it has something to do with brainwaves, facial expressions, and the false belief that there is anything scientific about it. ;) At least I felt like a king.

Now that the conference is over, I am spent. I think my nagging cough ate part of my energy, especially on the third day (and Cards Against Humanity until 3am had nothing to do with that). Still, I was able to gather enough energy to pull off an extempore workshop in the hallway during a coffee break, where we tried to develop a testing strategy for a webshop. It became a crowd magnet and we had huge fun doing it.

The topmost thing for me as a delegate rather than a speaker (although I did my best to speak out whenever I could: in the tutorial, workshops, lightning talks, and the hallway) was feeling that I was a promoter of bravery. Bravery to speak out. Bravery to challenge. Bravery to be challenged.

I think being a software tester is about being brave. We stand on the podium at conferences and spill our guts in front of people. Rob Sabourin was almost in tears at his keynote. Erik Brickarp admitted failures and even arrogance in his track. People make mistakes and they get up to talk about them. It takes bravery to ask questions from the audience. It takes bravery to ask "Huh?" when one doesn't understand.

So, I’m going to be brave. In everything I do, I shall try to be braver than the next guy. I shall show that I have the guts to do things, swim against the current, challenge. Maybe then other people have it easier to be brave also, even a little bit. Maybe I can show example and lower their threshold. In the following 12 months that bravery shall take me in to every possible event where I can speak up.

As for the future, the following might make it into blog posts (I haven't fully decided yet):

  • My lightning talk on Testing Tuesday (TES-TING! TES-TING!)
  • My workshop on Test Strategy in 10 minutes (We didn’t make a testing strategy at all)
  • Kristjan Uba’s tutorial (Rogue Legacy to win!)
  • Blood sausage testing (don’t ask… or ask Sami Söderblom)
  • Context dependency (based on Bill Matthews' and Ilari Aegerter's tracks)
  • Cards Against Humanity (I just want to write something really clever on this)
  • Naked Tester (I hate you, Richard, for injecting this idea into my head!)


The conference is over. The lights have gone out. In the end, everyone's tired but excited. I want to thank the following people for great discussions throughout the conference: Sami Söderblom, Kristoffer Nordström, Erik Brickarp, Santosh Tuppad, Richard Bradshaw, Jekaterina Krivega, Kristjan Uba and Ilari Henrik Aegerter.

I want to thank the organizers of the Nordic Testing Days 2015 for every single thing they did! The light dancers, the magician, the games, the venue, the food, the drinks, the people! I will see you next year! I promise!



Test Pistols live forever!
- Peksi