Tuesday, 14 July 2015

"Thinking like a tester" workshop

As I have mentioned before, we run this round of workshops under the Testing Tuesday concept. I have already covered two of the latest workshops as blog posts. This one is an attempt to cover the first of the seven, a workshop called "Thinking like a tester".


Why did we talk about thinking?

I like to think that there is no testing without thinking. If thinking isn't involved, it is not testing. A machine doesn't think, thus it merely checks. A human challenges, observes, infers, models, and so on all the time while looking at a test object. A human tester thinks. That is why we need to practice different ways of thinking. It's like exercising a muscle, but the muscle is our brain.

There are a number of different ways to think, many of them overlapping, and we should try to perfect those skills. It is valuable to recognize different patterns of thinking in order to be better able to solve problems. Testing is essentially problem solving: we try to notice things and then figure out whether an observation might be an issue. By having multiple tools in our toolbox we don't have to rely on just one way of thinking ("when you have a hammer, all you see is nails"), and that makes solving the problem much easier.

My goal with this workshop was to introduce a few different thinking mechanics to the audience and have them apply each mechanic in an exercise.


What did we cover?

We were able to cover three exercises, the first two of which were borrowed from improvisational theater. The first one was "Reinvent the wheel".

I split people into groups and gave them the assignment to create a method of transportation. The only thing given to them was the fact that the context in which they were to solve the problem had no wheel yet. They weren't supposed to create the wheel but an alternative, equally effective method of transportation. The task was made a bit difficult by my telling them to start every sentence in their brainstorming with "Yes, but". This was an effort to make people challenge what had already been agreed upon.

People started the exercise, but it seemed they didn't fully see its point. My intent was to make people challenge the assumptions and ideas already on the table, thus making it an exercise in critical thinking. I might have missed the mark slightly, but people did have fun. I noticed that the groups needed a leader and some catalyst to provoke the challenging. I visited the groups and challenged their ideas by asking a question and then replying to their answer with a "yes, but" phrase. That stirred the pot slightly. I believe it became an exercise in team dynamics more than in thinking patterns.

After 10 minutes we debriefed the ideas they came up with and moved on to the next task, "Reinvent storing".

Once again people worked in groups. The task was to invent a method of storing things without using shelves or stacking things on top of each other. They again had two constraints on the context: there was no concept of shelves or stacking, and they had to begin every idea by saying "Yes, and". This was supposed to be an exercise in creative thinking: finding new ideas based on old ones, accepting what is already decided and building on it.

This task flowed better than the first one, and it spurred some crazy contraptions for storing items, from pulley-operated platforms to portable black holes. There was certainly creativity in the air! Once again I felt that the exercise fell a bit short, and people had questions about how it related to testing.

The third exercise was a bit shorter because debriefing the first two tasks took so long. It was an exercise in lateral thinking. I explained lateral thinking on a broad level, then gave them a problem to solve using it.

The story went something like this: a merchant owes money to an evil man who is in love with the merchant's daughter. The merchant can't pay the debt. The evil man proposes a wager: he puts a white and a black stone into a pouch. If a white stone is pulled out, the debt is forgotten. If a black stone is pulled out, the debt is considered paid in full AND the daughter is forced to marry the evil man. There seems to be a 50/50 chance. However, the evil man secretly swaps the white stone for a second black one.

I asked the groups how they would solve the problem so that none of the participants loses face (i.e. no one is revealed as a liar, forced into marriage, killed, etc.). The task was once again pretty difficult since the premise was so vague, and I had to answer a lot of clarifying questions before the groups could actually start working on their solutions. They managed to think outside the box on many occasions, and the ideas were quite feasible, I think.


What was the most valuable thing to me?

Having done three exercises on thinking, I realized that we had merely scratched the surface. I thought I would have time for a "Thinking fast/slow" exercise, but everything went by so fast. The essential thing might have been simply having fun with my coworkers, making them do something out of the ordinary, and promoting testing as a thinking activity as opposed to a technical task of creating test cases to be run on some virtual server.

The tasks were obviously quite difficult, but they laid good groundwork for the next workshops. The people were the essence, not me blabbering at the front (although I do like that too). The more workshops I held, the more attendee-driven they became. I facilitated; they provided the material.


What would I do next?

Since there will be another "tour" of Testing Tuesday, I will refine this workshop. I will explain the tasks in more detail and allow more time for people to explain what the connections to testing could be. Instead of making it a lecture, I will hand the mic to the attendees and let them explain why thinking is important. Maybe I'll try to add some other thinking exercises, like "Think like a freak" and "Thinking fast/slow".

I am also thinking of writing a blog post on lateral thinking, since I find it a really important skill. I believe a few of my community colleagues have already done that, so I might have to take a different approach. We'll see.

Anyhow, this was the workshop on Testing Thinking. If something wasn't clear or you have ideas on how to make the workshop better, drop me a comment.

- Peksi

Tuesday, 7 July 2015

Testing technique workshop

The last part of Testing Tuesday's "Test Pistols Tour" was a workshop about testing techniques. The original plan was to present a list of techniques and then run exercises to learn them. Scheduling forced us to change our approach, because we had no time to create environments for the exercises.

So, I turned to the community.

When I dragged my butt to the conference room I was expecting just a handful of people, 3 or 4, but eventually we had 7. I think there was a bit of tour fatigue in the air, since this was the seventh workshop. Still, I had seven brave soldiers in the meeting room.

“I have changed the rules!” I said. “I ain’t gonna tell you about testing techniques. You’re gonna tell me about testing techniques.”

Now the plan was the following: pair people up, have them test something and describe their testing, then discuss what kind of problem they were trying to solve with their chosen approach. Sounds simple enough. I was a bit uncertain whether people could describe their testing at a level from which I could derive a technique. The challenge was thrown.

I told them to open Word. The assignment was to test the "Find and replace" functionality and describe to your pair what you did and why. I asked the teams some questions during the 10 minutes of testing and made them focus on actually telling why they chose to do something. After the ten minutes, we started talking about how the testing was done. These are the key points we came up with.

Hot-key testing

The first team described what they did by explaining how they searched for the functionality. They were trying to find different ways to access it. They found out that the hot keys vary between operating systems. Moreover, the hot keys are customizable, which enables further combinations. "Ctrl+F" was the easiest way to find the function, because it happens to be the same in a lot of other software as well (a comparable product, and familiar to the user). On a Mac there was no "Ctrl+F", so the hot key was a bit harder to find.

Based on their approach to using the hot keys, we gathered that the technique can be applied to a lot of Windows-based software (and why not Mac-based too, though I don't have enough experience with Mac hot keys quite yet). The commonly known hot keys like "Ctrl+C / V / X / Z" etc. are quite easy to test. The tests are quick, cheap, and very generic, which makes the technique quite useful.
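
As a rough sketch, this kind of hot-key pass could even be semi-automated. The snippet below assumes the pyautogui library and an application window that already has focus; the key combinations are just the common defaults, not anything we verified in the workshop, and checking the outcome is still left to the human.

    import sys
    import time

    import pyautogui  # third-party library for sending keystrokes

    # Ctrl on Windows/Linux, Command on a Mac (a per-OS variance in itself).
    MODIFIER = "command" if sys.platform == "darwin" else "ctrl"

    COMMON_HOTKEYS = {
        "find": "f",
        "copy": "c",
        "paste": "v",
        "cut": "x",
        "undo": "z",
    }

    for action, key in COMMON_HOTKEYS.items():
        pyautogui.hotkey(MODIFIER, key)  # send e.g. Ctrl+F to the focused app
        time.sleep(0.5)                  # give the UI a moment to react
        # Checking the outcome stays manual in this sketch: watch whether
        # the expected dialog or action appears for each combination.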

Premise variance testing

When the group was trying to find different ways of accessing the functionality (hot keys, context menus, sidebars, ribbons, etc.), I asked whether the behavior of the functionality changes when you access it from different origin points. If you change the premise, can the functionality change?

We started to think about whether we could apply this to other solutions and products, and we came up with "premise variance testing". When you change the premise conditions of a function, its behavior might change. The technique can also be derived into "step variance testing", where you mutate one or many elements within the process.
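
As a rough illustration of the idea in code, the sketch below assumes pytest, and find_via_dialog and find_via_ribbon are hypothetical stand-ins for the different origin points; in a real product they would drive the actual UI entry points.

    import pytest

    def find_core(text: str, needle: str) -> int:
        """The shared implementation every entry point should end up in."""
        return text.count(needle)

    def find_via_dialog(text: str, needle: str) -> int:
        # Imagine dialog-specific setup happening here.
        return find_core(text, needle)

    def find_via_ribbon(text: str, needle: str) -> int:
        # Imagine ribbon-specific setup happening here.
        return find_core(text, needle)

    # Vary the premise (the origin point) while keeping the inputs fixed:
    # any difference in results points at origin-specific behavior.
    @pytest.mark.parametrize("entry_point", [find_via_dialog, find_via_ribbon])
    def test_find_behaves_the_same_from_every_origin(entry_point):
        assert entry_point("one two one", "one") == 2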


Help testing

When one team was trying to figure out how the function worked, they pulled up the manual. The help can be quite simplistic for an experienced user, but it acts as an oracle on many occasions. In help testing one can test the help itself against the product, or test the product against the help. In either case, one acts as the test object and the other as the oracle.

This technique could be generalized into all kinds of oracle material testing. We can test against oracles that are used by various stakeholders, e.g. requirements or design documentation. We test the product, ask "is this ok?", and then try to settle the question by referring to the oracle. We might also have one oracle (e.g. a human oracle telling us how things should work) and then test another oracle based on that new knowledge (the human oracle disputes the written document). "Help testing" might become "Oracle testing", but that name doesn't give me good vibes. ;) Help could actually be any material that helps us do testing.
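
To make the oracle relationship concrete, here is a minimal sketch. The replace_all() function is a hypothetical stand-in for the product, and the help claim is imagined; nothing here comes from Word's actual help.

    def replace_all(text: str, find: str, replace: str) -> tuple[str, int]:
        """Hypothetical stand-in for the product's Replace All; returns the
        new text and a count, mirroring a "N replacements made" message."""
        return text.replace(find, replace), text.count(find)

    # Imagined claim from the help: "Replace All reports how many
    # replacements were made." Help is the oracle, product the test object.
    new_text, reported = replace_all("one two one", "one", "1")
    assert reported == 2, "the product disputes the help text"

    # Turned around, a failure here could just as well mean the help itself
    # is wrong: either side can play the test object against the other.
    assert new_text == "1 two 1"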


Data roundtrip testing

A team was testing by replacing a word with gibberish and then replacing that back to the original value ("Pekka" -> "ASDFGH" -> "Pekka"), and they wanted to know if the same number of entries was changed both times. So basically the idea was to restore the original data without actually reverting the state. Mathematically I think this is called an "inverse function": first we apply the normal function, followed by its inverse. "Roundtrip" means that you return to where you started from.

We discussed whether roundtrip testing is actually a generic thing that can be applied to state as well. It is possible to revert the system to a previous state without any information whatsoever about the states visited in between; this might actually be a problem in itself, but we chose to narrow our technique down to mere data.
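
Sticking to data, the check itself is simple. Below is a minimal sketch using the same kind of hypothetical replace_all() stand-in as above; in practice the real product would do the replacing.

    def replace_all(text: str, find: str, replace: str) -> tuple[str, int]:
        """Hypothetical stand-in for the product's Replace All."""
        return text.replace(find, replace), text.count(find)

    original = "Pekka wrote this. Pekka tested this."
    garbled, count_out = replace_all(original, "Pekka", "ASDFGH")
    restored, count_back = replace_all(garbled, "ASDFGH", "Pekka")

    # The inverse function should touch the same number of entries...
    assert count_out == count_back, "a different number of entries changed"
    # ...and bring the data back to exactly where it started from.
    assert restored == original, "the roundtrip did not restore the data"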

Minimum data

We did find some testing ideas while describing the techniques, and I think this one is worth mentioning. A team wanted to test with as little data as possible. That is one variant of premise variance testing, where we focus solely on varying the data instead of the states. This can mean testing the form defaults, testing without any inputs (NULL, n/a, whitespace, etc.), removing metadata, and so on, and it can find bugs in the exception handling logic.
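
As a small illustration, a minimum data pass over a hypothetical replace_all() stand-in could look like the sketch below; the cases and the function are assumptions, not the team's actual tests.

    def replace_all(text: str, find: str, replace: str) -> str:
        """Hypothetical stand-in for the product's Replace All."""
        return text.replace(find, replace)

    # Each case strips the inputs down further to probe exception handling.
    minimum_inputs = [
        ("", "Pekka", "X"),      # empty document
        ("Pekka", "", "X"),      # empty search term
        ("Pekka", "Pekka", ""),  # empty replacement, i.e. deletion
        ("   ", " ", ""),        # whitespace-only data
    ]

    for text, find, replacement in minimum_inputs:
        try:
            replace_all(text, find, replacement)
        except Exception as exc:
            # A crash on minimal data is a finding in itself.
            print(f"replace_all({text!r}, {find!r}, {replacement!r}) raised {exc!r}")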


Conclusion


All in all, the testing techniques we found have already been described in other sources, but these made sense to us and felt important. The terms are more tangible to us than "product tours" or some of the techniques found in books. We defined the terms ourselves and learned how to describe them in a language that suits our context.

I know that at least premise variance testing stuck; I have used it a few times now to describe what I do. It makes sense to repeat this exercise at a different depth, uncover new, undescribed techniques, and make them part of our toolbox. After a handful of these sessions, we might have enough skill to describe our testing to any stakeholder in a language we share and understand.

Sadly that was the last of the Testing Tuesday workshops on this tour. There will be another tour in Helsinki, and I shall write up as much as possible from those sessions.

- Peksi