
Thursday, 8 August 2019

Quest for Quality 2019 at a Glance

Hi all! It’s been a long time since I’ve posted anything on my blog. I’ve been busy with various things, but I have now decided that it is a good time to share some thoughts about the forthcoming fall. I’ve been invited (with my good friend and colleague Jani Grönman) to speak at three conferences this year: TestCon Europe in Lithuania, HUSTEF in Hungary and Quest for Quality in Ireland. My plan now is to share my thoughts about all these conferences.

I am not an expert in some of the subject matter of the talks I’m about to rate (on my Angry Birds scale), and I am sure I misunderstand a whole lot based on the descriptions on the website. As the title says, this is a glance. It is not a comprehensive analysis. I have described my process in more detail in this post, but here’s the gist of it:

I will grade topics using 0-3 stars per area, in FOUR areas:
Session value – short time span (How much can I get out of the session tomorrow – next year?),
Session value – long time span (How much can I implement in my work and teach to my colleagues and my community?),
Steal-ability (How much of it am I willing to borrow and further develop to make it better and, more importantly, make it my own?), and
Challenge-ability (My past knowledge on the topic and my willingness to challenge the session contents.)
I’ll choose the sessions as follows:
Choose two sessions from each day based on my interest in the title
Choose one session from each day that I pick randomly

This is Conference at a glance - Q4Q 2019!

The conference is mostly about AI and ML, but my talk “Test Coaching” has been chosen to add a more general perspective to the technical set of talks. I will write about test coaching either here or on some other platform, and I’ll try to remember to update this post with a link to that one.

The concept of AI replacing testers is absurd. As a tool for doing better testing, though, a tester will most definitely benefit from new ideas and tools in the toolbox. I promise not to be judgmental about the topics, but I am most certainly biased towards a non-technical approach to testing and I might be over-critical with some topics. Bear with me, though. This is subjective after all. ;)

One thing to note: I haven’t met most of the presenters, so my ratings are based on the bios and on my quick research on each person.

Keynotes (There seem to be quite a lot of those)

Tariq King - AI-Driven Testing: A New Era of Test Automation

As a context-driven and holistic tester, I am keen on learning new things to boost testing efficiency and the reliability of the information testing provides. The way Tariq talks about AI as a tool is fascinating, and I’m keen to see a live demo of how the bots work. I mean, they might work on their own, but the practical application to software testing is what I want to hear about. I hope I get answers on how I can use AI in my daily work IN PRACTICE.
Short time value: * (Implementing this is not yet on my immediate agenda. Let’s see…)
Long-time value: ** (The topic is very interesting and if I cannot implement it immediately, I can at least share my views on it.)
Steal-ability: - (The description offers more questions than answers, so the steal-ability is a mystery for now. My technical abilities might hinder me from taking the topic to the next level, but let’s see.)
Challenge-ability: ** (I am sure I can find things that rub me the wrong way. As a tester who roots for using one’s brain to test, outsourcing thinking to a machine sounds dangerous and reckless.)

Pallavi Kumar - When AI Meets Software Testing

The keynote description leaves a lot in the dark, but I guess it follows the same track Tariq sets in his keynote. The future of testing always intrigues me. It also frightens me to some degree. Based on her background, Pallavi Kumar uses AI for various other purposes, including mental health (which is close to my heart).
Short time value: * (I hope to catch some good ideas but implementing them in my daily work seems a bit distant. I am, however, very intrigued about hearing her talk and maybe chatting with her afterward on practical applications of AI in mental health.)
Long-time value: ** (The future of testing is an interesting topic, but I fear this will be mostly praise for AI. Like with Tariq’s talk, I should at least be able to talk about it. Also, the mental health angle makes me want to hear this talk.)
Steal-ability: - (Same as Tariq, basically.)
Challenge-ability: - (I don’t see “un-challenge-ability” being a bad thing. This should be purely about new insight on applications of AI.)

Michael Clarke - “Robots Took My Job!”: Where do Testers Fit In A Future Fueled by AI & ML

Michael’s view about AI & ML is quite close to mine. Where do AI & ML leave testers? I want to hear more about this. Since Michael doesn’t seem like your run-of-the-mill technical engineer, I feel he’s a kindred spirit. His being the sole tester in a team resembles my work as a tester over the last few years. I am most definitely looking forward to this talk.
Short time value: ** (Talking about the importance of the human tester matters, and I totally agree with the point. I will most definitely get a lot of ideas from this talk in both the short and the long run.)
Long-time value: ** (Like I mentioned, this will be a great talk for me.)
Steal-ability: ** (The topic is close to my heart. As a holistic tester I will further develop this idea to benefit my company and the testers in my community.)
Challenge-ability: * (I feel there could (and should) be things I don’t agree with. I hope the talk shows that humans are the ones doing the testing. AI is just a tool!)

Yasar Sulaiman - Will Artificial Intelligence Take Over QA Jobs?

There are quite a lot of talks about testers losing their jobs. In my own talk, I present a way to keep testers up to speed with their skills, hence I don’t see us losing our jobs. Yasar talks about evolving testers, and I think it supports my talk quite nicely. There is always a threat of people losing their jobs, but adapting to change is the key to a long and stable career.
Short time value: ** (Being able to talk about the evolution of the tester to fit today’s testing industry is a key element of my job.)
Long-time value: ** (In the long run I might be able to advise on the skill set needed to perform specific tasks in various projects. There are quite a lot of domains that can benefit from AI as a tool for a tester.)
Steal-ability: * (Same as Michael Clarke’s talk, the topic is close to my heart. Maybe I can develop this further.)
Challenge-ability: * (Same as Michael Clarke’s talk essentially. Let’s see where the talk leads us.)

Jason Jerina - The Future of Quality Assurance: A Path for A.I & Human Intelligence

At first I thought “Yet another keynote about the future of testing,” but it seems this is more about tool hype. Tools and automation are a tricky subject for me. I fear that the IT community will start to think the tools are snake oil that solves all testing problems. When we add AI to that, the thinking of a human might get forgotten and its importance lost in the cogs of technology.
Short time value: - (I feel I won’t get much in the short run, for I see quite a lot of technology hype in this keynote. There might not be much to implement in my current job.)
Long-time value: * (I will be able to talk about AI and ML on a larger scale. The value of that is currently lost to me.)
Steal-ability: - (I have no intention of stealing this subject unless it turns out to be more than a sales pitch for automation.)
Challenge-ability: *** (I’m sure I won’t agree with most of the stuff. I will most definitely try to challenge the ideas presented.)

Rhealyn Mughi - Robots Are Here; What Can We Do To Keep Up?

Yet another keynote about the future of testing? The topic, however, guides the audience to keep up with the technology and the robots. Rhealyn Mughi also talks about the impact, which is nice to hear. We’ll have to see if this keynote promotes intelligence over tool hype.
Short time value: - (I don’t see much short-time value since this is more about the future of testing. The guidelines might come in handy in the long run.)
Long-time value: ** (The guidelines and the talk about a rapidly evolving industry are useful information.)
Steal-ability: - (I don’t yet see anything steal-able, but we’ll have to see. The future is always interesting, but I don’t see how to make this topic my own.)
Challenge-ability: * (Is this more about robots in general or how they affect testing? I feel there might not be that much talk about testing. If there are no practical applications to implement the guidelines for testing, I will challenge the usefulness of those guidelines.)

Talks (Only 6 this time)

Zachary Attas – Services: How to Test Them When You Have The Keys to The Castle

Ah! Test strategy – my favorite. I feel that people underestimate the value of a good strategy. The focus is usually on writing a document and burying it with the rest of the plans in some dark corner of the project library. End-to-end testing and test automation both sound interesting. I work at Solita and we use a lot of integration and E2E automation, so it will be nice to see what possibilities this talk offers to make our tests better.
Short time value: * (I might not be the one writing the code, but I feel that this talk helps me aid others to create better tests.)
Long-time value: ** (In the long run, I might be able to help teams generate a good strategy for integration testing. This is vital to my role as a coach.)
Steal-ability: ** (I will try my best to understand the details of the talk. The strategy will be the main part I am interested in. How to generate a strategy is my favorite part of learning to test.)
Challenge-ability: * (Not knowing too much about the technical aspects of integration tests, I might not be the best candidate to challenge most of the content. I do feel that the strategy generation part will be my focus and I will see if there are things that sound odd to me.)

Shama Ugale – Testing Conversational AI

The idea of testing conversational devices is intriguing. I can see the difficulty in it, and the techniques to do so need to be carefully thought out. I haven’t been involved in testing such systems, but I see a progression towards that area in many fields, such as infrastructure maintenance, elderly care and phone advising. Besides, I use these devices at home, so it is intriguing to know how they are tested.
Short time value: * (While the topic is very interesting, I can’t see the near-future application of this kind of testing.)
Long-time value: ** (In the long run, I might be able to consult teams and customers on testing these devices and services. I’m sure these skills are transferable to other approaches to testing.)
Steal-ability: * (Having rather limited technical skills currently, I don’t see too many opportunities to adapt and build upon this subject.)
Challenge-ability: - (The topic is new to me, thus I don’t see much challenge-ability here. I believe, however, that the topic is interesting enough to overcome that.)

Maik Nogelsen – Testing VR; The Trinity Of Testing

Maik sounds like an esteemed figure in software testing. I expect a lot from his talk and from chatting with him; his work in the German testing community sounds awesome. Anyhow, his topic on VR sounds quite interesting. I once read an article about Fallout 4 and the difficulties of testing the game. I became interested in the possibilities and difficulties of VR testing but haven’t had a chance to hear more about it. I think this is a great opportunity to learn more.
Short time value: *** (I may not be able to implement the techniques or the methods to my daily work, but I am keen on learning about it.)
Long-time value: * (Like I said before, I might not be able to implement this topic to my daily work, so it might not generate that much value in the long run.)
Steal-ability: * (While I might not be able to “make it my own” I’ll learn how things are done in the real world.)
Challenge-ability: - (The topic is new to me, thus I don’t see much challenge-ability here. I believe, however, that the topic is interesting enough to overcome that.)

Sunder Shyam - AI Techniques To Improve Software Testing

The topic sounds quite ambitious and quite hard to grasp. I’m not fully sure if the talk is about solving the oracle problem or about introducing a new idea called TDP (which I have never heard of before). They don’t seem to be the same thing. I might be wrong, but the description sounds a bit unclear. I’ll assume the topic is about solving the oracle problem with AI testing AI.
Short time value: ** (Solving the oracle problem is applicable to various areas other than AI. Testing in general benefits from knowing more about oracles. This could be immediately transferred to my current work.)
Long-time value: * (This helps me understand testing AI (and AI doing the testing) and the problems that AI can solve in development.)
Steal-ability: * (The vagueness of the description makes it hard to determine the aspects I could further advance. Test automation applications might be the most direct useful things for me.)
Challenge-ability: * (I believe the challenging will be easy when moving from the AI world to the human-intelligence world and applying the skills to oracles in human testing.)

Milan Gabor - Security Testing For n00b Testers?

I have been keeping security testing at arm’s length due to it being a highly technical craft. Lately, I’ve come to realize that my job as a coach is somewhat like that of people doing security assessments and analysis: where I show problems in testing practices and skills, sec-testing shows problems in programming practices and skills (and perhaps in platforms, tools, etc.). I have always had some curiosity, but the first step is a hard one to take. I hope this talk kickstarts my thirst for sec-testing.
Short time value: *** (I’m a n00b! The first steps are very valuable in getting a grasp of what security testing is and can be.)
Long-time value: ** (In the long run I can be more certain when talking to my coaching clients about the importance of security testing.)
Steal-ability: * (In this case “making it my own” isn’t about stealing this but about making it a push in the right direction.)
Challenge-ability: - (With quite a narrow knowledge on the subject but great enthusiasm, I see myself being happy to become a non-n00b.)


Jörgen Damberg - The Luxurious Development Future – With The Obstacles In The Rear View Mirror

My work is situated in a highly agile world with loads of CI/CD pipelines and test automation, so I’m intrigued to hear more about AI and CD working together. Knowing the obstacles and how to move past them helps me coach teams in a way that immediately brings value to their daily work.
Short time value: ** (This subject and the skills are quite transferable to my work immediately.)
Long-time value: ** (In the long run I can coach teams to enhance their CI/CD pipelines to make their lives easier.)
Steal-ability: - (Building my own version of this talk is quite farfetched.)
Challenge-ability: * (I know a fair bit about CI/CD and the problems we have, so challenging Jörgen and helping solve the actual problems should be possible.)

There you go!

Well, that’s my view on a few of the talks and keynotes at the Q4Q conference. I’m not saying I will attend all these talks, but I feel they might be good candidates for my itinerary. I encourage all of you to comment with your views about these talks and others as well. If I happened to comment on a talk you’re presenting, please tell me where I might be mistaken. It would be lovely to hear from all of you!


BR,
Pekka “Testing pastor” Marjamäki

Wednesday, 25 January 2017

Test Strategy in 10 minutes

This post covers both the workshop I did on test strategy at Solita as a part of Testing Tuesday and the extempore workshop I arranged at the Nordic Testing Days in 2015. While both had the same agenda, they were vastly different. (The reason this post is published only now is that it had been turned into a draft, which I noticed only recently.)

The one at Solita

The workshop was not supposed to be a slide show; none of the workshops arranged at Testing Tuesday were. Once again I drew stuff on the whiteboard and let rip. I started by quickly describing test strategy. Then we discussed project elements, product elements and quality aspects, loosely based on the Rapid Software Testing models. The audience gave ideas on what kinds of elements we need to take into account when choosing a test strategy. There is no comprehensive list of what we came up with, but we had a whole lot of ideas ranging from schedules to "pissed-offness" and expected user behavior.



After we had covered most of the areas, I split the audience into three groups. Each group was supposed to create a testing strategy in ten minutes. The product I chose was a web shop that sells knick-knacks and mails them to people. After ten minutes I had three completely different strategies.

The first one focused solely on money. They mapped out every aspect that could hinder the flow of revenue and prioritized them according to their importance to the "owner" of the store.

The second one was a "software engineer approach". That group mapped out all the aspects that were important to different kinds of engineers. There was an aspect of security testing, transaction testing, happy-day testing of known processes and some complementary approaches. There was no prioritization, but there was a definite focus on tools and known practices.

The third one focused on business processes. They mapped out as many user stories as they could and dissected them into steps. They then tried to figure out what kind of prioritization would make sense for testing the processes.

In 10 minutes we had three outstanding drafts of testing strategies. All of them had different approaches, and each strategy supported the focus the group had chosen. They were not comprehensive, nor should they have been, but they were "good enough" to start testing the most important thing as soon as possible. We discussed the fact that, when combined, these three could actually complement each other. In a coffee break, we can create a draft of a strategy and even introduce that draft to a potential customer to explain how we would test their product.

All in all, the workshop was a success, since it spawned a new way to think about test strategy. I never thought I could choose a simple focus like "money" as the guideline of my testing. It does reveal the importance of talking to stakeholders to understand their values and needs. Without any templates or predetermined practice, every team could conjure an awesome strategy which even seemed executable after they explained their thinking.


The one at Tallinn

Feeling confident after the outcomes of that workshop, I agreed with Helena Jeret-Mäe that I would do an extempore workshop during a coffee break at the Nordic Testing Days (see here). The workshop was held on the last day. I dragged a whiteboard into the hallway near the sofas and gathered people to do the exercise. I was able to gather a couple of great minds of software testing on the sofa, including Erik Brickarp and Santosh Tuppad. The workshop started with me explaining the idea, and then I gave them the same task: "Generate a test strategy in 10 minutes."



The problem was that the pro testers weren't too happy with the vagueness of the context. They wanted to know more about the product, more about the stakeholders, more about the project. In the end we didn't actually generate a strategy, just a context map for the product (plus some testing strategy elements).

The best outcome for me was (once again) observing experts at work and seeing how the dynamics within a group actually play out. The lack of structure was also something they pointed out. Erik mentioned that we could have described four key aspects of the product and then figured out how to test those. Where the three groups at my office all chose a focus, we had none. We could have spent a few minutes in the beginning defining where we would like to focus (even an arbitrary focus) and then created a strategy based on that. Hints of a focus surfaced when they interviewed me about the product, but none was chosen as the key thread.


What next?

Having had two vastly different workshops on the same subject, I think it makes sense to arrange these more often. The original idea for this came from Fiona Charles at EuroSTAR 2013, but I think we approached it quite differently. I am planning to do these extempore workshops at every conference I attend, since it makes sense to give people a chance to try their skills at this. Every time, I get a huge number of pointers, ideas and lessons learned. I would argue that it is the conversation that ensues, rather than actually creating a strategy, that counts.

The key lessons learned in both sessions were:
- The dynamics in the group determine a lot about what kind of strategy is created
- Know your stakeholders!
- Choose a focus as a skeleton and then fatten it up
- Don't over-do it! Ten minutes might be enough to start testing the most important thing.
- Have some structure, but keep ideas flowing

When you see me at a conference, come ask when I'm gonna pop out the whiteboard. It might be during the next coffee break. ;)

- Peksi

Monday, 23 January 2017

Mushroom-picking heuristic

Imagine this: You are in a forest and you are trying to find juicy mushrooms. You'd like to find some porcini, black chanterelle, maybe some yellow bearded milk-cap. Edible mushrooms nonetheless. Before you go, do you need something? Perhaps the following:

  • a calendar that shows when different mushrooms appear in your local forest (one not too close to habitation, because mushrooms ingest heavy metals)
  • maybe some research on what the mushrooms you are trying to find look like (also pictures thereof, so you don't pick something poisonous that looks vaguely like an edible mushroom)
  • some knowledge on how to pick mushrooms (pick them up intact and in one piece, either by twisting or pulling)
  • maybe choose the weather for when you go mushroom-picking (the mushrooms should be rather dry when picked)
  • prepare the mushrooms as soon as possible (remove sand, pine needles, moss, etc.)
  • use proper tools (a knife, a brush, a basket instead of a plastic bag)


These are things you might want to consider when going out for mushrooms. They are all preparatory things, bits of knowledge you might want to have before actually going to the forest. Obviously you can go without knowing these things, but you may end up with no mushrooms or, in the worst case, poisonous bastards like amanitas or cortinars. The trip to the woods might have been productive nonetheless: you got fresh air and some exercise, and maybe you had a good chat with a fellow mushroom-picker. Not all unprepared trips are totally worthless, but you need some preparation to achieve good results.

How does this translate to testing? You prepare yourself for the testing task by doing almost the same things you do when you go mushroom-picking. Like so:

  • You check the schedule for the most convenient time to do a specific kind of testing. (In January you might want to go ice-skating instead of mushroom-picking, which might still be fun.) If you're doing usability testing, you might want to choose a time when there is something someone can actually use. When doing penetration testing, you might want to pick a time when there is something to penetrate. Cuz if you choose the timing badly, you might not achieve the best results. Checking the schedule also gives you clues on how to time-box the testing.
  • You might want to research the subject of testing. What should you be looking for, what things do you expect to discover, what are the risks that are already known? Perhaps you want to learn more by exploring the product, by trying things, playing and clicking around, banging the keyboard with a shoe. You might have pictures of the GUI or architecture schematics, maybe a person to help you get around the product. As in mushroom-picking, you can stumble on interesting things, but to recognize the important, the critical, the alarming stuff, you might need to do some research.
  • You might need knowledge on how to do software testing. Clicking around without a purpose might not be good testing. A good tester has a skill set that she utilizes to perform good testing. It is also important to know the domain and how to perform testing in that particular product/domain/service.
  • Choose the weather when you go testing... Urhm... Testability and configuration: choose the best starting conditions, datasets, timeslots, loads, etc. to achieve the best testing performance and to find interesting things. You might want to do testing when the backend is performing poorly or the network connections are bad. Maybe choose a dataset that is production-like, or maybe it should be a fuzzing test generating weird data for the APIs.
  • Prepare your mushrooms. Deliverables! You should take notes on your testing performance. This helps you steer your testing, generate ideas that couldn't yet be executed, and jot down risks and bugs you find. You can then explain to other people what you did, why you chose to do specific tests and checks, and what you found. If needed, prepare a useful report on the testing you did.
  • Use proper tools. Testing is about using tools, obviously. The most important tool is your brain! Use it. Also use tools to help do things that are difficult or time-consuming, and to keep your concentration while testing. Use scripts when they are useful, record your screen, have quick note-taking equipment. Use other people! Two brains are sometimes better than one... A person can be a tool!


(This is not a comprehensive list in any case. You might want to take other context variables into account to achieve the best result when you actually go testing.)

I'm now in the woods with my wellies on, my trusty 'shroom-basket, a J. Marttiini mushroom knife, and a backpack filled with sandwiches, a “Book of Mushroom and Black Magick”, coffee and some chocolate. Maybe a map, a compass, some survival gear in case I get lost in the woods. I want to be prepared and tooled up. How the hell do I find those mushrooms?

This is where the Lévy Flight heuristic kicks in (a heuristic mentioned by James Bach at CAST 2014). It is an algorithm by which animals (and humans) go foraging. "When defined as a walk in a space of dimension greater than one, the steps made are in isotropic random directions," says Wikipedia. Essentially: do something in one spot until you move to another area to do something there. So, I stand on the road and head into the woods. I look for sweet spots, like decaying fallen trees or dry mossy areas under pines or firs. If I have done my preparations correctly, I know what these areas look like and might know where to find them. So I start roaming in the woods and stumble on an area that has something interesting. A fallen tree! Yay! Maybe I'll find some black chanterelle there. So I look closely at the area and spend time there, perhaps picking some mushrooms or just scouting for clues on where I might find some. After a while I head to a new location, keeping in mind where I am in the forest.
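For the curious, here is a minimal sketch of what such a walk looks like in Python. This is purely my own illustration (the function name and parameters are made up for this post): step lengths come from a heavy-tailed distribution and directions are random, so you get lots of local poking around punctuated by the occasional long jump to a fresh area.

```python
import math
import random

def levy_flight_walk(n_steps, alpha=1.5, seed=42):
    """A 2D Levy-flight-style walk: mostly short steps (local foraging),
    with an occasional long jump to a whole new area."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        # Heavy-tailed (Pareto) step length: short steps dominate,
        # but very long jumps happen every now and then.
        step = rng.paretovariate(alpha)
        # Isotropic random direction, as in the Wikipedia definition.
        angle = rng.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(angle)
        y += step * math.sin(angle)
        path.append((x, y))
    return path

if __name__ == "__main__":
    for px, py in levy_flight_walk(10):
        print(f"({px:7.2f}, {py:7.2f})")
```

Swap "position in the forest" for "area of the product" and the occasional long jump becomes de-focusing.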

So I move around the wooded area in a pattern that tries to achieve the best coverage of the important areas. The pattern might seem random, but I have a mission which I am trying to fulfill with my choices of direction. When I am halfway through my walk, I might want to head towards the road so I don't end up too far away when the sun goes down. I spend time in areas that are either rich in mushrooms or interesting in themselves. Maybe I learn something while scouting for porcini, something about birds or the types of moss I tread on. Maybe I note down areas that have some other mushroom that I am not intending to pick (you don't want to mix mushrooms in the same basket - I don't know why...). I take mental notes, maybe write stuff down, mark places on my map. All the while I am trying to achieve the goal I set for myself before I went foraging. On my way back I stumble on an abandoned building. Cool! I might take a look inside, and maybe I find something interesting. Maybe an old newspaper or a book? I might spend time there even though I set out to pick mushrooms. This is an interesting new place and I want to know more about it. So I deviate from my initial mission and investigate the building. It's apparently someone's old home and its walls have sunk into the ground. Maybe the architecture of the pre-WW2 era interests me, maybe the newspaper has some information from ye olde times. Maybe the book has a letter tucked between the pages. I allow myself to deviate because this might be more important than mushroom-picking.

Back to the testing world. The woods turn into APIs, GUIs and code. The moss becomes the data on which I tread. The mushrooms... They're not bugs, if that's what you thought. Mushrooms are information, relevant information. It can be about a behavior that is annoying the user, an error message in the wrong place, a risk that needs to be communicated to the stakeholders. There might be bugs, but there is so much more. When you go foraging in the software, you can apply the Lévy Flight heuristic either by accident or purposefully. It is called focusing and de-focusing. You focus on some area to find interesting things for some time. Then you move to another area. If the area you first stumble upon is hugely rich in things waiting to be discovered, you might spend most of your time there. Or you might just stop there briefly and look for more important things to discover.

You start with a mission and you head out to accomplish that mission. You take notes and notice things. You investigate things that look and feel important. You forage information on the product under test.

Here's a scenario that might give clues to choosing a mission for your foraging. On Monday you wonder if you should go picking mushrooms. It's a fine day, but instead of going head first into woods you barely know, you go investigate. You take your dog with you and go scouting the forest. You find a patch of blueberries and eat a few. Oh, they're so nice! The dog, Rover, eats some too. On Tuesday it rains. Bummer! So you decide to go to a shop and buy some equipment for the trip as soon as the rain stops. You decide to get a basket that has compartments for different mushrooms. You didn't even think of it before talking to the shopkeeper, who is apparently an expert on the matter of picking mushrooms. Great find! On Wednesday you are called to the office for an important meeting. A nice, sunny day wasted in a meeting. No time for foraging today. On Thursday you get your gear and your dog and head out to the woods. You check the sweet spots you discovered on Monday and pick delicious mushrooms and even some blueberries. On Friday you make a delicious mushroom stew with some potatoes and carrots. Then, to top it off, you make a blueberry pie. You checked all the recipes online but used your own twist to make them your signature dishes. A perfect way to start a weekend.

To sum it up, the Mushroom-picking heuristic is a two-fold heuristic:
- First, it is a preparation heuristic. It helps you create a mental model that allows you to plan for the upcoming testing session, testing phase, etc. For an acceptance testing session one might prepare differently than for a security testing phase. Nonetheless, testing needs preparation, and a good way to tackle the preparation is the Mushroom-picking heuristic. You should adapt the preparation to match your own context. Think of tools, background knowledge (oracles), time-frames and constraints, people attending the sessions, bug reporting procedures, facilities, etc.
- Second, it is a test management heuristic - a steering heuristic. It helps you move from one focus area to another. Focusing and de-focusing is one aspect. With note-taking you can keep track of the areas you have covered and remind yourself if there's a need to return to an area. Keep the stopping heuristics in mind so you won't get stuck too long in one area.

The Mushroom-picking heuristic is not comprehensive nor should it be. It is a model that might work or you may find it useless. Perhaps you might try it and give me feedback on how it worked and how I should improve it. It is a work in progress.

Have a tasty spring!

- Pastori

Tuesday, 14 July 2015

"Thinking like a tester" workshop

As I have mentioned before, we have this round of workshops under the concept of Testing Tuesday. I have already covered two of the latest workshops as blogposts. This one is an attempt to cover the first of the seven, a workshop called "Thinking like a tester".


Why did we talk about thinking?

I like to think that there is no testing without thinking. If thinking isn't involved, it is not testing. A machine doesn't think, thus it merely checks. A human challenges, observes, infers, models, etc. all the time while looking at a test object. A human tester thinks. That is why we need to practice different ways of thinking. It's like exercising a muscle, but the muscle is our brain.

There are a number of different ways to think, many of them overlapping, and we should try to perfect those skills. It is valuable to recognize different patterns of thinking to be better able to solve problems. Testing is essentially problem solving: we try to notice things and then figure out whether an observation might be an issue. By having multiple tools in our toolbox we don't have to rely on just one way of thinking ("when you have a hammer, all you see is nails"), and that makes solving the problem much easier.

My goal with this workshop was to introduce a few different mechanics of thinking to the audience and have them use each mechanic to perform an exercise.


What did we cover?

We were able to cover three exercises. The first two were borrowed from improvisational theater. The first one was "reinvent the wheel".

I split people into groups and gave them the assignment to create a method of transportation. The only thing given to them was the fact that the context in which they were supposed to solve the problem didn't yet have the wheel. They weren't supposed to create the wheel but an alternative, equally effective method of transportation. The task was a bit difficult since I told them to start every sentence in their brainstorming with "Yes, but". This was an effort to make people challenge what had already been agreed.

People started to do the exercise, but it seems they didn't fully see the point of it. My intent was to make people challenge the assumptions and the ideas already on the table, thus making it an exercise in critical thinking. I might have missed the mark slightly, but people did have fun. I noticed that people required a leader in their group and some catalyst to provoke the challenging. I visited the groups and challenged their ideas by asking a question and then replying to their answer with a "yes, but" phrase. That stirred the pot slightly. I believe it became an exercise in team dynamics more than in thinking patterns.

After 10 minutes we debriefed the ideas they came up with and moved on to the next task, "Reinvent storing".

Once again people worked in groups. The task was to invent a method of storing things without using shelves or stacking things on top of each other. They once again had two facts about the context: there was no concept of shelves or stacking, and they had to begin every idea by saying "Yes, and". This was supposed to be an exercise in creative thinking: finding new ideas based on old ones, accepting what is already decided and building on that.

This task went more fluently than the first one, and it spurred some crazy contraptions for storing items, from pulley-operated platforms to portable black holes. There was certainly creativity in the air! Once again I felt that the exercise fell a bit short, and people did have questions on how the exercises related to testing.

The third exercise was a bit shorter because debriefing the first two tasks took so long. It was an exercise in lateral thinking. I explained the concept on a broad level, then gave them a problem which they were to solve using lateral thinking.

The story was something like this: There is a merchant who owes money to an evil man who is in love with the merchant's daughter. The merchant can't pay the debt. The evil man proposes a wager. He puts a white and a black stone into a pouch. If he pulls out a white stone, the debt is forgotten. If he pulls out a black stone, the debt is paid in full AND the daughter is forced to marry the evil man. There seems to be a 50/50 chance. However, the evil man secretly swaps the white stone for another black one.

I asked the groups how they would solve the problem so that none of the participants loses face (i.e. is revealed as a liar, is forced to marry, gets killed, etc.). The task was once again pretty difficult since the premise was so vague. I had to answer a lot of clarifying questions about the problem before the groups could actually start working on their solutions. They managed to think outside the box on many occasions. The ideas were quite feasible, I think.


What was the most valuable thing to me?

Having done three exercises on thinking, I realized that we had just scratched the surface. I thought I had time to do a "Thinking, fast and slow" exercise, but everything went by so fast. The essential thing might have been just having fun with my coworkers, making them do something out of the ordinary, and promoting testing as a thinking activity as opposed to a technical task that creates test cases to be run on some virtual server.

The tasks were obviously quite difficult, but they laid good groundwork for the next workshops. The people were the essence, not me blabbering at the front (although I like that too). The more workshops I held, the more attendee-driven they became. I facilitated; they provided the material.


What would I do next?

Since there will be another "tour" of Testing Tuesday, I will refine this workshop. I will explain the tasks in more detail and arrange more time for people to explain what the connections to testing could be. Instead of making it a lecture, I'll give the mic to the attendees on why it is important to think. Maybe I'll also add some other thinking exercises, like "Think like a freak" and "Thinking, fast and slow".

I am thinking of doing a blog post on lateral thinking, since I find it a really important skill. I believe a few of my community colleagues have already done that, so I might have to take a different approach. We'll see.

Anyhow, this was the workshop on thinking like a tester. If something wasn't clear or you have ideas on how to make the workshop better, drop me a comment.

- Peksi

Tuesday, 7 July 2015

Testing technique workshop

The last part of Testing Tuesday’s “Test Pistols Tour” was a workshop about testing techniques. The original plan was to have a list of techniques and then exercises to learn them. Scheduling forced us to change our approach, because we had no time to create environments for the exercises.

So, I turned to the community.




When I dragged my butt to the conference room I was expecting just a handful of people, 3 or 4, but eventually we had 7. I think there was a bit of tour fatigue in the air, since this was the seventh workshop. So I had seven brave soldiers in the meeting room.

“I have changed the rules!” I said. “I ain’t gonna tell you about testing techniques. You’re gonna tell me about testing techniques.”

Now the plan was the following: pair people up, make them test something and describe their testing, then discuss what kind of problem they were trying to solve with their chosen approach. Sounds simple enough. I was a bit uncertain whether people could describe their testing at a level from which I could derive a technique. The challenge was thrown.

I told them to open Word. The assignment was to test the “Find and replace” functionality and describe to your pair what you did and why. I asked the teams some questions during the 10 minutes of testing and made them focus on actually telling why they chose to do something. After the ten minutes, we started talking about how the testing was done. These are the key points we came up with.

Hot-key testing

The first team started to describe what they did by explaining how they searched for the functionality. They were trying to find different ways to access it. They found out that the hot keys vary between operating systems. Moreover, the hot keys are customizable, thus enabling different combinations. “Ctrl+F” was the easiest way to find the function, because it happens to be the same in many other programs too (comparable products and familiarity to the user). On a Mac there wasn’t a “Ctrl+F”, so the hot key was a bit harder to find.

Based on their approach to using the hot keys, we gathered that the technique can be used on many Windows-based programs (and why not Mac-based too, but I don’t have the experience with those hot keys quite yet). The commonly known hot keys like “Ctrl+C / V / X / Z“ etc. are quite easy to test. The tests are quick, cheap and very generic, which makes the technique quite useful.

Premise variance testing

When the group was trying to find different ways of accessing the functionality (hot keys, context menus, sidebars, ribbons, etc.), I asked whether the behavior of the functionality changes when you access it from a different origin point. If you change the premise, can the functionality change?

We started to think about whether we could apply this to various other solutions and products, and we came up with “premise variance testing”: when one changes the premise conditions of a function, there might be changes in its behavior. This technique can also be derived into “step variance testing”, where you mutate a single element (or many elements) within the process. A minimal sketch of the idea is below.
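To make the idea a bit more tangible, here is a tiny pytest-style sketch. The two entry-point functions are stand-ins I invented for this post (not a real word-processor API): the point is simply that the same functionality, reached from different origin points, gets checked for identical behavior.

```python
import pytest

# Invented stand-ins for two origin points to the same
# "find and replace" feature (menu vs. hot key).
def replace_via_menu(text, old, new):
    return text.replace(old, new)

def replace_via_hotkey(text, old, new):
    return text.replace(old, new)

@pytest.mark.parametrize("entry_point", [replace_via_menu, replace_via_hotkey])
def test_premise_variance(entry_point):
    # Premise variance: different premises (origin points),
    # same expected behavior.
    assert entry_point("Pekka tests", "Pekka", "Peksi") == "Peksi tests"
```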


Help testing

When one team was trying to figure out how the function worked, they pulled up the manual. The help can be quite basic for an experienced user, but it acts as an oracle on many occasions. During help testing one can test the help itself against the product, and test the product against the help. In either case, one acts as the test object and the other as the oracle.

This technique could be generalized into all kinds of oracle-material testing. We can test against oracles that are used by various stakeholders, e.g. requirements or design documentation. We test the product, ask “is this ok?”, and then try to solve the problem by referring to the oracle. We might have one oracle (e.g. a human oracle telling us how it should work) and then test another oracle based on the new knowledge (the human oracle disputes the written document). “Help testing” might become “oracle testing”, but the name doesn’t give me good vibes. ;) Help could actually be any material that helps us do testing.
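As a sketch of the idea (again with an invented stand-in function, not a real product), a claim lifted from the help can be turned into a check against the product:

```python
# Oracle: a claim from the (imagined) help text:
# "Replace All changes every occurrence of the search term."

def replace_all(text, old, new):
    # Invented stand-in for the product's Replace All.
    return text.replace(old, new)

def test_product_against_help_claim():
    result = replace_all("a b a b a", "a", "c")
    # If this fails, either the product or the help is wrong.
    # Which one is the test object and which one the oracle
    # depends on which of the two we currently trust more.
    assert "a" not in result
```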


Data roundtrip testing

A team was testing replacing a word with gibberish and then replacing that back with the original value (“Pekka” -> “ASDFGH” -> “Pekka”), and they wanted to know if the same number of entries was changed both times. So basically the idea was to revert the original data without actually reverting the state. Mathematically I think this is called an “inverse function”: first we apply the normal function, then its inverse. Roundtrip simply means that you return to where you started from.
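Here is roughly what that team's check looks like as code. This is a sketch with a plain Python string standing in for the document, and the helper function is my own invention, not theirs:

```python
def replace_and_count(text, old, new):
    """Replace old with new; return the new text and the number of entries changed."""
    count = text.count(old)
    return text.replace(old, new), count

def test_data_roundtrip():
    original = "Pekka wrote this. Pekka tested this."
    # Forward function: "Pekka" -> "ASDFGH"
    mutated, n_forward = replace_and_count(original, "Pekka", "ASDFGH")
    # Inverse function: "ASDFGH" -> "Pekka"
    restored, n_back = replace_and_count(mutated, "ASDFGH", "Pekka")
    # Roundtrip: we end up where we started, with the same entry count.
    assert restored == original
    assert n_forward == n_back == 2
```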



We discussed whether “roundtrip testing” is actually a generic thing that can be applied to state as well. It is possible to revert the system to a previous state without any information whatsoever about the states that were visited along the way. This might actually be a problem in itself, but we chose to narrow our testing technique to mere data.

Minimum data

We also found some testing ideas while describing techniques, and I think this one was worth mentioning. A team wanted to test with as little data as possible. That is one variant of premise variance testing where we focus solely on varying the data instead of the states. This can mean testing with default values, testing without any inputs (NULL, n/a, whitespace, etc.), removing metadata, and so on, and it can find bugs in the exception-handling logic. A small sketch follows.
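A sketch of what such minimum-data tests might look like (pytest again, with an invented stand-in for the functionality under test):

```python
import pytest

def find_and_replace(text, old, new):
    # Invented stand-in for the functionality under test.
    if not old:
        raise ValueError("search term must not be empty")
    return text.replace(old, new)

@pytest.mark.parametrize("text", ["", " ", "\t\n"])
def test_minimal_documents(text):
    # With empty or whitespace-only data, nothing should match
    # and the text should come back unchanged.
    assert find_and_replace(text, "Pekka", "Peksi") == text

def test_empty_search_term_hits_exception_handling():
    # A NULL-ish search term should exercise the exception handling.
    with pytest.raises(ValueError):
        find_and_replace("some text", "", "x")
```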


Conclusion


All in all, the testing techniques we found have already been described in other sources, but these made sense to us and felt important. The terms are more tangible than “product tours” or some techniques found in books. We defined the terms ourselves and learned how to describe them in a language that suits our context.

I know that at least premise variance testing stuck; I have used it a few times now to describe what I do. It makes sense to repeat this exercise with a different depth, then uncover new, undescribed techniques and make them part of our toolbox. After a handful of these sessions, we might have enough skill to describe our testing to any stakeholder in a language we share and understand.

Sadly that was the last of the Testing Tuesday workshops on this tour. There will be another tour in Helsinki, and I shall write up as much as possible from those sessions.

- Peksi

Tuesday, 23 June 2015

What happens when a wannabe rockstar does a lightning talk?

"Are you f***ing ready to rock?!"
The crowd is wild. The stage is lit. The announcer gets on the stage.

"Ladies and gentlemen! Are you ready for the coolest rockstar in the world?" The crowd roars its approval.

"Are you f***ing ready to rock?" The crowd goes wild...

Ok. That's not how it happened for me. That is what I wanted to happen when I did my lightning talk at the Nordic Testing Days 2015. Actually, it went something like this:

Helena Jeret-Mäe gets on the stage. "Next we have Pekka Marjamäki on the topic of Testing Tuesday," she says. Then I get up from the floor and walk to the front of the room full of people. I have my hat full of badges, my Superman t-shirt, suspenders hanging down, all cool and ready to rock the place.

"OK, people!" I start my performance. I divide the room into two groups. "The group on this side shouts 'TES' and the group on this side shouts 'TING'. Ready? Tes-ting! Tes-ting!"

The crowd starts shouting. They shout "testing". "Louder!" I shout. The room roars for a minute or so! Then, at its peak, I silence the crowd and start my talk on Testing Tuesday.

Testing Tuesday doesn't sound that weird. It is a concept my colleague Petri Sirkkala from Solita and I came up with. It spawned from a need to teach testing at my company. I will explain how it works in detail (accompanied by a video, perhaps) in a later post. At the Nordic Testing Days 2015 I briefly introduced the concept: it is 7 weeks, every Tuesday, each week having a workshop of its own, plus helping our colleagues with their testing problems. The most recent blog post is a write-up of the 6th Testing Tuesday workshop. Apart from actually helping people test, strange things happen during Testing Tuesdays, e.g. us two testing dudes walking around the office shouting "Testing Tuesday" and playing Sex Pistols from an old cassette player, or posting testing problems on a whiteboard in the hallway.

The main goal of Testing Tuesday is to promote testing and to sow seeds of interest in people who aren't yet that much into testing. The second goal is to help people with their testing-related challenges. The third is to have fun.

I think my objective in the lightning talk was to convey the energy and enthusiasm we pour into Testing Tuesday: the attitude to fight against poor practices, the bravery to stand up and challenge, the eagerness to improve. Honestly, I can't remember what I actually said during the talk, I had so much fun. I do believe that people got the key points out of it:

Be open about your passion towards testing.
Share knowledge and help others.
Be brave and have fun.

"We want more! We want more!" The crowd shouted after my talk... At least I wanted it to. Alas, it didn't...

- Peksi

Thursday, 18 June 2015

Bug handling workshop

I am running a thing called "Testing Tuesday" at the office. The concept is simple: sanctify Tuesdays to software testing. This comes in the form of helping project teams solve their testing-related problems and promoting testing in every possible way. And to top it all off, there is an hour-long workshop on some testing-related topic. I will do a proper write-up on the subject later, but I wanted to share the coolest thing that happened during the 6th (out of 7) Testing Tuesday. The topic was "Bug handling" and the results were really awesome!

A week before this workshop we had a workshop related to testing oracles, which I then promoted on Twitter. I had classified three bugs and I mentioned those in my tweet. I then had a Twitter exchange with Michael Bolton about classification.



That discussion made me want to redefine my bug handling workshop, since I saw that the people I work with, me included, might have quite different approaches to handling the observations we make and receive about the product we work with. So after talking to Michael on Skype I decided to do the following:

Have people define a bug handling process from the very beginning to the very end. Then plot it out, draw diagrams, etc. to explain it. Then focus on the difficult parts and try to enhance the process.

So we started by defining where "bug handling" starts. I began by saying that it starts from the moment there is code, but I was corrected. Bug handling, or observation handling, starts with the first indication or deliverable of work. That might be the requirements documentation, the project plan, or whatever tool is used to run the project. It can be unwritten requirements. It can even be an idea! From the very beginning we start testing and observing the subject. It is those observations that might require handling.

Based on the purpose and the need, we define the way we report, write down and take notes. If we are talking about testing ideas, the observations could be statements voiced out about the idea or its repercussions. When testing software, an observation may be something you see, hear or feel, which you write down or record. A bug report is a description of your observation, which is then used in various ways to help understand the observation.



"Observation is the active acquisition of information from a primary source. In living beings, observation employs the senses. In science, observation can also involve the recording of data via the use of instruments. The term may also refer to any data collected during the scientific activity." - Wikipedia (Observation)

It is these observations that we then start to analyze. This can be done in many ways. An observation written up as a bug report can be inspected for its validity. Analysis might require communication with the stakeholders, tools, classification algorithms, etc. It is these actions that we employ to analyze the observation. It can be a snap decision or statistical analysis. Whatever is done during the analysis, there is an outcome. The outcome might be trashing the bug report, invalidating the observation, classifying the issue, pigeonholing an inference, describing a behavior in a more concise way, etc. Analysis creates something out of the observation.

Based on the analysis, there might be an action to deal with the observation. It can be a change in code, adding something to a document, building a new tool, fixing a leaking pipe, redefining an argument, etc. There might not be any actions towards the original subject of observation, but perhaps towards the process with which we test and challenge. There might even be actions to make the observation or the analysis different next time, maybe a process improvement, learning a new skill, etc. Actions might spawn sub-processes and further actions. In the end, however, there is a follow-up on the actions.

The follow-up usually happens after the action. It depends on the observation in the sense that there might be a need to reconstruct the situation in which the observation happened, or to refer to an earlier version of the subject under test. There might even have been a shift in the subject based on the analysis. The action itself dictates the follow-up, its magnitude and its nature. The follow-up might require regression testing of a bug fix, another round of reviews, rerunning the test automation suite, rethinking, etc.

These four basic actions became the guiding principle in all our testing processes.

Observation - Analysis - Action - Follow-up
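If it helps to see the principle as a structure, here is a minimal sketch of an observation moving through the four core activities. This is my own toy model for this post, not something we built in the workshop; the names Stage and BugRecord are invented.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    OBSERVATION = auto()
    ANALYSIS = auto()
    ACTION = auto()
    FOLLOW_UP = auto()

@dataclass
class BugRecord:
    """One observation moving through the four core activities."""
    description: str
    stage: Stage = Stage.OBSERVATION
    log: list = field(default_factory=list)

    def advance(self, note):
        # Record what was done at the current stage, then move on.
        self.log.append((self.stage.name, note))
        stages = list(Stage)
        if self.stage is not Stage.FOLLOW_UP:
            self.stage = stages[stages.index(self.stage) + 1]

record = BugRecord("Error message appears in the wrong place")
record.advance("Noted during an exploratory session")      # -> ANALYSIS
record.advance("Valid; classified as a usability issue")   # -> ACTION
record.advance("Message moved next to the failing field")  # -> FOLLOW_UP
record.advance("Re-tested on the next build; looks good")
print(record.stage.name, record.log)
```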

But it wasn't enough. Every single one of these activities requires supporting activities. Observation requires note-taking, testing skills, tools, etc. Analysis requires processes, practices, domain knowledge, etc. During our workshop discussion I picked up some of the key words that were used and generated two clouds: the core activities and the supporting activities. The core activities are not enough on their own. The context states what kind of supporting activities are needed to make the core valuable. The supporting activities are of no value without the core, but the core loses value without any supporting activities.

Here are some of the things we came up with.




Like I mentioned, all this is useless without the context. Every scenario requires a context that states the most useful way to approach "bug" handling. Pair review requires different supporting activities than beta testing, but they both have the core activities. The tools that are used might differ: you can use post-its, JIRA, QC, email, Surveypal, etc. to communicate your observations. During the analysis those observations might be enveloped by a tool that creates virtual stickers and notes. There might even be a template that is used to report an observation. Those observations might be classified, prioritized, trashed or whatever. Based on the analysis, at some point in time, something might be done. When I say might, it means that it is possible that an observation is lost and never acted upon. You can call it an "action" if an observation gets lost, but that is philosophical; let us assume that every observation has an action. The action might require communication, changing something, tools, practices, processes, people, etc. Those actions then have follow-ups. A follow-up can be handled in the same tool the observation originally entered through. It can even have a process of its own.


To conclude, there is no best practice for handling observations. Not every observation is a bug. Not every bug needs to be handled the same way. The most valuable thing I got out of my workshop was "mind the context!" Think of the value of your process to stakeholders. Think of the needs that need fulfilling. Think of the feedback loop. Think of the people involved in different tasks.

That is all today.

- Peksi