
Wednesday 30 May 2012

Comparing love to eight - A response to a blog post


This is a response to a post by Natalya Bosatskaya on Intexsoft.com’s blog, titled Scripted testing vs. Exploratory testing. This post tackles each of her paragraphs individually and tries to raise some thoughts for people who think the same way as Natalya does. She did a great job of explaining her points of view, but I felt some of them were off the mark. I felt the need to write a longer response, as the limited space of the comment field did not allow me to get my thoughts out properly.

In this post, I will attempt to compare the scripted and exploratory styles of software testing. On the one hand, scripted testing is seen as a strict and serious process; on the other, exploratory testing is seen as free and easy. But each style has its own swings and roundabouts. Let's look at them from different angles and try to define the appropriate conditions for using each one.

It is a common mistake to think of exploratory testing as “free or easy”. Exploratory testing (when done correctly) is a cycle of learning, testing and test design. It is structured, but can be as free as it needs to be. It doesn’t limit thinking; it provokes it. Exploratory testing focuses on the cognitive process of testing, on lateral thinking and on critical analysis of the product under test. By calling it free, you might mean that it is guided by things other than documented test plans and scripts. Exploratory testing is guided by documents, checklists, feelings, heuristics, questioning, data, functions, missions - whichever is the most suitable way to steer testing in a given context and in the situation at hand. Usually exploratory testing focuses on the most interesting/important/risk-prone area of the product at any given time.


Scripted testing usually presupposes two roles: test designer and tester. The test designer creates test scenarios beforehand, and then the tester executes these test scenarios. The test designer is a highly skilled specialist and the tester is a beginner at software testing.

It is true that scripted testing requires two roles. Why is that? Why not have the skills and the insight about the product in one person? That would make it cheaper, right? A beginner tester should learn as much as possible about the product so that he/she would not stay a beginner. By learning about testing, about the context, about the product, about techniques and about tools, the tester gathers more knowledge than by trying to reproduce scenarios written beforehand.

What happens when the scenario changes? Do we need the test designer to re-write the case? What if there is no time to run the test case that the designer wrote? Is that considered value?

Exploratory testing doesn't divide activities by roles and time. The tester thinks up test cases himself, executes them at once, and creates new test cases according to the results. Obviously, such a tester must be an experienced specialist.

It is true that exploratory testing doesn’t have fixed roles, but it does have roles. A person may possess a certain amount of business knowledge, and in one testing session he might take on the role of transferring that knowledge to the other testers; he may have a different role in another session. No time or effort is wasted on writing test cases beforehand, although existing test cases might be used as a guide and as information to learn more about the product AND about the testing already done using those scripts.

Exploratory testing is an approach, not a technique that you use to test a product. It is a mindset: exploring the product using the skills one already has, and developing more skills during the testing by learning about the product, thus being able to design future tests more accurately. A person who has no experience in exploratory testing needs to start testing the exploratory way in order to become an expert. No one is an expert in every context, but anyone can build the skills to be good enough in a given context.

A test designer taken from the bushes is just as clueless as the beginning tester. But do you know how the test designer starts to design the tests? He explores the product, reads the documentation, questions the stakeholders and looks for clues. He does exploratory testing, but wastes the effort on writing things down so that a “beginner” can then interpret what he has learned and try to replicate the thought behind the test case.

In the case of scripted testing, your test team can consist of one highly skilled test designer (a costly employee) and several beginners (low-paid workers). You will economize on resources, and new testers (even low-skilled ones) can start working quickly enough. Exploratory testing depends a lot on qualifications. If a tester is poorly qualified, he will do ad hoc testing in the guise of exploratory testing.

Now you are saying that testing as an activity is a low-paid job. Sounds like an artificial pay gap to me. By that logic, a software designer should be a highly paid person who understands the product, and the programmers should be low-paid workers. I'd like to think of software testers as a skilled group of engineers who, in collaboration with the stakeholders (developers, project managers, the customer), aim to deliver a quality product using techniques like questioning, testing, challenging, coaching, etc. I think a tester is as important a part of a software project as a developer, although you might have a tester integrated into a team who holds other roles too (but that's a whole other matter).

I do agree that a tester without the mindset and the structure might veer towards ad hoc testing. A skilled manager, however, understands the limitations of a new, unskilled tester and guides the person towards structured, mission-based exploratory testing. A person who has no knowledge about the product should be given the opportunity to learn about it and to offer ideas about how to test it better.

You might say that a person without the skills to do exploratory testing is not yet an exploratory tester. But given a chance to explore the product, one becomes more familiar with the techniques, skills and tools required to test the product efficiently. A person without testing skills who is forced to perform a scripted task is denied the chance to learn and given no room to become more efficient.

Scripted testing gives you a high degree of planning and predictability. If we have ready test scenarios, we can estimate the test execution effort precisely enough. In the case of the exploratory style, planning is very difficult or outright impossible. There's also no guarantee that the tester will execute all the needed tests and not forget anything.

It is true that planning gives you predictability. That is why there are so many different ways to manage testing. Take Session-Based Test Management (SBTM); it is a documented way to plan testing so that you can respond to the situation at hand AND take future testing activities into account as well. Things rarely go as planned, and scripted testing is rarely able to respond to changes in the software. Let’s say a bug is found. We might need a little more time to execute some confirming testing around the problem, even do another test or three. When the bug is fixed, it may require additional testing. It might even happen that the fix uncovers other bugs. A set of carefully planned activities starts falling like dominoes.

What happens when a test designer forgets something? What happens if the requirements analyst forgets something? What happens if the programmers forget something? Is scripted testing able to respond to these situations? What happens if we run out of time and still have tests to run? The money that went into developing the tests that never get run goes to waste, making scripted testing even more expensive (and you don't get to learn). Exploratory testing has no such waste.

Exploratory test management techniques can tackle these things with predefined processes. When something is deemed more important than something else, exploratory testing can change focus on the fly and concentrate on the newly important part of the product.


A scripted tester can provide quality documentation. At the very least, he can provide test scenarios and reports about the execution of those scenarios. Exploratory testers often have complications with reporting. How can a tester report what he has tested and how many tests still remain? How can a manager show the customer what has been tested?

Test cases (just like bugs) are not objects of the same size. It’s like counting 3 cars, 2 buses and a train - 6 things that move people. How many people are moved? How many of those things fit into a car park? Test cases get counted, but how much is tested? One way to measure testing is the time spent testing an area. An hour is an hour everywhere in the world (OK, there may be some exceptions), so time is about the only comparable measure. An exploratory testing session takes 90 minutes, so 2 people each doing 2 sessions counts as 6 hours of testing. How many test cases fit into 6 hours? 3? 300? You can’t tell, but I can tell that in 6 hours I can test an area (or areas) for six hours. Simple, isn't it?
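
Just to make the arithmetic concrete, here is a minimal sketch in Python (the numbers are the ones from the example above):

    SESSION_MINUTES = 90               # one time-boxed exploratory session
    testers = 2
    sessions_per_tester = 2
    total_hours = testers * sessions_per_tester * SESSION_MINUTES / 60.0
    print(total_hours)                 # 6.0 hours of testing, whatever the "test case" count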

An exploratory session might have a precise mission, but it allows straying from the mission if that seems appropriate. Using SBTM to guide testing, you will always get a report of what was tested, what ideas came up during the testing, what bugs were found and what issues arose. A mission might be a user scenario, or several of them. A mission might be to learn “what does the menu bar do” by exploring the product and the documentation and by eliciting information from stakeholders. Documentation may be as simple as a verbal description or as detailed as a screen-capture recording with Rapid Reporter log files.
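
As an illustration only (the field names below are my own, loosely modeled on SBTM-style session sheets rather than any fixed standard), a session report can be as simple as this:

    # Illustrative sketch of a session report; the fields are my own naming,
    # loosely modeled on SBTM-style session sheets, not a fixed standard.
    session_report = {
        "charter": "Explore the menu bar and learn what each item does",
        "tester": "Pekka",
        "duration_minutes": 90,
        "test_notes": ["File > Export felt slow with a large project"],
        "bugs": ["Export hangs when the project is empty"],
        "issues": ["No one could tell me the intended keyboard shortcuts"],
    }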

Using sessions as a guide to what needs to be covered, we can adjust the plan as we go and add more sessions on a given area. At any moment we know what the current situation is, based on the information we currently hold. A manager should always interview the testers so that he/she can form a testing story, which can be supported with gathered metrics (hours/function, sessions/area, etc.).
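
Here is a hedged sketch of how such metrics could be rolled up from session data (the data shapes are hypothetical, just to show the idea):

    from collections import defaultdict

    # Hypothetical completed sessions as (area, minutes) pairs.
    sessions = [("menu bar", 90), ("menu bar", 90), ("export", 90), ("login", 60)]

    hours_per_area = defaultdict(float)
    for area, minutes in sessions:
        hours_per_area[area] += minutes / 60.0

    for area, hours in sorted(hours_per_area.items()):
        print(f"{area}: {hours:.1f} h")   # numbers that back up the testing story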

So, according to the reasoning above, scripted testing comes to the fore. Let's find a few advantages of the exploratory style.

With this part I disagree strongly. Let’s look at what kind of knowledge about exploratory testing puts scripted testing forward as the superior approach. My examples above were “to the point” and aimed at addressing the text only; in fairness, some forms of exploratory testing do take a somewhat scripted approach while retaining the essence of exploration - learning, testing and designing simultaneously, though stressing different aspects at different times.

Exploratory testing is flexible and adapts to changes, unlike scripted testing. If something changes in the application, the test designer must alter the test scenarios; otherwise the tester can't execute the tests. An exploratory tester has the test suites in his own mind, so he can change them as fast as needed.

Even if we consider the most scripted form of exploratory testing, it still has the advantage of exploring. Exploratory testers don’t need to keep everything memorized, as we can use memos, checklists, heuristics and Post-it notes. The documentation fits the context and the needs of the current testing situation. All unnecessary documentation should be shunned. Use documentation as a tool for testing instead of the purpose or the product of testing.

The basic work of a scripted tester is boring and monotonous. An exploratory tester has interesting and creative tasks. He does different kinds of work, being both designer and executor, so his work is not routine.

Exploratory testing can be monotonous too, but an intelligent exploratory tester finds clues in the monotonous parts of testing to learn more, to improve testing techniques and to design better test ideas. If a scripted test becomes monotonous, that could be a hint that it should be automated to give more time to actual testing. Exploratory testing utilizes lots of tools, automation scripts, basically everything that supports the testing and makes it faster. The “throw in everything including the kitchen sink” analogy describes well the thought behind utilizing resources in exploratory testing.
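
As a sketch of where that hint could lead (pytest is just one possible tool here, and the login function is a stub standing in for real application code):

    import pytest

    def login(user, password):
        # Stand-in for the application code under test.
        return user == "alice" and password == "correct-horse"

    # The once-monotonous scripted check, handed over to the machine.
    @pytest.mark.parametrize("user, password, expected", [
        ("alice", "correct-horse", True),   # valid credentials
        ("alice", "wrong", False),          # wrong password
        ("", "", False),                    # empty input
    ])
    def test_login_accepts_only_valid_credentials(user, password, expected):
        assert login(user, password) is expected

The machine now repeats the check on every run, and the tester's time is freed for actual exploration.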

And exploratory testing can become a routine task, but a good exploratory tester changes focus, defocuses, makes assumptions, tries to see past inference, looks closer, and looks at the bigger picture. An exploratory tester turns the routine to his advantage!

In the case of the exploratory style, tests are created and executed almost simultaneously, which lets you start test execution and uncover bugs earlier.

True. But why shouldn’t scripted testing start earlier? What’s stopping you? Is it the test design part that’s taking so long? You have the designer doing (what seems to me) exploration so that he can design the tests. By not putting all that time into writing, the testing can begin earlier and much more learning can be achieved in doing the testing. And why force the “beginner testers” to wait? Let them dig in too and learn as they go forward. The test designer can still produce documentation, in the form of missions for the testers, while retaining the exploratory approach and putting his skills to good use.

Finally, I will point out one more disadvantage of scripted testing: the pesticide paradox. If the same tests are repeated over and over again, eventually they will no longer find any new bugs.

That is why a test that gets repeated several times should be considered for automation, so the effort can be put into exploring the parts that the automation doesn’t cover. Automated tests can check the product and the parts that already work, and leave the testing to the testers.

Repeating tests is like repeating the same question all over again. Does that bring value to the tesing (sic)? Does that bring value to the tesing (sic)? Does that bring value to the tesing (sic)? Does that bring value to the tesing (sic)? There might even be a bug in the test itself, leaving some important areas untested or poorly tested. Automated tests also require analysis and exploration (they’re code, after all).
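
To illustrate what a bug in the test itself can look like, here is a hypothetical example (apply_discount is a stand-in for real application code; the interesting part is the assertion):

    def apply_discount(price, percent):
        # Stand-in application code with a bug: the discount is never applied.
        return price

    def test_discount_is_applied():
        result = apply_discount(100, percent=10)
        assert result <= 100   # too weak: passes even though the discount is broken
        # A stronger check would pin the expected value: assert result == 90

The test goes green run after run while the feature it claims to cover is, in fact, untested.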

In my opinion, it's not correct to consider one test style better than another. Each style can be suitable or not depending on the goals, the organization of the test process, customer requirements and other factors. And what do you think?

You are correct again. One approach does not supersede the others. They are all context dependent. However, I see that in every context there is a need to improve. So why not try learning while testing? Forcing people to do testing and NOT learn is a style that should not be considered suitable in any context.

We may do harm to the community by comparing two different concepts to each other - like comparing a dog to blue, or love to eight (which, by the way, is possible in a certain context: in tennis, love = zero). I do, however, see that there are too many misconceptions about exploratory testing, and that false knowledge must be corrected. Exploratory testing is just like driving a car: we respond to stimuli to adjust our thinking, bearing or speed. As we go into situations, we learn from the behavior of others, the car and the environment, and we determine the right approach as we get a grasp of what is going on. We do not stop the car at every situation and check the driving manual for instructions on how to handle it. Why do we think that testing requires us to rely on scripts that (in the best case) are someone else’s interpretation from a different time and context (or entirely obsolete)?

Monday 21 May 2012

Porridge is bad for you!


"What?! What is he talking about?" This blog post is about challenging one's criticism and trying to make it more effective using simple methods. This post is about moving away from unhealthy criticism into constructive critique and feedback. This is about finding the right mind-set.

Why do people criticize?

"You're doing it all wrong!" "Your tie is hideous." You've seen it. You've heard it. Probably you've even done it. I know I have. And in most occasions I have let my self-criticism be clouded by someone else’s opinion about the same subject. By not having a concrete opinion of one's own it's easy to adapt to criticism of someone else.

Let's take the example of a guru who has strict criticism of one subject - porridge (it seems neutral enough). We all know that porridge is good for us, right? What if a nutrition guru says that porridge is bad? What do we do? If we have huge respect for that guru's thoughts and doings, might we blind our own judgment with our upward gaze towards the guru? We might just take the guru's opinion for granted and start proclaiming that porridge is bad for you.

One way out of this bad equation is to step back and challenge one’s own critique. Is it justified? Are you making an assumption? (There is a great blog post about defeating assumptions by Ilari Henrik Aegerter.) What if the guru was wrong? After making sense of what YOU think, you should back the critique up with facts, not opinions. In finding the facts you might be picking biased ones, but at least you have something to back the critique up with. It's obvious (to some) that a biased opinion should stay an opinion, but they rarely do. Instead of remaining opinions, they become statements supported by chosen facts.

By doing basic critique (source checking, challenging, context projecting), you can find the root cause of the guru's opinion about porridge. Does he (our guru is a he today) have an agenda of his own? Are there hidden meanings in the critique itself? Does it provoke thinking instead of criticizing the product?

How do people criticize?

"When giving feedback, do it like so: Always give good feedback in public and be precise about what was done well. Always give negative feedback in private and be precise. Try to find the solution instead of the one to blame." This was said by my father who has decades of experience in management and leading people. I have always thought this as the fundamental guideline of critique. I think most people know this and agree with this, but how come most people don't act accordingly?

Let's say the guru has discovered facts showing that "Ye olde bran porridge" contains all sorts of chemicals that disable certain growth hormones in a child. Obviously that's something to be told to the public, right? And as we hold the guru in high regard, he is mandated to present his opinion (possibly supported by facts) in some public medium. There are channels through which you can lodge a complaint about food (a health inspector or some kind of agency), and they will take the necessary steps to tell the public that "porridge is bad for you". Possibly they will first have discussed it with the porridge company, which might have taken the product off the market.

The guru might instead give his criticism of porridge in public and bear the wrath of the porridge company on his shoulders. He might not care, as he's a nutrition guru and has an agenda of his own (does he?). Is the guru doing the right thing by expressing his opinion so loudly in public? Is the guru promoting himself instead of giving critique? Was the bad thing in the porridge, in the chemical, or in the company making the porridge?

Where's the difference between the two models of critique? Was the guru able to achieve the goal of his criticism through a public channel (whatever the goal might have been)? Was the "behind closed doors" critique more effective than the "in your face" critique? It all depends on the context, obviously. What was the goal of the critique?

Feed-forward

Some people think critique is feedback. Well, it kinda is, to some extent. Feedback, however, can be constructive even when it is negative. Feedback is given when someone asks for it; critique is given whenever. Feedback is not about making someone feel happy or sad but about making them improve; critique is about making a statement. When giving feedback, don't sugarcoat it; instead, say what YOU like and what you'd like to see improved. "I liked the taste of the porridge and how my stomach feels afterwards. To make it even healthier, I would leave out the chemicals that prohibit my growth."

Whereas feedback is the kind of thing to be asked for, critique is the kind of thing you just blurt out. Feedback has a purpose, and it is meant to improve the one receiving it. Critique has a tendency to provoke something: conversation, debate, hatred, etc. Challenging can be a more effective approach than critique. Challenging the critique itself can become the most valuable feedback there is!

Is the content self-justifying or do we need to empathize to support the critique?

There are tons of guides on how to give feedback without being critical. I know a dozen occasions where I have let my judgment be clouded by numerous things that have led to bad critique and undesired results. Here's one:

I try to promote intelligent testing and an intelligent approach to quality in general. I also believe that certifications that focus on the certificate itself are no good. A certificate that focuses on skills, in a field that requires skills, is a good thing; an artificial certificate concentrating on a narrow view of best practices (and only knowledge thereof) is a bad thing. This is what I thought and still think. I was having a discussion with people I think highly of about “what is your opinion of the ISTQB-provided series of certificates”. By commenting that the certificate looks good on paper (with some unflattering spices), I provoked a series of questions along the lines of “how do you back up your statement” and "do you even know what you're speaking of".

The questions struck home and I started to think about how I really felt about the issue. The fundamental thought behind it remains the same, but I have not delved deep enough into the syllabus, its history, or the initial goal behind it. Am I eligible to make claims about the issue? Was I repeating what other people were saying and making myself feel important with a rash claim? Does my opinion really matter in this case, and could I do some good without being so loud about it? Is there a possibility to raise a conversation about the issue within the certificate organization without sounding like a zealot?

With the comment I made (which was criticism at its worst), I thought the content was self-justifying. "Obviously all the people were thinking the same, so I just said it out loud." Even though some of them were, they rightfully challenged my comment and forced me to think about it. What was my goal in stating something like that? What was the desired outcome? Praise for me? More Twitter followers? To raise conversation? To sound like a dumb-ass?

What I did achieve with the comment was the ability to criticize my own behavior and claims. I once claimed (in Finnish) that one way to achieve the best quality in an end product is to "Murder your darlings": to find the most direct route from the current point to the desired point, to remove all the excess and self-promotion from the content, to go directly towards results. In making the comment I was focusing on "sounding cool" instead of trying to use words as a tool to achieve a goal (which apparently was shrouded).

Did I hurt someone in the process? Can't tell. Not directly, I assume (Pekka, you're assuming things).

Did I achieve the goal? Can't tell. I wasn't aware that there was a goal.

Did I learn something? Oh boy, did I!? ;)

Friday 18 May 2012

I don't know

What's this post about?

I don't know. That's right; it's about "not knowing".

I spent two days at a coaching course by TNM Coaching and we had a great time there. There were really good conversations about topics regarding coaching, and I will delve more deeply into the ones that had the most impact on me. Before I go deeper, I will share some insights I picked up at the course about questions and the answer "I don't know".

The fact of not knowing

We all get asked questions we may not have the answer to. Some questions are just too complex for us to understand, or we may not have the skills to answer them. For example, someone asks you about the number of stars in the northern sky, and you'll probably answer "I don't know". In this case you may not have acquired the necessary knowledge base on the topic and thus lack the ability to know the answer.

Could you answer differently?

When we answer by saying "I don't know", we subconsciously diminish ourselves. We give ourselves the impression that the knowledge is required to be "something", and that by lacking the knowledge we are lacking as humans. It's a human behavior thingy of some kind, I think, and it comes naturally when we don't have an answer thought up.

If we could spend some time thinking about the answer, we might be able to avoid the "not knowing" trap. A difficult question requires a bit of analyzing. Do I need that knowledge right now? Could I look it up somewhere? Is there a reason why I have not acquired the knowledge required to answer this question? Do I possess the knowledge already but have effectively forgotten it? By answering a question with a question (be it mental or verbal), you may find a better answer than "I don't know".

I don't know if I want to answer this question

There are different situations where you answer a question with an "I don't know". Sometimes the answer is a way to avoid answering truthfully. In a coaching session, when the other person asks questions and helps you solve your problem, the answer "I don't know" may come up. This poses an interesting issue for the coach, as there may be answers behind the "I don't know".

At this moment the situation needs to be evaluated. Is it reasonable to try to find the answer behind the dodge, or is it better to let it be? If you decide to leave the issue untouched, that should be stated out loud. You may agree to speak of the topic at a later time.

If you do mutually agree to go deeper behind the dodge, one good way is to eliminate the psychic lock (a Jedi mind trick) with a simple question: "If you knew the answer, what would it be?" or "If the situation presented itself, what would you or someone else do to solve it?" The wording reaches behind the barrier and encourages the person to use the capabilities he or she has and to start seeing the solutions. This may not always work, but it is one part of probing and facilitating, helping the coachee verbalize his or her issues and find the solutions.

More coaching stuff coming up!


This is my insight into the subject of "not knowing". I got the inspiration for it from the coaching coach Vivienne Ladommatou, who spent two days with us at F-Secure to help us become better coaches. I really admire her wisdom about Genuine Interest, and I think that could be the next post regarding coaching.

I will be making at least some kind of a blog post series about coaching, especially about the things that matter most to me in it. I will do some practicing to hone my skills, so if someone is interested in helping me with my quest to become a better coach, feel free to contact me and we'll figure out a time slot and an issue that we can start discussing using the coaching process.

So just tweet or Skype me, or comment below, and I'll arrange a coaching session that best suits the context. When would be a good time to coach? Well... I don't know. ;)

Wednesday 16 May 2012

Testing with the stars


On 15 and 16 May there was the coolest conference I have ever been to: Turku Agile Days 2012!!! There were great speakers from around the world - Lisa Crispin, Elisabeth Hendrickson, Linda Rising and many, many others. But in addition to all the talks and workshops, there was a competition called Testing with the Stars. It was the first of its kind, and I had the privilege of being invited to be a star tester in the competition. I also had the most amazing pair from the F-Secure St. Petersburg office, Sergey Moshnikov (@dan_dandelion on Twitter). He was a great sport to tolerate all my rantings about exploratory testing, and he did an amazing job in all the sessions, especially on the subject of BDD, in which he appeared to be a huge talent! It was an honor to be paired up with a talent like that.

You can read more here, but here's a quick summary of how it went.

The competition originally had 6 pairs of testers, each consisting of a star tester and a guest tester. The teams were:

  • Ru Cindrea & Ulrika Malmgren
  • Anssi Lehtelä & Tom Leppälahti
  • Petteri Lyytinen & Jenna-Riia Karhunen
  • Pekka Marjamäki & Sergey Moshnikov
  • Maaret Pyhäjärvi & Henrik Juvonen
  • Juha Rantanen & Anssi Kinnunen


The first day was spent doing 3 ten-minute sessions each, on test strategy, BDD and exploratory testing. After each ten-minute session, our "dance" (the concept was borrowed from the Dancing with the Stars TV program) was judged by Lisa Crispin and Elisabeth Hendrickson (the guest judges of the competition) and finally also by the audience. So basically there were 18 dances from the six teams, and three teams got to the semi-finals on the second day. Sergey and I were lucky enough to get into the semi-finals along with Juha Rantanen & Anssi Kinnunen and Anssi Lehtelä & Tom Leppälahti. The first day was great fun, and the contestants did an amazing job with the little time they were given to prepare for the show.

The second day began with a dance of free choice. Sergey and I did a BDD dance on how to bring value to the BDD process. We had help from Jenna-Riia (thanks for all the help we got!) and we managed to pull together an act with a score of 21. Juha Rantanen and Anssi Kinnunen dazzled us all with a BDD dance in which they demonstrated the story-building process and included an example of how it could have been done using Robot Framework. They got a score of 26. Anssi Lehtelä and Tom Leppälahti did an amazing job visualizing test strategy building, using Anssi himself as the test object! It was hugely entertaining but got a score of 16. After the audience had voted on the teams' performances, Anssi's team and ours were tied at 29 points each (Juha and Anssi had 34, so they were going to the finals anyhow). As the audience was the ... urm... audience of the dances, it was their word that settled the tie, and Anssi and Tom went to the finals.

Anssi, however, was not pleased with this, as he kinda liked our show, so he insisted that we team up and do a double dance! So we held a testing competition within the testing competition and let the audience decide which team won the exploratory testing duel. It sounded like the audience was leaning towards Anssi's team, but we had a blast doing the duel. Then Juha and Anssi came on with their exploratory testing dance and blew up the bank: dancing robot testing to the beat of Scooter's "Move your a**". With that they won the competition.

I really enjoyed myself during the conference, even though I didn't get to see too many talks. I did, however, get to hear the great Linda Rising give a keynote on the agile mindset and a talk from Elisabeth about test automation. I am definitely going to attend the next TwtS that gets arranged, as it was so entertaining and a good learning experience for all the participants. If any of you want to see the video recordings of the sessions, they can be found on Bambuser, but be warned. ;)

As Linda Rising's talk was so inspiring, I'll finish this time with a quote she mentioned in her keynote:
"Ever tried. Ever failed. No matter. Try again. Fail again. Fail better." - Samuel Beckett