Pages

Tuesday, 14 July 2015

"Thinking like a tester" workshop

As I have mentioned before, we have this round of workshops under the concept of Testing Tuesday. I have already covered two of the latest workshops as blog posts. This one is an attempt to cover the first of the seven, a workshop called "Thinking like a tester".


Why did we talk about thinking?

I like to think that there is no testing without thinking. If thinking isn't involved, it is not testing. A machine doesn't think; it merely checks. A human challenges, observes, infers, models, and so on, all the time while looking at a test object. A human tester thinks. That is why we need to practice different ways of thinking. It's like exercising a muscle, except the muscle is our brain.

There are a number of different ways to think, many of them overlapping, and we should try to perfect those skills. It is valuable to recognize different patterns of thinking so we are better able to solve problems. Testing is essentially problem solving: we try to notice things and then figure out whether an observation might be an issue. By having multiple tools in our toolbox we don't have to rely on just one way of thinking ("when all you have is a hammer, everything looks like a nail"), and that makes solving the problem much easier.

My goal with this workshop was to introduce a few different mechanics of thinking and have the audience use each mechanic to perform an exercise.


What did we cover?

We were able to cover three exercises. The first two exercises were borrowed from improvisational theater. The first one was "reinvent the wheel".

I split people into groups and gave them the assignment to create a method of transportation. The only constraint given to them was that the context in which they were solving the problem didn't have the wheel yet. They weren't supposed to recreate the wheel but to invent an alternative, equally effective method of transportation. The task was made harder by the rule that every sentence in their brainstorming had to start with "Yes, but". This was an effort to make people challenge what had already been agreed.

People started the exercise, but it seemed they didn't fully see the point of it. My intent was to make people challenge the assumptions and the ideas already on the table, thus making it an exercise in critical thinking. I might have missed the mark slightly, but people did have fun. I noticed that each group needed a leader and some catalyst to provoke the challenging. I visited the groups and challenged their ideas by asking a question and then replying to their answer with a "yes, but" phrase. That stirred the pot slightly. I believe it became an exercise in team dynamics more than in thinking patterns.

After 10 minutes we debriefed the ideas they came up with and moved on to the next task, "Reinvent storing".

Once again people worked in groups. The task was to invent a method for storing things without using shelves or stacking things on top of each other. They again had two facts about the context: there was no concept of shelves or stacking, and every idea had to begin with "Yes, and". This was meant to be an exercise in creative thinking: finding new ideas based on old ones, accepting what is already decided and building on it.

This task flowed better than the first one, and it spurred some crazy contraptions for storing items, from pulley-operated platforms to portable black holes. There was certainly creativity in the air! Once again I felt the exercise fell a bit short, and people had questions about how it related to testing.

The third exercise was a bit shorter because debriefing the first two tasks took so long. It was an exercise in lateral thinking. I explained the concept on a broad level, then gave them a problem to solve using lateral thinking.

The story went something like this: a merchant owes money to an evil man who is in love with the merchant's daughter. The merchant can't pay the debt. The evil man proposes a wager. He puts a white and a black stone into a pouch. If a white stone is pulled out, the debt is forgotten. If a black stone is pulled out, the debt is considered paid in full AND the daughter is forced to marry the evil man. There seems to be a 50/50 chance. However, the evil man secretly swaps the white stone for another black one.

I asked the groups how they would solve the problem so that none of the participants loses face (i.e. is revealed as a liar, is forced to marry, gets killed, etc.). The task was again pretty difficult since the premise was so vague, and I had to answer a lot of clarifying questions before the groups could actually start working on their solutions. They managed to think outside the box on many occasions, and the ideas were quite feasible, I think.


What was the most valuable thing to me?

Having done three exercises on thinking, I realized we had just scratched the surface. I thought I would have time for a "Thinking, fast and slow" exercise, but everything went by so fast. The essential thing might have been simply having fun with my coworkers, making them do something out of the ordinary, and promoting testing as a thinking activity as opposed to a technical task of creating test cases to be run on some virtual server.

The tasks were obviously quite difficult, but they laid good groundwork for the next workshops. The people were the essence, not me blabbering at the front (although I like that too). The more workshops I held, the more attendee-driven they became. I facilitated; the attendees provided the material.


What would I do next?

Since there will be another "tour" of Testing Tuesday, I will refine this workshop. I will explain the tasks in more detail and allow more time for people to explain what the connections to testing could be. Instead of making it a lecture, I will give the mic to the attendees to discuss why thinking is important. Maybe I'll add some other thinking exercises, like "Think like a freak" and "Thinking, fast and slow".

I am thinking of doing a blog post on lateral thinking, since I find it a really important skill. I believe a few of my community colleagues have already done that, so I might have to take a different approach. We'll see.

Anyhow, this was the workshop on thinking like a tester. If something wasn't clear or you have ideas on how to make the workshop better, drop me a comment.

- Peksi

Tuesday, 7 July 2015

Testing technique workshop

The last part of Testing Tuesday's "Test Pistols Tour" was a workshop about testing techniques. The original plan was to have a list of techniques and then exercises for learning them, but the schedule forced us to change our approach because we had no time to create environments for the exercises.

So, I turned to the community.




When I dragged my butt to the conference room I was expecting just a handful of people, 3 or 4, but eventually we had 7. I think there was a bit of tour fatigue in the air, since this was the seventh workshop. Still, I had seven brave soldiers in the meeting room.

“I have changed the rules!” I said. “I ain’t gonna tell you about testing techniques. You’re gonna tell me about testing techniques.”

Now the plan was the following: pair people up, make them test something and describe their testing, then discuss what kind of problem they were trying to solve with their chosen approach. Sounds simple enough. I was a bit uncertain whether people could describe their testing at a level from which I could derive a technique. The challenge was thrown.

I told them to open Word. The assignment was to test the "Find and replace" functionality and describe to your pair what you did and why. I asked the teams some questions during the 10 minutes of testing and made them focus on actually telling why they chose to do something. After the ten minutes, we started talking about how the testing was done. These are the key points we came up with.

Hot-key testing

The first team described what they did by explaining how they searched for the functionality. They were trying to find different ways to access it, and they found out that the hot keys vary between operating systems. Moreover, the hot keys are customizable, enabling different combinations. "Ctrl+F" was the easiest way to find the function, because it happens to be the same in much other software as well (a comparable product, and familiarity to the user). On a Mac there was no "Ctrl+F", so the hot key was a bit harder to find.

Based on their approach we gathered that the technique can be used on many Windows-based applications (and why not Mac-based too, but I don't have the experience with those hot keys quite yet). The commonly known hot keys like "Ctrl+C / V / X / Z" etc. are quite easy to test. The tests are quick, cheap and very generic, which makes the technique quite useful.

Premise variance testing

When a group was trying to find different ways of accessing the functionality (hot keys, context menus, sidebars, ribbons, etc.), I asked whether the behavior of the functionality changes when you access it from a different origin point. If you change the premise, can the functionality change?

We started to think about whether we could apply this to various other solutions and products, and we came up with "premise variance testing": when one changes the premise conditions of a function, there might be changes in its behavior. The technique can also be derived into "step variance testing", where you mutate one or many elements within a process.
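A toy sketch of the idea, in Python rather than Word: the names (`open_via_hotkey` and friends) are hypothetical stand-ins I made up for illustration, not any real Word API. The same core function is reached from several origin points, and the test checks whether the outcome stays the same regardless of the premise.

```python
def find_replace(text, find, replace):
    """The core functionality under test."""
    return text.replace(find, replace)

def open_via_hotkey(text, find, replace):
    # Imagine reaching the dialog with Ctrl+H; the premise differs...
    return find_replace(text, find, replace)

def open_via_menu(text, find, replace):
    # ...from the Edit menu...
    return find_replace(text, find, replace)

def open_via_ribbon(text, find, replace):
    # ...or from the ribbon, but the outcome should not.
    return find_replace(text, find, replace)

entry_points = [open_via_hotkey, open_via_menu, open_via_ribbon]
results = [f("Tes-ting! Tes-ting!", "Tes", "TES") for f in entry_points]

# Premise variance test: every origin point should yield the same result.
assert len(set(results)) == 1, "behavior varies with the entry point!"
```

In a real product each entry function would of course drive a different UI path; the point of the sketch is only the final comparison across premises.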


Help testing

When one team was trying to figure out how the function worked, they pulled up the manual. The help can be quite simple for an experienced user, but it acts as an oracle on many occasions. In help testing one can test the help itself against the product, or the product against the help. In either case, one acts as the test object and the other as the oracle.

This technique could be generalized into all kinds of oracle material testing. We can test against oracles that are used by various stakeholders, e.g. requirements or design documentation. We test the product, ask "is this OK?", and then try to resolve the question by referring to the oracle. We might have one oracle (e.g. a human oracle telling how it should work) and then test another oracle based on the new knowledge (the human oracle disputes the written document). "Help testing" might become "oracle testing", but that name doesn't give me good vibes. ;) A help could actually be any material that helps us do testing.


Data roundtrip testing

One team tested replacing a word with gibberish and then replacing it back with the original value ("Pekka" -> "ASDFGH" -> "Pekka"), wanting to know whether the same number of entries was changed both times. So the idea was basically to revert the original data without actually reverting the state. Mathematically this is, I think, called an "inverse function": first we apply the normal function, then its inverse. A roundtrip means that you return to where you started from.
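The team's roundtrip idea can be sketched in a few lines of Python (the text is my own made-up sample): apply the replace, apply its inverse, then check that both passes touched the same number of entries and that the data ended up where it started.

```python
# Roundtrip data test: forward function, then its inverse.
# Note: the gibberish must not already occur in the text,
# or the inverse replace would touch too many entries.
original = "Pekka wrote. Pekka tested. Pekka blogged."

forward_count = original.count("Pekka")
mangled = original.replace("Pekka", "ASDFGH")

inverse_count = mangled.count("ASDFGH")
restored = mangled.replace("ASDFGH", "Pekka")

assert forward_count == inverse_count  # same number of entries changed
assert restored == original            # the roundtrip ends where it started
```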



We discussed whether "roundtrip testing" is actually a generic thing that can also be applied to state. It is possible to revert the system to a previous state without any information whatsoever about the state that was visited. This might actually be a problem in itself, but we chose to narrow our technique to data alone.

Minimum data

We also found some testing ideas while describing techniques, and I think this one was worth mentioning. One team wanted to test with as little data as possible. That is one variant of premise variance testing in which we focus solely on varying the data instead of the states. It can mean testing with defaults, testing without any inputs (NULL, n/a, whitespace, etc.), removing metadata, and so on, and it can find bugs in exception handling logic.
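A minimal sketch of the idea, assuming a hypothetical `find_replace` function (a stand-in I invented, not Word's behavior): drive it with as little data as possible and see whether the exception handling copes.

```python
def find_replace(text, find, replace):
    if not find:
        # Guard: an empty (or missing) search term is a classic edge case --
        # str.replace("", x) would insert x between every character.
        raise ValueError("search term must not be empty")
    return text.replace(find, replace)

# Minimum data inputs: empty string, whitespace only, nothing at all.
results = {}
for term in ["", " ", None]:
    try:
        find_replace("some text", term, "new")
        results[term] = "handled"
    except ValueError:
        results[term] = "rejected"
```

Here `""` and `None` are rejected by the guard while `" "` goes through; a version without the guard is exactly the kind of exception-handling bug this testing tends to find.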


Conclusion


All in all, the testing techniques we found have already been described in other sources, but these made sense to us and felt important. The terms are more tangible than “product tours” or some techniques found in books. We defined the terms and we learned how to describe them in a language that suits our context.

I know that at least premise variance testing stuck; I have used it a few times now to describe what I do. It makes sense to repeat this exercise again at a different depth, uncover new, undescribed techniques and make them part of our toolbox. After a handful of these sessions, we might have the skills to describe our testing to any stakeholder in a language we share and understand.

Sadly that was the last of the Testing Tuesday workshops on this tour. There will be another tour in Helsinki, and I shall write up as much as possible from those sessions.

- Peksi

Tuesday, 23 June 2015

What happens when a wannabe rockstar does a lightning talk?

"Are you f***ing ready to rock?!"
The crowd is wild. The stage is lit. The announcer gets on the stage.

"Ladies and gentlemen! Are you ready for the coolest rockstar in the world?" The crowd roars its approval.

"Are you f***ing ready to rock?" The crowd goes wild...

Ok. That's not how it happened for me. That is what I wanted to happen, when I did my lightning talk at the Nordic Testing Days 2015. Actually, it went something like this:

Helena Jeret-Mäe gets on the stage. "Next we have Pekka Marjamäki on the topic of Testing Tuesday," she says. Then I get up from the floor and walk in front of the room full of people. I have my hat full of badges, my Superman t-shirt, suspenders hanging down, all cool and ready to rock the place.

"OK, people!" I start my performance. I divide the room into two groups. "The group on this side shouts 'TES' and the group on this side shouts 'TING'. Ready? Tes-ting! Tes-ting!"

The crowd starts shouting. They shout "testing". "Louder!" I shout. The room roars for a minute or so! Then, at its peak, I silence the crowd and start my talk on Testing Tuesday.

Testing Tuesday doesn't sound that weird. It is a concept my colleague Petri Sirkkala and I came up with at Solita. It spawned from a need to teach testing at my company. I will explain how it works in detail (accompanied by a video, perhaps) in a later post; at Nordic Testing Days 2015 I briefly introduced the concept. It runs for 7 weeks, every Tuesday, each week having a workshop of its own, while we help our colleagues with their testing problems. The most recent blog post is a write-up of the 6th Testing Tuesday workshop. Apart from actually helping people test, strange things happen during Testing Tuesdays, e.g. us two testing dudes walking around the office shouting "Testing Tuesday" and playing Sex Pistols from an old cassette player, or posting testing problems on a whiteboard in the hallway.

The main goal of Testing Tuesday is to promote testing and to sow seeds of interest in people who aren't yet that much into testing. The second goal is to help people with their testing-related challenges. The third is to have fun.

I think my objective in the lightning talk was to convey the energy and enthusiasm we pour into Testing Tuesday: the attitude to fight against poor practices, the bravery to stand up and challenge, the eagerness to improve. Honestly, I can't remember what I actually said during the talk, I had so much fun. I do believe that people got the key points out of it.

Be open about your passion towards testing.
Share knowledge and help others.
Be brave and have fun.

"We want more! We want more!" The crowd shouted after my talk... At least I wanted it to. Alas, it didn't...

- Peksi

Thursday, 18 June 2015

Bug handling workshop

I am running a thing called "Testing Tuesday" at the office. The concept is simple: sanctify Tuesdays to software testing. This comes in the form of helping project teams solve their testing-related problems and promoting testing in every possible way. And to top it all off, an hour-long workshop on some testing-related topic. I will do a proper write-up on the subject later, but I wanted to share the coolest thing that happened during the 6th (out of 7) Testing Tuesday. The topic was "Bug handling" and the results were really awesome!

A week before this workshop we had a workshop about testing oracles, which I then promoted on Twitter. I had classified three bugs and mentioned them in my tweet. I then had a tweet exchange with Michael Bolton about classification.



That discussion made me want to redefine my bug handling workshop, since I saw that the people I work with, me included, might have quite different approaches to handling the observations we make and receive about the product we work on. So after talking to Michael on Skype I decided to do the following:

Have people define a bug handling process from the very beginning to the very end. Then plot it out, draw diagrams, etc. to explain it. Then focus on the difficult parts and try to enhance the process.

So we started by defining where "bug handling" starts. I suggested it starts from the moment there is code, but I was corrected. Bug handling, or observation handling, starts with the first indication or deliverable of work. That might be the requirements documentation, the project plan, or whatever tool is used to run the project. It can be unwritten requirements. It can even be an idea! From the very beginning we start testing and observing the subject, and it is those observations that might require handling.

Based on the purpose and the need, we define the way we report, write things down, take notes, etc. If we are testing ideas, the observations could be statements voiced out about the idea or its repercussions. When testing software, an observation may be something you see, hear or feel, which you write down or record. A bug report is a description of your observation, which is then used in various ways to help understand the observation.



"Observation is the active acquisition of information from a primary source. In living beings, observation employs the senses. In science, observation can also involve the recording of data via the use of instruments. The term may also refer to any data collected during the scientific activity." - Wikipedia (Observation)

It is these observations that we then start to analyze, and that can be done in many ways. An observation written up as a bug report can be inspected for its validity. Analysis might require communication with stakeholders, tools, classification algorithms, etc. It can be a snap decision or statistical analysis. Whatever is done during the analysis, there is an outcome. The outcome might be trashing the bug report, invalidating the observation, classifying the issue, pigeonholing an inference, describing a behavior in a more concise way, etc. Analysis creates something out of the observation.

Based on the analysis, there might be an action to deal with the observation. It can be a change in code, adding something to a document, building a new tool, fixing a leaking pipe, redefining an argument, etc. There might not be any action toward the original subject of observation, but perhaps toward the process with which we test and challenge. There might even be actions to change how we observe or analyze, such as a process improvement or learning a new skill. Actions might spawn sub-processes and further actions. In the end, however, there is a follow-up on the actions.

The follow-up usually happens after the action. It depends on the observation in the sense that there might be a need to reconstruct the situation in which the observation was made, or to refer to an earlier version of the subject under test. There might even have been a shift in the subject based on the analysis. The action itself dictates the magnitude and nature of the follow-up: it might require regression testing on a bug fix, another round of reviews, rerunning the test automation suite, rethinking, etc.

These four basic actions became the guiding principle in all our testing processes.

Observation - Analysis - Action - Follow-up
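The four-step flow can be sketched as a tiny pipeline. The names and the toy decision rules here are mine, invented purely for illustration; the workshop produced the principle, not this code.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    description: str
    analysis: str = ""   # outcome of the analysis step
    action: str = ""     # what was done about it
    follow_up: str = ""  # e.g. regression testing, re-review

def handle(obs: Observation) -> Observation:
    # Analysis: decide what the observation is (here, a snap decision).
    obs.analysis = "valid bug" if "crash" in obs.description else "needs discussion"
    # Action: there might be no change to the product, but something happens.
    obs.action = "fix code" if obs.analysis == "valid bug" else "talk to stakeholders"
    # Follow-up: the action dictates its magnitude and nature.
    obs.follow_up = "regression test" if obs.action == "fix code" else "re-review"
    return obs

bug = handle(Observation("app crash when replacing with empty string"))
```

The point of the sketch is only that every observation flows through all four steps, whatever the concrete tools and decisions are in a given context.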

But it wasn't enough. Every single one of these actions requires supporting activities. Observation requires note taking, testing skills, tools, etc. Analysis requires processes, practices, domain knowledge, etc. During our workshop discussion I picked up the key words that were used and generated two clouds: the core activities and the supporting activities. The core activities are not enough on their own; the context states what kind of supporting activities are needed to make the core valuable. The supporting activities are of no value without the core, but the core loses value without any supporting activities.

Here are some of the things we came up with.




Like I mentioned, all this is useless without the context. Every scenario requires a context that states the most useful way to approach "bug" handling. A pair review requires different supporting activities than beta testing, but both have the core activities. The tools that are used might differ: you can use post-its, JIRA, QC, email, Surveypal, etc. to communicate your observations. During the analysis those observations might be enveloped by a tool that creates virtual stickers and notes. There might even be a template that is used to report an observation. Those observations might be classified, prioritized, trashed or whatever.

Based on the analysis, at some point in time, something might be done. When I say might, I mean it is possible that an observation is lost and never acted upon. You can call losing an observation an "action", but that is philosophical; let us assume that every observation has an action. The action might require communication, changing something, tools, practices, processes, people, etc. Those actions then have follow-ups. A follow-up can be enveloped in the same tool the observation entered through. It can even have a process of its own.


To conclude, there is no best practice for handling observations. Not every observation is a bug, and not every bug needs to be handled the same way. The most valuable thing I got out of my workshop was "mind the context!" Think of the value of your process to stakeholders. Think of the needs that need fulfilling. Think of the feedback loop. Think of the people involved in different tasks.

That is all today.

- Peksi


Sunday, 7 June 2015

First thoughts on people and bravery – Nordic Testing Days 2015

Nordic Testing Days 2015. Three days of tutorials, tracks, workshops. Three days of people. Three days of awesomeness. One might think it is a cliché but a conference is nothing without the people.

The first thing on day one, at breakfast, I saw Kristoffer Nordström, my Swedish friend with a knack for Python. From that moment I knew the conference couldn't be anything but pure awesomeness! Then I met Guna Petrova and Helena Jeret-Mäe at the registration. Those women (among the other organizers) are the beating heart of the conference. Then everywhere I went, new and old faces. There was so much energy in the air I could breathe in testing and conquer the world with it.

That is how it feels to attend a conference: you feel everyone's energy and are empowered by it. Santosh Tuppad (whom I met during a coffee break on the first day) said the same thing. The people around you give you energy, and if they happen to think alike, they can give you much more! I have no idea what the scientific basis for that is, but I think it has something to do with brainwaves, facial expressions, and the false belief that there is anything scientific about it. ;) At least I felt like a king.

Now that the conference is over, I am spent. I think my nagging cough ate part of my energy, especially on the third day (and Cards Against Humanity until 3 am had nothing to do with that). I was still able to gather enough energy to pull off an impromptu workshop in the hallway during a coffee break, where we tried to develop a testing strategy for a webshop. It became a crowd magnet and we had huge fun doing it.

As for the topmost thing for me as a delegate rather than as a speaker (although I did my best to speak out whenever I could: the tutorial, workshops, lightning talks, the hallway), I felt that I was a promoter of bravery. Bravery to speak out. Bravery to challenge. Bravery to be challenged.

I think being a software tester is about being brave. We stand on the podium at conferences and spill our guts in front of people. Rob Sabourin was almost in tears during his keynote. Erik Brickarp admitted failures and even arrogance in his track. People make mistakes and they get up to talk about them. It is bravery to ask questions from the audience. It is bravery to ask "Huh?" when one doesn't understand.

So, I'm going to be brave. In everything I do, I shall try to be braver than the next guy. I shall show that I have the guts to do things, swim against the current, challenge. Maybe then other people will find it a bit easier to be brave too. Maybe I can set an example and lower their threshold. In the following 12 months that bravery shall take me to every possible event where I can speak up.

As for the future, the following might make it as blog posts (I haven’t fully decided yet):

  • My lightning talk on Testing Tuesday (TES-TING! TES-TING!)
  • My workshop on Test Strategy in 10 minutes (We didn’t make a testing strategy at all)
  • Kristjan Uba’s tutorial (Rogue Legacy to win!)
  • Blood sausage testing (don’t ask… or ask Sami Söderblom)
  • Context dependency (based on Bill Matthews’ and Ilari Agaerter’s tracks)
  • Cards Against Humanity (I just want to write something really clever on this)
  • Naked Tester (I hate you, Richard, for injecting this idea inside my head!)


The conference is over. The lights have gone out. In the end, everyone’s tired but excited. I want to thank the following people for great discussions throughout the conference: Sami Söderblom, Kristoffer Nordström, Erik Brickarp, Santosh Tuppad, Richard Bradshaw, Jekaterina Krivega, Kristjan Uba and Ilari Henrik Agaerter.

I want to thank the organizers of the Nordic Testing Days 2015 for every single thing they did! The light dancers, the magician, the games, the venue, the food, the drinks, the people! I will see you next year! I promise!



Test Pistols live forever!
- Peksi


Wednesday, 13 May 2015

Intelligent practices of software testing


(This was first published at dev.solita.fi)

Note! I am not using the term "best practice" here. An intelligent practice is something that helps people develop their practices into something more suitable for their needs. We should actually stop using the term "best practice" altogether. Instead we should use whatever helps us solve the problems at hand. The problem solving should come from people and their skills. Influences should be taken from everywhere! Practices should be molded to facilitate problem solving in YOUR context.

Very well. I came up with the following guidelines to help create a better testing environment for me and my colleagues. I am not trying to invalidate practices you have found to fit you best; you should keep the practices you deem appropriate for your context. I do encourage you to revise your practices based on my findings. It is up to you to change whatever you see as reasonable, if anything.


Fight ambiguity with openness and communication

A few examples on ambiguity:

  • The (written) requirements are vague and ambiguous at best. Usually they are out of context and/or vacuum-packed. I remember ISTQB mentioning that "it is the testers' right to deny ambiguous requirements". I believe it is the testers' obligation to address ambiguity as soon as it is found. And by addressing I mean talking with people: "I cannot figure out how the system should work. I found out that it works like this. Is this OK with you? If not, can you help me determine how it should behave?"
  • The test cases are not concise. If we are to perform good enough testing, we need to do testing. Writing test cases might be testing; executing test steps from a test case might be testing. This all depends on the skills of the tester. If the material we base our testing on is poor, we need to be open about it. If we feel that we cannot do good enough testing, something needs to change. That change comes from talking about the issues and being open about the difficulties.


If something is not clear, make it clearer. Find the information, people and resources that help you make it clear. Learn from difficulties and be open when you face them. Hiding problems rarely leads to anything positive.


Fight ignorance with eagerness

Ignorance comes in many forms. One can claim she doesn’t know enough about the product to test it. Ignorance might paralyze us. Tasks can be daunting and we might procrastinate because we do not know some specific thing about the subject.

Enter eagerness! Try proof-of-concept type actions. Try sandboxing the test area. Try having fun. Try getting a group of people who might already know the domain, the product or the like, and test with them. Try walkthroughs. Try anything and everything to solve the problem. Be eager to solve issues. If you can't solve them yourself, be eager to get them solved. Ignoring issues rarely leads to anything positive.


Fight über control with good-enough documentation and reliance on skills

Sometimes the aforementioned (ambiguity and ignorance) might lead to not trusting our ability to do proper testing. That in turn might lead to a form of control that requires vast amounts of documentation and reports. I believe there is a three-fold solution to this: rightly timed planning, unburdensome reporting and reliance on people's ability to do the best possible job. When it comes to testing, people usually ask for test plans, test task descriptions (at various levels of detail) and test reports.

Good testing requires some planning to remove inefficiencies. That planning should be done in advance, albeit on a higher level. We must also be able to plan our testing while we test, because we learn about the product, the project and everything surrounding us as we test. By timing our planning in sync with our testing, we can react to changes and discoveries more efficiently. The planning should happen as close as possible to the actual execution of the plan, because documents and plans deprecate quickly when new things are added. For example, Rapid Software Testing encourages planning and designing tests during testing. I have found that a brief planning session before a testing session is usually in order, so that the most important topic gets tackled during that session.

We are chosen to perform tasks because people expect we can do them. There is little reason to stupefy the intelligent people who do things for us. Writing test cases that basically make our brains redundant is stupefying; in fact, it is stupid to write something that creates stupidity. "Stupid is as stupid does." In order to be intelligent and harness our intelligence in our testing, it is wiser to write inspiring testing documentation. Missions, threads, ideas, broad descriptions, etc. encourage the use of our brains. I'd say "skill before process" and encourage constant learning and teaching. Autonomy increases motivation (see this post) to do the best possible job. Critical thinking enables you to challenge your biases. Lateral thinking helps uncover possible issues. Thinking is the key, not the process or tools.

We also need good reporting of our testing. Reports may be bug reports, testing transcripts, etc. An important thing to remember here is to focus on the relevance and sufficiency of our reporting. The report should be a tool to improve and facilitate discussion within our testing, development and/or project management. Thus I feel that reporting should be done in ways that support our cause, not because we want reports to begin with. Tools may vary from video recordings and screen captures to notebook scribbles. Anything is fair game as long as it supports the cause.


Fight measuring and quantification with reliance on feelings, observations and communication

Numbers are cryptic. Numbers can be interpreted in as many ways as there are people interpreting them. Let’s say we have 100 test cases, of which we have run 90. A clueless test manager might say: “Only 10 more and we’re done.” An intelligent tester might say: “It seems we have 10 scripts to look at, but what else do we need to take into account?” Quantifying test cases (or bugs, for that matter) is ridiculous. It is like counting unicorns! How many unicorns can fit into a cubicle?

A better approach than test case counting might be counting the time spent testing. Time is uniform in magnitude: an hour is as long in Finland as it is in New Delhi. What is the size of a test case? Few can tell. Measure what is worth measuring and what is objective.
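As a minimal sketch of the time-based measure above (the charter names and durations are made up for illustration, not from the original post):

```python
from datetime import timedelta

# Hypothetical testing sessions logged as (charter, minutes spent).
sessions = [
    ("explore checkout flow", 90),
    ("retest payment bugs", 60),
    ("explore checkout flow", 45),
]

# Unlike "number of test cases", total time is uniform and objective.
total = timedelta(minutes=sum(minutes for _, minutes in sessions))
print(f"time spent testing: {total}")
```

The point is that the unit is comparable across teams and countries, whereas a "test case" has no agreed size.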


If this is measuring our progress, how do we measure coverage?

We might want to ask why we measure coverage, and then choose the metrics based on that. One good coverage indicator is a list of the items in the product under test that we have touched. If an area has been touched multiple times, we can be fairly certain that it is covered well enough, and we also know which areas are untouched. Human feelings come into play here: if a tester or a developer feels that some area is of poor quality, we should act on that feeling and test that area some more. A test case doesn’t tell how confident the tester is - talking about findings with the stakeholders might demolish false confidence and lay the ground for confidence based on actual findings and behavior.
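The touch-count idea can be sketched as a simple tally. The area names here are hypothetical placeholders, and this is a toy illustration of the indicator, not a tool recommendation:

```python
from collections import Counter

# Hypothetical product areas a tester jots down after each session.
session_notes = [
    ["login", "search"],
    ["search", "checkout"],
    ["search"],
]

# Tally how many times each area has been touched.
touches = Counter(area for note in session_notes for area in note)

# The full map of areas tells us what remains untouched.
all_areas = {"login", "search", "checkout", "reports"}
untouched = all_areas - touches.keys()

print("touch counts:", dict(touches))
print("untouched areas:", untouched)
```

A heavily touched area is probably covered well enough; the untouched set is where we know nothing yet, which is exactly the conversation starter the post argues for.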

Whatever you choose as your method of measuring, talk about it. Numbers are devious, and one should ALWAYS talk about the coverage and confidence instead of using numbers to “prove” why something is good. I support creating a measurement strategy that places little or no burden on the actual tester, and then using those findings to steer testing onto the right path instead of proving something. Be critical about numbers, because they can fool us easily. Be intelligent and critical about what and how to measure.


Summary

To summarize, my intelligent practices for software testing might include the following:

  • Fight ambiguity with openness
  • Fight ignorance with eagerness
  • Fight über control with good-enough documentation and reliance on skills
  • Fight measuring and quantification with reliance on feelings, observations and communication


There might be more, and I shall keep looking. The most important thing about these practices (or statements, or whatever one might call them) is to communicate with people. Discuss your expectations and goals in testing. Discuss procedures and skills, and how they can be improved. Whatever challenges you face, communication is the first step to solving the problem. An umbrella practice might go something like this:


  • Fight <choose challenges> with <choose solutions> and communication


If you feel I need to clarify some of the practices, give more examples or add a fundamental practice to my list, please drop me a line. I’m more than eager to discuss. ;)

Wednesday, 29 April 2015

I quit my job, sold my apartment and started doing what I love - TESTING

People might have been wondering (or somewhere deep inside I hope they have) what has been going on in my life. I have been intentionally silent for a while now. I ain't silent no more. I believe the hibernation has been my salvation. I have been able to regain my passion and my confidence. This post is a short review of my year 2014. I'll describe some of the events that changed me and my attitude towards my career. I'll describe what made me lose my passion for a while and what made me regain it.

The beginning of 2014 was a tumultuous time for me. I had done the EuroSTAR2013 talk and I was hugely perplexed by the amount of positive feedback and the intensity of the criticism. I decided it would make sense to lay low for a while and reflect on what I had done. I came to understand that I had been flying on my own hype. I didn't deliver what I had promised; I made grand plans and told about them like I had already done them. I needed to set my feet back on the ground. And it was hard.

A lot of personal changes have happened in the past year, among them a divorce. I now see how much relationships with close ones can affect one's decisions subconsciously. I must say that there were more good times than bad. However, the beginning of 2014 set in motion a set of events that led to my newfound passion for testing and coaching.

I realized during the summer of 2014 that I was utterly and completely isolated and desolate in my home, in a strange city. I cannot say how much of my discomfort came from waxing and waning depression. I do know that after getting back to my home town, I have felt healthier than ever. So, perhaps on the spur of the moment, I quit my job and moved away from Helsinki. I am actually still trying to sell the apartment I own with my wife, but I now live in a two-bedroom apartment with my daughter. So basically I packed my things and left.

For weeks I thought I would regret the decision, and I might have for a while. Without a job, I found myself in school, trying to educate myself in the ins and outs of Pervasive Computing at Tampere University of Technology. I had some baits in the water, waiting for a prospective employer to bite. And when Solita finally decided to give me a chance, I was relieved.

#High6 will be flying at NTD2015 in Tallinn
So, there I was once again, trying to motivate myself to study something that didn't give me the “thrills”. I mean, I like programming and scripting, I like the technology and all. I just… I am a square peg in a round hole when it comes to formal education. The worst thing is that I keep hearing this “you should really stay in school, cuz you need the degree to get a proper job”. I feel guilty because of those kinds of comments. The most successful people I know or look up to aren't those who graduated from schools and universities (although they might have degrees), but those who followed their dreams. I know it sounds immature and possibly silly, but I feel I am being homogenized by school.

I have realized my motivation is intrinsic to the core. It is almost narcissistically selfish, but when I follow my own reasons I get results. My motivation, it seems, is pleasure. I want the pleasure of feeling admiration. I want to do things that make me feel good. If I feel bad, I rebel; I fight against the restraints until I get depressed and collapse. To be able to function, I need to find things that I feel good about.

What happened then was quite miraculous. I started working at Solita to pay my bills and support my mind-numbing studying (which I also sucked at, in addition to despising my inability to study). I was just a consultant (a testing specialist) in a project, trying to find my place and role in the organization. Or that’s what I thought. The environment was… magical. It made me want to be a beacon of light. I saw people willing to listen to my insights on testing. I saw people seeking my company to chat about testing problems. I was like “I found my people!”

Me talking about "How to make anyone do anything" at Solita
I worked there half-weeks for two months, until in April I went full time. To this day, I have been coaching three all-developer teams to test and to manage their testing with Session Based Test Management, and more projects are lining up to use my services. I have also been able to share my knowledge of coaching with my colleagues, and I have been invited to facilitate classes that are still to remain secret. All this and more. It feels like I have regained my wings and I can fly again.

Long story short, I found my passion I thought I had lost. There is no reason anymore to fly under the radar. 

I'm back - loud and ugly as ever! ;)
- Peksi


Monday, 13 April 2015

Nordic Testing Days 2015 – Conference at a glance

Just to keep the momentum going, I shall tackle the Nordic Testing Days 2015 in a similar manner as I did Let’s Test. I chose these two conferences because they’re close by and I wish I could attend either or both of them this year. The real reason is that I want to keep writing, now that I have momentum. This is

Nordic Testing Days 2015 – Conference at a glance


The Nordic Testing Days 2015 is the fourth of its kind. Having had the privilege to attend the first one as a speaker, I have a special kind of attachment to it. I did a full-blown evaluation of the sessions I attended while I was at the conference in 2012, but I shan’t do that this time. The conference is a 3-day spectacle with tracks, tutorials, workshops and more. I’ll choose the sessions as follows:

  • Choose one session from each day based on my familiarity of the speaker
  • Choose one session from each day based on my interest in the title
  • Choose one session from each day that I pick randomly

Those should total 9 sessions. I’ll choose 2 random keynotes to accompany those.

I will use a heuristic grading system (introduced here) to determine what would be the best session for me. I will grade the stuff with an Angry Birds™ grade – 0-3 stars per area – in five areas:


  • Person-to-person (How will the person and his/her work affect/inspire me or the people I know?), 
  • Session value – short time span (How much can I get out of the session tomorrow – next year?), 
  • Session value – long time span (How much can I implement in my work and teach to my colleagues and my community?), 
  • Steal-ability (How much of it am I willing to borrow and further develop to make it better and/or mine?), and 
  • Challenge-ability (My past knowledge of the topic and my willingness to challenge the session contents.)
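For illustration, the arithmetic behind the grades is just a sum of the five areas. The scores here are made-up examples, not ratings from any real session:

```python
# Five grading areas, each 0-3 stars; the total is therefore out of 15.
grades = {
    "person_to_person": 2,   # made-up example scores
    "short_time_value": 2,
    "long_time_value": 3,
    "steal_ability": 3,
    "challenge_ability": 1,
}

# Sanity-check the 0-3 star range before summing.
assert all(0 <= stars <= 3 for stars in grades.values())

total = sum(grades.values())
print(f"Total: {total}/{len(grades) * 3} stars")
```

This matches the "N/15 stars" totals in the rating lists: five areas times three stars gives the 15-star ceiling.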


Keynotes


Mart Noorma’s “ESTCube-1: Testing in Space”

An intergalactic journey ahead. I would pay money to contribute to something that eventually goes orbiting the Earth. Alas, I cannot yet. Soon, perhaps. I know next to nothing about Estonia’s space program, so I think this keynote requires some background checks to get everything out of it. Since I followed the Philae landing (they had some Finnish technology there as well), I am keen on hearing more on the subject.

I must admit I haven’t heard of Mart Noorma, but I think he’s not that loud on the testing scene. The key value here might be the inspiration to reach for the stars. The short time value might be high-ish, but I cannot see too much long term value in this. I might be the wrong crowd for this session, but I’m expecting inspiration and insight from Mart Noorma.

Although this keynote might be hugely inspirational, I see very little in the way of challenging or stealing ideas. It’s a shame, actually, for I am an avid science follower. The thing is that I might be expecting more of a testing approach from the keynote and less of a technical story. An experience report on working in difficult situations is always good, but I don’t see myself as the optimal audience for this.


  • Person-to-person: 
  • Short time value: **
  • Long time value: *
  • Steal-ability: 
  • Challenge-ability: 
  • Total: 3/15 stars



Rob Lambert’s “Why Remaining Relevant Is So Important”

Why is it important to stay relevant? I mean, Rob is obviously going to answer that, but why make a keynote of it? Don’t we all know that if we fossilize, we are out in the next round of layoffs? What is the big deal? What I feel Rob is trying to say is that we need an attitude change. Relevance comes from wanting to thrive and be the best. If you’re the one who’s always on the cutting edge of technology, skills and thoughts, other people want to be like you! You become the beacon people look up to.

Rob Lambert is one of the most influential people in the testing scene. His blog was one of the first ones I started to read as a budding tester. I have met him once in person, and he’s a warm, easily approachable character. The problem is that if I had more time on my hands, I’d be keen on approaching him with my ideas on managing testing. Alas, I have not.

Staying relevant has far-reaching influence. It brings high long time value to the company and to myself. The ideas sound easily adaptable, and with genuine examples the value might become even greater. Short time value might come in the form of planning ahead for my skillset. With this session and Alexandra Casapu’s “Examine Your Testing Skills” session at Let’s Test 2015, I see no reason why one couldn’t stay relevant to their company or their community.


  • Person-to-person: **
  • Short time value: **
  • Long time value: ***
  • Steal-ability: ***
  • Challenge-ability: *
  • Total: 11/15 stars




Wednesday


Kristoffer Nordström’s “Taming the Terminal-based Applications and Testing Them” (based on familiarity to speaker)

Kristoffer was the guy who inspired me to take my Pythonian skills forward. His lessons in “Python for testers” and many more have inspired many. He’s a great sport and I wish I could attend his tutorial at the conference. If the session has anything close to what I expect of him, people will be having a hoot!

After reading the description, the tutorial seems quite useful to me. When I was doing testing at F-Secure I ran into terminal-based applications every once in a while; I might even have created some tooling with Python. This tutorial strikes that particular nail in my skill repertoire.

The values right now are mediocre, however, since my current job description doesn’t touch terminal-based stuff. I would have rated this very differently a year ago, I must admit. Also, being one of the few testing specialists at the office, I don’t think I would be the guy teaching this to developers. Still, I could benefit from a better understanding of testing frameworks and tools, gain better confidence in my skills, and have a good time at the session. Also, since I have some experience, I could challenge Kristoffer to make him hone his material to perfection. ;)


  • Person-to-person: ***
  • Short time value: **
  • Long time value: **
  • Steal-ability: *
  • Challenge-ability: **
  • Total: 10/15 stars



Kristjan Uba’s  “Let's Learn: Experience Learning through Gaming” (based on interest in the title)

Gaming. My favorite way of learning. If we’re going to play games and learn testing from them, I think this session is worth its weight in gold! …at least to a procrastinator and a child-minded person like me. The values are both immediate and long lasting, if done properly: one starts to seek out the games mentioned to play with friends and colleagues, and those games can spawn entirely new epiphanies in some other testing-related area.

I don’t know Kristjan Uba from before, but I bet he’s the kinda guy I would get along with really well. He sounds enthusiastic, innovative and funny, the kind of a person I like spending time with outside work. Perhaps, should I miss the opportunity to join, I can badger him to play some of the games on some other occasion.


  • Person-to-person: *
  • Short time value: ***
  • Long time value: **
  • Steal-ability: ***
  • Challenge-ability: *
  • Total: 10/15 stars



Robert Sabourin’s “Just-In-Time Software Testing” (random pick)

Ok. Rob has been on my radar from the beginning of my testing career, yet I know next to nothing about him. He’s one of those “one-star-should-be-three-stars” kinda fellas. I am aware of this “just-in-time” method from some blog post in the past.

The description made me hum in pleasure. That is something I not only want but need. As a professional tester I need to be able to make snap decisions about prioritization, changes of focus and moving people to test the right thing. With content like that in a tutorial, I see no reason to sit this one out! The values of this session are far-reaching and immediate! These are the things I must educate my colleagues about, my community should be aware of this, and my work would vastly benefit from the skills and knowledge this tutorial gives.

Since I know quite little about the subject to begin with, I see a lack of challenge-ability for me. Maybe a quick 1-on-1 with Rob might get me into the mood. Perhaps a blog post or two to limber up my mind...


  • Person-to-person: * (I wanted it to be ***)
  • Short time value: ***
  • Long time value: ***
  • Steal-ability: ***
  • Challenge-ability: *
  • Total: 11/15 stars



Thursday


Sami Söderblom’s “If James Bach and Mary Gorman had a baby, how would it test?” (based on familiarity to speaker)

Mr. Happy Monkey himself talking about… WHAT? Biology? Childbirth? Intercourse? I have to say, had he not been picked based on me knowing him, I would have chosen the session based on the title. Sami Söderblom is a good friend of mine. He’s a whiskey junkie, a cat photographer, a father, an explorer and a good friend. We have a history through my whole testing career, from my first big testing project to this day. He’s the “three-stars-should-be-the-milkyway” kinda fella.

I must say I’m on pins and needles about what the session is about. The description says: “FITCODES”. I’m sold. SFDPOT has been my guideline through my recent testing career. At Turku Agile Days 2012 I modeled the testing of speedos using SFDPOT. This week we did a testing exercise on testing whatever was found in one’s pocket, using SFDPOT. To advance the heuristic that has been a lifeline for me is something I really, REALLY, would like to see.

The values, for me, are huge. Being able to use the heuristic in my everyday work would be a great benefit. In addition, being able to help others test their software better, design better tests and manage testing in a better way makes the session even more valuable. Learning how Sami came up with the heuristic is a good steal-able. Refining it to suit my particular needs would be awesome. And, since I know quite a lot about the subject, challenging would be the cherry on the cake.


  • Person-to-person: ***
  • Short time value: ***
  • Long time value: ***
  • Steal-ability: ***
  • Challenge-ability: ***
  • Total: 15/15 stars


Beat that! No pressure, Sami. ;)

Stephen Janaway’s “Why I Lost My Job as a Test Manager and what I Learnt as a Result” (based on interest in the title)

Test coach, he says. I think I like him already. The transformation to test coach has been my goal during this year: teaching people how to test and making them better at what they do. If Stephen can help me achieve that, I’d be happy as a hippo.

I actually don’t know Stephen from the past, but with his attitude towards coaching, I bet we can hit it off. The value coming out of his “shift to coaching” session could be considerable, not just for me with my ambitions and goals, but for all traditional test managers. When I came to my current workplace, I said I wanted to be a coach, but I have yet to find my focus and methods for implementing it.

To more easily understand Stephen’s session, I feel I must read his blog post first. Maybe then I will be able to challenge him in a better way.


  • Person-to-person: *
  • Short time value: **
  • Long time value: ***
  • Steal-ability: **
  • Challenge-ability: *
  • Total: 9/15 stars



Erik Boelen’s “Acceptance Testing At Its Best” (random pick)

I’ve heard some experience reports on the subject of coaching end users to test the acceptance of a product. They have all been educational, but they seemed to lack some punch - the methods for implementing the procedure in one’s own context. I have never heard of the speaker before, but fresh blood in the Testing Arena is always a crowd pleaser. ;)

This seems like a good session for those battling with limited testing resources and acceptance testing stuff. I see a lot of material I would like clarification on, and some areas where challenging might be in order. The values are unpredictable here. Since I work closely with projects and project management nowadays, I see some intersections with my work. This might be of huge value to a test manager of any kind. I cannot say.


  • Person-to-person: *
  • Short time value: **
  • Long time value: *
  • Steal-ability: *
  • Challenge-ability: **
  • Total:  7/15 stars



Friday


Erik Brickarp’s “Going Exploratory” (based on familiarity to speaker)

Erik. My man! He’s been around for a while. I was introduced to him by a hint from James Bach a few years ago, for paying for the RST course out of his own pocket. Anyone who does that is a personal hero of mine. He’s a great thinker, a fine coachee – a fine and dandy bloke on all fronts. I’m really excited to see him in person, since he has evaded me the few times I’ve attended conferences in the past.

So, Erik’s gonna talk about how he switched from a rigid testing process to an exploratory one, failed, learned, tried again, repeated, succeeded. This is something that I want to do! I think my key takeaway is the sandboxing. In my recent project I have drastically changed the process with constant deliverables. This means more freedom in the execution but rigid documentation. I think Erik can give me a couple of good tips on how to make the process less painful and more appealing to… the client. (You were thinking I was gonna say “opposite sex”, weren’t you?)

I definitely see value in this. The short time value comes mainly from the insights that I can implement as soon as I hit the desk after the conference, and the long term effects can sprout an inspiration where I combine my learning with what Erik gives. Very valuable indeed.


  • Person-to-person: ***
  • Short time value: ***
  • Long time value: **
  • Steal-ability: ***
  • Challenge-ability: **
  • Total:  13/15 stars



Radomir Sebek’s "You don't need to be a musician to test music production software" (based on interest in the title)

Music is close to my heart. I compose various kinds of music, hence I have an interest in both the industry and the tools of the trade. Combine music and testing – I’m hooked. Although I have never heard of Mr. Sebek, I am keen on hearing what he has to say.

The whole concept is intriguing: having to quickly learn a vast domain to test it better. I think that is the core of software testing in general – fast learning, adapting, moving focus and reprioritizing based on learning. I want to examine his methods of approaching the subject. The coaching aspect (as in using testers with various backgrounds and influencing them) is also interesting. I am intrigued by what kinds of methods have been used in the influencing. Experience reports like this are usually difficult to challenge, but usually highly steal-able.


  • Person-to-person: *
  • Short time value: **
  • Long time value: **
  • Steal-ability: ***
  • Challenge-ability: **
  • Total:  10/15 stars



Ilari Henrik Aegerter & Ben Kelly’s “Ben and Ilari's Spectacular Testing Circus” (random pick, honestly!)

Like I mentioned in my earlier post, I have spent some time chatting with Ilari. He has coached me on different things. Ben Kelly has been on my radar, as for many testers, but I haven’t yet figured him out. It seems I am compelled to read his blog a few times before I go chat with him.

The session itself is a puzzle (pun intended) since it can contain many things. Interactive games, puzzles, cool problems, etc. are the salt of testing skills. I bet this is a session where I shall spend at least an hour or two, since I just like to challenge myself. I expect to be challenged and to be able to challenge other testers and maybe increase some skills while having fun.


  • Person-to-person: ***
  • Short time value: ***
  • Long time value: **
  • Steal-ability: ***
  • Challenge-ability: **
  • Total:  13/15 stars


Conclusion

There are many sessions that I didn’t tackle although they might be worth gold. If you feel (as a speaker) that I should tackle yours, drop me a line. I’m also eager to discuss my choices with the people I rated.

It seems that I am attending the Nordic Testing days. I shall be there with my beard flowing and throwing #high6’es to people. ;)


Varsti näeme!
- Peksi

Wednesday, 1 April 2015

Let’s Test 2015 – conference at a glance (from a distance)

Just to get up again and write something, I decided I’d do something I’ve done before. I really want to join Let’s Test 2015, but it seems I cannot. This blog post is similar to those I’ve done before, for it is a “Conference at a Glance” kinda thing. So this is

Let’s Test 2015 – conference at a glance (from a distance)


It seems the conference is a 3-day thing with 2 keynotes and 33 sessions of various kinds. Since I’m a lazy person, I shall do the following:

  • Tackle both key notes
  • Choose one session from each day based on my familiarity of the speaker
  • Choose one session from each day based on my interest in the title
  • Choose one session from each day that I pick randomly

That should bring me to a total of 11 sessions that I will try to grade. If you (as a conference speaker or as anyone else) feel I should do more, just ask me in the comments section. I try to keep it simple.

I will use a heuristic grading system (introduced here) to determine what would be the best session for me. I will grade the stuff with an Angry Birds™ grade – 0-3 stars per area – in five areas:


  • Person-to-person (How will the person and his/her work affect/inspire me or the people I know?), 
  • Session value – short time span (How much can I get out of the session tomorrow – next year?), 
  • Session value – long time span (How much can I implement in my work and teach to my colleagues and my community?), 
  • Steal-ability (How much of it am I willing to borrow and further develop to make it better and, more importantly, mine?), and 
  • Challenge-ability (My past knowledge of the topic and my willingness to challenge the session contents.)


Ben Simo’s “There was not a breach – There was a blog”

Based on the description on the webpage, I quickly came to think I should have done something like that. I should have started blogging about something that affects the general public, like the Finnish Railway renewal or something else of a similar nature. Not having thought of the idea myself, I shall keep my eyes open the next time something big comes up. So “Thanks, Ben, for giving me a great idea. Don’t mind me copying it in the future!”

As we’re talking about a keynote here, I don’t think during-session challenging will occur. As this is kind of an experience report, I feel I can absorb a huge amount of wisdom and ideas from it. If only reading through the description gave me so many ideas already, attending the keynote might blow my mind. Alas, it might not happen, so my head is safe for now. Challenge-ability might be a bit low, but you can think of the challenging as self-challenging: things I can challenge in my own thinking processes, my ways of working, how I present myself to others. On those parameters I see a lot of possibility to challenge.

On that note, the steal-ability just went through the roof. As for value, I believe that the short term value comes from bringing the conversation to the coffee tables of those who are not American and don’t have exposure to HealthCare.gov. The long term effects lie in the steal-ability I mentioned before. If I can introduce some public-service kind of attitude into my own behavior, it’ll really make a difference.

The P2P level is a bit tricky. Ben has been on my radar for years. I’ve been following him and reading his blog sporadically, but I never really came to realize how much of an influence he has been. There hasn’t been too much communication between him and me, save for a few tweets now and then, or some random Facebook comments on each other’s posts. I must say I should have been more in his face about stuff. I shall change that and get to know him better!

As for score (should I be at the conference I would attend the keynote whatever it might be):


  • Person-to-person: * (I wanted it to be ***)
  • Short time value: *
  • Long time value: ***
  • Steal-ability: ***
  • Challenge-ability: ** 
  • Total: 10/15 stars


Antti Karjalainen’s “Detecting the Heartbleed Vulnerability”

The description might not provoke my vastest interest, but I was part of the Heartbleed scene in a sense. While working at F-Secure I heard about it weekly during the “hip season”. I didn’t particularly have anything to do with fixing it or working on it, but I was there to spread knowledge about it.

The thing is, I am the kinda guy who shies away from über cool tools, fancy technology and protocols. I have some interest in security testing, fuzzing and analytics, but I’m more comfortable leaving those to the people who know them better. When it comes to knowing what fuzzing does, I’m comfortable with what Wikipedia says. That is said with no disrespect towards the fine men and women who dabble with such technologies. Hats off to all of you!

P2P is a challenge for me, since Antti is a Finn and I might have run into him at some other conference. I can’t put my finger on it, however. I have never talked with him about fuzzing or about tools (“the shying” and other excuses). The thing is, I don’t think I could talk with him about the fancy stuff. I might have a few cool comments like “that looks cool” or “whaddaya know”, but I cannot say for sure. I wish there were a common ground on which we could build a conversation and then work from that.

The short term value might come in the form of an interest in fuzzing tools, or in some yet unknown aspect of fuzzing I could use in my testing. I feel the topic is so tool/technology oriented that I might not get enough out of it. It also affects the long term value, for I don’t think I am capable of transferring that knowledge to my community or colleagues. That is not to say there won’t be any inspiration during the keynote – I might change my way of thinking about testing and tools.

And since my knowledge of the technology and the tools is so diminutive, I feel the steal-ability and challenge-ability score low as well. This isn’t to say that Antti doesn’t rock most of the listeners’ world! I believe that he is the kinda guy we need more of. Just like Ben, helping regular people with their daily lives is what counts!


  • Person-to-person: *
  • Short time value: *
  • Long time value: *
  • Steal-ability: **
  • Challenge-ability: *
  • Total: 6/15 stars


1st day sessions:

Ilari Henrik Aegerter’s ”A Tester's Walk in the Park” (based on familiarity to speaker)

Ok. Having read the description, I’m torn in half: a session without a clear takeaway, or a fountain of good ideas? I don’t know quite yet. I do appreciate that problem solving is the key to it, but are we talking about artificial, abstract problems like “where is testing going”, or concrete practical problems like “how can I convince my manager to pay for my trip to Sweden”? The uncertainty intrigues me in a way that makes me not dare miss this session. If we can coax people like Michael Bolton or Ben Simo to join in with their problems, it can be a hoot. It could be a hoot with just me, Ilari and three guys I don’t know. The thing is: “I don’t know”.

I have spent some time chatting with Ilari. He has coached me on different things, most recently today (April 1st, 2015) on how to approach a testing communication problem. I know this guy and I like him. I have to admit I haven’t paid too much attention to his whereabouts in the last 18 months, but I reckon that will all change.

The value of the session is a tricky one. I cannot say what value I will get, for I don’t know the contents that well. In the short term it might spark good conversations and camaraderie between the people attending. Experience reports in the form of problem solving might be a good take-away. Over a longer time span, the technique itself might be a good thing to learn. Introducing philosophical thinking as an approach to software testing is actually quite intriguing. Challenging the session might be tough, since it feels like an experience report of a sort.


  • Person-to-person: ***
  • Short time value: **
  • Long time value: **
  • Steal-ability: **
  • Challenge-ability: *
  • Total: 10/15 stars



John Stevenson’s ”A Journey Towards Self-Learning” (based on interest in the title)

WOW! This sure sounds cool! A session in the form of a game show! Count me in! I am interested in learning and how people learn. I am a continuous learner myself, and I think it is time to stoke that interest in me once again. If this session could make my learning more structured, I would get more out of the time I have at hand.

I know next to nothing about John Stevenson, but lately I have spotted some of his tweets. He could be someone to chat with more deeply about learning. He and James Bach might be good dinner guests if I wanted to talk about learning and teaching. Since I hope to be a teacher of a sort someday, I think they could give me valuable ideas.

The value I see from this session is vast, both long and short term. While stirring me in the short term, it could make me think about my life in the longer run: my education and my striving towards being a teacher (though Finnish teachers are paid really poorly). Since I know something about learning, I think I might even be able to challenge John on his ideas. Though I don’t think I can introduce anything particularly new and fancy to his curriculum, I might be able to increase the value of the session by making him express things in different ways so they are more easily approachable.


  • Person-to-person: *
  • Short time value: ***
  • Long time value: ***
  • Steal-ability: **
  • Challenge-ability: **
  • Total: 11/15 stars



Louise Perold’s "Non-violent Communication" (random pick)

Well well well. NVC. I must say I had need of those skills today when I argued with a developer about why test cases written for manual execution are a poor excuse for test automation. I did seek some coaching from Ilari (as mentioned before), but I wasn’t able to convey my point in a way that would have left both him (the dev) and me in a mutually agreeable state of mind. Having said that, Louise’s session might be the one for me.

I don’t know her at all, though we might have traded tweets in the past. I’m intrigued to meet her, though. Should I not be able to come to the conference, I really want to talk to her about Mortimer J. Adler’s book “How to Speak, How to Listen”, which I’m reading sporadically now and then. Also, what I learned from Daniel Pink’s book “To Sell is Human” might contribute to the non-violent communication.

The value from this session might be really high in both the short term and the long term. I might not be able to transfer it to my community the way Louise could, but I bet the example and behavior might influence my co-workers and other people as well. My knowledge of listening and conversation skills might enhance the challenge- and steal-abilities, but I would have to let time indicate what’s beneficial and what’s not.

The non-violent starring would be as follows:

  • Person-to-person: *
  • Short time value: ***
  • Long time value: **
  • Steal-ability: **
  • Challenge-ability: **
  • Total: 10/15 stars



2nd day sessions:

Laurent Bossavit & Michael Bolton’s ”Defense Against The Dark Arts” (based on familiarity with the speakers)

Critical thinking, they say. Very well. I would expect something like this from Bolton and Bossavit. Given Laurent’s book “The Leprechauns of Software Engineering”, it is about time someone taught us Earth dwellers them skills to tackle possibly harmful (and perhaps even rigged) information.

I know both of the speakers and I love spending time with them. Both are incredibly intelligent fellas with excellent ideas and views of the world. It seems almost a loss that I might have to miss their workshop. But I shan’t weep! I find their session really intriguing, so I might badger them for a coaching session should I need to miss the event.

As for the value of the session, I see high value altogether. Critical thinking skills and their practical application are worth gold in the testing industry. Whatever I can bring home from that session would be valuable material for teaching my colleagues ways to think critically.

Since I have some prior knowledge of critical thinking, I feel the challenge-ability is high. Also, since the whole session is about challenging, I feel compelled to challenge much of their material. Steal-ability is high in the sense that I want to educate my colleagues on that particular subject.

My god, I’m looking at quite a session:

  • Person-to-person: ***
  • Short time value: ***
  • Long time value: ** (it’s really 3 stars, but I cannot give the highest score yet)
  • Steal-ability: ***
  • Challenge-ability: ***
  • Total: 14/15 stars



Huib Schoots’ “How to be an Explorer of Software” (based on interest in the title)

Mr. Schoots talks about creativity? The guy who held a conference session in which he mostly played music talks about creativity? I think this is the stuff everyone needs to see. The title pushed me towards a quite different assumption about the contents of the session: I thought it would be about hands-on exploring of something, but since the description kinda gave the impression that it’s about how we document and observe, it actually fulfills the title quite well. Interested I surely am.

Huib is a great fella, and he could be one of the top 3 reasons for me to attend. He is the bloke that makes people smile on their worst day. All that and a world-class tester! Say no more! I have seen Sami “the Monkey” Söderblom talking about exploring and read (the beginning of) Elisabeth Hendrickson’s book on exploring, so it would be nice to see another take on the subject. The value is hard to define, since I feel it is more personal than community-wide spreadable thoughts. To be able to test and document software more precisely is a great skill to have.

To be challenged is something I think Huib would enjoy more than anything, so I must get in touch with him and get my hands on the material if I can’t attend. I might also be able to steal something that I later present as my own, but I ain’t gonna let him know. It’s hard to put a finger on exactly what I’d like to steal, but I bet there are loads of it.

The exploration of stars…

  • Person-to-person: ***
  • Short time value: ***
  • Long time value: **
  • Steal-ability: **
  • Challenge-ability: ***
  • Total: 13/15 stars



Erik Davis’ "Effective Practice Manager" (random pick)

A practice manager. That’s a new term for me. Based on the description I didn’t get what a practice manager is. Is it like a skill coach or a competence manager? Either way, I feel this is an experience report with some educational features. Oh, it’s an experience report with discussion. Maybe we have some cool practice manager problems we can help him solve. Challenging might come into play since there’s a discussion in the session, but I know very little about the subject beforehand.

I was intrigued by the “increase their own impact at work” bullet point. That is something I want. I want autonomy, and I want to see my own passions and skills realized in my daily work. In that sense this session might be a good catch. It would definitely have long term effects, but I couldn’t get the short term value out of the description. Maybe it lies in the “whatever else comes out of the discussion”.

I don’t know Erik at all. He might be one of those blokes who have avoided my radar so far. Based on his personal description, I think he would be a guy to get to know. Maybe I’ll ask him for a beer at the evening activities. His interest in educating testers and building their skill set is something I share.


  • Person-to-person: **
  • Short time value: *
  • Long time value: **
  • Steal-ability: **
  • Challenge-ability: *
  • Total: 8/15 stars



3rd day sessions:

Jari Laakso’s ”Security Testing” (based on familiarity with the speaker)

Before even checking the description of Jari’s session, I must say I am intrigued. He was my first touch with security testing on my Finnish blog years back. Unfortunately, I haven’t met the guy. I know he’s an intelligent and capable tester, but I haven’t been in contact with him for some time. Maybe I have to rekindle that connection to get the latest.

Ok. The description. Very well! Hands-on security testing! Where do I sign? This is one of the essential things people are asked about when doing exploratory testing. SQL injections are in every textbook on testing, but this guy actually shows how to do them. Nice! I can see the value skyrocket! The thing is, to be able to share this knowledge I should know even more about the subject. I might lack interest in security testing in general, so I might not be able to teach these skills to others (I’d ask Jari to do it for me).

Challenging him might be difficult because we are talking about technical stuff once again. Things like cross-site scripting are founded on protocols, REST calls and whatnot, and I don’t feel I can challenge him on those. I might wanna try, tho. ;)


  • Person-to-person: ***
  • Short time value: ***
  • Long time value: *
  • Steal-ability: *
  • Challenge-ability: *
  • Total: 9/15 stars



Alexandra Casapu’s “Examine Your Testing Skills” (based on interest in the title)

Hands-on testing and discovering our testing skills. I like it. In the EuroSTAR test lab I had the chance to dabble with Mr. Lyndsay’s machines, and I really want to have another go! Besides, I’m interested in how my skills map out. I don’t know Alexandra, but I’ve heard of her. She is one of those people I want on my “meet these people in the Testing Scene” list.

The idea of having various non-technical skills to help you with testing is interesting. I would like to know other people’s skills and whether I can borrow their passions and learn what they know. I’d be able to steal ideas from the participants and from Alexandra. Very nice! Since I have been doing some reading on skills, I might be able to challenge her views and my previous ones quite easily. And practicing skills hands-on is always something to take home.


  • Person-to-person: **
  • Short time value: **
  • Long time value: **
  • Steal-ability: **
  • Challenge-ability: **
  • Total: 10/15 stars



Scott Barber’s ”Experiencing Product Owners” (random pick)

Random indeed… Scott, if you’re reading this, you might want to check the description on the website. ;)

A big star on the effort, though.

Conclusion

Having overcome the pain of starting to write again, I feel joyous about how I managed to rate 10 sessions (Scott’s doesn’t count) on five heuristic parameters. The toughest thing is still ahead of me, though: I need to convince someone to pay for my trip to Sweden.

Vi ses!
Peksi