Friday, 25 October 2019

HUSTEF 2019 – Conference at a Glance

Hi all! It’s time for the next post regarding this year’s conferences. I’ve been invited (with my good friend and colleague Jani Grönman) to speak at three conferences this year: TestCon Europe in Lithuania, HUSTEF in Hungary and Quest for Quality in Ireland. My plan was to share my thoughts about all these conferences, but unfortunately, I had to skip the TestCon blog post due to scheduling issues. The review is already underway and it’ll be added here as soon as possible.
I’m rating the talks based on my personal opinion, which means that they might be totally different from yours. AND I haven’t seen the talks yet, so the results might be totally different once I get to see some of them. The process is the same as in my previous “At a glance” posts. Look here for instructions.

This is Conference at a Glance – HUSTEF 2019!
What I have heard through the grapevine is that the conference is highly valued in Hungary and the premises are gorgeous. I can’t wait to see the venue and the city of Budapest. From the Hustef.hu web page:
“HUSTEF is one of the premier conferences in Europe for practitioners in all areas related to software testing. The conference was founded in 2011 by the members of the ”Hungarian Testing Board” with the aim to have an annual platform where the best from the software and IT R&D sector can exchange information about new developments in the industry. HUSTEF has grown to be one of the biggest software testing events in Europe, with more than 670 attendees from all over the world attending the 2018 conference. HTB is the local representative affiliate of ISTQB accrediting thousands of software testing engineers who share a belief in the power of innovation and a desire to be leaders in the field of testing.”
The talks and workshops vary from the technical to the practical, from AI to thinking like a tester. This is the kind of place where I want to be! Let’s see what we have ahead of us. I want to say that I am really biased towards intelligent testing, which makes it difficult to see the good in some of the talks.

Keynotes (There seems to be quite a lot of those)

Tariq KING - AI for Software Testing: The Ultimate Journey

In the last blog post I wrote, I mentioned Mr. King’s talk. There will be quite a lot of talks about AI, and it is a big thing in software testing now and in the future. I believe this is going to be an experience report that describes Ultimate’s ways of working. Since I will dive deep into AI and ML at the Q4Q conference the week after HUSTEF, I see this as a nice introduction.

  • Short time value: ** (The opportunity to learn and build some basic knowledge of AIST is a good thing. Taking these lessons to my daily work at the office doesn’t seem likely right now, at least without a proper introduction to the subject.)
  • Long-time value: - (I think the long-time value comes from somewhere else. My current company isn’t going to adopt these AIST principles any time soon.)
  • Steal-ability: - (Given my skills and the nature of my current workplace, I see very little to incorporate into my work or talk about in the foreseeable future.)
  • Challenge-ability: ** (The role of “intelligent testing” isn’t to be lightly brushed aside. AIST is a tool and it is useful in its own domain.)


Philip LEW - From Tester to Thought-Leader: The 8th Habit of Highly Effective Agile Testing

An interesting topic in general. Skills development is something that Jani and I will give a talk about. Where this talk describes the skills themselves, we talk about methods to deliver those skills to the team. Also, since Mr. Lew knows quite a lot about coaching, I see a lot of similarities in our way of thinking (and maybe in the contents of our talks).

  • Short time value: *** (Skills and ways of thinking are what I’m interested in, and I think he can deliver a lot of valuable information.)
  • Long-time value: *** (The habits of being effective as a learner are something that everyone should have. Learning is the most important feature of a tester!)
  • Steal-ability: *** (I’m SO going to steal ideas from this topic! Luckily it is on the first day, then I can refer to it in our talk.)
  • Challenge-ability: ** (The problem (the only problem) with this talk is that I might be so biased towards thinking this is an awesome talk that I accept any idea presented.)


Dionny SANTIAGO - Surviving the AI Testing Apocalypse

Although this is also something that will be a topic at Q4Q, I think this is an interesting topic for me personally. I’m not worried about the coming of AI. I’m worried about the hype and attitudes it brings with it. AI is a tool that serves a purpose. When agile testing came with a strong focus on automating tests, testers were worried they would run out of jobs. Adapting is the key thing here. I don’t see every tester having to be able to code an AI, but they should learn how to use it as a tool. Hopefully, this talk will shed some light on that area.

  • Short time value: ** (Thought-provoking things once again. Being able to talk about the pros and cons of AI is valuable.)
  • Long-time value: ** (Being able to “keep my job when AI takes over” is a good skill. Not being a traditional engineer with a technical skillset, I need the skills to help me get through the Judgement Day without Arnold.) 
  • Steal-ability: * (I might include some of these things in my talks about testing skills and skills development.)
  • Challenge-ability: ** (It might be that the talk doesn’t help me in the way I think. The “surviving” might be focusing solely on developing technical skills to some level. I can do that, obviously, but I am seeking different help and ideas.)


Jennifer BONINE & Janna LOEFFLER - The Life of a Tester: From Once Upon a Time to Happily Ever After

Am I hearing more AI and ML talks? This seems a bit similar to Dionny Santiago’s talk about keeping your QA job after AI is released to the world. The animation makes this an interesting keynote. I’ve known Janna Loeffler for a couple of years now and I know she can deliver a killer talk with visual aids. The animation might prove really valuable in learning about the subject. Jani and I keep saying that quality is not made by the testers but by the whole team. I think this presentation taps into that. Quality IS everyone’s responsibility!

  • Short time value: ** (The survival theme keeps popping up in the conference. I hope this supports and gives new views on the subject.)
  • Long-time value: ** (“Writing my own story” is important in these times of change. This talk should change my views in the long-term.)
  • Steal-ability: ** (Although I cannot steal the animation, I’m sure I can make some of the ideas my own. Life as a tester needs constant adaptation, and having a long-term plan is always a good thing.)
  • Challenge-ability: ** (When talking about AI again, I see possibilities of colliding with my ideology of having intelligent testers and using AI as a tool. Let’s see what happens…)


Talks (Only 6 this time)

Prashant HEGDE - Revolutionize Your Testing Strategy with MindMap Driven Testing

I started using mind-maps over ten years ago and I even gave a talk about them at Nordic Testing Days in Tallinn in 2012. The use of mind-maps has changed my thinking and the way I report and take notes in my testing. In agile testing, reporting should be agile as well, and I’ve found it useful to do that on a mind map. Apart from reporting testing, I use mind-maps to model the context and the product. It helps me understand the system as a whole instead of as separate functions.

  • Short time value: *** (Being able to hone my skills in using mind-maps is really useful for me. The insights I’m expecting might inspire me to write something down on how I use them nowadays.)
  • Long-time value: ** (Having a background in using mind-maps, I might not get that much new stuff, but perhaps it helps me look at things from a different perspective. At least it should inspire me to use them more efficiently in the future.)
  • Steal-ability: *** (If I see a technique or hear an idea in this talk, I will most definitely use it to my advantage.)
  • Challenge-ability: *** (Knowing quite a lot about the subject, I’ll be able to challenge things that don’t fit my view of the world. Though I don’t totally disagree, I’ll try to help Prashant make more out of his use of mind-maps by offering my own view.)


Jeremias RÖßLER - Test Automation without Assertions

Ok… A silver-bullet test automation tool, eh? I might be overly critical, and I’m sure my biases are at work here. I have an allergy to snake-oil test automation. The topic sounds like a tool advertisement. Hopefully, it is more of a technique or an approach to testing instead of one tool offered to solve all problems. I’m sorry, Doctor, but we might not have common ground on this subject.

  • Short time value: ** (My company is always looking to improve test automation. I might be able to offer a solution for them to assess. Perhaps we can find a use for it.)
  • Long-time value: * (Knowing about things always helps one develop as a tester. Being able to talk about Recheck most definitely helps me when discussing keyword-based test automation.)
  • Steal-ability: - (I see very little to take as my own or develop further.)
  • Challenge-ability: *** (I expect to see things that contradict my views. It might be more useful to have a proper talk outside the conference room so that I won’t spoil the event for those who find it useful.)


Aleksandra KORNECKA - Cognitive approach to software quality

I’m a soft skills person. I like thinking, emotions, communication, and other skills that are not necessarily engineering skills. This talk might help me develop my skills in those areas and be able to talk about them.

  • Short time value: *** (I should be able to use these skills in my work quite easily and very soon. On a weekly basis I need to coach people using these skills and teach them to others.)
  • Long-time value: *** (Knowing these skills most definitely helps me be a better coach.)
  • Steal-ability: *** (I see large potential in developing this topic further with my previous knowledge.)
  • Challenge-ability: * (Not knowing too much of the research side of things might make it quite difficult to challenge ideas.)


Vojtěch BARTA - Customer Testing and Acceptance

At my company, in my role as a coach, we quite often train acceptance testers and customers to do some testing. This can take the form of accepting the system and paying the bill, some form of validation to assign blame for defects or contract violations, or something else. Acceptance testing always requires a mission which we try to achieve. The talk seems to cover most of the issues I am facing in agile testing at our company. How do we make people understand the importance of UAT?

  • Short time value: *** (We have a lot of coaching cases coming up and I feel that this could drastically improve the value of our work.)
  • Long-time value: *** (I’ve been on the customer side doing the acceptance along with other testers, so these skills also help me in my future assignments to test from the end-user/customer point of view. Also, knowing more about acceptance testing helps me educate our sales teams to make better proposals and keep them involved in development throughout the lifecycle.)
  • Steal-ability: *** (I’m SO going to steal as much as I can from this talk. The fact is that I can incorporate this knowledge into my previous knowhow and be a better tester and a coach.)
  • Challenge-ability: ** (Since I know quite a bit about this topic, I feel that I have loads to say about it. I’m keen to see whether Vojtěch can include all the important things in his talk.)


Janna LOEFFLER - On the Shoulders of Testing Giants

Just like Janna, my philosophies are molded by these Giants. Michael Bolton, James Bach, Cem Kaner, Jerry Weinberg, etc. are the bedrock of my testing self. These views are then affected by people like Elisabeth Hendrickson, Huib Schoots, etc. I have my feet firmly on the shoulders of these magnificent individuals.

  • Short time value: *** (As a story I’m sure I get loads of ideas and remember to be thankful to those I work with. The talk might not have a “thing” to take home, but it most certainly will make me think of all the people I look up to.) 
  • Long-time value: * (I’m not sure if there is a long-time value here. I expect a story and something to make me think. I might find it easier to acknowledge the influences, but at this point, I feel this is more of a short-term thing.)
  • Steal-ability: ** (“Stealing” this might be hard but talking about it can be a bit easier.) 
  • Challenge-ability: * (I am on the same lines as she is. You need to appreciate the people that influence you. Maybe even say it to them occasionally.)


Shekhar RAMPHAL - Five Levels of API Automation

API testing is an important thing in testing. I’d say a basic level of test automation is API automation. Asserting calls is fairly simple and can be done in almost any language and testing tool there is. It’s nice to hear a talk about making test automation easier. I think it’s good for “dummies” like me.

  • Short time value: *** (A more detailed understanding of API testing is good to have. Even if I can’t do automation per se, I can use the knowledge to coach developers to do better tests.)
  • Long-time value: ** (Skills like this will be beneficial in every kind of testing in this era. AI and similar tools require knowledge of API testing.)
  • Steal-ability: * (To further develop this is a bit too difficult for me, but I believe I can incorporate this into my daily work.)
  • Challenge-ability: ** (I have some prior knowledge of the area and I have healthy critical thinking towards automation. The API testing can (and should) be used wisely.)


There you go!

Well, that’s my view on a few of the talks and keynotes at the HUSTEF 2019 conference. Like I said before, I’m not saying I’ll attend all these talks, but I feel they might be good candidates for my itinerary. I encourage all of you to comment with your views on these talks and others as well. If I happened to comment on a talk you’re presenting, please leave me a comment. It would be lovely to hear from all of you!


Thursday, 8 August 2019

Quest for Quality 2019 at a Glance

Hi all! It’s been a long time since I’ve posted anything on my blog. I’ve been busy with various things, but I have now decided that it is a good time to share some thoughts about the forthcoming fall. I’ve been invited (with my good friend and colleague Jani Grönman) to speak at three conferences this year: TestCon Europe in Lithuania, HUSTEF in Hungary and Quest for Quality in Ireland. My plan now is to share my thoughts about all these conferences.

I am not an expert in some of the subject matter of the talks I’m about to rate (on my Angry Bird scale), and I am sure I misunderstand a whole lot based on the descriptions on the website. As the title says, this is a glance, not a comprehensive analysis. I have described my process in more detail in this post, but here’s the gist of it:

I will grade topics using 0-3 stars per area, in FOUR areas:
Session value – short time span (How much can I get out of the session tomorrow – next year?),
Session value – long time span (How much can I implement in my work and teach to my colleagues and my community?),
Steal-ability (How much of it am I willing to borrow and further develop to make it better and, more importantly, make it my own?), and
Challenge-ability (My past knowledge on the topic and my willingness to challenge the session contents.)
I’ll choose the sessions as follows:
Choose two sessions from each day based on my interest in the title
Choose one session from each day that I pick randomly

This is Conference at a glance - Q4Q 2019!

The conference is mostly about AI and ML, but my talk “Test Coaching” has been chosen to add a more general perspective to the technical set of talks. I will write about test coaching either here or on some other platform, and I’ll try to remember to update this post with a link to that post.

The concept of AI replacing testers is absurd. As a tool for doing better testing, though, it is something a tester will most definitely benefit from, along with other new ideas and tools in the toolbox. I promise not to be judgmental about the topics, but I am most certainly biased towards a non-technical approach to testing and I might be over-critical with some topics. Bear with me, though. This is subjective after all. 😉

One thing to note: I haven’t met most of the presenters, so my rating is based on the bio and my quick research of that person.

Keynotes (There seems to be quite a lot of those)

Tariq King - AI-Driven Testing: A New Era of Test Automation

As a context-driven and holistic tester, I am keen on learning about new things that boost testing efficiency and the reliability of the information testing provides. The way Tariq talks about AI as a tool is fascinating. I’m fascinated to see a live demo of how the bots work. I mean, they might work on their own, but the practical application to software testing is what I want to hear about. I hope I get answers on how I can use AI in my daily work IN PRACTICE.
Short time value: * (The implementation of this is not yet in my immediate agenda. Let’s see…)
Long-time value: ** (The topic is very interesting and if I cannot implement it immediately, I can at least share my views on it.)
Steal-ability: - (The description offers more questions than answers, so the steal-ability is a mystery for now. My technical abilities might hinder me from taking the topic to the next level, but let’s see.)
Challenge-ability: ** (I am sure I can find things that rub me the wrong way. As a tester rooting for using one’s brain to test, outsourcing thinking to a machine sounds dangerous and reckless.)

Pallavi Kumar - When AI Meets Software Testing

The keynote description leaves a lot in the dark, but I guess it follows the same track Tariq sets in his keynote. The future of testing always intrigues me. It also frightens me to some degree. Based on Pallavi Kumar’s background, she uses AI for various other purposes, including mental health (which is close to my heart).
Short time value: * (I hope to catch some good ideas but implementing them in my daily work seems a bit distant. I am, however, very intrigued about hearing her talk and maybe chatting with her afterward on practical applications of AI in mental health.)
Long-time value: ** (The future of testing is an interesting topic, but I fear this is praise for AI. Like with Tariq’s talk, I should at least be able to talk about it. Also, the mental health angle makes me want to hear this talk.)
Steal-ability: - (Same as Tariq, basically.)
Challenge-ability: - (I don’t see “un-challenge-ability” being a bad thing. This should be purely about new insight on applications of AI.)

Michael Clarke - “Robots Took My Job!”: Where do Testers Fit In A Future Fueled by AI & ML

Michael’s view on AI and ML is quite close to mine. Where do AI and ML leave testers? I want to hear more about this. Since Michael doesn’t seem like your run-of-the-mill technical engineer, I feel he’s a kindred spirit. His being the sole tester in a team resembles my work as a tester for the last few years. I’m most definitely looking forward to this talk.
Short time value: ** (Talking about the importance of the human tester is important, and I totally agree with that. I will most definitely get a lot of ideas from this talk in the short and long run.)
Long-time value: ** (Like I mentioned, this will be a great talk for me.)
Steal-ability: ** (The topic is close to my heart. As a holistic tester I will further develop this idea to benefit my company and the testers in my community.)
Challenge-ability: * (I feel there could (and should) be things I don’t agree with. I hope the talk shows that humans are the ones doing the testing. AI is just a tool!)

Yasar Sulaiman - Will Artificial Intelligence Take Over QA Jobs?

There are quite a lot of talks about testers losing their jobs. In my talk, I present a way to keep testers up to speed with their skills, hence I don’t see us losing our jobs. Yasar talks about evolving testers and I think it supports my talk quite nicely. There is always a threat of people losing their jobs, but adapting to change is the key to having a long and stable career.
Short time value: ** (Being able to talk about the evolution of the tester to fit today’s testing industry is a key element of my job.)
Long-time value: ** (In the long run I might be able to advise on the skill set needed to perform specific tasks in various projects. There are quite a lot of domains that can benefit from AI as a tool for a tester.)
Steal-ability: * (Same as Michael Clarke’s talk, the topic is close to my heart. Maybe I can develop this further.)
Challenge-ability: * (Same as Michael Clarke’s talk essentially. Let’s see where the talk leads us.)

Jason Jerina - The Future of Quality Assurance: A Path for A.I & Human Intelligence

At first, I thought “Yet another keynote about the future of testing,” but it seems this is more about tool hype. Tools and automation are a tricky subject for me. I fear that the IT community will start to think that tools are snake oil that solves all testing problems. When we add AI to that, the thinking of a human might get forgotten and its importance lost in the cogs of technology.
Short time value: - (I feel I don’t get too much in the short run for I see quite a lot of technology hype in this keynote. There might not be that much to implement in my current job.)
Long-time value: * (I will be able to talk about AI and ML on a larger scale. The value of that is currently lost to me.)
Steal-ability: - (I have no intention of stealing this subject unless it turns out to be more than a sales pitch for automation.)
Challenge-ability: *** (I’m sure I won’t agree with most of the stuff. I will most definitely try to challenge the ideas presented.)

Rhealyn Mughi - Robots Are Here; What Can We Do To Keep Up?

Yet another keynote about the future of testing? The topic, however, guides the audience to keep up with the technology and robots. Rhealyn also talks about the impact, which is nice to hear. We’ll have to see if this keynote promotes intelligence over tool hype.
Short time value: - (I don’t see too much short time value since this is more about the future of testing. The guidelines might come in handy in the long run.)
Long-time value: ** (The guidelines and talk about rapidly evolving industry are useful information.)
Steal-ability: - (I don’t yet see anything steal-able, but we’ll have to see. The future is always interesting, but I don’t see how to make this topic my own.)
Challenge-ability: * (Is this more about robots in general or how they affect testing? I feel there might not be that much talk about testing. If there are no practical applications to implement the guidelines for testing, I will challenge the usefulness of those guidelines.)

Talks (Only 6 this time)

Zachary Attas – Services: How to Test Them When You Have The Keys to The Castle

Ah! Test strategy – my favorite. I feel that people underestimate the value of a good strategy. The focus is usually on writing a document and burying it with the rest of the plans in some dark corner of the project library. End-to-end testing and test automation both sound interesting. I work at Solita and we use a lot of integration and E2E automation. It’ll be nice to see what possibilities this talk provides to make our tests better.
Short time value: * (I might not be the one writing the code, but I feel that this talk helps me aid others to create better tests.)
Long-time value: ** (In the long run, I might be able to help teams generate a good strategy for integration testing. This is vital to my role as a coach.)
Steal-ability: ** (I’ll try my best to understand the details of the talk. The strategy will be the main part I am interested in. How to generate a strategy is my favorite part of learning to test.)
Challenge-ability: * (Not knowing too much about the technical aspects of integration tests, I might not be the best candidate to challenge most of the content. I do feel that the strategy generation part will be my focus and I will see if there are things that sound odd to me.)

Shama Ugale – Testing Conversational AI

The idea of testing conversational devices is intriguing. I can see the difficulty in it, and the techniques to do so need to be carefully thought out. I haven’t been involved in testing such systems, but I see a progression towards that area in many fields, such as infrastructure maintenance, elderly care, phone advising, etc. Besides, I use these devices at home, so it is intriguing to know how they are tested.
Short time value: * (While the topic is very interesting, I can’t see the near-future application of this kind of testing.)
Long-time value: ** (In the long run, I might be able to consult teams and customers on testing these devices or services. I’m sure these skills are transferable to other approaches to testing.)
Steal-ability: * (Having rather limited technical skills currently, I don’t see too many opportunities to adapt and build upon this subject.)
Challenge-ability: - (The topic is new to me thus I don’t see much challenge-ability in this topic. I believe however that topic is interesting enough to overcome that.)

Maik Nogelsen – Testing VR; The Trinity Of Testing

Maik sounds like an esteemed figure in software testing. I expect a lot from his talk and from chatting with him. His work in the German testing community sounds awesome. Anyhow, his topic on VR sounded quite interesting. I read an article about Fallout 4 and the difficulties of testing the game. I became interested in the possibilities and difficulties of VR testing but haven’t had a chance to hear more about it. I think this is a great opportunity to learn more.
Short time value: *** (I may not be able to implement the techniques or the methods to my daily work, but I am keen on learning about it.)
Long-time value: * (Like I said before, I might not be able to implement this topic to my daily work, so it might not generate that much value in the long run.)
Steal-ability: * (While I might not be able to “make it my own” I’ll learn how things are done in the real world.)
Challenge-ability: - (The topic is new to me thus I don’t see much challenge-ability in this topic. I believe however that topic is interesting enough to overcome that.)

Sunder Shyam - AI Techniques To Improve Software Testing

The topic sounds quite ambitious and quite hard to grasp. I’m not fully sure if the talk is about solving the oracle problem or introducing a new idea called TDP (which I’ve never heard of before). They don’t seem to be the same thing. I might be wrong, but the description sounds a bit unclear. I’ll assume the topic is about solving the oracle problem with AI testing AI.
Short time value: ** (Solving the oracle problem is applicable to various other areas other than AI. Testing in general benefits from knowing more about the oracles. This could be immediately transferred to my current work.)
Long-time value: * (This helps me understand AI testing (and AI doing the testing) and the problems that AI can solve in development.)
Steal-ability: * (The vagueness of the description makes it hard to determine the aspects I could further advance. Test automation applications might be the most direct useful things for me.)
Challenge-ability: * (I believe the challenging will be easy when moving from AI world to human intelligence world and applying the skills to oracles in human testing.)

Milan Gabor - Security Testing For n00b Testers?

I have been keeping security testing at arm’s length due to it being a highly technical craft. Lately, I’ve come to realize that my job as a coach is somewhat like that of those doing security assessments and analysis. Where I show the problems in testing practices and skills, sec-testing shows problems in programming practices and skills (perhaps in platforms, tools, etc.). I have always had some curiosity, but the first step is a hard one to take. I hope this talk kickstarts my thirst for sec-testing.
Short time value: *** (I’m a n00b! The first steps are very valuable to me getting a grasp of what security testing is and can be.)
Long-time value: ** (In the long run I can be more certain when talking to my coaching clients about the importance of security testing.)
Steal-ability: * (In this case “making it my own” isn’t about stealing this but about getting a push in the right direction.)
Challenge-ability: - (With quite a narrow knowledge on the subject but great enthusiasm, I see myself being happy to become a non-n00b.)


Jörgen Damberg - The Luxurious Development Future – With The Obstacles In The Rear View Mirror

My work is situated in a highly agile world with loads of CI/CD pipelines and test automation, so I’m intrigued to hear more about AI and CD working together. Knowing the obstacles and how to move past them helps me coach teams in a way that immediately brings value to their daily work.
Short time value: ** (This subject and the skills are quite transferable to my work immediately.)
Long-time value: ** (In the long run I can coach teams to enhance their CI/CD pipelines to make their lives easier.)
Steal-ability: - (Building my own version of this talk is quite farfetched.)
Challenge-ability: * (While I know a fair bit about CI/CD and the problems we have, I look forward to challenging Jörgen and helping solve the actual problems.)

There you go!

Well, that’s my view on a few of the talks and keynotes at the Q4Q conference. I’m not saying I’ll attend all these talks, but I feel they might be good candidates for my itinerary. I encourage all of you to comment with your views on these talks and others as well. If I happened to comment on a talk you’re presenting, please leave me a comment on where I might be mistaken. It would be lovely to hear from all of you!


BR,
Pekka “Testing pastor” Marjamäki

Wednesday, 25 January 2017

Test Strategy in 10 minutes

This post covers both the workshop I did on test strategy at Solita as a part of Testing Tuesday and the extempore workshop I arranged at the Nordic Testing Days in 2015. While both had the same agenda, they were vastly different. (This post is only published now because it had been turned back into a draft, which I noticed only recently.)

The one at Solita

The workshop was not supposed to be a slide show; none of the workshops arranged at Testing Tuesday are. Once again I drew stuff on the whiteboard and let rip. I started by describing test strategy quickly. Then we discussed project elements, product elements and quality aspects. These were loosely based on the Rapid Software Testing models. The audience gave ideas on what kinds of elements we need to take into account when choosing a test strategy. There is no comprehensive list of what we came up with, but we had a whole lot of ideas ranging from schedules to "pissed-offness" and expected user behavior.



After we covered most of the areas, I split the audience into three groups. They all were supposed to create a testing strategy in ten minutes. The product I chose was a web shop that sells knick-knacks and mails them to people. After ten minutes I had three completely different strategies.

The first one was focused solely on money. They mapped out every aspect that could hinder the flow of revenue and prioritized them according to importance to the "owner" of the store.

The second one was a "software engineer approach". That group mapped out all the aspects that were important to different kinds of engineers. There was an aspect of security testing, transaction testing, happy-day testing of known processes and some complementary approaches. There was no prioritization, but there was a definite focus on tools and known practices.

The third one focused on business processes. They mapped out as many user stories as they could and dissected them into steps. They then tried to figure out what kind of prioritization would make sense for testing the processes.

In 10 minutes we had three outstanding drafts of test strategies. All of them had different approaches, and each strategy supported the focus the group had chosen. They were not comprehensive, nor should they have been, but they were "good enough" to start testing the most important thing as soon as possible. We discussed the fact that, when combined, these three could actually complement each other. In the span of a coffee break, we can create a draft of a strategy and even present that draft to a potential customer to explain how we would test their product.

All in all, the workshop was a success, since it spawned a new way to think about test strategy. I never thought I could choose a simple focus like "money" as the guideline of my testing. It does reveal the importance of talking to stakeholders to understand their values and needs. Without any templates or predetermined practice, every team could conjure an awesome strategy that even seemed executable after they explained their thinking.


The one at Tallinn

Feeling confident after the outcomes of the workshop, I agreed with Helena Jeret-Mäe that I would do an extempore workshop during a coffee break at the Nordic Testing Days (see here). The workshop was supposed to be on the last day. I dragged a white board to the hallway near the sofas and gathered people to do the exercise. I was able to gather a couple of great minds of software testing on the sofa, including Erik Brickarp and Santosh Tuppad. The workshop started with me explaining the idea, and then I gave them the same task: "Generate a test strategy in 10 minutes."



The problem was that the pro testers weren't too happy with the vagueness of the context. They wanted to know more about the product, more about the stakeholders, more about the project. In the end, what we generated wasn't actually a strategy but a context map for the product (plus some test strategy elements).

The best outcome for me was (once again) observing experts at work and seeing how the dynamics within a group actually play out. The lack of structure was also something they pointed out. Erik mentioned that we could have described four key aspects of the product and then figured out how to test those. Whereas the three groups I had at my office all chose a focus, we had none. We could have spent a few minutes at the beginning defining where we would like to focus (even an arbitrary focus) and then created a strategy based on that. Hints of a focus surfaced when they interviewed me about the product, but none was chosen as the key thread.


What next?

Having had two vastly different workshops on the same subject, I think it makes sense to arrange even more of these. The original idea came from Fiona Charles at EuroSTAR 2013, but I think we approached it quite differently. I am planning to do these extempore workshops at every conference I attend, since it makes sense to give people a chance to try their skills at this. Every time, I get a huge number of pointers, ideas and lessons learned. I would say it is the conversation that ensues, rather than actually creating a strategy, that counts.

The key lessons learned in both sessions were:
- The dynamics in the group largely determine what kind of strategy is created
- Know your stakeholders!
- Choose a focus as a skeleton and then fatten it up
- Don't over-do it! Ten minutes might be enough to start testing the most important thing.
- Have some structure, but keep ideas flowing

When you see me at a conference, come ask when I'm gonna pop out the white board. It might be the next coffee break. ;)

- Peksi

Monday, 23 January 2017

Mushroom-picking heuristic

Imagine this: You are in a forest and you are trying to find juicy mushrooms. You'd like to find some porcini, black chanterelle, maybe some yellow bearded milk-cap. Edible mushrooms nonetheless. Before you go, do you need something? Perhaps the following:

  • a calendar that shows when different mushrooms appear in your local forest (and don't pick too close to habitation, because mushrooms absorb heavy metals)
  • maybe some research on what the mushrooms you are trying to find look like (along with pictures, so you don't pick something poisonous that looks vaguely like an edible mushroom)
  • some knowledge of how to pick mushrooms (pick them up intact and in one piece, either by twisting or pulling)
  • maybe choose the weather when you go mushroom-picking (the mushrooms should be rather dry when being picked up)
  • prepare the mushrooms as soon as possible (remove sand, pine needles, moss, etc.)
  • use proper tools (a knife, a brush, a basket instead of a plastic bag)


These are things you might want to consider when going out for mushrooms. Now, these are all preparatory things, bits of knowledge you might want to have before actually going to the forest. Obviously you can go without knowing these things, but you may end up with no mushrooms or, in the worst case, poisonous bastards like amanitas or cortinars. The trip to the woods might have been productive nonetheless: you got fresh air and some exercise, and maybe you had a good chat with a fellow mushroom-picker. Not all unprepared trips are totally worthless, but you need some preparation to achieve good results.

How does this translate to testing? You prepare yourself for the testing task by doing almost the same things you do when you go mushroom-picking. Like so:

  • You check the schedule for the most convenient time to do a specific kind of testing. (In January you might want to go ice-skating instead of mushroom-picking, which might still be fun.) If you're doing usability testing, you might want to choose a time when there is something someone can actually use. When doing penetration testing, you might want to pick a time when there is something to penetrate. 'Cause if you choose the timing badly, you might not achieve the best results. Checking the schedule also gives you clues on how to time-box the testing.
  • You might want to research the subject of testing. What should you be looking for, what things do you expect to discover, what are the risks that are already known? Perhaps you want to learn more by exploring the product, by trying things, playing and clicking around, banging the keyboard with a shoe. You might have pictures of the GUI or architecture schematics, maybe a person to help you find your way around the product. As with mushroom-picking, you can stumble on interesting things, but to recognize the important, the critical, the alarming stuff, you might need to do some research.
  • You might need knowledge of how to do software testing. Clicking around without a purpose might not be good testing. A good tester has a skillset that she utilizes to perform good testing. It is also important to know the domain and how to perform testing in that particular product/domain/service.
  • Choose the weather when you go testing... Urhm... Testability and configuration: choose the best starting conditions, datasets, timeslots, loads, etc. to achieve the best testing performance and to find interesting things. You might want to test when the backend is performing poorly or the network connections are bad. Maybe choose a dataset that is production-like, or maybe it should be a fuzz test generating weird data for the APIs.
  • Prepare your mushrooms. Deliverables! You should take notes on your testing performance. This helps you steer your testing, generate ideas that couldn't yet be executed, and jot down risks and bugs you find. You can then explain to other people what you did, why you chose to do specific tests and checks, and what you found. If needed, prepare a useful report on the testing you did.
  • Use proper tools. Testing is about using tools, obviously. The most important tool is your brain! Use it. Also use tools to help do things that are difficult or time-consuming and to keep your concentration while testing. Use scripts when they are useful, record your screen, have quick note-taking equipment. Use other people! Two brains are sometimes better than one... A person can be a tool!


(This is not a comprehensive list in any case. You might want to take other context variables into account to achieve the best result when you actually go testing.)

I'm now in the woods with my wellies on, my trusty 'shroom-basket, a J. Marttiini mushroom knife, and a backpack filled with sandwiches, a “Book of Mushroom and Black Magick”, coffee and some chocolate. Maybe a map, a compass, some survival gear in case I get lost in the woods. I want to be prepared and tooled up. How the hell do I find those mushrooms?

This is where the Lévy flight heuristic kicks in (a heuristic mentioned by James Bach at CAST 2014). It is an algorithm by which animals (and humans) go foraging. "When defined as a walk in a space of dimension greater than one, the steps made are in isotropic random directions," says Wikipedia. Essentially, you do something in one spot until you move to another area to do something there. So, I stand on the road and head into the woods. I look for sweet spots, like decaying fallen trees or dry mossy areas under pines or firs. If I have done my preparation correctly, I know what these areas look like and where I might find them. So I start roaming the woods and stumble on an area that has something interesting. A fallen tree! Yay! Maybe I'll find some black chanterelle there. So I look closely at the area and spend time there, perhaps picking some mushrooms or just scouting for clues about where I might find some. After a while I head to a new location, keeping in mind where I am in the forest.
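As a toy illustration (my own sketch, not something from the talk), a Lévy flight is easy to simulate: pick an isotropic random direction each step, and draw the step length from a heavy-tailed distribution such as a Pareto distribution, so you mostly make short hops with an occasional long jump:

```python
import math
import random

def levy_flight(steps=100, alpha=1.5, seed=42):
    """Simulate a simple 2D Levy flight: isotropic random directions,
    heavy-tailed (Pareto-distributed) step lengths."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        angle = rng.uniform(0, 2 * math.pi)  # isotropic random direction
        length = rng.paretovariate(alpha)    # mostly short hops, rare long jumps
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        path.append((x, y))
    return path

path = levy_flight()
print(len(path))  # 101 points: the start plus 100 steps
```

The short hops are the "focus" phase (searching one spot thoroughly) and the rare long jumps are the "de-focus" phase (moving to a fresh area).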

So I move around the wooded area in a pattern that tries to achieve the best coverage of the important areas. The pattern might seem random, but I have a mission that I am trying to fulfill with my choices of direction. When I am halfway through my walk, I might want to head towards the road so I don't end up too far away when the sun goes down. I spend time in areas that are either rich in mushrooms or interesting in themselves. Maybe I learn something while scouting for porcini, something about birds or the types of moss I tread on. Maybe I note down areas that have some other mushroom that I am not intending to pick (you don't want to mix mushrooms in the same basket - I don't know why...). I take mental notes, maybe write stuff down, mark places on my map, all the while trying to achieve the goal I set for myself before I went foraging. On my way back I stumble on an abandoned building. Cool! I might take a look inside, and maybe I find something interesting. Maybe an old newspaper or a book? I might spend time there even though I set out to pick mushrooms. This is an interesting new place and I want to know more about it. So I deviate from my initial mission and investigate the building. It's apparently someone's old home, and its walls have sunk into the ground. Maybe the architecture of the pre-WW2 era interests me, maybe the newspaper has some information from ye olde times. Maybe the book has a letter tucked between the pages. I allow myself to deviate because this might be more important than mushroom-picking.

Back to the testing world. The woods turn into APIs, GUIs and code. The moss becomes the data on which I tread. The mushrooms... They're not bugs, if that's what you thought. Mushrooms are information, relevant information. It can be a behavior that annoys the user, an error message in the wrong place, a risk that needs to be communicated to the stakeholders. There might be bugs, but there is so much more. When you go foraging in the software, you can apply the Lévy flight heuristic either by accident or purposefully. It is called focusing and de-focusing. You focus on an area to find interesting things for some time. Then you move to another area. If the area you first stumble upon is hugely rich in things waiting to be discovered, you might spend most of your time there. Or you might just stop there briefly and look for more important things to discover.

You start with a mission and you head out to accomplish that mission. You take notes and notice things. You investigate things that look and feel important. You forage information on the product under test.

Here's a scenario that might give clues to choosing a mission for your foraging. On Monday you wonder if you should go picking mushrooms. It's a fine day, but instead of going head first into woods you barely know, you go investigate. You take your dog with you and go scouting the forest. You find a patch of blueberries and eat a few. Oh, they're so nice! The dog, Rover, eats some too. On Tuesday it rains. Bummer! So you decide to go to a shop and buy some equipment for the trip as soon as the rain stops. You decide to get a basket that has compartments for different mushrooms. You didn't even think of it before talking to the shopkeeper, who is apparently an expert on picking mushrooms. Great find! On Wednesday you are called to the office for an important meeting. A nice, sunny day wasted in a meeting. No time for foraging today. On Thursday you get your gear and your dog and head out to the woods. You check the sweet spots you discovered on Monday and pick delicious mushrooms and even some blueberries. On Friday you make a delicious mushroom stew with some potatoes and carrots. Then, to top it off, you make a blueberry pie. You checked the recipes online but used your own twist to make them your signature dishes. A perfect way to start a weekend.

To sum it up, the Mushroom-picking heuristic is a two-fold heuristic:
- First, it is a preparation heuristic. It helps you create a mental model that allows you to plan for the upcoming testing session, testing phase, etc. One might prepare differently for an acceptance testing session than for a security testing phase. Nonetheless, testing needs preparation, and a good way to tackle it is the Mushroom-picking heuristic. You should adapt the preparation heuristics to match your own context. Think of tools, background knowledge (oracles), time-frames and constraints, people attending the sessions, bug reporting procedures, facilities, etc.
- Second, it is a test management heuristic - a steering heuristic. It helps you move from one focus area to another. Focusing and de-focusing is one aspect. With note-taking you can keep track of the areas you have covered and remind yourself if there's a need to return to an area. Keep in mind the stopping heuristics so you won't get stuck too long in one area.

The Mushroom-picking heuristic is not comprehensive, nor should it be. It is a model that might work for you, or you may find it useless. Perhaps you might try it and give me feedback on how it worked and how I should improve it. It is a work in progress.

Have a tasty spring!

- Pastori

Tuesday, 14 July 2015

"Thinking like a tester" workshop

As I have mentioned before, we run a round of workshops under the concept of Testing Tuesday. I have already covered two of the latest workshops as blog posts. This one is an attempt to cover the first of the seven, a workshop called "Thinking like a tester".


Why did we talk about thinking?

I like to think that there is no testing without thinking. If thinking isn't involved, it is not testing. A machine doesn't think; it just checks. A human challenges, observes, infers, models, etc. all the time while looking at a test object. A human tester thinks. That is why we need to practice different ways of thinking. It's like exercising a muscle, but the muscle is our brain.

There are a number of different ways to think, many of them overlapping, and we should try to perfect those skills. It is valuable to recognize different patterns of thinking to be better able to solve problems. Testing is essentially problem solving: we notice things and then try to figure out whether an observation might be an issue. By having multiple tools in our toolbox we don't have to rely on just one way of thinking ("When you have a hammer, all you see is nails."), and that makes solving the problem much easier.

My goal with this workshop was to introduce a few different mechanics of thinking to the audience and have them use that mechanic to perform an exercise.


What did we cover?

We were able to cover three exercises. The first two exercises were borrowed from improvisational theater. The first one was "reinvent the wheel".

I split people into groups and gave them the assignment to create a method of transportation. The only thing given to them was the fact that the context in which they were supposed to solve the problem didn't yet have the wheel. They weren't supposed to create the wheel but an alternative, equally effective method of transportation. The task was a bit difficult, since I told them to start every sentence in their brainstorming with "Yes, but". This was an effort to make people challenge what had already been agreed.

People started the exercise, but it seemed they didn't fully see the point of it. My intent was to make people challenge the assumptions and the ideas already on the table, making it an exercise in critical thinking. I might have missed the mark slightly, but people did have fun. I noticed that people required a leader in their group and some catalyst to provoke the challenging. I visited the groups and challenged their ideas by asking a question and then replying to their answer with a "yes, but" phrase. That stirred the pot slightly. I believe it became an exercise in team dynamics more than in thinking patterns.

After 10 minutes we debriefed the ideas they came up with and moved on to the next task, "Reinvent storing".

Once again people worked in groups. The task was to invent a method of storing things without using shelves or stacking things on top of each other. They again had two facts about the context: there was no concept of shelves or stacking, and they had to begin every idea by saying "Yes, and". This was supposed to be an exercise in creative thinking: finding new ideas based on old ones, accepting what is already decided and building on it.

This task flowed better than the first one, and it spurred some crazy contraptions for storing items, from pulley-operated platforms to portable black holes. There was certainly creativity in the air! Once again I felt that the exercise fell a bit short, and people did have questions about how it related to testing.

The third exercise was a bit shorter because it took so long to debrief the first two tasks. It was an exercise in lateral thinking. I explained lateral thinking on a broad level, then gave them a problem to solve using it.

The story was something like this: There is a merchant who owes money to an evil man who is in love with the merchant's daughter. The merchant can't pay the debt. The evil man proposes a wager. He puts a white stone and a black stone into a pouch. If he pulls out the white stone, the debt is forgotten. If he pulls out the black stone, the debt is considered paid in full AND the daughter is forced to marry the evil man. There seems to be a 50/50 chance. However, the evil man secretly swaps the white stone for another black one.

I asked the groups how they would solve the problem so that none of the participants loses face (i.e. is revealed as a liar, is forced to marry, gets killed, etc.). The task was once again pretty difficult, since the premise was so vague. I had to answer a lot of clarifying questions about the problem before the groups could actually start working on their solutions. They managed to think outside the box on many occasions. The ideas were quite feasible, I think.


What was the most valuable thing to me?

Having done three exercises on thinking, I realized that we had just scratched the surface. I thought I would have time for a "thinking fast/slow" exercise, but everything went by so fast. The essential thing might have been just having fun with my coworkers, making them do something out of the ordinary, and promoting testing as a thinking activity as opposed to a technical task of creating test cases to be run on some virtual server.

The tasks were obviously quite difficult, but they laid good groundwork for the next workshops. The people were the essence, not me blabbering at the front (although I like that too). The more workshops I held, the more attendee-driven they became. I facilitated; they provided the material.


What would I do next?

Since there will be another "tour" of Testing Tuesday, I will refine this workshop. I will explain the tasks in more detail and allow more time for people to explain what the connections to testing could be. Instead of making it a lecture, I'll give the mic to the attendees on why it is important to think. Maybe I'll add some other thinking exercises, like "Think like a freak" and "thinking fast/slow".

I am thinking of doing a blog post on lateral thinking, since I find it a really important skill. I believe a few of my community colleagues have already done that, so I might have to take a different approach. We'll see.

Anyhow, this was the workshop on Testing Thinking. If something wasn't clear or you have ideas how to make the workshop better, drop me a comment.

- Peksi

Tuesday, 7 July 2015

Testing technique workshop

The last part of Testing Tuesday's "Test Pistols Tour" was a workshop about testing techniques. The original plan was to have a list of techniques and then exercises to learn them. Scheduling forced us to change our approach, because we had no time to create environments for the exercises.

So, I turned to the community.




When I dragged my butt to the conference room I was expecting just a handful of people, 3 or 4, but eventually we had 7. I think there was a bit of tour fatigue in the air, since this was the seventh workshop. I had seven brave soldiers in the meeting room.

“I have changed the rules!” I said. “I ain’t gonna tell you about testing techniques. You’re gonna tell me about testing techniques.”

Now the plan was the following: pair people up, make them test something and describe their testing, then discuss what kind of problem they were trying to solve with their chosen approach. Sounds simple enough. I was a bit uncertain whether people could describe their testing at a level from which I could derive a technique. The challenge was thrown.

I told them to open Word. The assignment was to test the "Find and replace" functionality and describe to your pair what you did and why. I asked the teams some questions during the 10 minutes of testing and made them focus on actually telling why they chose to do something. After the ten minutes, we started talking about how the testing was done. These are the key points we came up with.

Hot-key testing

The first team described what they did by explaining how they searched for the functionality. They were trying to find different ways to access it. They found out that the hot keys vary between operating systems. What's more, the hot keys are customizable, enabling different combinations. "Ctrl+F" was the easiest way to find the function, because it happens to be the same in many other programs too (a comparable product and familiar to the user). On a Mac there was no "Ctrl+F", so the hot key was a bit difficult to find.

Based on their approach to using the hot keys, we gathered that the technique can be used on many Windows-based programs (and why not Mac-based too, but I don't have the experience with Mac hot keys quite yet). The commonly known hot keys like "Ctrl+C / V / X / Z" etc. are quite easy to test. The tests are quick, cheap and very generic, making the technique quite useful.

Premise variance testing

When the group was trying to find different ways of accessing the functionality (hot keys, context menus, sidebars, ribbons, etc.), I asked whether the behavior of the functionality changes when you access it from a different origin point. If you change the premise, can the functionality change?

We started to think about whether we could apply this to various other solutions and products, and we came up with "premise variance testing". When one changes the premise conditions of a function, the behavior might change. This technique can also be derived into "step variance testing", where you mutate one or many elements within a process.


Help testing

When one team was trying to figure out how the function worked, they pulled up the manual. The help can seem quite simple to an experienced user, but it acts as an oracle on many occasions. During help testing one can test the help itself against the product and test the product against the help. In either case, one acts as the test object and the other as the oracle.

This technique can be extended to all kinds of oracle material testing. We can test against oracles used by various stakeholders, e.g. requirements or design documentation. We test the product, ask "is this ok?", and then try to answer by referring to the oracle. We might have an oracle (e.g. a human oracle telling us how it should work) and then test the other oracle based on the new knowledge (the human oracle disputes the written document). "Help testing" might become "oracle testing", but that name doesn't give me good vibes. ;) Help could actually be any material that helps us do testing.


Data roundtrip testing

A team was testing replacing a word with gibberish and then replacing that back to the original value ("Pekka" -> "ASDFGH" -> "Pekka"), and they wanted to know whether the same number of entries was changed both times. So basically the idea was to revert the original data without actually reverting the state. Mathematically, I think this is called an "inverse function": first we apply the normal function, then the inverse function. A roundtrip means you return to where you started.



We had a discussion about whether "roundtrip testing" is actually a generic thing that can be applied to state as well. It is possible to revert the system to a previous state without any information whatsoever about the state that was visited. This might be a problem in itself, but we chose to narrow our technique to data alone.
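The data roundtrip idea can be written down as a tiny property check. This is just my sketch of the idea (the team tested Word's Find and Replace by hand, not code), and `replace_all` here is a hypothetical helper standing in for the function under test:

```python
def replace_all(text, old, new):
    """Hypothetical helper standing in for the function under test:
    return (new_text, replacement_count)."""
    return text.replace(old, new), text.count(old)

def check_roundtrip(text, original, gibberish):
    """Apply the function, then its inverse, and verify that the
    replacement counts match and the original data is restored."""
    forward, n_forward = replace_all(text, original, gibberish)
    back, n_back = replace_all(forward, gibberish, original)
    assert n_forward == n_back, "replacement counts differ"
    assert back == text, "roundtrip did not restore the original data"
    return n_forward

count = check_roundtrip("Pekka wrote this. Pekka likes mushrooms.", "Pekka", "ASDFGH")
print(count)  # "Pekka" occurs twice, so 2 replacements each way
```

The same check fails usefully when the replacement is not invertible, e.g. when the gibberish value already occurs in the text.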

Minimum data

We also found some testing ideas while describing techniques, and I think this one was worth mentioning. A team wanted to test with as little data as possible. That is one variant of premise variance testing, where we focus solely on varying the data instead of the states. This can mean testing forms with their default values, testing without any inputs (NULL, n/a, whitespace, etc.), removing metadata, and so on, and it can find bugs in the exception handling logic.
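A minimum-data pass is easy to table-drive. Again a sketch of mine, with a hypothetical `find_count` standing in for the real search functionality:

```python
def find_count(document, needle):
    """Hypothetical stand-in for the search function under test."""
    if not needle:
        return 0  # empty search term: nothing to find
    return document.count(needle)

# Minimum-data inputs that often expose exception-handling bugs:
# empty string, lone whitespace, control characters
minimum_inputs = ["", " ", "\t\n", "a"]
for needle in minimum_inputs:
    # the check here is simply that no minimal input raises an exception
    print(repr(needle), "->", find_count("a b c", needle))
```

The interesting part is rarely the counts themselves but whether any of the degenerate inputs crashes the function or returns something nonsensical.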


Conclusion


All in all, the testing techniques we found have already been described in other sources, but these made sense to us and felt important. The terms are more tangible than "product tours" or some of the techniques found in books. We defined the terms and learned how to describe them in a language that suits our context.

I know that at least premise variance testing stuck; I have used it a few times now to describe what I do. It makes sense to repeat this exercise again at a different depth, then uncover new, undescribed techniques and make them part of our toolbox. After a handful of these sessions, we might have enough skill to describe our testing to any stakeholder in a language we share and understand.

Sadly that was the last of the Testing Tuesday workshops on this tour. There will be another tour in Helsinki, and I shall write up as much as possible from those sessions.

- Peksi

Tuesday, 23 June 2015

What happens when a wannabe rockstar does a lightning talk?

"Are you f***ing ready to rock?!"
The crowd is wild. The stage is lit. The announcer gets on the stage.

"Ladies and gentlemen! Are you ready for the coolest rockstar in the world?" The crowd roars its approval.

"Are you f***ing ready to rock?" The crowd goes wild...

Ok. That's not how it happened for me. That is what I wanted to happen, when I did my lightning talk at the Nordic Testing Days 2015. Actually, it went something like this:

Helena Jeret-Mäe gets on the stage. "Next we have Pekka Marjamäki on the topic of Testing Tuesday," she says. Then I get up from the floor and walk to the front of the room full of people. I have my hat full of badges, my Superman t-shirt, suspenders hanging down, all cool and ready to rock the place.

"OK, people!" I start my performance. I divide the room into two groups. "The group on this side shouts 'TES' and the group on this side shouts 'TING'. Ready? Tes-ting! Tes-ting!"

The crowd starts shouting. They shout "testing". "Louder!" I shout. The room roars for a minute or so! Then, at its peak, I silence the crowd and start my talk on Testing Tuesday.

Testing Tuesday doesn't sound that weird. It is a concept my colleague Petri Sirkkala from Solita and I came up with. It spawned from a need to teach testing at my company. I will explain how it goes in detail (accompanied by a video, perhaps) in a later post. At the Nordic Testing Days 2015 I briefly introduced the concept. It runs for 7 weeks, every Tuesday, each week having a workshop of its own, while we help our colleagues with their testing problems. The most recent blog post is a write-up of the 6th Testing Tuesday workshop. Apart from actually helping people test, strange things happen during Testing Tuesdays, e.g. us two testing dudes walking around the office shouting "Testing Tuesday" and playing Sex Pistols from an old cassette player, or posting testing problems on a white board in the hallway.

The main goal of Testing Tuesday is to promote testing and to sow seeds of interest into people who aren't yet that much into testing. The second goal would be to help people in their testing related challenges. The third one is to have fun.

I think my objective in the lightning talk was to convey the energy and enthusiasm we pour into Testing Tuesday: the attitude to fight against poor practices, the bravery to stand up and challenge, the eagerness to improve. Honestly, I can't remember what I actually said during the talk, I had so much fun. I do believe that people got the key points of my talk.

Be open about your passion towards testing.
Share knowledge and help others.
Be brave and have fun.

"We want more! We want more!" The crowd shouted after my talk... At least I wanted it to. Alas, it didn't...

- Peksi