Friday, 12 July 2013

Conference at a Glance, part II – My glance on the Tuesday AM tutorials

This is the second part of my series of posts about the EuroSTAR 2013 conference. I apply the same evaluation method as previously, so read “My glance on the Monday tutorials” before this post, if you haven’t done that already.

Ian Rowland’s “Thinking Outside The Locks”

I had not heard of Ian Rowland before, but I must say I’m intrigued. A magician? The biography on the EuroSTAR page makes me want to know more about this fellow. In fact, I googled his name and ended up on his website. I’m really looking forward to seeing Ian do his stuff. I would guess that humor is involved, in addition to a mind-blowing approach to thinking outside the box (or locks, as he says).

To be honest, I think critical thinking, unconventional approaches, and non-rational thinking are the tools of my trade. I would be able to use them daily, and they would make me a better tester in both the short and the long term.

I would recommend this to all my colleagues. In fact, I’ve been thinking about a 30-minute workshop on out-of-the-box thinking and using unconventional methods to solve a problem. If I gather enough ideas, I might just hold it, and I hope Ian can help me generate some.

To be honest, I can’t think of anything to disagree with in the summary of the tutorial, so I might not be able to challenge him. I’m curious to see how the methods he uses actually translate to software testing. I might steal some of the details and tricks he uses for my own work, like I mentioned before.

The tutorial seems very interesting as a lightweight opener for the conference (even better after a full day of Monday tutorials), but is it the best option out of a cast of many great speakers? I certainly wouldn’t be the one to give a half-day tutorial on the subject, considering I don’t have that much experience in coaching out-of-the-box thinking.

After the smoke clears and the magician bows, I would like to see/hear/learn about applying the skills and theories Ian shows us. Theory is all well and good, but I would like to see results in the testing craft to be happy with the tutorial.

On my Birdy scale, Ian Rowland’s “Thinking Outside The Locks” would scale as follows:

  • Person-to-person: **
  • Short time value: ***
  • Long time value: **
  • Steal-ability: *
  • Challenge-ability: 
  • Total: 8/15 stars

Prof Harry Collins and James Bach’s “Using Sociology To Examine Testing Expertise”

I think I said enough about James Bach in my previous blog post, so I will concentrate on Prof Collins. I find his résumé quite impressive. He has (co-)written books that we testers should be reading (I haven’t yet), including “Tacit and Explicit Knowledge” and “Rethinking Expertise”. I am eager to hear what he and James Bach have come up with. The duo of the unschooled (but not untaught) and a university professor could spell doom for us mere mortals.

I am really eager to listen to stuff about meta-knowledge (or knowledge about knowledge). I do not, however, see a short-term benefit from it; it will eventually develop my sense of self-analysis. I’m very interested in any studies about testing and testing methodologies, and this tutorial taps into that – using tacit knowledge in addition to explicit expertise.

I don’t know straightaway how I could harness the tutorial for the benefit of my fellow testers. If the tutorial addresses social tacit knowledge, I might be able to help the company benefit from acknowledging that knowledge.

I am keen to challenge the claim that there are skills that no single person possesses but a group of people does. Let’s say I invite a group of people to my house because I want to learn Chinese. None of them knows Chinese, but as a group we might possess the skills to communicate in Chinese? Am I on the right path here? Or are we talking about a more-than-the-sum-of-its-parts mentality, where we would all know just a little Chinese, or a language close to Chinese?

I would refrain from teaching this myself. Maybe I could mention it and guide people to seek out material appropriate to this experiment. I have no previous knowledge of this kind of study, as I lack the university background.

Just like with Ian’s tutorial, I would like to see/hear something that I could apply to my own work. What do I do with the information about what skill sets the teams have?

On my Birdy scale, Harry Collins and James Bach’s “Using Sociology To Examine Testing Expertise” would scale as follows:

  • Person-to-person: ***
  • Short time value: 
  • Long time value: **
  • Steal-ability: 
  • Challenge-ability: **
  • Total: 7/15 stars

Peter Zimmerer’s “Questioning Testability”

I’m beginning to wonder what people think of me as a community member when I don’t know most of the people giving presentations – and, more importantly, the tutorial speakers. This pre-analysis of the tutorials also helps me familiarize myself with the people, so I can recognize them at the event location in Gothenburg. I believe I have a lot to talk about with Zimmerer on all kinds of things, but I believe we can make a conversation out of his topic as well.

Testability is a freaky subject for me. I might be living in a bubble where we almost automatically plan our products with testability in mind. We aim to make testing as fast and efficient as possible, so this tutorial might not give me too much in the short term. I do believe that testability is one of the key enablers of efficient testing, so I more than recommend this tutorial to everyone!

I do believe that I could benefit from revolutionary points of view, which I hope Peter will provide. At some point, when testability becomes more of a worry for me, I might need these skills. The ability to promote testability could also be important for me at this company. At some point the leading testability evangelists might leave, so we need as much tacit knowledge of testability as possible.

I’m expecting a lot of practical examples that I can share myself (possibly after altering them to my own flavor). In that sense, the steal-ability of this tutorial is quite high. I would focus mostly on practical applications of testability, because testability as theory is quite trivial. People seem to have trouble understanding how to make things happen in practice.

When it comes to questioning, the words “step-by-step” raised a red flag. Is this method an omniscient, all-encompassing process? I hope this tutorial doesn’t turn into “do this and everything will be fine” but rather “apply these skills where it’s reasonable”. If I join the tutorial, I will definitely challenge this.

On my Birdy scale, Peter Zimmerer’s “Questioning Testability” would scale as follows:

  • Person-to-person: *
  • Short time value: *
  • Long time value: **
  • Steal-ability: **
  • Challenge-ability: ** 
  • Total: 8/15 stars


Anne-Marie Charrett’s “Coaching Software Testers”

James has mentioned Anne-Marie a couple of times in our conversations and praised her coaching skills. Then again, I must admit that I have not made myself too familiar with her work. I’m looking forward to seeing her and possibly having a chat at some point. Hopefully she’ll be able to donate some of her time to me.

Actually, the coaching method described here is something I have already practiced a few times. First James Bach coached me using Socratic questioning, and then I used it to coach Erik Brickarp, Jari Laakso, and Aleksis Tulonen, among others. The amount of learning on BOTH sides was phenomenal. I would love to gain more skills in this area to be able to continue my journey as a coach. This is something both my colleagues and my fellow crafts(-wo-)men will benefit from. I have some skills to begin with, so I will employ them in the future to the benefit of all, including me.

Having said that, I will try to steal as much as possible from this session and mold it into my own. I recommend this session – yes, without having attended it yet, but having faith in it like in no other! Coaching skills are paramount in a tester’s skill set if they ever want to become true professionals.

I find it hard to challenge this for two reasons: I consider myself a member of the coaching congregation, and I find it hard to challenge something I have unquestionable faith in. I am willing to try challenging it for the sake of argument, but facing Socratic questioning while arguing for argument’s sake might be my downfall.

On my Birdy scale, Anne-Marie Charrett’s “Coaching Software Testers” would scale as follows:

  • Person-to-person: *
  • Short time value: **
  • Long time value: ***
  • Steal-ability: ***
  • Challenge-ability:  *
  • Total: 10/15 stars

James Christie’s “Questioning Auditors Questioning Testing”

James Christie (how many people called James are presenting at this conference?) is one of those who fall into the same category as Anne-Marie – I would love to talk to them based on what I have heard from my fellow community members – but I have never delved into James’ work in depth. Maybe I can sneak into his lunch table and steal a minute to talk about testing. ;)

In the past, I worked at a company where an audit was held. I was part of a group that coached the people being audited to answer the questions “correctly” to appease the auditor. That was the wrong approach to an audit – we did not always act according to the documented process but according to our best knowledge of the situation, yet the audit was about the documentation. The auditors were held in such a high position of authority that they were never challenged – I was not even allowed to talk to them. ;)

I don’t see a short-term benefit in this tutorial, however. I’m not currently in a position to be part of audits at F-Secure. We do have security audits and the like, but I have yet to be invited to one. I might benefit from James’ tutorial if I focus on the questioning instead of the auditing. If the scope weren’t so narrow as to concern only audits, I would find it more beneficial to my current work.

If I could combine questioning with other areas, like specific levels of testing (unit, module, etc.), I might be able to teach or coach other testers and programmers to question their work more effectively. So the long-term benefits could outweigh the short-term ones. What’s more, I don’t know where life will take me, so having some skills in challenging auditors might come in handy in the future.

My knowledge of auditing as such is so limited that I find it difficult to disagree with questioning. Usually the person being questioned benefits from the questioning too; I have been in situations where I learned more by being challenged than by acquiring book knowledge on the subject. Like I said earlier, I would like to see tracks on more general questioning, arguing, and challenging. This tutorial might answer some questions I have, but I’m not sure at this point.

On my Birdy scale, James Christie’s “Questioning Auditors Questioning Testing” would scale as follows:

  • Person-to-person: **
  • Short time value: 
  • Long time value: **
  • Steal-ability: *
  • Challenge-ability:  *
  • Total: 6/15 stars


Pradeep Soundararajan & Dhanasekar Subramaniam’s “Context Driven Mind Mapping”

I know Pradeep from tweeting with him and reading his blog. I have also followed the progress of Moolya for some time, and I’m really impressed by their success. I’m also looking forward to meeting Pradeep and Dhanasekar in Gothenburg to talk about mind mapping and all things testing. I’m glad that Pradeep is hosting two sessions at the conference, so I can join at least one of them.

I’m a bit of a mind map enthusiast myself, so this tutorial is almost tailored for me. I find a lot of things here that are almost exactly from my workshop a year ago at Nordic Testing Days 2012. I do believe I have a lot to learn about both using mind maps and hosting workshops. In the short term, I would like to learn the most effortless way to utilize mind mapping. I tend to procrastinate during testing, so if a mind map can keep me focused, I will be on cloud nine. I also see mind maps as the tool of the future, for they utilize the brain instead of some arbitrary tool.

This tutorial would be worth stealing in its entirety, and then I would go on promoting the ideas and practices to my company and to my peers in the community. In the long term, mind maps could help make exploratory testing both understandable and credible to stakeholders with minimal documentation effort. I have already played around with the thought of decompiling a mind map into coverage charts with scripts, which might further automate the documentation of exploratory testing.
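That decompiling idea could look roughly like this – a minimal Python sketch of my own, assuming a FreeMind-style .mm export (nested <node TEXT="..."> XML); the sample map and function names are mine, not anything from the tutorial:

```python
import xml.etree.ElementTree as ET

# Hypothetical FreeMind-style export of a small test mind map.
SAMPLE = """
<map version="1.0.1">
  <node TEXT="Product X">
    <node TEXT="Login">
      <node TEXT="valid credentials"/>
      <node TEXT="SQL injection"/>
    </node>
    <node TEXT="Reports">
      <node TEXT="empty data set"/>
    </node>
  </node>
</map>
"""

def count_leaves(node):
    """Leaf nodes are the concrete test ideas; count them recursively."""
    children = node.findall("node")
    if not children:
        return 1
    return sum(count_leaves(c) for c in children)

def coverage_chart(mm_xml):
    """Map each top-level branch (test area) to its number of test ideas."""
    root_topic = ET.fromstring(mm_xml).find("node")
    return {area.get("TEXT"): count_leaves(area)
            for area in root_topic.findall("node")}

print(coverage_chart(SAMPLE))  # {'Login': 2, 'Reports': 1}
```

From a per-area count like that, a coverage chart for stakeholders is just a plotting step away.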

As for challenging, I know where I stumbled in my own workshop, so I might tap into those subjects. First would be the context-switching during testing between the mind map and the software under test. If the mind map requires yet another window in addition to database browsers, Unix log screens, web browsers, standalone tools, etc., the context-switching becomes a burdening factor in the long run. Second would be the “free form” of the maps, which could result in inconsistent ways of reporting. I’m curious to see how Panda and Commander can tackle these. :)

I would recommend taking the Monday tutorial by James Lyndsay and combining it with this one to make an awesome combo of exploratory testing and modeling. I haven’t yet decided which to attend, but you, dear readers, should consider this combo really hard.

On my Birdy scale, Pradeep Soundararajan & Dhanasekar Subramaniam’s “Context Driven Mind Mapping” would scale as follows:

  • Person-to-person: ***
  • Short time value: **
  • Long time value: **
  • Steal-ability: **
  • Challenge-ability:  **
  • Total: 11/15 stars

Afterword

Once again I have not yet decided which tutorial to attend. There seem to be two top dogs right now, but I cannot yet say which I’ll pick. I might even change my mind right before the session if other community members talk me into joining something other than what I would have chosen.

Anyway, I have quite a task ahead of me to plow through the conference tracks one by one. But rest assured, I will go through as much as I can.

Also, I got interviewed for the EuroSTAR community spotlight. I thank Emma Connor for the interview, and wish her and every tester out there a great summer!

- Peksi
