Tuesday, 31 August 2010

Preacher man Marjamäki

I was reading my own blog entries and started to truly think about my own writing. Those writings clearly have ideas and insights – some of the insights derived from others, but still. However, part of my text is “porridge” – it lacks a red thread. I am a novice blogger, so this is certainly not a great disadvantage, but the more testing blogs of others I read, the more I see that they write long and coherent entities, whereas mine are short and fragmentary.

So I have found an opportunity to develop myself from my own errors, which I did not even know were there at the time of writing. Someone may disagree on whether the writing style is in itself flawed, but perhaps "lacking" is the word that describes it best. What perhaps led to the creation of this post was a text about challenging oneself by James Bach. So I found a questionable approach in my posts, and it cries out for improvement.

So how do we improve a concept that is not really wrong – or at least not harmful? How to motivate a repair that in itself is not necessary?


When a tester examines an application (i.e., an exploratory tester), he finds bugs. He also finds deviations. A fault in this case would probably be writing off topic and/or misspellings within the text. Potential errors nonetheless.

Typographical errors in this case are the lowest in severity, and their repair improves (only) the cosmetics. Still, they compromise usability (readability) when the user (reader) has to constantly think about whether a typo is a typo or a deliberately wrong (or even a different) word. Repair is easy in these cases, so despite the low severity, the correction should be made once the “bug” is noticed.

Factual errors are more severe. They may mislead the reader, which is not usually the purpose of a blog. In addition, they reduce the confidence enjoyed by the writer, so texts he publishes as “true” are no longer considered true but false. Reviewing context is crucial! Factual errors are severe errors! Correction in these cases results in a new version of the text or in freezing the text. Likewise, if a bug found in an application prevents using the application to the full extent it is designed for, the application is deactivated and the necessary corrections are made so that it functions as desired (as expected).

A deviation in this case is a deviation from the actual or supposed truth. If something does not match the reader's prior expectations, he considers it a deviation. A deviation in itself is not a fault; it has just been implemented differently from expectations. If someone publishes a text that differs from the mainstream, he must be able to justify his choice to gain user satisfaction (in which case it becomes a feature), or the deviation turns out to be a fault (i.e., a defect in the text). The same applies to applications where a feature is done in violation of the requirements (prejudices, assumptions, expectations).

Derived from that: is there a previous blog post that is defective? Does my unintentionally preaching-like blogging result in faulty, incorrect or deviant writings? My bias is that they are both defective and deviant. Let’s see whether that is true...

My intention in the text "How do I interview a new tester?" was to write down ideas and to create guidelines for myself and others similarly interested who wrestle with the same thing. The text is incomplete in many parts, due to a lack of perspective. References to existing texts, and comparisons with them, would have added depth to the text. For example, the Software Testing Club's funny book, "The Ridiculously Simple Guide To Building A Test Team", would have been good reference material... I am ashamed that I probably did most of the "best proven" things mentioned in the eBook and probably asked exactly the wrong things. (As it turned out, I wrote the whole shebang again. But this refers to the original, Finnish version of the text.)

"Performance Testing Integration Project" was designed to provide a clear frame for performance testing – an area where I hadn’t gone before. The post studied the technical point of view and may even provide reference material for those interested. As a result, the post was quite a successful individual piece, in my opinion. It was, however, based on an existing performance testing model. This could have been mentioned in the text, because now I make it seem like I invented the whole testing model.

So that this blogging does not turn defensive, I want to keep it on the critical path. As I said earlier, these were the assumptions and the outcomes, and a comparison of the two. The outcomes were somewhat different from the assumptions, but in some cases (as in "Performance Testing Integration Project") the deviations were very small. Although they were not well founded, they were such that correcting them is not priority number one. Au contraire, "Testing Jin and Jang!" fails to meet expectations by a long shot. As I say in the text, "My thoughts began to roll like crazy at the time", which was true at the time, but the red thread in my head did not come through in the text as I would have liked. In retrospect, I could filter the preaching part out of the text and focus on the essential and the cold facts. Would I be able to offer my own experiences, or refer to others’ experiences on the subject? Could that material be gathered and refined into something good and sententious that incites others to think "like mad"? The text lacks touch with reality, which prevents the reader from taking it seriously. It therefore calls for a rewrite due to the defects and deviations of the post.


So, how to motivate the repair, if the repair itself is not necessary? Self-improvement! If you have the opportunity to develop yourself, it should most certainly be taken. You never know where reviewing your own shortcomings may lead.

So that this blogging does not turn into preaching on behalf of self-improvement, I would like to stress that all faults, errors and deviations should be identified and investigated. The correction of these may happen spontaneously just by studying and searching for them. This could be like the examination of a desert: examination of the sand reveals an anomaly in the smooth surface of the desert (the tip of a pyramid), and upon examination the finding (after digging) is revealed to be the mightiest tomb monument in the world, containing divine treasures and riches beyond imagination.

Friday, 27 August 2010

Do we have enough billable testing to do?

In our company there are small or smallish projects that have 2 or 3 developers and run for about 1–4 weeks. Then there are mammoth projects that occupy the whole unit (20 developers) and last from 6 to 9 months. In other words, developers rush to get things done and the product to the customer.

In the midst of all this is a lone tester. There is a myriad of things to test, but you cannot always bill the customer for those tests: the testing has not been sold to the customer! Only development and development testing have been included in the offer, and thus they are the only billable things; the real testing is done in an acceptance-test-driven style. While this kind of approach is great in a small project with few changes to the software, it accumulates lots of versioning and rework (which is redundant!) and it ties resources to a minor project for a long period of time. The official testers (read: a tester) are positioned in the larger projects that have lots of testing and things to bill.

Alas, there cometh a situation where there is nothing to be tested for a whole week (say, summer vacation weeks), yet the tester is tied to this project nevertheless. It is almost certain that, because the tester is tied up, no other testing is sold to the customer during that large project. So there is nothing to test. So the tester walks the empty corridors of the office asking the coders whether anyone has anything to test. AND EVERYONE HAS SOMETHING!!! There are things ranging from simple graphical user interface testing to complex integration interfaces and verifying production environments, etc. The developers do not have the time/skills/motivation to do those tests properly (or at all), so they are left undone. And all this time the tester is supposed to do 100% billable work! And he has to keep track of new testing tools, techniques and whatnot. In both testing AND development!

OK, this is exaggeration by generalisation, but it hides a seed of truth. So, what the hell should we do?

One answer is to create “a buffer fund” or similar, which is used to bill the otherwise unbillable work done by testers. This could be, say, 5 hours a week, which a single tester could spend before sharing his findings with the team. This could also include projects where testing has not been sold to the customer. The amount could vary between 5 and 25 percent of the weekly work time, depending on the amount of billable work there is at that moment. It could cover unsold testing in a project where testing is crucial and in reality some (or lots of) testing must be done. It could also cover various tool tutorials, webinars, blogging (this has grown more important than ever *wink*), etc. – non-testing work, but test-related nonetheless.
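As a back-of-the-envelope sketch of the idea (the function, the clamping rule and the 37.5-hour week are my own illustration, not anything from a real policy), the weekly buffer allowance could be computed like this:

```python
def buffer_hours(week_hours, billable_hours, min_share=0.05, max_share=0.25):
    """Hours a tester may book to the buffer fund in one week.

    The fewer billable hours there are, the larger the buffer share,
    clamped between 5% and 25% of the weekly work time.
    """
    # Share of the week that has no billable work on offer.
    free_share = 1 - billable_hours / week_hours
    # Clamp to the agreed 5%-25% band.
    share = min(max(free_share, min_share), max_share)
    return round(week_hours * share, 1)

# A fully booked 37.5-hour week still leaves the 5% minimum for learning:
print(buffer_hours(37.5, 37.5))   # → 1.9
# A quiet vacation week with only 20 billable hours hits the 25% cap:
print(buffer_hours(37.5, 20.0))   # → 9.4
```

The point of the clamp is that even in a fully billed week the tester keeps a small slice for tools and study, while in a dead week the unbilled time cannot balloon past a quarter of the week.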

By doing this, the quality improves both in general (people see the importance of testing and the consequences of not testing, and they might sell testing to customers with more enthusiasm) and in the testing genre (personal knowledge improves, as does understanding of the craft and its direction). This can, however, become costly to the company, so the pros and cons should be weighed with precision. After all, the alternative is that the testers twiddle their thumbs, or that the testers don’t count the unbillable hours at all (which leads to uncertainty of income and possibly uneven work hours).

Chew on that!

How do I interview a new tester?

I'm bound to get my baptism by fire in a field in which I have no previous experience. I'm going to interview two would-be testers and pick the more qualified (or suggest taking both if they are both very good; no point casting pearls before swine). He/she/both (yes, there is a male and a female applicant coming) is/are applying for an internship from a university re-education program where people are trained to be software testing experts. (Yeah, no one can be an expert without proper practical knowledge. But to become an expert straight from the school bench! Huh?) This is a great opportunity for me to try my wings as a test manager and as an interviewer. Because I'm the one and only tester in our company (or at least the only one fully involved in testing), I get to have a say in the selection.

Now I should find the right questions to dig out information about the applicants' testing skills. Because the information is scarce and hard to find (at the time of writing I searched in Finnish, and everybody seems to keep these things to themselves), I decided to write mine down for later use. After I had searched far and wide for the perfect reference material I started writing... only to later discover some really useful material. It was in English, so it supports this rewrite more than the original post.

I managed to find a funny eBook about building a testing team by the STC. The book is hilarious because it portrays the interview process as a scripted, unintelligent process where the true merits of testers are irrelevant. However, it contains a fragment of truth, which I incorporated into my interview material.

I also found a document written by some guy called Kaner *wink* and briefly had a look at his thoughts about recruiting. He had some great thoughts about forming the right kind of questions, like:
"What would you do with a product that came to you without specifications?"

Instead I ask,

"Have you ever worked on a product that came to you without specifications? Tell me about the challenges this raised and how you handled them. (And then, as a follow-up question,…) What do you think you did particularly well in that situation? (And then…) What did you learn that will help you handle this better in the future?"
The document contains PRETTY good information concerning large-scale recruiting. This, however, was about recruiting a test intern for two and a half months. So I took a quick look at the document and buried it deep in my mind, and it popped out while I was rewriting this post! (Man, was I dumb not to read it fully through! Next time, baby... Next time. We got a good intern, though.)

Off to the interview!

First of all, the applicant should tell about his/her experience in the testing scene and the IT world. This should include at least:
- project models (Scrum, waterfall, etc.) – a bonus is always to include the one your company uses most of the time
- testing levels (unit testing, integration testing, system testing, etc.)
- techniques (manual, scripted, automation, etc.)
- methods (exploratory, experience-based, bug catching, test-case-based, etc.)

If the applicant does not include all of these in the story, the interviewer should feel free to ask about them. Depending on the possible future role, one could also enquire about the following:
- test tools (what has the applicant used, how, and for what purpose)
- test planning and design experience (test cases, test plans, test level plans, automation architectures, etc.)
- experience in code writing and project management (Perl, LAMP, .NET, Java, etc.)
- test managing (reporting, managing team, customer support, etc.)

If the applicant has education, classes taken, or whatever else that may support testing, it could be enquired about if it has not come up. In addition, possible certificates (although overlooked by some ranking testers out there) in the testing genre and in IT in general can give indications of the person's capabilities if the needed amount of information has not yet presented itself. Also, if the applicant has examples to show of test cases he has designed or scripts he has written, they might prove useful.

Even though the applicant has the merits to support his selection, he must also be evaluated for how he fits into the team. If you have a group of testers in their twenties, a tester who has seen the world may change the dynamics of the team and break a successfully running engine. Whether you are looking for a team with diversity or one with similar personalities and skills, the key is to make sure the "new guy" fits in. This raises the importance of the applicant's verbal and written skills.

Whether the applicant is otherwise suitable or not, the most important thing about a would-be tester is that he/she must have passion for the craft! If you have an applicant with considerable skills and merits but no passion whatsoever, he'll only hinder the test team and make a good thing plummet to ruin.