I got permission to enter the ISTQB Advanced Level Certificate - Test Manager course and to apply for the certificate. The course is provided by a Finnish company called FC Sovelto, and the certificate replaces the former Intermediate and Practitioner level certificates by ISEB. It's been a year since my Foundation Level certificate exam, and since then I have had plenty of opportunities to assess the pros and cons of certification.
The course provides coaching for the certification exam. The exam can be taken without the course, and I'm sure plenty of people who take the exam skip the course, especially if they're paying for the whole thing themselves.
So, what can this certificate give me? What are the gains on a professional level? How does my company benefit from me getting a cert? How does my future get brighter if I get the cert? What benefits can I get from going to the prep course? There are many questions, and for an exploratory, extroverted tester like me the answers aren't always the ones I'd like to hear.
How does a company benefit from someone certifying themselves as a test manager? My first thought is "in no way", because it's only a certificate - a piece of paper with a watermark and a fancy signature. How could a company benefit from sending an employee to a course that the company might pay for (and it costs a whopping lot!) and that may prompt a salary review for the newly certified person? They're all expenses to the company! On the other hand, is it worthwhile to encourage people to pay for their own certification? That way there are no direct expenses to the company.
So how does certification show in companies that have it? In reality (or at least in my reality) the certification of an employee is a great benefit to the company as a whole. The certificate is a great achievement in itself. In a competitive situation the company has a trump card: it can say "we have a fully certified test team and test management" and thereby sell better quality (which means it must actually reach the bar it sets). It is also important to the company's image that there is someone in the ranks who knows the standards and principles of the industry (artificial though they may be). Some clients may even have certification as a requirement for closing the deal - Microsoft certificates, for example, are mandatory on some projects for some clients, so why couldn't a testing cert be, somewhere in the future?
The other angle is pure craft. How can the skills and knowledge of an individual have an effect at the corporate level? When a company acquires more knowledge and skill, it increases its "skill capital" or "skill pool". Know-how not present in the company must be obtained from other sources such as contractors - or else the job the skill was needed for gets carried through with existing know-how, which might result in low quality or undesirable results. Certificate training supports the acquisition of those skills, and the certificate is a document proving that the requirements for them are met. (Some may argue that a certificate is nothing but proof of the ability to learn a litany of test vocabulary. They can have their opinion if they can make a good argument for it.) But does the course provide skill if it only aims at passing the exam? How can you be sure the course provides the skill and not just the litany? These kinds of things are worth taking into consideration when deciding whether to attend the course or not.
The third point of view is both corporate and personal. Like every testing event, the course is a great opportunity to make contacts. Word about your company gets spread around, and the person attending the course gets to meet other test-spirited people from other companies and domains. This can lead to cooperation and contracting that can prove valuable for both companies. But most of all, the testers can exchange thoughts and views about testing and test-related stuff.
In conclusion, on the corporate side there are three things in certification (and the course) that can benefit a company: the image, the craft and the connections.
As with all good things, there are bad things in certification too. The image of the company may begin to transform into that of a rigid, standard-obeying corporation on some professional scenes, which may hinder the acquisition of those kinds of clients. Moreover, the cert may lead to a rigid process model that trims out all exploratory spirit and drives people into an inflexible frame where there is no room for innovation or personal thought. These may not be accurate in any way, but they are assumptions about what may happen if a landslide is triggered. Testing processes can be agile and light (I'm not going to explain what "agile" or "light" means) even when they are based on standardized processes.
On a personal level the pros and cons of certification are harder to assess. A con might be box-thinking and veering towards a specific mindset. Certification might cause the tester to take pre-chewed (standardized) procedures and techniques as his own and forget all others. On the other hand, not being certified might mean you have to reinvent the wheel. My opinion is that the ISTQB certification should be thought of as "learning" the certification rather than "owning" it. Can one keep one's thoughts fresh while obeying the standards? Can one stay free of the limitations and restrictions brought by the certificate (are there any?) while still knowing the testing terms and standards at certification level? Is the "mainstream" a bad thing? Is the "counter-stream" a good thing?
So what benefits does certification give you, and what disadvantages may it bring? The forthright benefit on the personal level is the increase in personal skill - provided you take part in the course and do not already have the skills. If you know "everything" before the course and the exam, the increase in skill is not a relevant basis for acquiring the cert. Disadvantages include the expenses the course and the cert may cause, especially if one does not pass the exam on the first try.
How does certification benefit one's career? How does it strengthen or weaken one's position in the company? Certification brings a certain stability to one's position: it indicates that a person knows certain things and has a document to prove it. If it is the industry-standard certification (from one point of view), its value is much increased. I won't describe any alternative certifications in the testing industry because I know so little about them, but from what I've heard there are some that are not ISTQB-related. Nevertheless, certification strengthens one's position in the company. It weakens one's position only in cases where the certified person does not meet the standards of the certification in everyday work - which means the bar is set higher than for an uncertified person. Certification also benefits the career in the long term as a personal marketing tool. It may be the ticket to places where an uncertified tester cannot go (although who would want to go there? *wink*). In a job interview, certification is a sure way to prove you have the necessary skills and knowledge. I won't go into detail about the pros and cons of certification in job interviews, but let's just say it's a double-edged razor: the certified may get labelled as box-thinking testers, while the uncertified are thought to be fresher thinkers. And just as on the corporate side, contacts are the bread and butter of testing events. This benefits both the current situation and the future, in the form of scouting new job opportunities and so on.
So, how do I get the most out of the certificate course and the certificate itself (given that I pass the exam)? I strive to create as many connections as possible at the event and to bring up the name of my company at every possible turn. I try to challenge the other testers and to get as much testing info as possible. I also try to challenge my own thought patterns against the ISTQB model, and the other way round, so that I don't lose the freshness and awareness gained from my work in exploratory testing. The more I question the things I hear, the better I learn new things. I get new points of view and can also learn new things to support my knowledge of testing in general. Maybe I'll find some of these ISTQB things are keepers, take them with me, and use them where they fit best.
A blog from a Finnish dude who has a grand passion for testing and everything test-related in the universe. The Finnish version of the blog is at http://mitenmatestaan.blogspot.com
Thursday, 7 October 2010
Tuesday, 31 August 2010
Preacher man Marjamäki
I was reading my own blog entries and started to truly think about my own writing. The writings clearly have ideas and insights - some of the insights derived from others, but still. However, part of my text is "porridge" - it lacks a red thread, a unifying theme. I am a novice blogger, so that is certainly no great disadvantage here, but as I read more and more testing blogs by others, I see they write long and coherent entities, whereas mine are short and fragmentary.
So I have found an opportunity to develop myself through my own errors, which I did not even know were there at the time of writing. Someone may disagree on whether the writing style is in itself flawed, but perhaps "lacking" is the word that describes it best. What perhaps led to this piece was a text by James Bach about challenging oneself. So I found a questionable approach in my posts, and it cries out for improvement.
So how do we improve something that is not really wrong - or at least not harmful? How do I motivate myself to repair something that does not in itself need repairing?
-
When a tester examines an application (an exploratory tester, say), he finds bugs. He also finds deviations. In a blog, a fault would probably be writing off topic and/or misspellings within the text. Potential errors nonetheless.
Typographical errors are the lowest in severity, and fixing them improves (only) the cosmetics. Still, they compromise usability (readability) when the user (reader) has to constantly wonder whether a typo is a typo or deliberate (or even a different word). The repair is easy in these cases, so despite the low severity, the correction should be made once the "bug" is noticed.
Factual errors are more severe. They may mislead the reader, which is not usually the purpose of a blog. In addition, they reduce the trust the writer enjoys, so texts he publishes as "true" are no longer considered true but false. Reviewing context is crucial! Factual errors are severe errors! Correction in these cases results in a new version of the text, or in freezing the text. Likewise, if a bug is found in an application and it prevents the application from being used to the full extent it was designed for, the feature is deactivated and the necessary corrections are made so that it functions as desired (as expected).
A deviation in this case is a deviation from the actual or supposed truth. If something does not match the reader's prior expectations, he considers it a deviation. A deviation in itself is not a fault; it has just been implemented differently from expectations. If someone publishes a text that differs from the mainstream, he must be able to justify his choice to gain user satisfaction (in which case it becomes a feature), or else the deviation turns out to be a fault (i.e., a defect in the text). The same applies to applications where a feature is done in violation of the requirements (prejudices, assumptions, expectations).
Derived from that: are my previous blog posts defective? Does my unintentionally preachy blogging result in faulty, incorrect or deviant writings? My bias is that they are both defective and deviant. Let's see whether that is true...
My intention in the text "How do I interview a new tester?" was to write down ideas and to create guidelines for myself and for others with a similar interest who wrestle with the same thing. The text is incomplete in many parts due to a lack of perspective. References to existing texts, and comparisons to them, would have added depth. For example, the Software Testing Club's funny eBook, "The Ridiculously Simple Guide To Building A Test Team", would have been good reference material... Shamefully, I probably committed most of the "best proven" mistakes mentioned in the eBook and probably asked exactly the wrong things. (As it turned out, I wrote the whole shebang again. But this refers to the original, Finnish version of the text.)
"Performance Testing Integration Project" was designed to provide a clear frame for performance testing - an area I hadn't gone into before. The post studied the technical point of view and may even serve as reference material for those interested. As a result, the post was quite a successful piece, in my opinion. It was, however, based on an existing performance testing model. That could have been mentioned in the text, because now I make it seem like I invented the whole testing model.
So that this blogging does not turn defensive, I want to keep it on the critical path. As I said earlier, these were the assumptions and the outcomes, and a comparison of them. The outcomes were somewhat different from the assumptions, but in some cases (as in "Performance Testing Integration Project") the deviations were very small. Although they were not well founded, they were such that correcting them is not priority number one. Au contraire, "Testing Jin and Jang!" fails to meet expectations by a long shot. As I say in the text, "My thoughts began to roll like crazy at the time", which was true at the time - but the red thread in my head did not carry over to the text as I would have liked. In retrospect, I could filter the preaching out of the text and focus on the essentials and the cold facts. Would I be able to present my own experiences, or refer to others' experiences on the subject? Could that material be gathered and refined into something good and pithy that incites others to think "like mad"? The text lacks touch with reality, which prevents the reader from taking it seriously. It therefore calls for a rewrite due to its defects and deviations.
-
So, how to motivate the repair if the repair itself is not necessary? Self-improvement! If you have the opportunity to develop yourself, it should most certainly be taken. You never know where reviewing your own shortcomings may lead.
So that this blogging does not turn into preaching on behalf of self-improvement, I would like to stress that all faults, errors and deviations should be identified and investigated. Correcting them may then happen spontaneously while studying and searching for them. It could be like examining a desert: examining the sand reveals an anomaly in its smooth surface (the tip of a pyramid), and on further examination the find, once dug up, is revealed to be the mightiest tomb monument in the world, containing divine treasures and riches beyond imagination.
Friday, 27 August 2010
Do we have enough billable testing to do?
In our company there are two kinds of projects: small or smallish ones with 2 or 3 developers, lasting about 1-4 weeks, and mammoth projects that occupy the whole unit (20 developers) and last from 6 to 9 months. In both, developers rush to get things done and the product to the customer.
In the midst of all this is a lone tester. There is a myriad of things to test, but you cannot always bill the customer for those tests: the testing has not been sold to the customer! Only the development and development testing have been included in the offer, so they are the only billable things, and the real testing is done in an acceptance-test-driven style. While this kind of approach is great in a small project with few changes to the software, it accumulates into lots of versioning and rework (which is redundant!) and ties resources to a minor project for a long period of time. The official testers (read: a tester) are positioned in the larger projects that have lots of testing and things to bill.
Alas, there comes a situation where there is nothing to test for a whole week (say, the summer vacation weeks), and the tester is tied to the project nevertheless. It is almost certain that, because the tester is tied up, no other testing is sold to customers during that large project. So there is nothing to test. So the tester walks the empty corridors of the office asking the coders whether anyone has anything to test. AND EVERYONE HAS SOMETHING! There are things ranging from simple graphical user interface testing to complex integration interfaces and verifying production environments. The developers don't have the time/skills/motivation to do those tests properly (or at all), so they are left undone. And all this time the tester is supposed to do 100% billable work! And he has to keep track of new testing tools, techniques and whatnot - in both testing AND development!
OK, this is exaggeration by generalisation, but it hides a seed of truth. So, what the hell should we do?
One answer is to create "a buffer fund" or similar, which is used to cover the unbillable work done by testers. This could be, say, 5 hours a week that a single tester could spend, sharing his findings with the team afterwards. The amount could vary between 5 and 25 percent of the weekly work time depending on how much billable work there is at that moment. It could cover unsold testing in a project where testing is crucial and where, in reality, some (or lots of) testing must be done. It could also cover various tool tutorials, webinars, blogging (which has grown more important than ever *wink*), etc. - work that is not testing but is test-related all the same.
By doing this, quality improves both in general (people see the importance of testing and the consequences of not testing, and they might sell testing to customers with more enthusiasm) and within the testing genre (personal knowledge improves, as does the understanding of the craft and its direction). This can, however, become costly to the company, so the pros and cons should be weighed with precision. After all, the alternative is that the testers twiddle their thumbs, or that they don't count the unbillable hours at all (which leads to uncertainty of income and possibly uneven work hours).
Chew on that!
How do I interview a new tester?
I'm bound to get my baptism by fire in a field in which I have no previous experience. I'm going to interview two would-be testers and pick the more qualified one (or suggest taking both if they're both very good - no point casting pearls before swine). He/she/both (yes, there is a male and a female applicant coming) is/are applying for an internship from a university re-education program where people are trained to be software testing experts. (Yeah, no one can be an expert without proper practical knowledge. But to become an expert straight from the school bench! Huh?) This is a great opportunity for me to try my wings as a test manager and as an interviewer. Because I'm the one and only tester in our company (at least the only one fully involved in testing), I get a say in the selection.
Now I should find the right questions to dig out information about the applicants' testing skills. Because that information is scarce and hard to find (at the time of writing I searched in Finnish, and everybody seems to keep these things to themselves), I decided to write mine down for later use. After I had searched far and wide for the perfect reference material, I started writing... only to later discover some really useful material. It was in English, so it supports this rewrite more than the original post.
I managed to find a funny eBook about building a testing team by STC. The book is hilarious because it portrays the interview process as a scripted, unintelligent process where the true merits of testers are irrelevant. However, it contains a fragment of truth, which I incorporated into my interview material.
I also found a document written by some guy called Kaner *wink*, and I had a brief look at his thoughts about recruiting. He had some great thoughts about forming the right kind of questions, like:
"What would you do with a product that came to you without specifications?"
The document contains PRETTY good information concerning large-scale recruiting. This, however, was about recruiting a test intern for two and a half months. So I took a quick look at the document and buried it deep in my mind - and it popped out while I was rewriting this post! (Man, was I dumb not to read it fully through! Next time, baby... next time. We got a good intern, though.)
Instead I ask,
"Have you ever worked on a product that came to you without specifications? Tell me about the challenges this raised and how you handled them. (And then, as a follow-up question,…) What do you think you did particularly well in that situation? (And then…) What did you learn that will help you handle this better in the future?"
Off to the interview!
First of all, the applicant should tell about his/her experience in the testing scene and the IT world. This should include at least:
- project models (Scrum, waterfall, etc.) - a bonus is always to include the one your company uses most of the time
- testing levels (unit testing, integration testing, system testing, etc.)
- techniques (manual, scripted, automation, etc.)
- methods (exploratory, experience-based, bug catching, test-case based, etc.)
If the applicant does not include all of these in the story, the interviewer should feel free to ask about them. Depending on the possible future role, one could also enquire about the following:
- test tools (what has the applicant used, how and to what purpose)
- test planning and designing experience (test cases, test plans, test level plans, automation architectures, etc.)
- experience in code writing and project management (Perl, LAMP, .Net, Java, etc.)
- test managing (reporting, managing team, customer support, etc.)
If the applicant has education, has taken classes, or has anything else that may support the testing work, it can be asked about if it has not already come up. In addition, possible certificates (although overlooked by some ranking testers out there) in the testing genre and in IT in general can give indications of the person's capabilities if enough information has not yet presented itself. And if the applicant has examples to show of test cases he has designed or scripts he has written, they might prove useful.
Even if the applicant has the merits to support his selection, he must also be evaluated for how he fits into the team. If you have a group of testers in their twenties, a seasoned tester who has seen the world may change the dynamics of the team and break a successfully running engine. Whether you are looking for a diverse team or one of similar personalities and skills, the key is to make sure the "new guy" fits in. This raises the importance of the verbal and written skills of the applicant.
Whether or not the applicant seems suitable, the most important thing about a would-be tester is that he/she must have a passion for the craft! If you have an applicant with considerable skills and merits but no passion whatsoever, he'll only hinder the test team and make a good thing plummet into ruin.