Friday 27 August 2010

Do we have enough billable testing to do?

In our company there are small or smallish projects, with 2 or 3 developers, that run about 1–4 weeks. Then there are mammoth projects that occupy the whole unit (20 developers) and last from 6 to 9 months. Either way, developers rush to get things done and ship the product to the customer.

In the midst of all this is a lone tester. There is a myriad of things to test, but you cannot always bill the customer for those tests: testing has not been sold to the customer! Only development and developer testing were included in the offer, so those are the only billable items, and the real testing happens in an acceptance-test-driven style. While this kind of approach works well in a small project with few changes to the software, it piles up into lots of versioning and redundant rework, and it ties resources to a minor project for a long period of time. The official testers (read: a tester) are placed in the larger projects, which have lots of testing and things to bill.

Alas, a situation arises where there is nothing to be tested for a whole week (say, during summer vacation weeks), and the tester is nevertheless tied to the project. It is almost certain that, because the tester is tied up, no other testing gets sold to the customer during that large project. So there is nothing to test. So the tester walks the empty corridors of the office asking the coders whether anyone has anything to test. AND EVERYONE HAS SOMETHING!!! There are things ranging from simple graphical user interface testing to complex integration interfaces, verifying production environments, and so on. The developers don't have the time, skills, or motivation to do those tests properly (or at all), so they are left undone. And all this time the tester is supposed to do 100% billable work! And he has to keep up with new testing tools, techniques and whatnot, in both testing AND development!

OK, this is an exaggeration by generalisation, but it hides a seed of truth. So, what the hell should we do?

One answer is to create "a buffer fund" or similar, which is used to bill the otherwise unbillable work done by testers. This could be, say, 5 hours a week that a single tester spends and then shares his findings with the team. It could also cover projects where testing has not been sold to the customer. The amount could vary between 5 and 25 percent of the weekly work time, depending on how much billable work there is at that moment. It could cover unsold testing in a project where testing is crucial and where, in reality, some (or lots of) testing must be done. It could also cover tool tutorials, webinars, blogging (which has grown more important than ever *wink*) etc.: work that is not testing itself but is test-related nevertheless.
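As a rough illustration of how such a sliding buffer might be sized, here is a minimal sketch. The 40-hour week and the linear scaling between the 5% and 25% endpoints are my own assumptions for the example, not something the post prescribes.

```python
# Sketch: size a weekly "buffer fund" between 5% and 25% of the work week,
# growing as the tester's billable load shrinks. Assumes a 40-hour week.
WEEK_HOURS = 40.0
MIN_BUFFER, MAX_BUFFER = 0.05, 0.25  # share of the week

def buffer_hours(billable_hours: float) -> float:
    """Hours of otherwise unbillable test work to charge to the buffer fund this week."""
    load = min(max(billable_hours / WEEK_HOURS, 0.0), 1.0)  # clamp to [0, 1]
    # Fully booked tester -> 5% buffer; idle tester -> 25% buffer.
    share = MAX_BUFFER - (MAX_BUFFER - MIN_BUFFER) * load
    return round(share * WEEK_HOURS, 1)

print(buffer_hours(40))  # 2.0 hours when fully billable
print(buffer_hours(20))  # 6.0 hours at half load
print(buffer_hours(0))   # 10.0 hours in a quiet vacation week
```

The roughly 5 hours a week mentioned above sits comfortably inside that 2–10 hour band.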

By doing this, quality improves in general (people see the importance of testing and the consequences of not testing, and they might sell testing to customers with more enthusiasm) and within the testing discipline (personal knowledge improves, as does the understanding of the craft and its direction). This can, however, become costly to the company, so the pros and cons should be weighed with precision. After all, the alternative is that the testers twiddle their thumbs, or that they don't log the unbillable hours at all (which leads to uncertainty about income and possibly uneven working hours).

Chew on that!

1 comment:

Oliver Vilson said...

"One answer is to create “a buffer fund” or similar which is used to bill the unbillable work done by testers."

What you describe here could be altered a little and then it would be similar to the Theory of Constraints:
http://en.wikipedia.org/wiki/Theory_of_Constraints
http://en.wikipedia.org/wiki/Critical_Chain_Project_Management

I haven't been on the development side of testing for a while, so my know-how might be a little dated, but I used to handle these problems through a basic setup of risk management, ToC and so-called "wasted time". What it means is that there is time set aside for the testing that needs to be done, and part of it is buffered (to be spent in whatever way). It of course caused quite a few problems, since the time needed for "good enough" testing correlates directly with how many bugs are found.

Currently, in my own testing company, statistically around 1/3 to 1/2 of the time is spent on bug investigation and bug reporting.