
Friday 5 July 2013

You gotta fight for your right to test!

I am terribly sorry for the rant and biased output that is about to follow. I have done this once before, but the conversation resulting from it was rather pleasant and constructive. This is a comment / response / rant about a blog page I stumbled upon today. You can find the original post here. The page may be outdated, since it was created in 2010, but it seems it was updated half a year ago.

My first thought when I read the article was “not another test case”. I’m currently trying to figure out what I will write in my next column for the “Testaus ja Laatu” magazine, and since the theme is juxtaposition, I thought I’d try to write some blog-stuff first. One of the topics could be “test cases vs. no test cases”, but I shy away from that strict way of thinking. Exploratory testing, to me, is utilizing all the available tools and gimmicks to get the best results. A black-and-white world view narrows my take on testing too much.

Having said that, I will use my recently found taste for logic to point out what I think could be wrong in the article. I do not know the state of mind in which the article was written or the context in which it is supposed to fit, so I will project it onto my own workplace where necessary.

You have been warned…

The first sentence goes like this: “A significant factor in a successful testing effort is how well the tests are written.” I have no idea how significant it is supposed to be in this case; I would guess either the largest factor or the one right after it. When I do testing, the significance of the test cases is minuscule, and I tend to do successful testing. Not all the time, but I know of cases where properly written test cases did not contribute to a successful testing effort. I also don’t know exactly what qualifies as a successful testing effort. It could mean having run all the tests within the schedule (which doesn’t qualify as successful for many reasons), having written all the test cases (do I need to say more about this), the product being shipped to the end user (lots of products have been shipped to customers and later fixed due to poor quality), etc. So I would say that well written test cases /may/ be a factor in a successful testing effort, just as well as poorly written or not-at-all-written test cases.


“They have to be effective in verifying that approved requirements have been met AND the test cases themselves are understandable so the tester can run the test as intended by the test author.” 
So the test cases have to be effective in verifying requirements. Granted. Do they /have to be/ well written? No. Even poorly written test cases can be effective at verifying requirements. I think the ability to verify something comes not from the writing of a test case but from the skill of the tester. If a tester is skilled, there may be no need for any written test cases to get the job done. “--approved requirements have been met --” So anything that is not written in the requirements doesn’t get tested? We also have expectations about the product that don’t really qualify as requirements. Testers still need to be aware of anything that might threaten the quality of the product.


“The 3C’s: Clear, Concise, Complete. Test Cases are not open to multiple interpretations by qualified testers. Ambiguous or nonspecific Test Cases are difficult to manage.” 
What if we do not know enough about the product at the time we write the test case? Should we wait until the product is finished and then write the cases? Isn’t that just a huge waste of time and money? And when it comes to managing test cases, I think lines of text are the easiest thing to manage; it’s the people that might require managing. It is true that estimating coverage and depth may be difficult if the test case is ambiguous. I also think it is difficult to estimate those even with good and precise test cases. People are the best ones to estimate the depth of their testing and their confidence in it. ASK IF SOMETHING IS AMBIGUOUS. -> Fight ambiguity with openness and communication


“Test Cases are easily understood.  They provide just enough detail to enable testers familiar with the functional area under test to run the test case. Note:  Detailed click by click steps are only useful for automated tests.” 
This is something I agree with! A test case can be easily understood even if it’s like this: “Play around with the product and describe to the person next to you what it does.” It is both easy to understand and you already have enough familiarity, because the base requirement for that is none! This could be the baseline for any testing ever to be done in any context. FAMILIARISE YOURSELF WITH THE PRODUCT. -> Fight ignorance with eagerness.


“Test cases include setup requirements (environment & data), expected results, any dependencies on other tests and tracing to the requirements they validate. Are traced to the results of each test run, including defects (bugs) discovered, test platform run on, software build version used.” 
For the data and expected results, I recommend you all read Michael Bolton’s and James Bach’s conversation and decide for yourselves whether a test case is complete with expected results. It is necessary to document the things mentioned in the text so as to avoid unnecessary overlap. But traceability to test cases? Is that possible for bugs that are found during test case execution but outside the intended observation area? BE PREPARED FOR UNEXPECTED RESULTS WITH UNEXPECTED INPUTS. -> Fight patterns with chaos and vice versa.
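
To make that last question concrete, here is a small sketch (in Python; the IDs, field names and the bug itself are made up for illustration, not taken from the original article) of a test-run record: the case is traced to one requirement, yet one of the bugs it surfaces maps to no requirement and no expected result at all.

    # A small sketch of a test-run record with requirement tracing.
    # All IDs, names and values are made up for illustration.
    test_run = {
        "test_case": "TC-042 Login with valid credentials",
        "traces_to": ["REQ-7"],      # the requirement this case is written to verify
        "verdict": "pass",           # the traced expectation was met
        "bugs_found": [
            {"summary": "Login works, but the audit log is never written",
             "traces_to": None},     # noticed along the way; no requirement covers it
        ],
    }

    # Strict tracing has no natural place for the second observation:
    # the bug is real, but it belongs to no requirement and no expected result.
    for bug in test_run["bugs_found"]:
        if bug["traces_to"] is None:
            print("Real, but untraceable:", bug["summary"])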


“Measurable: For each test case, there must be a way to objectively measure that the test case has either passed or failed. All test cases must be linked to the requirements that they verify to enable impact of changes made to the requirements or solution designs to be identified (measured).” 
OK. Let’s say that the test is executed perfectly and the result is exactly as expected. A minute after that the computer crashes. Does that count as failed? How much do we actually know about the product when we say that a test has passed or finished? And if we want to find bugs, isn’t the test passed only if it finds bugs? I think a test only fails if you don’t run it, because then it didn’t test anything. A TEST ONLY FAILS IF IT IS NOT RUN. -> Fight unnecessary documentation with rightly timed planning. I.e. keep the time between planning and doing to a minimum. Preferably do them at the same time.
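
As an illustration of how little a binary verdict actually measures, here is a minimal sketch (in Python; checkout_total is a made-up function, not anything from the article). The test “passes” because its one assertion holds; everything the assertion does not encode, like the crash a minute later, is invisible to the verdict.

    # A minimal sketch of an "objectively measurable" test: one assertion, one verdict.
    # checkout_total is a made-up function used only for illustration.
    def checkout_total(prices_in_cents, discount_in_cents=0):
        """Sum the item prices and subtract a discount (everything in cents)."""
        return sum(prices_in_cents) - discount_in_cents

    def test_checkout_total_with_discount():
        # The test "passes" as long as this single expectation holds.
        assert checkout_total([1000, 2000], discount_in_cents=300) == 2700
        # A crash a minute later, a slow response or a corrupted log file
        # are all outside what this verdict can ever report.

    if __name__ == "__main__":
        test_checkout_total_with_discount()
        print("PASS - but only for the one thing the assertion checked.")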


“The test case must have been approved, prior to being run, by the key stakeholders of the requirements that the test case verifies. Any changes made to test cases caused by requirements or solution designs changes must also be approved.” 
There is ABSOLUTELY NO POINT IN THIS! Why the hell do we need approval for our testing? The only thing that requires approval is the results of our testing. If the stakeholders do not trust us to test the software, we could record everything we do and ask them to audit the test material. I would think they are happier to audit actual results than worthless, constantly changing, trivial documentation. IF WE NEED EVERY TEST APPROVED, WE LOSE MONEY AND TEST COVERAGE. -> Fight über control with proper* documentation.


“Realistic test cases DO: Verify how the product will be used or could be misused, eg positive and negative tests. Verify functions and features that implement approved product requirements. Can be run using the platforms and software configurations available in the test environment.
Realistic test cases DO NOT: Verify out of scope functions and features. Verify unapproved product requirements.”
I agree with the first sentence – a good test could test how the product could be used/misused, and also how it should/will be used. How much does the product actually differ from what the customer actually wants? I also agree with the second sentence, but it should be broader: it should cover features, platforms, integrations, data integrity, security, performance, etc. I don’t understand the third sentence, for if we don’t have the proper environment or setup, we should acquire it in time. If we do have a scope (referring to the fourth sentence), then I agree that we should not spend too much time on out-of-scope elements. They could still be mutated or broken by changes made somewhere else, so regression/smoke testing should be applied to out-of-scope areas. In the fifth sentence I don’t understand what is meant by “unapproved”. Unapproved by whom? By the testers, the client, the developers, the managers? It really depends, so making a claim like that is just nonsense. A REALISTIC TEST CASE IS REALISTIC BUT FLEXIBLE. -> Fight rigid descriptions with autonomy, critical thinking and challenging.


“Test Cases must be able to be successfully completed within the project scope and schedule.” 
There is no point in writing test cases that are not run. But “must”? And why do they have to be “successfully run”, and is a failed test a “successfully run” test? If a test finds three hundred bugs, is it successfully run if it doesn’t reach the expected result? WE CAN ONLY RUN AS MANY TEST CASES AS TIME ALLOWS. -> Fight the quantification of test cases with the amount of time spent doing testing. Count time, not test cases.



To summarize, I think the whole concept of SMART test cases is wrong, excluding the few things I agreed with. I also encourage you to keep the writing of test cases to a minimum and the amount of testing done to a maximum. Use the time wisely and appropriately! But if you do write test cases, you should consider these instead of the SMART way:

  • Fight ambiguity with openness and communication
  • Fight ignorance with eagerness.
  • Fight patterns with chaos and vice versa.
  • Fight unnecessary documentation with rightly timed planning. I.e. keep the time between planning and doing to a minimum. Preferably do them at the same time.
  • Fight über control with proper documentation.
  • Fight rigid descriptions with autonomy, critical thinking and challenging.
  • Fight the quantification of test cases with the amount of time spent doing testing. Count time, not test cases.


I will leave you with this. I promise I will be back with more about test cases (if I have the time). I’m not saying “don’t write test cases”, but use your head! It is not smart (pun intended) to follow some rigid procedure in every context; adapt to the situation and make the most of the time you have to test.

- Peksi

* Proper: appropriate and as automated as possible, including video recording, automated logs, scans of scribbled notes, session sheets, statements, etc. Whatever is required to get enough information to the decision makers without hindering the job of the tester.
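
As one hedged example of what “as automated as possible” could look like, here is a tiny sketch (in Python; the file name, note format and build number are my own invention) of a timestamped session log a tester can keep appending to during a session and hand over for auditing.

    # A tiny sketch of lightweight, automated session documentation:
    # timestamped notes appended to a plain-text log during testing.
    # The file name and format are made up for illustration.
    from datetime import datetime

    def note(message, log_file="session-notes.log"):
        """Append a timestamped observation to the session log."""
        stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        with open(log_file, "a", encoding="utf-8") as log:
            log.write(stamp + "  " + message + "\n")

    note("Started exploring the checkout flow, build 1.4.2")
    note("BUG: discount field accepts negative values")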

2 comments:

Anssi Lehtelä said...

Hi Peksi,

The original blog post seems to define test cases the way auditors see (or are seen to see) them. Traceability is king there: each feature in the system needs to have a requirement, and each requirement needs to have a test case. This may mean that in order to do a good job, and keep it auditable, the system is actually built and tested first, and only after that are the requirements and test cases written.

Anyway, I liked your thoughts and the clear way of stating them. I will suggest that a few of the testers at my workplace take a look.

Jyothi said...

Hi Peksi,

Thanks for the post. This is a challenge (matching/writing test cases to sometimes undefined and at times incomplete requirements) which my team and I face often and which we need to address.

I had intended to write something different, but that was before I lost the comment. This here sums up what I had intended:

Michael Bolton @michaelbolton 5 Jul
Test cases come from exploratory activity. Over-focusing on test cases freezes exploration, ending experimentation that's the REAL #testing.

This is what actually occurs day in and day out, but I am trying, on a context-by-context basis, to reach out to my seniors, in turn helping us both learn the importance of test ideas and then clarify the need for test cases, which of course are a business need.