Monday Sep 03, 2007

Test Case Design

We look at a list of methodologies used in the process of designing test cases. The obvious question that folks new to (or not too familiar with) testing tend to ask is ... why Test Case Design, and why all of these methodologies?

The answer to the why is based on a fundamental premise of Software Testing: "Complete" or "100%" testing is not possible.

Yep, it sounds a little negative to the uninitiated, but that's the fact. It follows from the above that "all" Testing is incomplete, although the degree of incompleteness varies. In reality, testing is performed within various limitations and boundaries defined by variables such as resources, time, cost, etc.

Given these constraints, Testers need to come up with a set of Test Cases that has the highest probability of unearthing the greatest number of defects. This is where Test Case Design plays a significant role.

In this post, we'll keep things short and take just a quick, high-level view of the methodologies. These cover both Black box and White box techniques (a small sketch of the first two follows the list). Here they are ...

1) Equivalence Partitioning
2) Boundary Value Analysis (BVA)
3) Cause-Effect Graph / Decision Table
4) Error Guessing
5) Statement Coverage
6) Decision Coverage
7) Condition Coverage
8) Combination of Decision & Condition Coverage
9) Multiple Condition Coverage
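
To make the first two a little more concrete, here is a minimal sketch in Python. The is_eligible function and its specified range of 18 to 60 are purely hypothetical, invented just to illustrate how values get picked.

# Hypothetical function under test: specified to return True only for ages 18 to 60 (inclusive).
def is_eligible(age):
    return 18 <= age <= 60

# Equivalence Partitioning: one representative value per partition is assumed
# to behave like every other value in that partition.
equivalence_cases = [
    (10, False),  # invalid partition: below the valid range
    (35, True),   # valid partition: inside the range
    (75, False),  # invalid partition: above the valid range
]

# Boundary Value Analysis: defects tend to cluster at the edges of partitions,
# so we test on and just either side of each boundary.
boundary_cases = [
    (17, False), (18, True), (19, True),  # lower boundary
    (59, True), (60, True), (61, False),  # upper boundary
]

for age, expected in equivalence_cases + boundary_cases:
    actual = is_eligible(age)
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"age={age:>2}  expected={expected}  actual={actual}  {verdict}")

The point to note is that a handful of carefully chosen values stands in for the practically infinite set of possible inputs - which is exactly the compromise Test Case Design is about.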
 
We'll look at some of the above in upcoming posts. For now, if you are really keen to deep dive, Googling might be a good idea, or better still, grab a book on Software Testing and let the concepts soak in. One other thing since we are on the subject of Test Case Design ... while it sure is important to test whether the program does what it's supposed to do, it's also very important to test whether the program does what it's not supposed to be doing ...
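
To illustrate that last point, here is a small negative-testing sketch in Python. The withdraw function and its rules (never overdraw the account, never accept a non-positive amount) are made up purely for illustration; the idea is simply that some test cases should confirm the program refuses to do what it must not do, not just that the happy path works.

# Hypothetical function under test: must never allow the account to be overdrawn
# and must reject non-positive withdrawal amounts.
def withdraw(balance, amount):
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal amount")
    return balance - amount

# Positive test: does the program do what it is supposed to do?
assert withdraw(100, 40) == 60

# Negative tests: does the program refuse to do what it is NOT supposed to do?
for bad_amount in (150, -5, 0):
    try:
        withdraw(100, bad_amount)
    except ValueError:
        print(f"PASS: withdrawal of {bad_amount} was correctly rejected")
    else:
        print(f"FAIL: withdrawal of {bad_amount} was wrongly allowed")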



Tuesday Aug 28, 2007

When a Test case passes, is it "successful"?

One thing that's interesting to observe when interacting with folks on the subject of testing is the use of the words "successful" and "unsuccessful" when talking about test cases that have been executed on a particular product / feature.

Generally, people tend to associate the term "successful" with a test case that has "passed" without encountering any bug / error during execution, and "unsuccessful" with a test case that "fails" due to a bug / error during execution.

From a Quality / Testing perspective, that reasoning is the wrong way around. Counterintuitive as it may sound, what we should really be saying is ... a test case that fails is in reality "successful", and a test case that passes is actually "unsuccessful".

Let's imagine this scenario - my car leaks oil, belches dark fumes, rattles and makes enough noise to wake up the dead. Sensing that something could be amiss, I decide to take the vehicle to a nearby garage to check for problems and fix them. The friendly mechanic runs a set of "tests" on the automobile. After a while (and after reducing my net worth by a small fortune), the skilled tester, oops mechanic, declares that all his "tests" passed and did not identify any problems with my car. Can he now claim that the "tests" were "successful" because they all "passed" ... talk about a 100% pass rate?

Interesting point though ... it boils down to what you think the purpose of testing is. Is it to prove that a program is "error-free"? Methinks ... ok, will save that as fodder for the upcoming posts!

