Monday Sep 07, 2009
By John Morrison on Sep 07, 2009
Wednesday Aug 26, 2009
By John Morrison on Aug 26, 2009
Monday Aug 10, 2009
By John Morrison on Aug 10, 2009
Wednesday Jul 22, 2009
By John Morrison on Jul 22, 2009
Notes on Cost of Exposure Quality Risk Analysis - http://softwareandquality.blogspot.com/2009/07/quality-risk-analysis-cost-of-exposure.html
Tuesday Jul 21, 2009
By John Morrison on Jul 21, 2009
What does "Quality" mean ? Read up at - http://softwareandquality.blogspot.com/2009/07/what-is-quality.html
Monday Jul 20, 2009
By John Morrison on Jul 20, 2009
QA, QC, Testing – more often used inter-changeably and generally meant to imply "Testing". However, each of these mean different things. Here's a short post that tries to clarify - http://softwareandquality.blogspot.com/2009/07/qa-is-not-testing-qa-vs-qc.html
Saturday Feb 16, 2008
By John Morrison on Feb 16, 2008
This is a work-in-progress document whose purpose is to expand on the test types listed in a previous entry. I will keep updating it as I find time.
- Acceptance testing
also known as User Acceptance Testing (UAT), is generally performed by customers or user representatives. Sometimes the product team may perform these tests too. UAT tests can be both functional and non-functional. These tests are performed after Integration & System testing.
UAT is meant to be the final level of testing and verifies whether the product meets the agreed-upon product acceptance criteria. A relatively small set of test cases is selected to be executed as part of the acceptance tests. These are not meant to unearth new defects; instead, they verify that the product acceptance criteria have been met. The tests that are part of UAT should already have been covered by the product test team's own testing, so that no late surprises emerge when the UAT is run by customers. Failure of acceptance tests at the hands of the customer can have serious implications, including the product being rejected by the customers, re-work & delays, penalties and so on.
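Since acceptance tests trace back to agreed-upon criteria rather than hunting for new defects, they can often be automated as simple, direct checks. Here is a minimal sketch in Python's unittest style; the acceptance criterion, the `OrderStore` class and its methods are all hypothetical stand-ins for a real product and its real criteria:

```python
import unittest

# Hypothetical acceptance criterion: "a saved order can later be
# retrieved with the same total". OrderStore is an illustrative
# stand-in for the real product under test.
class OrderStore:
    def __init__(self):
        self._orders = {}

    def save(self, order_id, total):
        self._orders[order_id] = total

    def get_total(self, order_id):
        return self._orders[order_id]

class AcceptanceTests(unittest.TestCase):
    def test_saved_order_is_retrievable(self):
        # Directly mirrors the agreed acceptance criterion.
        store = OrderStore()
        store.save("A-100", 42.50)
        self.assertEqual(store.get_total("A-100"), 42.50)

if __name__ == "__main__":
    unittest.main()
```

The point is the shape: each acceptance test states one criterion plainly, so a failure maps straight back to an unmet agreement rather than to an exploratory finding.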
- Ad hoc testing
as the name suggests, this type of testing is generally unplanned and does not need to use any formal testing techniques. It may also be termed monkey testing or random testing in view of its nature and approach. An important attribute of this type of testing is the precedence of test execution over test case development. The tester's gut feel, intuition, past experience with similar products, and any related domain or subject matter expertise are all highly valued and leveraged during this type of test activity.
Ad hoc testing should be part of a sound overall test strategy, complementing other types of planned test activities. It is not meant to be used in isolation or as the sole determinant of a product's readiness to ship. Some of the attributes of ad hoc testing that tend to endear it to testers include the speed and pace of execution, the range that can be covered, and the sheer flexibility inherent in the approach. While all of this may make it seem like a totally unconstrained random activity, it need not be so. We can introduce some elements of planning into this seemingly unplanned test approach. One factor that may be constrained is time: the start and end times for the test activity may be defined. In addition, the general direction of testing and the broad area(s) to be focussed upon may also be specified. The exact actions, the screens and features to be exercised, and the sequence of steps to follow are left to the individual tester's judgement.
While an earlier statement indicates that test execution precedes test development, this does not absolve the tester from developing test cases. It is important to translate the tests performed into test cases that then become part of a formal test suite to be run as part of planned testing. The idea is not to end up repeating the same set or type of tests every time ad hoc testing is performed. Once a set of new tests is identified, they need to become part of your test suite and be executed as part of your regular planned test cycle. It is recommended that during ad hoc testing, testers make notes of actions performed, observations, and any relevant data that may later be used to develop test cases and to help analyze any issues that crop up.
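As a sketch of that translation step, suppose an ad hoc session note reads "entering a negative quantity crashed the price calculation". That observation can be captured as a repeatable test case so the check survives into the planned regression suite. The `calculate_price` function and the scenario are hypothetical illustrations, not from the original post:

```python
import unittest

# Hypothetical code under test, standing in for the feature that an
# ad hoc session found to misbehave on negative quantities.
def calculate_price(unit_price, quantity):
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    return unit_price * quantity

class CartRegressionTests(unittest.TestCase):
    # The ad hoc finding, now a permanent regression test.
    def test_negative_quantity_rejected(self):
        with self.assertRaises(ValueError):
            calculate_price(9.99, -1)

    # A companion happy-path test recorded from the same session notes.
    def test_normal_quantity(self):
        self.assertEqual(calculate_price(2.0, 3), 6.0)

if __name__ == "__main__":
    unittest.main()
```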
- Aesthetics testing
it is said that beauty is in the eye of the beholder. In the case of our products, it is in the eyes of their users. Aesthetics testing involves testing the User Interface, focussing on the product's looks rather than its functionality alone. It is part of Usability testing but often gets relegated to the back burner in favor of attributes such as ease of use and the user's ability to quickly accomplish tasks. One factor that works against any serious focus on product aesthetics is the notion of what engineers think is "good enough" for customers. Added to this is the primary focus on feature functionality and technical wizardry, which is good and essential. However, presentation is definitely important and should not be ignored. Bugs found in this type of testing tend to be termed cosmetic issues, or at times even trivial issues, which in the heat of approaching deadlines and schedule pressures invariably get deferred or lowered in priority.
Aesthetics testing covers the various elements of the UI, including display styles, fonts, colors, messages, windows, icons, menus, pointers, etc. An application that follows good aesthetic principles helps create a good impression on users and positively influences product acceptance. The product also tends to look and feel like the output of a well-designed and professional endeavor. Frankly, how many of us would be inspired to use applications that have odd color schemes and display styles, unsuited fonts and other such cosmetic glitches, which may not affect functionality but make the product unpleasant to view?
Considering the subjective nature of beauty and aesthetics, it helps to consult a representative sample of potential users as well as folks such as professional UI designers, human-computer interaction experts and graphic artists, and to conform to commonly accepted, user-expected interface design guidelines.
- Buddy testing
this refers to the practice of pairing a developer and a tester as buddies to test a piece of code or functionality. This type of testing is best applied in the early development stages, after unit testing is complete and before the code is checked in. At this stage of product development, buddy testing may employ a variety of white box and black box test techniques, as appropriate to the artifact available.
Buddy tests aim to avoid duplicating unit test coverage and endeavor to cover areas that may not have been addressed in unit tests, such as the user interface. Other important areas that buddy testing is expected to cover include reviews of coding standards & practices, code walkthroughs, inspections, and the development of test scenarios that maximize coverage of statements and paths in the code. A beneficial side effect of buddy testing is greater clarity around product specifications, which helps the testers develop better tests and helps developers get the answers they need to make any design changes early.
Because of the early detection of issues it makes possible, buddy testing is often termed "preventative" testing: the aim is to unearth defects early in the development cycle and address them when it is generally cheaper to do so than in later phases. Any issues raised are discussed, and after the agreed-upon changes are made, the code is checked in. This buddy-tested code is integrated and later delivered to the functional testing team, by which time the specifications are clearer and there is an assurance of greater quality in the product.
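To make the statement-and-path coverage idea concrete, here is a small sketch of what a buddy pair might produce before check-in: the pair enumerates each branch of a function together and writes one test per path. The `shipping_fee` function and its rules are hypothetical examples, not taken from the original post:

```python
import unittest

# Hypothetical function reviewed in a buddy-testing session. It has
# three paths: invalid weight, light parcel, and heavy parcel.
def shipping_fee(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5.0                           # flat fee for light parcels
    return 5.0 + 2.0 * (weight_kg - 1)       # surcharge per extra kg

class ShippingFeeTests(unittest.TestCase):
    # One test per path, agreed on by developer and tester together.
    def test_invalid_weight_rejected(self):
        with self.assertRaises(ValueError):
            shipping_fee(0)

    def test_light_parcel_flat_fee(self):
        self.assertEqual(shipping_fee(1), 5.0)

    def test_heavy_parcel_surcharge(self):
        self.assertEqual(shipping_fee(3), 9.0)

if __name__ == "__main__":
    unittest.main()
```

Walking the branches together like this is also where the side benefits show up: the tester learns the intended behavior, and the developer hears about edge cases (like the zero weight) before the code is checked in.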
- Pair testing
involves a pair of testers testing a feature or module together on the same machine. During the actual testing process, while one person tests the feature, the other notes observations and provides additional perspectives. The expectation is that having two people test the same feature together leverages their different sets of views, ideas and unique perspectives, so the feature is tested better than it would have been by a single tester. Pair testing may also be used to quickly ramp up a newcomer by pairing them with a more experienced tester.
While all this sounds pretty good, remember that in some circumstances it may not be possible to derive synergies from this process. Examples include pairing testers who do not get along well, or pairing a junior and a senior tester when there is a fast-approaching deadline to be met.