Optimizing Test Design

Boris Beizer, one of the definitive gurus of software testing, famously said, “More than the act of testing, the act of designing tests is one of the best bug preventers known.” Proactive test design helps you build quality into the system instead of testing for defects towards the end of the software development lifecycle: it allows testers to identify and eliminate defects before they are ever coded.

Test case design is also a powerful tool for risk analysis and optimization, since it decides which parts of the system are to be tested. More often than not, the number of test cases created far exceeds the time and resources available to execute them. In such cases, good test design can be used to select the right number of tests that provide optimum coverage. Test design is also a huge factor in the success of any test automation project, even more so than tool selection and scripting.

However, a single test case design technique is rarely adequate to fully test a product. It is advisable to use a combination of techniques, because the value of each depends on the context of the functionality being tested. At Zen Test Labs, we use a combination of traditional and advanced test case design techniques to produce an optimized set of test cases that provides maximum test coverage. We have outlined these techniques below:


Advanced techniques such as Classification Trees are also immensely helpful in optimizing test cases. A Classification Tree identifies test-relevant aspects, called classifications, and their corresponding values, called classes. Classes from the different classifications are then combined into test cases. Classification Trees make test design visual, which eases communication and understanding. The technique can also yield the bare minimum set of test cases in which every class is covered at least once.

In the example below, a maximum of 24,192 (2 × 8 × 7 × 2 × 2 × 2 × 3 × 3 × 3) test cases would be required to test every possible combination. With Classification Trees, a minimum of 8 test cases covers every class at least once, and an optimized set of 128 is enough.
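To make the arithmetic concrete, here is a minimal sketch (in Python, purely for illustration) of the two counts a Classification Tree gives you; the nine classification sizes are the ones from the example, and the product of those factors works out to 24,192:

```python
# Counting behind a Classification Tree: exhaustive combinations versus
# the minimum needed to cover every class at least once.

def total_combinations(class_counts):
    """Exhaustive testing: the product of the class counts per classification."""
    total = 1
    for n in class_counts:
        total *= n
    return total

def minimum_tests(class_counts):
    """Each class covered at least once: the largest classification sets
    the minimum number of test cases needed."""
    return max(class_counts)

sizes = [2, 8, 7, 2, 2, 2, 3, 3, 3]
print(total_combinations(sizes))  # 24192 possible combinations
print(minimum_tests(sizes))       # 8 tests cover every class at least once
```

The minimum is simply the size of the largest classification, because each test case can pick one class from every classification at once.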


Designing test cases is as much an art as a science. At the core of good test design is a combination of test design techniques backed by creative and critical thinking. The objective is to create reasonably sized tests that are aggressive enough to find defects. Reducing test design to a dull, mechanical activity results in shallow, obvious tests that are unable to break the system. In our experience, it is this combination of techniques and thinking, more than any single technique, that leads to optimal results.

Zen Test Labs | ‘Go-Live’ Faster!

Using Mind Maps in Testing

Mind maps are an excellent tool and can be used in a variety of testing activities, such as requirements analysis, test design, test planning, session reports and measuring test coverage. Testing relies heavily on communicating stories about what should be tested, how it should be tested, which areas are risky, and so on. Making this process visual helps testing teams articulate their thoughts and ideas better. Drawing mind maps also makes generating new ideas much easier.

Take a look at the simple “Replace” dialog box below.

Dialogue Box

We can easily create a mind map for testing this functionality using the following steps:

  1. Draw the main theme in the centre.
  2. Draw the module name/features of the application branching out from the main theme.
  3. Draw the sub module/feature branching out from each module/feature.
  4. Add colors to your mind map to make it easier for your brain to group things together.
  5. Write test cases for each feature and sub feature.
  6. Include only testable items in your mind map.
  7. Try not to use full sentences in your mind map.

Mind Map 1
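The steps above can be sketched as a simple data structure. This is a hypothetical example, not tied to any particular mind-mapping tool; the features of the “Replace” dialog are abbreviated:

```python
# A mind map is just a tree: the main theme at the root (step 1),
# features branching out (steps 2-3), and test ideas as leaves (step 5).
mind_map = {
    "Replace dialog": {
        "Find what": {
            "empty input": {},
            "special characters": {},
        },
        "Replace with": {
            "empty input": {},
        },
        "Buttons": {
            "Replace": {},
            "Replace All": {},
            "Cancel": {},
        },
    }
}

def render(node, indent=0):
    """Return the map as an indented outline, one keyword per line
    (following the advice to avoid full sentences)."""
    lines = []
    for name, children in node.items():
        lines.append("  " * indent + name)
        lines.extend(render(children, indent + 1))
    return lines

print("\n".join(render(mind_map)))
```

Keeping each node to a keyword or two means the rendered outline stays scannable in a single view, which is the whole point of the map.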

Some examples for creating exclusive mind maps or creating branches in existing mind maps are:

  • Mind maps for field level validation of all fields on the screen
  • Identify fields that are common to all screens and create a ‘Common Fields’ mind map, e.g. a Date field that is the same on all screens
  • Mind maps that include business rules
  • Mind maps for events like Mouse Over, Click etc
  • Mind maps based on Screen Names
  • Mind maps based on Functionality

An example Mind Map for validating a subscription form

Subscription Form Mind Map

Ideas for using mind maps in testing:

  • Mind Map Jamming: All the testing team members read /analyze a particular requirement/feature and create a mind map for it together.
  • Using Mind Maps for Defect/ Execution Summary: Create a mind map of test cases. After execution, you can mark (tick or cross) the mind map as per the Actual Result, thus using it to provide Defect/Execution Summary.
  • Smoke/Sanity Testing: Create a mind map for all the flows that are to be smoke tested or sanity tested.
  • Scope: Create a mind map to show what is in Scope and what is not in Scope.

You can use mind maps anywhere and everywhere! Mind maps exist to make your life easy, so if a mind map is getting too big or complicated, try splitting it. The great thing about mind maps is that all test cases are visible in one view; you don’t need to scroll up and down. This also makes it simpler to add new points whenever you want. Mind maps provide more coverage, and the likelihood of missing important points is lower. Avoid long, detailed sentences: using one word per line improves clarity and understanding, makes recollection easier, and makes your mind maps more powerful and flexible.


Mind mapping skills improve over time, and with practice your mind maps will become more extensive and wide-ranging. Although mind maps help you simplify information and make it easily understandable, you must not forget that they are ultimately models, and may therefore leave out important aspects. So make sure you question what might be missing from the map and add it. This is quite simple: all you have to do is add another node!

Satish Tilokchandani | Lead Consultant | Zen Test Labs

Testing the Mobile Apps explosion

It won’t be long before it becomes A for Android, B for BlackBerry, C for Cupcake, D for Donut, E for Éclair, F for Froyo, G for Gingerbread. These are words that probably half the planet’s population (approx. 3.2 billion people) is well versed with, and another 700 million will join them over the next 3 years! If you haven’t guessed it by now… I am referring to the explosion of mobile devices into our lives.

At the core of this explosion in mobile devices (and here I mean smartphones and tablets) is the innovation in the field of processors. With the processing speeds of these devices increasing dramatically, the demand from users to run complex applications has also gone up. Business users want to manage their personal and professional lives through a single interface, and they want apps that allow them to do this. Add the speed at which innovation in devices, processors and operating systems takes place, and it is not a pretty picture for app manufacturers.

So, what does all of this mean to you if you are an app manufacturer, or an enterprise trying to create mobile apps for your workforce or customer base?

Some of the areas of impact include:

  • A constant need to keep your app updated with the latest OS upgrades/devices in the market
  • Building highly secure applications that give peace of mind to users/administrators
  • Building apps that are not heavy on device resources (for optimum performance)
  • Constantly upgrading/enhancing your application to keep users engaged

  • Rolling out apps at a speed that would put Formula 1 drivers to shame!

Well, just joking on that last one, but those of you who work in this space know what I mean!

Over years of managing the quality assurance programs of multiple Fortune 500 companies, and having set up a Mobile Testing Lab fairly early on in this space, I want to share a basic methodology that can mitigate these risks when you develop or deploy mobile apps.

Mobile Configuration Optimization
Choose an optimum number of configurations to test your app on, using statistical techniques like Classification Trees, Orthogonal Arrays, etc.
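One family of such techniques, all-pairs (pairwise) selection, can be sketched as a greedy algorithm: keep picking the configuration that covers the most still-untested value pairs. The device parameters and values below are illustrative assumptions, not actual lab configurations:

```python
from itertools import combinations, product

# Hypothetical device parameters for illustration only.
params = {
    "os":      ["android", "ios", "blackberry"],
    "screen":  ["small", "large"],
    "network": ["3g", "wifi"],
}

def covered_pairs(config, names):
    """All parameter-value pairs exercised by one configuration."""
    return {((a, config[a]), (b, config[b])) for a, b in combinations(names, 2)}

def pairwise_suite(params):
    """Greedy all-pairs selection: repeatedly pick the configuration that
    covers the most still-uncovered value pairs."""
    names = list(params)
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add(((a, va), (b, vb)))
    configs = [dict(zip(names, combo)) for combo in product(*params.values())]
    suite = []
    while uncovered:
        best = max(configs, key=lambda c: len(covered_pairs(c, names) & uncovered))
        suite.append(best)
        uncovered -= covered_pairs(best, names)
    return suite

suite = pairwise_suite(params)
print(len(suite), "configurations instead of", 3 * 2 * 2)  # 6 instead of 12
```

Even on this toy example the suite halves; on real device matrices with dozens of OS versions, screen sizes and carriers, the reduction is far larger.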
Mobile Test Automation
Automate as much of the core testing as possible right from the get-go. We have seen a 50-70% reduction in testing effort while ensuring complete coverage across devices. Automation built on the right design principles also leads to highly reusable scripts.
Mobile Performance Testing
A holistic approach to performance testing should cover areas such as volume testing, endurance testing, performance monitoring, soak testing and testing under real-world scenarios.

We have also written an in-depth whitepaper on how mobile is changing the face of software testing. I would love to hear from readers about their learnings when developing or testing mobile apps. Please feel free to write to me.

Amol Akotkar | Test Consultant | Zen Test Labs

Reducing dependence on automation engineers to manage test automation!

I have always wondered what it would mean to separate test automation from automation engineers, considering that test automation has always been treated as the holy grail of testing. Enterprises that have achieved high levels of automation in the testing process have enhanced productivity exponentially while improving coverage and thus reducing risk. This has translated into automation engineers holding design approaches close to their hearts and controlling scripting tightly. Given this dynamic, the adoption of test automation has remained low over the years.

Test automation today has transitioned from a “Record and Playback” mode to a virtually “Scriptless” mode, enabling rapid, on-the-go test automation.

This has allowed enterprises automating their testing to remain oblivious to tool-specific coding, making automation suites maintainable and resource-independent. However, scriptless automation frameworks still have missing links. Most demand extensive business-user involvement, particularly to test the technology enablement, and time to market can stretch beyond what is acceptable. Among the causes: extensive manual testing of the solution; heavy dependence on business analysts (from business or IT) for test design and execution; a strong dependence on skilled and expensive technical resources for automation; and the need to manage spikes in demand for QA resources, which drives up QA costs.
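To illustrate the idea behind “scriptless” automation, here is a minimal keyword-driven sketch: test steps are authored as plain data, and a small engine maps keywords to tool-specific code. The keywords and the stand-in application are hypothetical, not ZenFRAME internals:

```python
# Keyword-driven ("scriptless") automation in miniature: business users
# author test steps as data; the engine hides all tool-specific coding.

class FakeApp:
    """Stand-in for the application under test."""
    def __init__(self):
        self.fields, self.logged_in = {}, False
    def set(self, field, value):
        self.fields[field] = value
    def click(self, button):
        if button == "login":
            self.logged_in = self.fields.get("user") == "admin"

def run(steps, app):
    """Interpret a list of (keyword, *args) tuples against the app."""
    keywords = {
        "enter": lambda app, field, value: app.set(field, value),
        "click": lambda app, target: app.click(target),
        "verify_login": lambda app: app.logged_in,
    }
    return [keywords[name](app, *args) for name, *args in steps]

# A test case as data: no scripting knowledge required to author it.
test_case = [
    ("enter", "user", "admin"),
    ("enter", "password", "secret"),
    ("click", "login"),
    ("verify_login",),
]
print(run(test_case, FakeApp()))  # last entry True means the check passed
```

Because the test case is just data, maintenance means editing a table rather than code, which is exactly what reduces the dependence on automation engineers.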

Considering these dynamics, the next stage in the evolution of test automation is driving in the direction of Business Process Model based test automation that aims at synchronizing Operations, Product Management and Quality functions.

At Zen Test Labs we are innovating with multiple products in this space. Our flagship test automation framework, ZenFRAME, is one such example. ZenFRAME improves the productivity of business analysts and business testers while reducing dependence on technology teams by up to 40%. Its GUI enables non-technical users to create automated test cases faster, resulting in close to 33% less creation time. Read our whitepaper to learn how you can implement a business process model in your QA environment. Would love to hear thoughts from everyone…

Ravikiran Indore | Sr Consultant | Zen Test Labs




Top 6 solutions for software testing failures

The cost of software testing is still not valued at its worth. Although it is a critical investment, companies avoid spending on testing because they do not see the ROI on testing and/or a quantifiable cost of quality. The most common complaints against testing that we repeatedly hear are:

  • It is a necessary evil that stalls a project the closer it gets to release
  • It is too costly and time consuming, without any guaranteed outcome
  • Regression testing is often not effective at identifying new defects

Having worked on a number of testing projects over the past 12 years, I understand why there is such a tendency to look at testing with a skeptical eye. I would like to share what we have learnt over time. The top six points, in our view, to improve the effectiveness of manual testing are:

6. Reducing effort and time in Test Documentation

A lot of teams spend unnecessary time detailing test scenarios during the planning phase, only for those details to be rarely referred to after 2-3 rounds of testing. This increases maintenance overheads and reduces flexibility and coverage in the long run, resulting in inefficient testing. After the initial 6-8 months, a large percentage of test scenarios are outdated and require as much effort again to update. Instead of detailing each and every step for every test scenario, one can cover it with test conditions and the expected results.

5. Focusing on breadth and depth of testing

Many a time, when execution is not prioritized, the depth of testing takes the lead over breadth. By aiming to cover more breadth first, we align testing with business objectives; the team aims at being effective first and then efficient. Breadth refers to covering the positive, critical cases (across the application) that are frequently used by end users. Depth refers to covering all the test cases for a module.

4. Testing, a continuous activity

Many companies look at testing as a one-time investment. They outsource or execute it in-house once during the start of product development and then rarely test during the maintenance phases. The reason is invariably budget-driven, and it goes on to harm the quality of the product when newer versions go untested. For every minor release, ensure all the regression test cases are executed; for every major release, execute all the high and medium priority test cases at least once.
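That release rule can be sketched as a simple selection filter; the test-case records and priority labels below are illustrative:

```python
# Selecting test cases by release type, per the rule described above.
test_cases = [
    {"id": "TC1", "priority": "high",   "regression": True},
    {"id": "TC2", "priority": "medium", "regression": False},
    {"id": "TC3", "priority": "low",    "regression": True},
]

def select_for_release(test_cases, release):
    """Minor release: all regression cases. Major release: all high and
    medium priority cases. Anything else: run everything."""
    if release == "minor":
        return [t for t in test_cases if t["regression"]]
    if release == "major":
        return [t for t in test_cases if t["priority"] in ("high", "medium")]
    return test_cases

print([t["id"] for t in select_for_release(test_cases, "minor")])  # ['TC1', 'TC3']
print([t["id"] for t in select_for_release(test_cases, "major")])  # ['TC1', 'TC2']
```

Encoding the rule as data plus a filter means the selection is repeatable for every release rather than re-decided ad hoc each time.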

3. Remembering the objective of testing

The key objective of testing is to break the system, not to prove that the system works as per the requirements. This has a direct impact on testing effectiveness and the number of defects one will find. It is often observed that many senior testers habitually set out to prove that the system works as per the requirements, which is against the primary objective of testing.

2. Strategizing test optimization

Coverage is important, but not at the cost of redundant test cases. Test optimization is an intelligent way to ensure test coverage in less time, and it is why testing teams need to collaborate more with development teams. Understanding the high-level design and structure of the application makes testing more effective. One of the main principles followed in development is reuse, and we can apply the same principle when testing reused code: test the class/object once, then test only its implementation on other screens/modules. If the test cases are reusable, maintainable and scalable, that is an additional advantage in rolling out on time and under budget.

1. Focusing on the Business for which you are testing

Testing cannot be done in isolation. Business priorities and challenges are equally, and in most cases more, important than testing needs. One thing I have learnt is that testing cannot drive business decisions; business drives testing most of the time. Aligning testing to business requirements results in a disciplined, ready-to-market, high-quality product.

These are some of the solutions with which I have overcome testing failures. Do share yours if you have new solutions or methods.

Mukesh Mulchandani | CTO | ZenTest Labs

Verifying 800 Million data sets in record time!

I was recently fortunate to be part of a unique project at Zen Test Labs. This was a post-merger scenario in which the acquirer (a bank) had to consolidate the customer information systems of the two banks into a single system. This meant mapping the acquired bank’s product, service and customer portfolio to a new and modified version of the acquirer’s products and services.

Among many other factors, ensuring seamless service to existing customers of the acquired bank implied that those customers should not experience an undue increase in service charges. Processing customer data with the enhanced systems required that service fees stay within the threshold the customer would expect in the normal course of business. Testing for “Go Live” was tricky, since for each acquired customer the bank had to compare the “Go Live” results with the customer’s historical data. With hundreds of thousands of customers and millions of transactions a month, manual verification was a gigantic, potentially impossible task.

Zen Test Labs creatively addressed this situation by leveraging its Data Migration Testing framework and extending it to include customer-specific scenarios. Each data component of the source and target data files was mapped, rules were applied, and everything was integrated into the testing framework. A utility was then designed to pick each record from the source, apply the migration logic and check whether the corresponding value in the target file fell within the tolerance level defined by that logic. During execution, the selected components from the imported source and target data were compared and flagged if they did not meet the tolerance levels. Once all the records were compared, the utility reported:

  1. All transactions migrated as per the logic
  2. All transactions which did not meet the tolerance criteria
  3. Transactions in the target database which did not have any relation with the migration process
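A much-simplified sketch of such a utility is shown below. The field names, the migration rule (a 2% fee uplift) and the 5% tolerance are all illustrative assumptions, not the bank’s actual logic:

```python
# Simplified migration-verification utility: apply the migration logic to
# each source record, compare against the target within a tolerance, and
# report the three outcomes described above.

def migrate(source_record):
    """Hypothetical migration logic: the new fee schedule for a record."""
    return {"id": source_record["id"], "fee": source_record["fee"] * 1.02}

def verify(source, target, tolerance=0.05):
    target_by_id = {r["id"]: r for r in target}
    passed, failed = [], []
    for rec in source:
        expected = migrate(rec)["fee"]
        actual = target_by_id.pop(rec["id"])["fee"]
        (passed if abs(actual - expected) <= tolerance * expected else failed).append(rec["id"])
    orphans = sorted(target_by_id)  # target records unrelated to the migration
    return passed, failed, orphans

source = [{"id": 1, "fee": 100.0}, {"id": 2, "fee": 200.0}]
target = [{"id": 1, "fee": 102.0}, {"id": 2, "fee": 250.0}, {"id": 3, "fee": 10.0}]
passed, failed, orphans = verify(source, target)
print(passed, failed, orphans)  # [1] [2] [3]
```

Note that the thumb-rule check from the project falls out naturally: the Pass count plus the Fail count must equal the number of source records.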

The framework and utility testing itself adopted an approach with three layers of testing:

  1. Utility testing using dummy data for source, target and the mapping
  2. Sampling of output files and manual verification with real data
  3. Verification against “thumb rules”; for example, the number of Pass records plus the number of Fail records should equal the count of primary keys in the source data.

Overall, I found this project very challenging and interesting. Leveraging the data migration testing framework, we created a comprehensive utility in approximately three weeks. The quality and performance of the utility were so sharp that it compared one data component across 600,000 to 700,000 records in 10 to 12 minutes. Over 800 million data values were verified in a span of 30 days, which is as good as verifying at least one data value for every person in the European Union! With our output files we also provided the bank a great deal of ‘data profiled’ information on migrated customers, which was used to understand their behavioral patterns and the performance of the products after migration.

Ravikiran Indore | Sr Consultant | Zen Test Labs

The evolution of test design techniques

Test design is a constantly evolving area of the software testing life cycle. Surprisingly, it is still applied on a relatively small scale whenever new application features are defined, and this has gone a long way towards limiting the effectiveness and efficiency of testing.

At the heart of it has always been the test design template, whose primary goal is to reduce ad-hoc reinvention of test designs. However, most companies and individuals do not assign as much importance to this as they do to the development process. Ideally, a good test design frees testers to focus on the actual job of testing the application. I have seen that just by using multiple test design techniques, the efficiency gains we have been able to achieve have been tremendous. Some of these techniques include:

  • Specification-based techniques like equivalence partitioning, boundary value analysis, state transition testing, use case testing and decision table testing
  • Structure-based (white box) techniques such as statement testing and coverage, and decision testing and coverage
  • Experience-based techniques like error guessing and exploratory testing
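Two of the specification-based techniques listed above can be sketched in a few lines; the “age must be between 18 and 60” rule is a hypothetical example:

```python
# Equivalence partitioning and boundary value analysis for a rule that
# accepts ages in the inclusive range 18-60.
LOW, HIGH = 18, 60

def equivalence_partitions(low, high):
    """One representative value per partition: below, inside, above."""
    return {"invalid_low": low - 5, "valid": (low + high) // 2, "invalid_high": high + 5}

def boundary_values(low, high):
    """Values at and immediately adjacent to each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

print(equivalence_partitions(LOW, HIGH))  # {'invalid_low': 13, 'valid': 39, 'invalid_high': 65}
print(boundary_values(LOW, HIGH))         # [17, 18, 19, 59, 60, 61]
```

Nine targeted values replace testing every age individually, which is exactly the kind of efficiency gain these techniques deliver.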

Our teams have been using these techniques for quite some time now and have come up with innovative ways of creating classification trees to effectively design and generate test cases for complicated applications. One such way is to map the required output so that the classification of inputs and outputs is matched.

We are in the process of publishing a whitepaper in this area early next quarter that details how one should go about test design.

Watch this space for more on the paper…

Sunil Deshmukh | Senior Test Manager | Zen Test Labs