Pressure to Release and the Impact on Testing

The pressure to release new products to consumers with the appetite of a blue whale has led to an unprecedented situation. Companies are being pushed to embrace iterative and incremental development methodologies in a bid to keep users happy. Add what is now commonly known as the Google effect, “I want it now and I want it free,” and you have a disaster of epic proportions in the making.

I’d like to draw a parallel from the aircraft manufacturing industry. Airbus and Boeing have been battling for over 20 years now. This battle reached a crescendo in 2005 with the launch of the world’s largest passenger aircraft, the Airbus A380, putting Boeing under extreme pressure to launch a new aircraft of its own. Boeing worked on building a better aeroplane that could beat the A380, and finally delivered on its promise in 2007 with the launch of its own 787 Dreamliner. Unfortunately, the 787 has been plagued with a host of issues, including electrical fires on board mid-air, leading to the grounding of the entire fleet. Fortunately, there have been no fatal accidents to date.

Why do I state this example in a software testing blog?

The aircraft industry is a cutting-edge world with great emphasis on passenger safety and deep rigor in testing all equipment thoroughly before allowing actual passengers to fly. I am very confident that Boeing followed a meticulous protocol to test this aircraft, but somewhere the pressure to release let errors creep into the process. Boeing’s issues reinforce the fact that we live in a world where the pressure to release can get to the best of us. This is an industry where testing is not taken lightly, and yet it happened. Given the kinds of trade-offs I have seen software companies make over the years, I am surprised most of them have lasted this long 🙂

Given all these factors, we acknowledge and accept that we cannot possibly test every point in a system to find defects. One approach that has evolved of late is to run a “risk-based testing” program that protects you from critical failures in projects.

Most customers I interact with end up using models based on probability, impact, time and cost. One recommendation we make to most of them is to use statistical models for this analysis. Do not rely only on past data, experience, user feelings, etc. Put the data into a statistical model and see what scenarios come out. Map these to your past data, experience, high-failure areas, criticality to business, user feeling, etc. and add, edit or delete based on that. What you will end up with is a decent test plan that covers your risk sufficiently. It is critical that you revisit this periodically and adjust your plan based on the defects you are finding.
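As a toy illustration of such a model, the sketch below scores each application area by estimated failure probability times business impact and tests the riskiest areas first. All the areas and numbers are invented for the example, not drawn from any real project.

```python
# Toy risk-based test prioritization: rank application areas by
# (estimated failure probability) x (business impact).
# Areas and numbers below are purely illustrative.

def risk_score(probability, impact):
    """Likelihood of failure (0-1) times business impact (1-10)."""
    return probability * impact

areas = [
    # (area, estimated failure probability, business impact)
    ("payment processing", 0.30, 10),
    ("user profile page",  0.50, 3),
    ("report export",      0.20, 6),
    ("login",              0.10, 9),
]

# Test the riskiest areas first
ranked = sorted(areas, key=lambda a: risk_score(a[1], a[2]), reverse=True)
for area, p, i in ranked:
    print(f"{area}: risk {risk_score(p, i):.2f}")
```

In practice the probabilities would come out of the statistical model and historical data described above, not from gut feel alone.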

Ultimately, the decision to release a product is driven by business needs. However, shipping a product that is not tested properly may not be that great an idea, as a small number of people can make a lot of noise about problems, even those that are not so serious. Unfortunately, that’s the world we live in now! As for Boeing, there is always the next battle to win in a duopoly.

Hari Raghunathan | AVP | Zen Test Labs

8 Steps to Improve Your Regression Testing Process

With business and user requirements perpetually evolving, I find that regression testing has become a key component of the software development lifecycle. As testers, we need to keep in mind that constant change in the functionality of an application exposes its base functionality to vulnerabilities too. These vulnerabilities tend to creep in due to oversights while adding new functionality, poor analysis of the impact on interfacing/ integrating applications and, many times, the fact that customizations are an unknown entity. Poor regression testing can not only result in poor software quality but also impact revenue and cause customer loss.
Based on many years of planning, creating and executing the Quality Assurance programs of multiple Fortune 500 companies, I suggest the following eight-step methodology to improve any regression testing process.

Phase 1: Defining
Step 1: Objective Finding (OF) – Challenges and Goal Identification
This step answers one of the most important questions “Why is regression testing not effective in its current state?”

Step 2: Fact Finding (FF) – Data Collation and Analysis
During this stage, teams must trawl through defects found in the past to conduct a defect root cause analysis. An important part of this step is bug prediction analysis, so that defect-prone areas in the application can be identified.
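As a minimal illustration of this kind of analysis (with an invented defect log), tallying historical defects by module and by root cause is often the first cut at surfacing defect-prone areas:

```python
# Sketch of the fact-finding step: tally past defects by module and root
# cause to surface defect-prone areas. The defect log is hypothetical.
from collections import Counter

past_defects = [
    # (module, root cause)
    ("billing", "regression"),
    ("billing", "missed requirement"),
    ("billing", "regression"),
    ("search",  "environment"),
    ("login",   "regression"),
]

by_module = Counter(module for module, _ in past_defects)
by_cause  = Counter(cause for _, cause in past_defects)

print("Defect-prone modules:", by_module.most_common())
print("Top root causes:", by_cause.most_common())
```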

Step 3: Problem Finding (PF) – Problem Clarification and Statement
Once the results of Steps 1 and 2 are combined, the exact scope of the challenges to address is established. These refined objectives act as the equivalent of a “Requirements Document”.

Phase 2- Scoping
Step 4: Test Cases Finding (TF) –Coverage Gap Analysis
Gaps in test coverage are identified based on the current test cases and the application functionality. Techniques that map test cases to requirements, together with testing techniques, are used to identify missing test cases.

Step 5: Test Case Centralization (TC) – Test Case Repository Creation
Ensure that all test cases are stored in a centralized repository and in an optimized manner. Each test case must have a clear objective, precondition, steps, expected result and test data.

Step 6: Test Case Optimization (TO) – Maximum Coverage in Desired Time with Minimum Risk
Statistical techniques such as Classification Trees and Orthogonal Arrays may be used to run the minimum number of test cases while ensuring that every business process/ function is covered at least once.
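As a toy sketch of this idea (with a hypothetical mapping of test cases to the business functions they cover), a greedy set-cover pass picks the fewest test cases that still touch every function at least once:

```python
# Greedy set-cover sketch of test-suite minimization: keep picking the test
# case that covers the most still-uncovered business functions.
# The test-case-to-function mapping below is invented for illustration.

def minimize_suite(coverage):
    """Return a small subset of tests covering every function at least once."""
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        selected.append(best)
        uncovered -= coverage[best]
    return selected

coverage = {
    "TC1": {"login", "search"},
    "TC2": {"search"},
    "TC3": {"checkout", "payment"},
    "TC4": {"login"},
}

print(minimize_suite(coverage))
```

Greedy set cover is not guaranteed optimal, but it is a simple stand-in for the kind of reduction that classification-tree and orthogonal-array techniques aim for.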

Phase 3- Executing
Step 7: Reusing Test Components (RT) – A Modular Approach
Create business functions and test data in a way that they can be reused for building manual test cases. Automate the generation of descriptive manual test cases.

Step 8: Test Case Classification (TC) – Test Case Mapping
At this stage, test cases are grouped requirement-wise, screen-wise, module-wise, etc. Small, frequently used regression packs/suites are created.

We have written a detailed whitepaper ‘Progress Not Regress’ on improving any regression testing process. We would love to hear your thoughts on it!

Girish Nair | Sr. Consultant | Zen Test Labs

The chronicles of a new tester

The general mentality in the software testing industry is ‘Negative thinking is one of the most desired attributes of a software tester’. Ever since my sophomore days I aspired to be a software tester. I wondered if being an optimist would hamper my chances of success in the testing field. Many questions stormed my mind. Some of them were:

1. Does this field belong only to negative thinkers?

2. Is negativity the first criterion for becoming a software tester?

3. What will be the nature of the teams I work with?

4. Is this profession going to change my attitude?

5. If yes, then what kind of a life am I going to lead?

A lot of times I felt like I was passionate about a profession that did not suit me. Keeping all my apprehensions aside, I continued working towards my goal and left no stone unturned in becoming the tester I dreamt to be.

When I practiced test case writing I would come up with at least 10 negative test cases against 1 or 2 positive test cases. For example, I wrote 30 negative test cases and just 2 positive test cases for a simple ‘Change Password’ scenario. This ratio of 1:15 further increased my ‘positive – negative approach’ dilemma. Eventually, I started to believe that I needed to be more of a negative thinker than a positive thinker.
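Recast in the “valid vs. invalid” terms I arrive at later, the imbalance looks natural. Here is a minimal sketch with a hypothetical password-change rule (the rule is invented for illustration, not any real product’s): a couple of valid cases pass, while invalid cases pile up quickly.

```python
# Hypothetical password-change rule, exercised by valid and invalid cases.
# The validation logic below is illustrative only.

def can_change_password(old_ok, new_pw, confirm_pw):
    """Accept the change only if the old password checked out and the
    new password is confirmed, long enough, and actually new."""
    return (old_ok
            and new_pw == confirm_pw
            and len(new_pw) >= 8
            and new_pw != "old_secret")

# Valid cases
assert can_change_password(True, "n3w_secret", "n3w_secret")

# Invalid cases outnumber them quickly
assert not can_change_password(False, "n3w_secret", "n3w_secret")  # wrong old password
assert not can_change_password(True, "n3w_secret", "typo")         # confirmation mismatch
assert not can_change_password(True, "short", "short")             # too short
assert not can_change_password(True, "old_secret", "old_secret")   # password reused
```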

Determination got me into this field.  I have completed 6 months in an independent testing firm and this period has changed my approach towards testing. Working with vastly experienced people in a positive work environment has answered all my queries.

My new approach is:

1. Testing doesn’t belong to negative thinkers at all. It belongs to people who can think in multiple directions.

2. Only people with a positive approach can survive in this field. There is no space for negativity.

3. This profession definitely impacts your behaviour outside the office; it enables you to think in a hundred different ways about any situation. You can predict a hundred different outcomes of an action or incident. You can come up with a wide range of solutions to any problem. So even if there are changes, they are all good.

When I look back, I wonder where I went wrong. What made me think that? What caught me off guard?

The answer is wrong terminology: I got stuck in a game of words!

It is not about 2 positive and 30 negative test cases; it is about 2 valid and 30 invalid test cases I brainstormed in 32 creative ways.

I believe the terms ‘positive’ and ‘negative’ test cases should be avoided, since they tend to affect the psychology of testing in a negative way.

For me, in the battle of positive and negative thinking, the winner will always be positive thinking, creative thinking!

Mayank Raj | Trainee Test Analyst | Zen Test Labs

Testing the Mobile Apps explosion

It won’t be long before it becomes A-Android, B-BlackBerry, C-Cupcake, D-Donut, E-Éclair, F-Froyo, G-Gingerbread; if anything, these are words that probably half the planet’s population (approx. 3.2 billion people) is well versed with, and another 700 million will be over the next 3 years! If you haven’t guessed it by now… I am referring to the explosion of mobile devices into our lives.

At the core of this explosion in mobile devices, and here I mean smartphones and tablets, is innovation in the field of processors. With the processing speeds of these mobile devices increasing dramatically, the demand from users to run complex applications has also gone up. Business users want the ability to manage their personal and professional lives through a single interface, and apps that allow them to do this. Add the speed at which innovation in devices, processors and operating systems takes place, and it is not a pretty picture for app manufacturers.

So, what does all of this mean to you if you are an app manufacturer or an enterprise trying to create mobile apps for your workforce or customer base?

Some of the areas of impact include:

  • A constant need to keep your app updated with the latest OS upgrades/ devices in the market
  • Build highly secure applications that lend peace of mind to users/ administrators
  • Build apps that are not very heavy on device resources (for optimum performance)
  • Constantly upgrade/ enhance your application to keep users engaged
  • Roll out apps at a speed that would put Formula 1 drivers to shame!

Well, just joking on that last one there, but for those of you who work in this space, you know what I mean!

Over the years of managing the Quality Assurance programs of multiple Fortune 500 companies, and having set up a Mobile Testing Lab fairly early on in this space, I want to share the basic methodology that can be used to mitigate risks when developing/ deploying your mobile apps.

Mobile Configuration Optimization
Choose an optimum number of configurations to test your app on, using statistical techniques like Classification Trees, Orthogonal Arrays, etc.
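As an illustration of the idea (not our actual tooling), the sketch below greedily picks configurations until every pair of parameter values is covered at least once, the pairwise-coverage notion behind orthogonal-array selection. The OS/device/network values are invented:

```python
# Pairwise configuration selection: instead of testing the full cartesian
# product of OS x device x network, greedily pick configurations until every
# pair of parameter values appears at least once. Parameter values invented.
from itertools import combinations, product

params = {
    "os":      ["Android", "iOS"],
    "device":  ["phone", "tablet"],
    "network": ["wifi", "4g", "3g"],
}

names = list(params)
all_configs = [dict(zip(names, vals)) for vals in product(*params.values())]

def pairs(config):
    """All (parameter, value) pairs a single configuration exercises."""
    return {frozenset([(a, config[a]), (b, config[b])])
            for a, b in combinations(names, 2)}

required = set().union(*(pairs(c) for c in all_configs))

chosen, covered, remaining = [], set(), list(all_configs)
while covered != required:
    # Greedily take the configuration that adds the most uncovered pairs
    best = max(remaining, key=lambda c: len(pairs(c) - covered))
    chosen.append(best)
    covered |= pairs(best)
    remaining.remove(best)

print(f"{len(chosen)} configurations instead of {len(all_configs)}")
```

The reduction grows quickly as more parameters and values are added; that is what makes the full cartesian product untestable and optimization necessary.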
Mobile Test Automation
Automate as much of the core testing as possible right from the get-go. We have experienced a 50-70% reduction in testing effort while ensuring complete coverage across devices. Automation built on the right design principles also leads to high reusability of scripts.
Mobile Performance Testing
A holistic approach to performance testing should cover areas such as volume testing, endurance testing, performance monitoring, soak testing and testing under real time scenarios.

An in-depth whitepaper has also been written on how mobile is changing the face of software testing. I would love to hear from readers on their learnings when developing or testing mobile apps. Please feel free to write to me.

Amol Akotkar | Test Consultant | Zen Test Labs

Reducing dependence on automation engineers to manage test automation!

I have always wondered what it would be like to separate test automation from automation engineers, considering that test automation has always been treated as the holy grail of testing! Enterprises that have managed to achieve high levels of automation in the testing process have enhanced productivity exponentially while improving coverage and thus reducing risk. This has translated into automation engineers holding design approaches close to their hearts and controlling scripting tightly. Given this dynamic, the adoption of test automation has remained low over the years.

Test automation today has transitioned from a “record and playback” mode to a virtually “scriptless” mode, enabling rapid, on-the-go test automation.

Scriptless automation has allowed enterprises to stay oblivious to tool-specific coding, making automation suites maintainable and resource-independent. However, scriptless automation frameworks still have many missing links. Most of them demand extensive business user involvement, particularly to test the technology enablement, and time to market can stretch beyond what is acceptable. Among the many causes of this, one is extensive manual testing of the solution itself. Time to market also suffers from a heavy dependence on business analysts (from business or IT) in QA (test design and execution), a strong dependence on skilled and expensive technical resources for automation, and the need to manage spikes in demand for QA resources, which drives up QA costs.

Considering these dynamics, the next stage in the evolution of test automation is driving in the direction of Business Process Model based test automation that aims at synchronizing Operations, Product Management and Quality functions.

At Zen Test Labs we are innovating with multiple products in this space. Our flagship test automation framework, ZenFRAME, is one such example. ZenFRAME improves BA and business tester productivity while reducing dependence on technology teams by up to 40%. The GUI enables most non-technical users to create automated test cases faster, resulting in close to 33% less creation time. Read our whitepaper to learn how you can implement a business process model for your QA environment. Would love to hear thoughts from everyone…

Ravikiran Indore | Sr Consultant | Zen Test Labs

Top 6 solutions for software testing failures

The cost of software testing is still not valued at its worth. Although it is a critical investment, companies avoid spending on testing because they do not see the ROI of testing and/or a quantifiable cost of quality. The most common complaints against testing that we repeatedly hear are:

  • It is a necessary evil that stalls a project the closer it gets to a release
  • It is too costly and time consuming, without any guaranteed outcome
  • Regression testing is often not effective at identifying new defects

Having worked on a number of testing projects over the past 12 years, I understand why there is a high tendency to look at testing with such a skeptical eye. I would like to share what we have learnt over time. The top six points, in our view, to improve the effectiveness of manual testing are:

6. Reducing effort and time in Test Documentation

A lot of teams spend unnecessary time detailing test scenarios during the planning phase, which are rarely referred to after 2-3 rounds of testing. This increases maintenance overheads and reduces flexibility and coverage in the long run, resulting in inefficient testing. After the initial 6-8 months, a large percentage of test scenarios are outdated and require the same effort to update. Instead of detailing each and every step for every test scenario, one can cover it with test conditions and the expected results.

5. Focusing on breadth and depth of testing

Often, when execution is not prioritized, the depth of testing takes the lead over breadth. By aiming to cover more breadth, we align testing with the business objectives; teams thereby aim to be effective first and then efficient. Breadth refers to covering the critical positive cases (across the application) that end users exercise frequently; depth refers to covering all the test cases for a module.

4. Testing, a continuous activity

Many companies look at testing as a one-time investment. They outsource it or execute it in-house once at the start of the product’s development and then rarely test during the maintenance phases. The reason is invariably budget-driven, and it harms the quality of the product when newer versions go untested. For every minor release, one should ensure all the regression test cases are executed, and for every major release, all the high- and medium-priority test cases are executed at least once.

3. Remembering the objective of testing

The key objective of testing is to break the system, not to prove that the system works as per the requirements. This has a direct impact on testing effectiveness and the number of defects one will find. It is often observed that many senior testers habitually set out to prove that the system is working as per the requirements, which goes against the primary objective of testing.

2. Strategizing test optimization

Coverage is important, but not at the cost of redundant test cases. Test optimization is an intelligent way to ensure test coverage in less time. That is why testing teams need to collaborate more with the development teams: understanding the high-level design and structure of the application makes testing more effective. One of the main principles followed in development is reuse, and we can apply the same principle when testing reused code. Why not optimize and test the class/object once, and then test only the implementation of that class/object on other screens/modules? If the test cases are reusable, maintainable and scalable, it is an additional advantage for rolling out on time and under budget.

1. Focusing on the Business for which you are testing

Testing cannot be done in isolation. Business priorities and challenges are equally, and in most cases more, important than testing needs. One thing I have learnt is that testing cannot drive business decisions; business drives testing most of the time. Aligning testing to the business requirements results in a disciplined, ready-to-market, high-quality product.

These are some of the solutions with which I have overcome testing failures. Do share yours if you have new solutions or methods.

Mukesh Mulchandani | CTO | Zen Test Labs

Verifying 800 Million data sets in record time!

I was recently fortunate to be part of a unique project at Zen Test Labs. This was a post-merger scenario in which the acquiring bank had to consolidate the customer information systems of the two banks into a single system. This meant mapping the acquired bank’s product, service and customer portfolio to a new and modified version of the acquirer’s products and services.

Among many other factors, ensuring seamless service to existing customers of the acquired bank meant that such customers should not face an undue increase in service charges. Processing customer data using the enhanced systems required that service fees stay within the threshold a customer would expect in the normal course of business. Testing for “Go Live” was tricky, since it required that for each acquired customer, the bank compare the “Go Live” results with that customer’s historical data. With hundreds of thousands of customers and millions of transactions in a month, manual verification was a gigantic task, potentially impossible to accomplish.

Zen Test Labs creatively addressed this situation by leveraging its data migration testing framework and extending it to include customer-specific scenarios. Each data component of the source and target data files was mapped, rules were applied, and the result was integrated into the testing framework. A utility was then designed to pick each record from the source, apply the migration logic and check whether the corresponding value of the record in the target file was within the tolerance level defined by that logic. During execution, the selected components from the imported source and target data were compared and flagged if they did not meet the tolerance levels. Once all the records were compared, the utility reported:

  1. All transactions migrated as per the logic
  2. All transactions which did not meet the tolerance criteria
  3. Transactions in the target database which did not have any relation with the migration process
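A much-simplified sketch of that utility’s core loop, with invented field names, an invented migration rule and an invented tolerance, shows how the three reported categories fall out of the comparison:

```python
# Simplified migration-verification loop: apply the migration logic to each
# source record, compare against the target within a tolerance, and bucket
# the results. Rule, tolerance, and data below are illustrative only.

TOLERANCE = 0.05  # e.g. a service fee may deviate by at most 5%

def migrate_fee(source_fee):
    """Hypothetical migration rule: the new product's fee schedule."""
    return source_fee * 1.02

def compare(source, target):
    passed, failed = [], []
    for cust_id, source_fee in source.items():
        expected = migrate_fee(source_fee)
        actual = target[cust_id]
        if abs(actual - expected) <= TOLERANCE * expected:
            passed.append(cust_id)
        else:
            failed.append(cust_id)
    # Target records with no source counterpart (unrelated to the migration)
    orphans = [cid for cid in target if cid not in source]
    return passed, failed, orphans

source = {"C1": 100.0, "C2": 200.0, "C3": 50.0}
target = {"C1": 102.0, "C2": 230.0, "C3": 51.0, "C9": 10.0}

passed, failed, orphans = compare(source, target)
# "Thumb rule" from the post: pass + fail must equal the source record count
assert len(passed) + len(failed) == len(source)
```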

The framework and utility testing itself adopted an approach with three layers of testing:

  1. Utility testing using dummy data for source, target and the mapping
  2. Sampling of output files and manual verification with real data
  3. Verification against “thumb rules”. One example: the total number of pass records and fail records should equal the primary-key count of the source data.

Overall, I found this project very challenging and interesting. Leveraging the data migration testing framework, we created a comprehensive utility in approximately three weeks. The quality and performance of the utility were so sharp that it compared one data component across 600,000 to 700,000 records in 10 to 12 minutes. The total number of data values verified in this project was over 800 million in a span of 30 days, which is as good as verifying at least one data point for every resident of the European Union! With our output files, we also provided a great deal of ‘data profiled’ information on migrated customers to the bank, which was used to understand the behavioral patterns of the migrated customers and the performance of the products after migration.

Ravikiran Indore | Sr Consultant | Zen Test Labs