Software Testing in 2020

As the CEO of a testing company, a question that constantly plays on my mind is ‘what is the future of testing?’ In the early 2000s, Ron Radice spoke at a QAI conference in India, where he predicted that testing would die. His call was that automatic code generators would do the job so efficiently that testing would become obsolete. When he looked into the crystal ball then, he could see that prevention, not detection, would be the creed.

Well, when I look at 2020, I believe Ron was right as well as wrong. Yes, code generators are arriving. Yes, there will be automated test case generators. Yes, model-based testing will replace rudimentary testing activities. But the boom of software, especially on ubiquitous mobile devices, means only more testing.

If the future includes automated cars like Google's driverless cars, I cannot imagine such a car running on technology that has not been fully and manually validated. If the future is the “Internet of Things”, I can only imagine that the amount of embedded testing will explode. If the future is business operations being handled through apps, and app stores with millions of applications pervading every step of our business and personal lives, then imagine the amount of mobile testing that will be required. If nothing else, as everything gets more interconnected, the consequences of a critical failure will only become more catastrophic. Wherever the nexus of cloud, social, mobile and big data takes us, I am thoroughly convinced that the need for testing will only grow.

While predictions on how things will look in 2020 are a dime a dozen, my two bits on where testing will find itself are as follows:

• Huge business opportunities arising from testing directly for app stores rather than for app manufacturers
• Test automation will have evolved from scriptless automation to automatic test case generation and execution
• The pressure to deploy rapidly in the mobility and embedded devices space will mean that test automation tools evolve to provide near-real-time and real-time support to these areas
• Testing and testers will evolve to become super-specialized, with domain testers at one end and niche technical testers at the other

These are some things that come to mind as the decade continues to evolve. It would be great to know what the rest of the testing world thinks.

Krishna Iyer | CEO | Zen Test Labs

Automation lessons learnt: Funding automation projects & the role of change management

One of the many reasons test automation is often compromised is that business funds technology projects on a per-project basis. The key reason is that the business benefits of automation, primarily time to market, are realized only in subsequent releases of the application, never in the release where automation is undertaken. Even when business agrees to fund an automation project, the order of magnitude of the benefits is small because of the potentially low level of automation feasible within the project scope. The benefits accumulate only over a period of time as automation levels increase, and the return on investment is therefore realized over a longer duration. To reap the benefits of automation, business needs to invest in it continually and maintain a long-term orientation to ROI. These are typical characteristics of any change initiative. Test automation initiatives funded by individual business units can therefore learn from the vast body of knowledge on other organizational change initiatives and do not need to reinvent the wheel.

I have captured my experience of change initiatives as applied to automation in the visual below. It shows the key components required not only to succeed at a pilot project but also to create a cascading positive spiral where the benefits accumulate over time.

[Figure: key components of an automation change initiative]

Like any other change initiative, the key components form a chain, and the initiative is only as strong as its weakest link. A successful pilot accompanied by the right communication can act as a feeder to the next project, and as long as all the key components act in unison, incremental benefits from each project can lead to significant cumulative benefits. The problem is that the first cycle tends to be demanding, and it needs continuity of champions until the framework is institutionalized. A failure at an early stage can have devastating effects, with the associated stigma presenting greater roadblocks during subsequent attempts at automation. This is where senior management support from business and IT is crucial. A champion driving each automation cycle to success is central to the overall success of automation!

What do you think of the role of change management in automation projects? Have you had difficulty funding automation projects? Please feel free to share your experiences.

Aparna Katre | Director Strategy | Zen Test Labs

Automating data migration testing

I had the opportunity to be part of a data migration project recently. I was involved in automated data migration testing, which I found to be a very exciting and challenging form of testing. I wanted to share my learnings in this post.

During conversion or migration projects, data from the legacy or source systems is extracted, transformed and loaded into the target system. This is called the ETL process. The data needs to be transformed because the data models of any two systems differ. As a standard practice, the transformations are managed in a data mapping document, which forms the basis for development as well as testing.

Testing the migrated data is usually a critical and challenging aspect of such projects. Doing it manually is very time consuming, so automated data validation is a good way to go.

In my latest project, data from two source systems was migrated to a target system. Since we did not have access to the databases, data from the source systems’ UI was compared with data from the target system’s UI. The migration followed an incremental loading approach to ensure cent per cent verification: a small subset of data was loaded every week and verified. This process was a perfect solution to the client’s challenges, because in the event of any mismatch only that specific subset of data had to be reversed.
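To give a feel for the weekly approach, here is a high-level sketch in Python. The records and the mismatch are invented, and the extraction from the source and target UIs is represented by plain dictionaries; it only shows the shape of the loop, not our actual scripts.

    # Invented weekly subsets; in the project both sides were read through the UI,
    # since we had no database access.
    WEEKLY_SUBSETS = {
        1: {"source": [{"id": 1, "name": "Alice"}], "target": [{"id": 1, "name": "Alice"}]},
        2: {"source": [{"id": 2, "name": "Bob"}],   "target": [{"id": 2, "name": "Bbo"}]},
    }

    def verify_week(week, subset):
        """Compare the week's source and target records; flag the subset for reversal on mismatch."""
        if subset["source"] != subset["target"]:   # the real comparison applied field-level mapping rules
            print(f"Week {week}: mismatch found, reverse only this week's subset")
            return False
        print(f"Week {week}: subset verified")
        return True

    for week, subset in WEEKLY_SUBSETS.items():
        verify_week(week, subset)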

I am also listing some of the key challenges we overcame during the course of the project:

  1. We had to create scripts that could read source values and then use field-level mapping rules to calculate the expected results at the destination. This had to be done because the fields of the source and target systems were mapped differently; i.e., each system had its own structure (see the sketch after this list)
  2. We had to verify values at the target system, as it contained some extra fields that led to mismatches with the source system
  3. We had to read the data on the target system to verify it, as some of the source data was in CSV format with header changes for each customer column
  4. We also created a robust log generation mechanism that produced a result for every iteration and ensured that whenever a mismatch occurred, both the field name and the mismatching values were captured
  5. The results also included the time taken to execute each record
  6. Because most of the data migration was done through files in XML and tab-separated formats, we had to generate the input file for automation in Excel format
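To make challenge 1 (and the mismatch logging of challenge 4) concrete, here is a minimal sketch in Python. The field names and mapping rules are hypothetical; the real rules came from the project's data mapping document, and our scripts read the values through the UIs rather than from in-memory records.

    # Hypothetical field-level mapping rules: target field -> function of the source record.
    MAPPING_RULES = {
        "full_name": lambda src: (src["first_name"] + " " + src["last_name"]).strip(),
        "dob":       lambda src: src["birth_date"],               # renamed field, straight copy
        "balance":   lambda src: round(float(src["amount"]), 2),  # type conversion
    }

    def verify_record(source_record, target_record):
        """Compute expected target values from the source and return any mismatches."""
        mismatches = []
        for field, rule in MAPPING_RULES.items():
            expected = rule(source_record)
            actual = target_record.get(field)
            if actual != expected:
                # Capture the field name as well as both values (challenge 4).
                mismatches.append({"field": field, "expected": expected, "actual": actual})
        return mismatches

    source = {"first_name": "Asha", "last_name": "Rao", "birth_date": "1980-01-01", "amount": "100.50"}
    target = {"full_name": "Asha Rao", "dob": "1980-01-01", "balance": 100.5}
    print(verify_record(source, target))   # an empty list means the record migrated correctly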

We also went on to create a customized data migration test automation framework (illustrated below) to overcome these challenges, which led to a successful project.

Have any of you worked on such projects? Would love to hear some of your experiences.

Anand Gharge | Test Manager | Zen Test Labs

Pizzas, Sodas, Music and Testing…

A couple of weeks back we held our first crowdsourced testing event, the “Zentestathon”, which turned out to be memorable. Testing was on a module of an LMS application. Like all things that happen for the first time, there were good results and valuable lessons, and I wanted to share some things that could help you with your crowdsourcing strategy.

Schedule

To ensure that the focus remains on testing and finding more defects, we discovered that maintaining a tight schedule is critical to the success of the program. The schedule break-up given below is what we found generated the optimum results:

  • Application training: 10% of the time
  • Exploration of the application by the testers: 15% of the time
  • Understanding business rules and requirements: 10% of the time
  • Testing: 60% of the time

Other Observations

Some other observations include:

  • We realized that what most testers were skilled at was context-driven testing.
  • Testers were able to adapt themselves to the application and think of end-to-end scenarios with a minimum amount of guidance.
  • They added tremendous value through their experience, their mindset and their ability to think beyond the application at hand.
  • In an extremely short time, they came up with various combinations of valid and invalid scenarios, which ensured test coverage.
  • Critical defects made up 40% of the total; with so many testers testing different scenarios, optimum test coverage was all but guaranteed.
  • ‘The more the merrier’ goes well with this form of testing, as scenario coverage improves with every additional tester.
  • It also lets you simulate end-user scenarios across different internet bandwidths and devices.

Through this initiative, we validated our hunch and what we had heard from a lot of people in the industry: that crowdsourced testing works best with applications that have a large global user base, like mobile apps or games. We believe it would be time consuming and strenuous to involve external testers in crowd testing a product that requires a lot of internal communication across groups. All in all, this form of testing makes for an interesting venture and definitely has an exciting future ahead. We continue to watch what companies do globally in this space and to innovate within it. What have your experiences with crowd testing been?

Vijeethkumar Chathu | Test Manager | Zen Test Labs

The evolution of test design techniques

Test design is a constantly evolving area of the software testing life cycle. Surprisingly, it is still applied on a relatively small scale whenever new application features are defined, and this has gone a long way in limiting the effectiveness and efficiency of testing.

The heart of it has always been the test design template, the primary goal of which is to reduce ad-hoc reinvention of test designs. However, most companies and individuals do not assign as much importance to this as they do to the development process. Ideally, a good test design frees the tester to focus on the actual job of testing the application. I have seen that just by using multiple test design techniques, we have been able to achieve tremendous efficiency gains. Some of these techniques include:

  • Specification-based techniques like equivalence partitioning, boundary value analysis, state transition testing, use case testing and decision table testing
  • Structure-based (white-box) techniques such as statement testing and coverage, and decision testing and coverage
  • Experience-based techniques like error guessing and exploratory testing
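As a small illustration of the first two specification-based techniques, here is a sketch in Python that derives equivalence partitions and boundary values for a hypothetical "age" field that is valid between 18 and 60 inclusive. It is not our framework, just the idea behind the techniques.

    # Hypothetical example: an "age" field that is valid from 18 to 60 inclusive.
    LOWER, UPPER = 18, 60

    def equivalence_partitions():
        """One representative test value per partition: below, inside and above the valid range."""
        return {"below_valid": LOWER - 5, "valid": (LOWER + UPPER) // 2, "above_valid": UPPER + 5}

    def boundary_values():
        """Values on and immediately around each boundary."""
        return sorted({LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1})

    print("Partition representatives:", equivalence_partitions())
    print("Boundary values:", boundary_values())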

Our teams have been using these techniques for quite some time now and have come up with innovative ways of creating classification trees to effectively design and generate test cases for complicated applications. One approach is to map the required outputs so that the classification of inputs and outputs is matched.

We are in the process of publishing a whitepaper early next quarter that details how one should go about test design.

Watch this space for more on the paper…

Sunil Deshmukh | Senior Test Manager | Zen Test Labs

How to estimate the number of test iterations

A common question that comes up when I conduct our proprietary, path-breaking testing training program MOST (Mind of a Software Tester) is how to estimate the number of test iterations. In my view, a good way to do that is to compute the bug insertion rate and the bug fix rate of the development team. Once this is done, you can easily estimate the number of test iterations.

Let me give you an example here:

You have been asked to estimate the number of additional test iterations required, and you are at the end of round one. You can find the usual bug insertion rate of developers in your organization when they fix bugs, as well as the usual bug fix rate (the number of bugs that usually get fixed when you report 100 bugs). So, if in your organization the bug fix rate is 50% and the bug insertion rate is 10% (it is usually not this high), this is how you calculate. Assume that the number of open bugs at the end of iteration one is 100. In round two, 50 bugs will remain open and 10 more will be introduced, so at the end of round two you will have 60 bugs. In round three, 30 bugs will be fixed and 6 introduced, leaving you with 36 bugs. Keep doing this calculation until you arrive at zero or one bug; that tells you the number of iterations.
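The arithmetic above is easy to put into a few lines of code. The sketch below (in Python) assumes the fix and insertion rates stay the same in every round, which, as noted below, is usually not the case.

    # Sketch of the iteration estimate, assuming constant rates across rounds.
    def estimate_iterations(open_bugs, fix_rate=0.5, insert_rate=0.1):
        """Number of further test iterations needed to get down to one bug or fewer."""
        iterations = 0
        while open_bugs > 1:
            open_bugs = open_bugs - open_bugs * fix_rate + open_bugs * insert_rate
            iterations += 1                  # e.g. 100 -> 60 -> 36 -> 21.6 -> ...
        return iterations

    print(estimate_iterations(100))          # prints 10 for a 50% fix rate and 10% insertion rate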

Consider the following as well:

  • You may be asked to estimate the number of iterations at the beginning of the project rather than at the end of round one. In that case, estimate the number of bugs at the end of round one and perform the above calculation.
  • Consider finding averages in your organization for different rounds. It is possible that your average bug fix and insertion rates are higher in the initial rounds.
  • You could apply further math to this basic idea and tailor it to your organization. For instance, Lloyd Roden from Grove Consultants recommends using nested rates as well.
  • Note that this is not a purely statistical method, but in our experience it is a simpler and more practical way to estimate the number of iterations than a SWAG (Scientific Wild-Ass Guess).

My team is currently running a poll on LinkedIn to gauge how other people out there are going about test size estimation and I am due to publish a whitepaper on this topic shortly. I would like to welcome all of you to participate in this discussion here: http://linkd.in/rsty6o

Let me know your views

Krishna Iyer | CEO | Zen Test Labs

Control charts for beginners

In this blog I am going to attempt to keep the topic of control charts very simple.

In my view, the very definition of a control chart can go a long way in determining one's approach to it.

A complicated definition of a control chart is “a run chart with confidence intervals calculated and drawn in. These statistical control limits form the trip wires which enable us to determine when a process characteristic is operating under the influence of a special cause”.

What I mean by a simple definition is that a control chart is a set of data points plotted chronologically in time sequence with three horizontal lines drawn on the chart; i.e.,

  • A centre line, drawn at the process mean
  • An upper control limit (also called an upper natural process limit), drawn three standard deviations above the centre line
  • A lower control limit (also called a lower natural process limit), drawn three standard deviations below the centre line

The above definition is for an X-bar control chart, also called a Shewhart control chart (named after its inventor). The whole idea of using 3-sigma control limits is that data points will fall inside them 99.7% of the time when a process is in control. Thus one could say that anything beyond the control limits requires investigation. So how do you draw one?

Let us assume you test weekly builds: what you get on Monday has to be finished by Friday EOD. You notice that sometimes you are late by two days and sometimes early by a day, and you have 30 weeks of data. How do you characterize your process variation? Every process varies, so you may say it is alright to be one day late or one day early. But mathematically, how do you decide whether the tolerance should be one day or two? And more importantly, was there a week where something abnormal happened in your process that requires further investigation? This is where a control chart can help.
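Here is a minimal sketch in Python of the calculation behind such a chart. The 30 weeks of slippage data are invented (so the numbers will not match the chart below), and the limits use the plain standard deviation of the values, as in the simple definition above; a textbook individuals chart would typically derive them from moving ranges instead.

    # Invented weekly slippage data in days: positive = late, negative = early.
    slippage = [1, -1, 0, 2, 1, 0, -2, 1, 0, 1, 3, -1, 0, 1, 2,
                0, -1, 1, 0, 2, 1, 0, -1, 1, 0, 8, 1, 0, -7, 1]

    mean = sum(slippage) / len(slippage)                               # centre line
    stdev = (sum((x - mean) ** 2 for x in slippage) / (len(slippage) - 1)) ** 0.5
    ucl, lcl = mean + 3 * stdev, mean - 3 * stdev                      # control limits

    print(f"centre line = {mean:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
    for week, days in enumerate(slippage, start=1):
        if days > ucl or days < lcl:
            print(f"week {week}: slippage of {days} days is outside the limits, investigate")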

The chart below is an example, drawn in Excel without any special add-ons.

[Figure: sample control chart]

From the chart you will notice that there are two data points (two weeks) beyond the control limits, and those are what you should investigate further.

As you plot the chart for further weeks, your process is stable if all data points fall within the control limits. Your process is capable if you are able to narrow the control limits (reduce the variation). As per the first 30 weeks of data, the variation is so poor that you could be 15 days late or early, which seems ridiculous. So you can draw the control chart again with the special causes (the two weeks) removed, and find out how capable your process is and whether the variation is reducing over time.

This was just a precursor. Read more about the types of control charts and about common and special causes of variation; the internet is full of examples. Apply them to metrics other than schedule slippage to bring your testing process under control.

Krishna Iyer | CEO | Zen Test Labs