Communication - A Key Skill to Excel at Testing

Enabling better communication is not a one-time activity; it requires continuous effort across the company. Good internal and external communication is extremely important to a business’s success. To work together effectively, all departments must communicate clearly and coherently.

Here are a few scenarios wherein communication gaps may arise and lead to poor quality:

1. Continuously Changing Requirements:
At times, requirement changes are implemented directly without updating the specification document. In such cases, there is a chance that the changed requirements remain untested or are tested incorrectly.
Any change in the requirements should be communicated correctly to all stakeholders, and the specification document should be updated in a timely manner.

2. Configurations:
Lack of clarity from the stakeholders on the configurations to be tested can lead to wasted effort and extra work. Configuration testing can be expensive and time consuming. Investment in hardware and software is required to test the different permutations and combinations. There is also the cost of test execution, test reporting and managing the infrastructure.
Increasing communication between development, QA and stakeholders can help deal with these challenges.

3. Team Size:
When team sizes are large, some members of the team may miss changes in requirements or may not receive updates on project activities. This could lead to severe problems in the project or even project failure. Each team member should be kept abreast of project activities through a log or other means.

4. Changes in Application Behavior Not Communicated:
Continuous changes in application behavior may lead to requirements being tested incorrectly. All the functionality implemented in the application should be frozen while testing. If any changes are made to the functionality, they should be communicated to the testing team in a timely manner.

5. Unclear Requirements:
Complex requirements that contain insufficient data may be difficult to understand and therefore, may lead to improper testing. The functional/technical specification documents should be clear and easy to understand; they should contain a glossary, screenshots and examples wherever necessary.

The path to project success lies in eliminating small communication problems before they build up, so that the message is delivered correctly and completely. Instead of merely discovering problems, we should figure out how to stop them from appearing in the first place.

Poonam Rathi | Test Consultant | Zen Test Labs

Software Testing in 2020

As the CEO of a testing company, a question that plays on my mind constantly is ‘what is the future of testing?’ In the early 2000s, Ron Radice spoke at a QAI conference in India, where he predicted that testing would die. His call was that automatic code generators would do the job so efficiently that testing would become obsolete. When he looked at the crystal ball then, he could see that prevention, not detection, would be the creed.

Well, when I look at 2020, I believe Ron was right as well as wrong. Yes, code generators are arriving. Yes, there will be automated test case generators. Yes, model-based testing will replace rudimentary testing activities. But the boom in software, especially on ubiquitous mobile devices, means only more testing.

If the future includes automated cars like the Google driverless car, I cannot imagine such a car running technology that has not been fully and manually validated. If the future is the “Internet of Things”, I can only imagine that the amount of embedded testing will explode. If the future is business operations being handled through apps, with app stores whose millions of applications pervade every step of our business and personal lives, then imagine the amount of mobile testing that will be required. If nothing else, as everything gets more interconnected, the consequences of a critical failure will only become more catastrophic. Wherever the nexus of cloud, social, mobile and big data takes us, I am thoroughly convinced that the need for testing will only grow.

While predictions on how things will look in 2020 are a dime a dozen, my two bits on where testing will find itself are as follows:

• Huge business opportunities arising from testing for app stores directly, rather than for app manufacturers
• Test automation will have evolved from scriptless automation to automatic test case generation and execution
• The pressure to deploy rapidly in the mobility and embedded devices space will mean that test automation tools evolve to provide near-real-time and real-time support in these areas
• Testing and testers will evolve to become super specialized with domain testers at one end and niche technical testers at the other end.

These are some things that come to mind as the decade continues to evolve. It would be great to know what the rest of the testing world thinks.

Krishna Iyer | CEO | Zen Test Labs

Automation lessons learnt: Funding automation projects & the role of change management

One of the many reasons test automation is often compromised is that business funds technology projects on a per-project basis. The key reason is that the business benefits of automation, primarily faster time to market, are realized only during subsequent releases of the application, never in the release where the automation is undertaken. Even when business agrees to fund an automation project, the order of magnitude of the benefits is small due to the potentially low levels of automation feasible within the project scope. The benefits accumulate only over a period of time, from increasing automation levels, and therefore the return on investment is realized over a longer duration. To reap the benefits of automation, the business needs to continually invest in it and maintain a long-term orientation to ROI. These are typical characteristics of any change initiative. Test automation initiatives funded by individual business units can therefore learn from the vast expanse of knowledge pertaining to other organizational change initiatives and do not need to reinvent the wheel.
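To make the long-term orientation concrete, here is a purely illustrative sketch of why per-project funding undervalues automation: the build cost lands in the first release, while the reuse savings accrue only in subsequent releases. All figures are assumptions, not data from any real project.

```java
// Illustrative arithmetic only; the cost and savings figures are assumptions.
public class AutomationRoi {
    // Build cost is paid once, in the release where automation is undertaken;
    // reuse savings accrue in every release after the first.
    public static double cumulativeRoi(double buildCost, double savingsPerRelease, int releases) {
        double savings = savingsPerRelease * Math.max(0, releases - 1);
        return (savings - buildCost) / buildCost;
    }

    public static void main(String[] args) {
        // With a build cost of 100 units and savings of 30 units per release,
        // cumulative ROI is still negative after release 3 but positive by release 6.
        System.out.println(cumulativeRoi(100, 30, 3)); // -0.4
        System.out.println(cumulativeRoi(100, 30, 6)); // 0.5
    }
}
```

Funded release by release, the project above looks like a loss for its first few cycles; it pays off only if the business keeps investing across releases.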

I have captured my experience of change initiatives as applied to automation in the visual below. It shows key components required to not only succeed at a pilot project but also create a cascading positive spiral where the benefits accumulate over time.

 

Like any other change initiative, the key components form a chain, where the initiative is only as strong as its weakest link. A successful pilot accompanied by the right communication can act as a feeder to the next project, and as long as all the key components act in unison, incremental benefits from each project can lead to significant cumulative benefits. The problem is that the first cycle tends to be demanding, and it needs continuity of champions until the framework is institutionalized. A failure at the early stages can have devastating effects; the stigma associated with it presents greater roadblocks during subsequent attempts at automation. This is where senior management support from business and IT is crucial. A champion driving each automation cycle to success is central to the overall success of automation!

What do you think of the role of change management in automation projects? Have you had difficulty funding automation projects? Please feel free to share your experiences.

Aparna Katre | Director Strategy | Zen Test Labs

Automating data migration testing

I had the opportunity to be a part of a data migration project recently. I was involved in automated data migration testing, which I found to be a very exciting and challenging form of testing. I wanted to share my learnings in this post.

During conversion or migration projects, data from the legacy or source systems is extracted, transformed and loaded into the target system. This is called the ETL process. The data needs to be transformed because the data models of any two systems are different. As a standard practice, the data transformations are managed in the data mapping document, which forms the basis for development as well as testing.

Testing the migrated data is usually a critical and challenging aspect. Testing the migrated data manually is a very time consuming process and so automated data validation is a good way to go ahead.

In my latest project, data from two source systems was migrated to the target system. The data from the source systems’ UI was compared with the data from the target system’s UI, since we did not have access to the database. Data migration was performed using an incremental loading approach to ensure 100% verification: small subsets of data were loaded every week for verification. This process was a perfect solution to the client’s challenges, as in the event of any mismatch only that specific subset of data needed to be reversed.

I am also listing some of the key challenges we overcame during the course of the project:

  1. We had to create scripts that could read source values and then use field-level mapping rules to calculate the expected results at the destination. This had to be done because the mappings between the fields of the source system and the target system were different; i.e., each system had its own structure
  2. We had to verify values at the target system, as it contained some extra fields, leading to a mismatch with the source system
  3. We had to read the data on the target system to verify it, as some of the data in the source system was in CSV format with header changes for each customer column
  4. We also created a strong log generation mechanism that generated a result for every iteration and ensured that whenever a mismatch occurred, both the mismatched field names and their values were captured
  5. The results also included the time taken to execute each record
  6. Since most of the data migration was done through XML and tab-separated files, we had to generate the input files for automation in Excel format
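The first and fourth challenges above can be sketched in simplified form. The class, field and rule names below are hypothetical; this is only a minimal illustration of computing expected target values from field-level mapping rules and logging both the field name and the values on a mismatch.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

public class MigrationValidator {
    // Each rule computes the expected value of one target field from a source record
    private final Map<String, Function<Map<String, String>, String>> rules = new LinkedHashMap<>();

    public void addRule(String targetField, Function<Map<String, String>, String> rule) {
        rules.put(targetField, rule);
    }

    // Compare expected vs. actual for every mapped field;
    // log the field name and both values whenever they differ
    public Map<String, String> validate(Map<String, String> source, Map<String, String> target) {
        Map<String, String> mismatches = new LinkedHashMap<>();
        rules.forEach((field, rule) -> {
            String expected = rule.apply(source);
            String actual = target.get(field);
            if (!expected.equals(actual)) {
                mismatches.put(field, "expected=" + expected + ", actual=" + actual);
            }
        });
        return mismatches;
    }
}
```

A hypothetical rule such as concatenating firstName and lastName into FULL_NAME would then flag any target record whose FULL_NAME does not match the computed value.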

We also went on to create a customized data migration automation testing framework (illustrated below) to overcome these challenges, which led to a successful project.

Have any of you worked on such projects? Would love to hear some of your experiences.

Anand Gharge | Test Manager | Zen Test Labs

Implementing Object Repository in Selenium

Selenium (http://seleniumhq.org/) has now emerged as one of the top contenders to take on QTP in the test automation tools space. Our teams at Zen Test Labs (www.zentestlabs.com) were among the early adopters of this automation tool and have built up a decent level of expertise in automating test scripts using Selenium. We also recently presented a tutorial on this topic at STARWEST, California, in October 2011.

While both tools have their advantages and disadvantages, object repositories are one area that I have found to be important but not supported by Selenium. An object repository is a centralized place for storing properties of objects available in the application under test (AUT) to be used in scripts. QTP comes with its own object repository where one can record and store objects.

Over the course of a recent project, I tried to implement an object repository in Selenium, and it worked really well. I am listing below how I went about doing this.

1. Create an interface with the name you want to give your object repository, as follows:
interface ObjectRepository
{
	// Store the locator of each object in a constant
	// (interface fields are implicitly public static final)
	String ddCategory = "id=ctl00_MainContent_ddlCategory";
	String btnSave = "id=ctl00_MainContent_btnSave";
}

2. Implement the interface in your class and you are good to use the objects in your functions:
class TestLibrary implements ObjectRepository
{
	public void testFunction()
	{
		// Here you can use your objects to perform actions or validations
		selenium.select(ddCategory, "Testing");
		if (selenium.isElementPresent(btnSave))
		{
			selenium.click(btnSave);
		}
	}
}
In the above example, I have implemented it in Java, and I have also tried it in C#. Thus, using OOP concepts, I feel this can be implemented in any object-oriented language supported by Selenium.
What is important to note is that one can also connect to Excel or any database and store the values there. Thus, in the event of changes in the application or in the properties of any object, a simple change in the Excel file or database will reflect across all instances of the object.
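For example, here is a minimal sketch of that idea using a plain Java properties file as the external store; the class name, file format and locator names are my own illustration, not part of Selenium's API. A spreadsheet- or database-backed version would follow the same pattern.

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

public class PropertyRepository {
    private final Properties locators = new Properties();

    // 'source' stands in for the contents of an objects.properties file;
    // in a real project you would load it from disk with a FileReader instead
    public PropertyRepository(String source) {
        try {
            locators.load(new StringReader(source));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Look up a locator by its logical name; fail fast if it is missing
    public String locator(String name) {
        String value = locators.getProperty(name);
        if (value == null) {
            throw new IllegalArgumentException("No locator named: " + name);
        }
        return value;
    }
}
```

A call like selenium.click(objects.locator("btnSave")) then survives any change to the underlying locator, since only the properties file needs updating.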

Some of the advantages of doing this include:

1. Easily maintain your tests or components when an object in your application changes
2. Manage all the objects for a suite of tests or components in one central location
3. Code becomes more readable and easier to write when user-defined object names are used instead of complex, long property names and values.

These have been some of the approaches that have worked for me and some of my learnings. It would be great to hear about other people’s experiences implementing object repositories within Selenium.

Hemant Jadhav | Senior Consultant | Zen Test Labs

Agile testing uncovered…

I have been a part of agile testing for quite some time now and have experienced its edge over the traditional style of testing. Over the years I have seen that most companies shy away from this form of testing for one or a combination of these reasons: implementation issues, lack of awareness, the risk involved, or simply resistance and hesitance to change. Through this post, I am attempting to assure readers that agile testing is not only simple and extremely effective but also goes a long way toward achieving delight in your testing strategy.

What is Agile testing?

Agile testing, like its development counterpart, refers to the concept of breaking down the entire process into small pieces in a bid to achieve results as quickly as possible. Thus, agile testing is nothing but validating requirements in the shortest time possible. Product Owners, Scrum Masters, Agile BAs, Agile Testers, Agile Developers, Agile Architects, and Agile Resource Managers can all implement this methodology of testing.

Some advantages of Agile Testing

• Ensures time and budget optimization as all phases of SDLC need to be completed quickly.
• Ensures all change requests or enhancements are implemented without budget constraints with minimum impact on time to market
• Ensures good coordination due to daily nature of activities thus determining issues & gaps in requirements in advance with countermeasures deployed rapidly
• Ensures comprehensive testing in situations where business requirements documentation is hard to quantify.
• Ensures that the product delivered is in line with business needs and timeframes.

Some learnings

• When Agile testing is woven into a project early in the product development cycle, it ensures that time and work estimates are accurate, thus ensuring that deadlines are met. This is primarily because testers are exceptionally good at clarifying requirements and identifying alternative scenarios.
• Deploying Agile testing right at the beginning of projects also ensures that developers write their code to pass tests as the test approach is known well in advance.
• Early stage Agile testing also is an opportunity to bring in automated acceptance testing into the process. This is especially relevant when the development is also in the Agile mode.
• Testers are always one step ahead, as they also design the test cases for the upcoming release, enabling developers to pick them up at the start of the iteration.

Things to watch out for

• Project quality management is hard to implement and quantify unless the test team is able to conduct regression testing after each release.
• Attrition, as in any development cycle, can have an adverse effect on project development.
• Where Agile Scrum is used, it can be a leading cause of scope creep when not managed properly.

All in all, when managed properly, Agile testing can give most projects the edge in ensuring that ROI on products is seen much earlier than expected while keeping costs to a minimum. I would love to write more about my experiences with agile testing, but before I do, I would like to hear the views of readers in order to make this a more interactive exchange. How do you view Agile testing?

Vikram Deshmukh | Senior Manager | Zen Test Labs

Pizzas, Sodas, Music and Testing…

A couple of weeks back we held our first crowdsourced testing event, the “Zentestathon”, which turned out to be memorable. Testing was on a module of an LMS application. As with all things done for the first time, there were good results and valuable lessons, and I wanted to share some things that could help you with your crowdsourcing strategy.

Schedule

To ensure that the focus remains on testing and uncovering more defects, we discovered that maintaining a tight schedule is extremely critical to the success of the program. The schedule breakdown given below is what we found generated the optimum results:

  • Application training-10% of the time
  • Exploring the application by the testers-15% of the time
  • Understanding business rules and requirements- 10% of the time
  • Testing-60% of the time

Other Observations

Some other observations include:

  • We realized that what most testers were skilled at was context-driven testing.
  • Testers were able to adapt themselves to the application and think of end-to-end scenarios with a minimum amount of guidance.
  • They added tremendous value through their experience, their mindset and their ability to adapt to the application.
  • In an extremely short time, they came up with various combinations of valid and invalid scenarios, which ensured test coverage.
  • Critical defects accounted for 40% of those reported; with so many testers testing different scenarios, optimum test coverage was virtually guaranteed.
  • “The more the merrier” is something that goes well with this form of testing, as scenario coverage improves.
  • Testers were able to simulate end-user scenarios across different internet bandwidths and devices.

Through this initiative, we validated our hunch and what we had heard from a lot of people in the industry: that crowdsourced testing works best with applications that have a large global user base, like mobile apps or games. We believe it would be time-consuming and strenuous to involve external testers in crowd testing a product that requires a lot of internal communication across groups. All in all, this form of testing makes for an interesting venture and definitely has an exciting future ahead. We continue to watch what companies do globally in this space and to innovate within it. What have your experiences with crowd testing been?

Vijeethkumar Chathu | Test Manager | Zen Test Labs

Software Quality, Development and Coding Standards…

The best applications are those that are not only coded properly but also easy to extend, debug and maintain. The concept of ‘maintainable code’ is easy to contemplate but difficult to practice. Developers code in specific, individualistic styles. Their style of coding becomes second nature, and they continue to use it in everything they create. Such a style might include the conventions used to name variables and functions ($password, $Password or $pass_word, for example). Any style should ensure that the team can read the code easily.

However, what happens when we start to code bigger projects and introduce additional people to help create a large application? Conflicts in the way you write your code will most definitely appear.

This is where the concept of ‘CODING STANDARDS’ comes into play.

Coding standards are carefully formulated to be consistent, and when developers follow them, the end result is more uniform, even if different parts of the application are written by different developers. Knowing the standards and the language is always easy; the catch is deciding which standard to apply when and where.

I have found that the entire process of testing and quality assurance becomes relatively simple when developers have followed coding standards. This also goes a long way in improving the performance of the application. Adherence to coding standards makes it quick and easy to grasp what the application is and is not supposed to do. During the maintenance phase it undoubtedly enhances readability, which leads to better maintainability. By adhering to these standards, testers do not feel disconnected or overwhelmed when they begin working with the application. I believe this is even more relevant today, when different parts of an application are built by different sets of developers (internal teams, vendors, etc.).

Some of the secure coding practices that I believe are high priority:

  • Validate input data
  • Heed compiler warnings
  • Architect and design for security policies
  • Sanitize data sent to other systems
  • Model threats and mitigate them
  • Use checklists
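As an illustration of the first practice, here is a minimal allow-list-style input validation sketch. The specific rules, such as the username pattern and the quantity range, are made-up examples rather than any standard:

```java
// Hypothetical sketch: validate input data against an allow-list
// before it reaches the rest of the application.
public class InputValidator {
    // Accept only usernames of 3-16 letters, digits or underscores
    public static boolean isValidUsername(String input) {
        return input != null && input.matches("[A-Za-z0-9_]{3,16}");
    }

    // Accept only integers within an expected business range
    public static boolean isValidQuantity(String input) {
        try {
            int value = Integer.parseInt(input);
            return value >= 1 && value <= 1000;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}
```

The point of the allow-list approach is that anything not explicitly permitted, including injection attempts, is rejected by default.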

I understand that it is not possible to apply all coding standards at all times, but applied appropriately, they would enhance performance and reduce errors. What are your views on coding standards and their impact on software testing?

Janhavi Hunnur | Marketing | Zen Test Labs

The evolution of test design techniques

Test design is a constantly evolving area of the software testing life cycle. Surprisingly, it is still applied at a relatively small scale whenever new application features are defined, and this has gone a long way in limiting the effectiveness and efficiency of testing.

At the heart of it has always been the test design template, the primary goal of which is to reduce ad-hoc reinvention of test designs. However, most companies and individuals do not assign as much importance to this as they do to the development process. Ideally, a good test design frees the tester to focus on the actual job of testing the application. I have seen that just by using multiple test design techniques, the efficiency gains we have been able to achieve have been tremendous. Some of these techniques include:

  • Specification based techniques like equivalence partitioning, boundary value analysis, state transition testing, use case testing and decision table testing
  • Structure based techniques/ White box testing through statement testing & coverage and decision testing coverage
  • Experience based techniques like error guessing and exploratory testing
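As a small illustration of boundary value analysis from the first group above: for a field that accepts an integer range, the classic technique tests the values just below, at and just above each boundary. A minimal sketch (the class and method names are my own):

```java
import java.util.Arrays;
import java.util.List;

public class BoundaryValues {
    // For an input field accepting integers in [min, max], boundary value
    // analysis yields six test values clustered around the two boundaries
    public static List<Integer> forRange(int min, int max) {
        return Arrays.asList(min - 1, min, min + 1, max - 1, max, max + 1);
    }
}
```

For a hypothetical age field accepting 18 to 60, forRange(18, 60) yields 17, 18, 19, 59, 60 and 61, of which 17 and 61 should be rejected by the application under test.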

Our teams have been using these techniques for quite some time now and have come up with innovative ways of creating classification trees in order to effectively design and generate test cases for complicated applications. One approach is to map the required outputs so that the classification of inputs and outputs is matched.

We are in the process of publishing a whitepaper in this area early next quarter that details how one should go about test design.

Watch this space for more on the paper…

Sunil Deshmukh | Senior Test Manager | Zen Test Labs

First dog fooding, now crowd sourcing…next crowd feeding?

When dog fooding was introduced by Microsoft manager Paul Maritz in 1988, it caught on like wildfire in the software space. Conceptually, dogfooding had existed in various forms before this point, but Microsoft was one of the early adopters when it came to incorporating it into their product development cycles. Dog fooding essentially meant that internal users became early adopters of all new technology. Typically used with pre-release or beta versions of products, it gave product teams a bird’s-eye view of how their products would work in “real-world” situations. Forcing those who build products to use them is counterintuitive to the entire process of testing, as more often than not they are blind to usability issues or are too advanced as users of the product. Hence, while a lot of companies still conceptually use dog fooding to minimize the risk of critical failure, there is an increasing trend to leverage large user bases for testing too.

This is where crowdsourced testing has started to kick in. Testing companies now provide platforms where product companies can test their products at a very low cost, typically charged at a rate per bug detected. In turn, these companies open the platform to a community of testers who register to test voluntarily or as part of a competition. Testers get paid for each bug they detect. While this kind of testing has opened up an unexplored talent pool (unbiased, cross-geographic and large) at a low cost, the need to maintain independent testing, either in-house or outsourced, remains. In addition, since there is no direct control over this crowd of testers, it continues to be an undependable source.

The ideal way forward would be to have a single platform that can integrate in-house testers or dogfooders, outsourced testers and crowd testers into a single platform. I refer to this concept as crowd feeding. Each product should have a crack team of testers from across these three channels that are nurtured over a period of time and have significant understanding of the application they are testing. This is akin to creating an elite panel of testers from these three channels that grow in experience over time.

The reason I consider all three channels critical to successful testing is:

  1. In-house testers/ dogfooders – Advanced users with in-depth knowledge of product
  2. Outsourced testers- Intermediate/ Advanced independent users with in-depth knowledge of testing
  3. Crowd sourced testers- Cross section of low cost testers with diverse mix of “real-world” situational experience

It would be interesting to see how crowdsourced testing and dog fooding evolve over this year.

Hari Raghunathan | AVP | Zen Test Labs