Automating Without Access To The Application

I had the opportunity to be a part of a Corporate Banking project recently, leading the functional automation testing for the project. It turned out to be a very exciting and challenging engagement, since our automation engineers did not have access to the application under test. I wanted to share my experience in this post.

How can one automate an application without accessing the application? Confused?

The problem was that the client did not want to provide application access to the offshore team because of security concerns. We suggested setting up an isolated environment offshore, but unfortunately, that could not happen.

The deadlines were looming and we still did not have access to the application, so here’s what we did.

  1. We arranged a WebEx session with our onsite team member and asked him to record the functional flow using HP Unified Functional Testing (UFT) with Active Screen capture turned on.
  2. The recorded scripts were then transferred offshore.
  3. We opened these scripts offshore and began identifying objects from the recorded repository, capturing additional objects from the Active Screen.
  4. Thus, our Object Repository was ready.
  5. After this, we started creating functions using the object repository and created test flows.
  6. After completing an automated flow, we transferred it to our onsite team member, who ran it against the application (we still had no direct access).
  7. We were surprised that our entire flow executed successfully in a single run without any errors.
  8. We understood exactly what we needed to do in order to complete the project on time.

Completing this task was very easy for us since we already had a corporate banking repository of 4000 automated test cases and our own framework (ZenFRAME). We created Object Repositories and functions, attached them to the framework and updated the test flow in the framework… that was it! We delivered and deployed the framework in the client’s environment, set up the application URL and were ready to run our complete test suite.
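To illustrate the underlying pattern, here is a minimal sketch in Python with Selenium, used purely as a stand-in for UFT and ZenFRAME (which are not shown): locators live in a central object repository, reusable functions are written against logical names offline, and execution happens later wherever application access exists. All locators and the URL below are hypothetical.

```python
# Illustrative sketch only: the actual project used HP UFT and ZenFRAME;
# Python + Selenium stand in here to show the object-repository pattern.
from selenium import webdriver
from selenium.webdriver.common.by import By

# Central "object repository": logical names mapped to locators, built
# offline from recorded scripts and Active Screen captures (hypothetical).
OBJECT_REPOSITORY = {
    "login.username": (By.ID, "txtUserName"),
    "login.password": (By.ID, "txtPassword"),
    "login.submit":   (By.ID, "btnLogin"),
}

def find(driver, logical_name):
    """Resolve a logical object name to a live element at run time."""
    by, locator = OBJECT_REPOSITORY[logical_name]
    return driver.find_element(by, locator)

def login(driver, user, password):
    """A reusable business function authored only against the repository."""
    find(driver, "login.username").send_keys(user)
    find(driver, "login.password").send_keys(password)
    find(driver, "login.submit").click()

if __name__ == "__main__":
    # This part runs onsite, where application access is available.
    driver = webdriver.Chrome()
    driver.get("https://bank.example.com/login")   # placeholder URL
    login(driver, "demo_user", "demo_password")
    driver.quit()
```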

The execution of our first test suite for a specific module in the onsite environment was very smooth. All the test cases executed successfully, and the result log was created in the framework.

So, this was the issue we faced and the solution we came up with. Please feel free to share alternative solutions to the issue mentioned above, or other challenges you have faced and, if possible, how you solved them. It would be great to learn from other people’s experiences. Happy Testing! 🙂

Hemant Jadhav | Zen Test Labs

Test Automation is Not the Only Answer

I have worked on several test automation projects over the past few years. I also conduct test automation training sessions as a part of our company’s CSR initiative and actively participate in online discussions about testing. Since my work is centered on test automation, people frequently come to me with questions, some of which I would like to address in this blog post.

Question: “I recently graduated with a Bachelor’s in Computer Science. I am interested in pursuing software testing as a career. Can you recommend an automation tool that I can take up?”

Question: “I am interested in software testing and have 3 years of experience in the BPO industry. I also underwent a 3-month QTP and Selenium training course. Will this help me get a job as a tester?”

Question: “I have been working as a tester for the last 6 months. I want some growth in my career, so I am planning to move towards test automation. Which tools do you recommend I should learn?”

Question: “I have been working in manual software testing for over 4 years now. I think this has been a great mistake as far as my career is concerned. Most of my colleagues and friends are in test automation and it drives me up the wall. I also want to shift to automated testing; can you guide me on how I should start?”

These and many similar questions have been asked frequently. All of these questions have a common line of thought: Test Automation. It makes me wonder if automated testing really is more important than manual testing!

Most testers I spoke to wanted to learn test automation only for the following reasons:

  • Knowledge of automation tools can advance their testing career and open up better job opportunities.
  • Their colleagues were learning it, so they wanted to as well!
  • It adds an extra point to their resume to make it stronger.
  • It is something to highlight in their performance appraisal meeting! 🙂

I am not against these testers, but I want them to realize that automated testing is not the only choice they can make to advance their testing career. Knowledge of automated testing is definitely beneficial, but manual testing also offers plenty of growth and is a rewarding career path in its own right.

What I’m trying to say is that each of these roles, manual testing and automated testing, has its own unique challenges. Someone well versed in one role might not necessarily be well acquainted with the other. Treat yourself as a tester, not as a manual or automation tester: a tester with a set of skills, specialties, abilities and domain expertise.

Assuming that automated testing can replace manual testing, and using automation tools without understanding testing and the underlying application, can be very dangerous. Manual testing is not simple; it is an art that requires intelligence, creativity, judgment, skill and domain knowledge.

Finally, remember that human brains cannot be replaced by automated robots. 🙂

Any comments and suggestions are welcome.

Hemant Jadhav | Zen Test Labs

Automating the Business Analyst

Business Analysts (BAs) play a pivotal role in the success of technology projects. BAs are expected to assume roles that go well beyond just defining and tracking requirements: Business Planner, Systems Analyst, Project Manager, Subject Matter Expert, Data Analyst, Application Analyst, Tester… well, this list can go on! The biggest issue with these ever-changing roles is that the attributes and dispositions expected of a Business Analyst are so wide that it hardly feels like the same person is carrying them all out.

Given this dynamic, our experience has been that BAs play a critical role in ensuring the end quality of a product. However, the role of BAs does not end there. Invariably, they are the ones who ensure that products work the way they should post implementation, i.e., in business-as-usual mode. Thus BAs play a crucial role in designing test programs and managing them in the long run to ensure defects are caught in time and addressed.

At Zen Test Labs, we have long been advocates of easing the lives of BAs when it comes to their role in testing. It is not desirable to have BAs test, but given that it is inevitable, it makes sense to automate large parts of the process in a way that lets BAs seamlessly create and execute tests. We have put together a whitepaper on marrying a business process model based testing approach with scriptless test automation, thus ensuring that testing is synchronized across business and technology operations.

Download the whitepaper today to learn more about how you can automate the business analyst!

The Art of Test Automation

Test automation has evolved to become a strategic and integral part of the software development process. Most of us start our test automation careers with record and playback. Over time, some of us move to data-driven test automation, but very few of us move towards the core, wherein the principles of design and development are applied to test automation. Test automation is like developing a system in which the test cases are the requirements. The depth of thinking and planning that should go into test automation before hitting the record button is similar to that of developing software.

Over the last 10 years, I have seen multiple Fortune 500 clients struggle with automation, and some of them eventually get it right. In some of the projects that failed, we had the best test automation resources and a very stable manual testing practice, but in spite of all this there was a huge gap between what was envisioned and what was actually realized. Over time, we realized that the planning process is a key component of successful test automation. In 65% of the projects that failed, the planning process and the sequence of steps followed were the reasons.

Based on my experience, the automation process is:

  • Why (Purpose)
  • When (Stable Setup and Manual Process)
  • Which (Tool Selection)
  • What (Test Case Selection)
  • How (Design)

We have written a detailed whitepaper, “The Art of Test Automation”, based on the test automation process above. Through this whitepaper, we have attempted to outline how to go about planning, prioritizing and automating, and how to use better practices to reduce the risk of complete failure in automation projects.

Some of the important test automation questions that this paper attempts to address:

  • Why does automation fail in spite of skilled technical resources?
  • Is there a standard process to follow for test automation?
  • When should automation start and stop?
  • What are the test case selection criteria?

Download “The Art of Test Automation” to read more about the ideal automation process.

Poonam Rathi | Test Consultant | Zen Test Labs

8 Steps to Improve Your Regression Testing Process

With business and user requirements perpetually evolving, I find that regression testing has become a key component of the software development lifecycle. As testers, we need to keep in mind that constant change in the functionality of an application introduces vulnerabilities into the base functionality as well. These vulnerabilities tend to creep in due to oversights while adding new functionality, poor analysis of the impact on interfacing/integrating applications, and often because customizations are an unknown entity. Poor regression testing can not only result in poor software quality but also impact revenue and cause customer attrition.

Based on many years of planning, creating and executing the Quality Assurance programs of multiple Fortune 500 companies, I suggest the following eight-step methodology to improve any regression testing process.

Phase 1: Defining
Step 1: Objective Finding (OF) – Challenges and Goal Identification
This step answers one of the most important questions: “Why is regression testing not effective in its current state?”

Step 2: Fact Finding (FF) – Data Collation and Analysis
During this stage, teams must trace defects found in the past and conduct a defect root-cause analysis. An important part of this step is bug prediction analysis, so that defect-prone areas of the application can be identified.
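As a minimal illustration of this idea (hypothetical numbers, not our actual analysis), historical defect counts can be weighted by recency to flag defect-prone modules:

```python
# Weight recent defects more heavily to rank defect-prone modules.
defect_history = {                     # hypothetical per-module defect counts
    "Payments":  {"last_release": 14, "older": 30},
    "Accounts":  {"last_release": 3,  "older": 12},
    "Reporting": {"last_release": 8,  "older": 5},
}

def risk_score(counts, recent_weight=2.0):
    return recent_weight * counts["last_release"] + counts["older"]

ranked = sorted(defect_history, key=lambda m: risk_score(defect_history[m]), reverse=True)
print(ranked)   # modules to prioritize in regression, most defect-prone first
```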

Step 3: Problem Finding (PF) – Problem Clarification and Statement
Once the results of Steps 1 and 2 are combined, the exact scope of the challenges to address is established. These refined objectives act as the equivalent of a “Requirements Document”.

Phase 2: Scoping
Step 4: Test Case Finding (TF) – Coverage Gap Analysis
Gaps in test coverage are identified based on the current test cases and the application functionality. Test cases are mapped to requirements, and testing techniques are applied to identify missing test cases.
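A coverage gap check can be as simple as set arithmetic over a traceability mapping; the sketch below uses hypothetical requirement and test case IDs:

```python
# Requirements with no mapped test case are coverage gaps.
requirement_ids = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
test_case_map = {                      # hypothetical traceability data
    "TC-101": {"REQ-1"},
    "TC-102": {"REQ-1", "REQ-3"},
}

covered = set().union(*test_case_map.values())
coverage_gaps = requirement_ids - covered
print(sorted(coverage_gaps))           # ['REQ-2', 'REQ-4'] need new test cases
```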

Step 5: Test Case Centralization (TC) – Test Case Repository Creation
Ensure that all test cases are stored in a centralized repository in a consistent, de-duplicated form. Each test case must have a clear objective, precondition, steps, expected result and test data.
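A sketch of the minimum structure each repository entry might carry (the field names are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str
    objective: str
    precondition: str
    steps: list            # ordered, human-readable steps
    expected_result: str
    test_data: dict = field(default_factory=dict)

tc = TestCase(
    case_id="TC-201",
    objective="Verify funds transfer between own accounts",
    precondition="User is logged in and has two active accounts",
    steps=["Open transfer screen", "Enter amount", "Confirm transfer"],
    expected_result="Transfer succeeds and balances are updated",
    test_data={"amount": 100.00},
)
```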

Step 6: Test Case Optimization (TO) – Maximum Coverage in Desired Time with Minimum Risk
Statistical techniques such as classification trees and orthogonal arrays may be used to run the minimum number of test cases while covering every business process/function at least once.
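The all-pairs idea behind such optimization can be sketched with a simple greedy heuristic; the parameter model below is hypothetical and this is not a production optimizer:

```python
# Greedy all-pairs reduction: keep picking the combination that covers the
# most still-uncovered parameter-value pairs until every pair is covered.
from itertools import combinations, product

parameters = {
    "browser": ["Chrome", "Firefox", "Edge"],
    "account": ["Savings", "Current"],
    "channel": ["Web", "Mobile"],
}

names = list(parameters)
all_combos = [dict(zip(names, values)) for values in product(*parameters.values())]

def pairs(combo):
    """All (parameter, value) pairs exercised by one combination."""
    return set(combinations(sorted(combo.items()), 2))

uncovered = set().union(*(pairs(c) for c in all_combos))
suite = []
while uncovered:
    best = max(all_combos, key=lambda c: len(pairs(c) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

print(f"{len(suite)} combinations cover all pairs (out of {len(all_combos)} total)")
```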

Phase 3: Executing
Step 7: Reusing Test Components (RT) – A Modular Approach
Create business functions and test data in a way that they can be reused for building manual test cases. Automate the generation of descriptive manual test cases.
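One possible way to get both reuse and auto-generated descriptive test cases from a single definition is sketched below (the business functions are hypothetical and the automation hooks are stubbed out):

```python
# Reusable business functions carry a human-readable step template, so the
# same definition can drive automation and emit descriptive manual steps.
def business_function(description):
    def wrap(fn):
        fn.description = description
        return fn
    return wrap

@business_function("Log in as {user}")
def login(user):
    pass  # automation hook would go here

@business_function("Transfer {amount} from {src} to {dst}")
def transfer(amount, src, dst):
    pass

def describe(flow):
    """Generate a descriptive manual test case from a reusable flow."""
    return [fn.description.format(**kwargs) for fn, kwargs in flow]

flow = [
    (login,    {"user": "demo_user"}),
    (transfer, {"amount": "100.00", "src": "Savings", "dst": "Current"}),
]
for i, step in enumerate(describe(flow), 1):
    print(f"Step {i}: {step}")
```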

Step 8: Test Case Classification (TC) – Test Case Mapping
At this stage, test cases are grouped by requirement, screen, module and so on, and small, frequently used regression packs/suites are created.
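A minimal sketch of such classification, assuming test cases carry module and priority tags (all IDs hypothetical):

```python
from collections import defaultdict

test_cases = [
    {"id": "TC-201", "module": "Payments", "priority": "High"},
    {"id": "TC-202", "module": "Payments", "priority": "Medium"},
    {"id": "TC-305", "module": "Accounts", "priority": "High"},
]

suites = defaultdict(list)
for tc in test_cases:
    suites[tc["module"]].append(tc["id"])            # module-wise suites
    if tc["priority"] == "High":
        suites["smoke-regression"].append(tc["id"])  # small, frequently run pack

for name, cases in sorted(suites.items()):
    print(name, cases)
```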

We have written a detailed whitepaper ‘Progress Not Regress’ on improving any regression testing process. We would love to hear your thoughts on it!

Girish Nair | Sr. Consultant | Zen Test Labs

Testing the Mobile Apps explosion

It won’t be long before the alphabet becomes A for Android, B for BlackBerry, C for Cupcake, D for Donut, E for Éclair, F for Froyo, G for Gingerbread; if anything, these are words that probably half the planet’s population (approximately 3.2 billion people) is already well versed with, and another 700 million will be over the next three years! If you haven’t guessed it by now, I am referring to the explosion of mobile devices into our lives.

At the core of this explosion in mobile devices (by which I mean smartphones and tablets) is innovation in the field of processors. With the processing speeds of these devices increasing dramatically, the demand from users to run complex applications has also gone up. Business users want the ability to manage their personal and professional lives through a single interface and want apps that allow them to do this. Add the speed at which innovation in devices, processors and operating systems takes place, and it is not a pretty picture for app manufacturers.

So, what does all of this mean to you if you are an app manufacturer or an enterprise trying to create mobile apps for your workforce or customer base?

Some of the areas of impact include:

  • A constant need to keep your app updated with the latest OS upgrades/devices in the market
  • Build highly secure applications that give peace of mind to users/administrators
  • Build apps that are light on device resources (for optimum performance)
  • Constantly upgrade/enhance your application to keep users engaged
  • Roll out apps at a speed that would put Formula 1 drivers to shame!

Well, I am just joking on that last one, but those who work in this space know what I mean!

Over years of managing the Quality Assurance programs of multiple Fortune 500 companies, and having set up a Mobile Testing Lab fairly early on in this space, I want to share the basic methodology that can be used to mitigate risks when developing/deploying your mobile apps.

Mobile Configuration Optimization
Choose an optimum number of configurations to test your app on, using statistical techniques like classification trees, orthogonal arrays, etc.

Mobile Test Automation
Automate as much of the core testing as possible right from the get-go. We have experienced a 50-70% reduction in testing effort while ensuring complete coverage across devices. Automation built on the right design principles also leads to high reusability of scripts (a small sketch follows below).

Mobile Performance Testing
A holistic approach to performance testing should cover areas such as volume testing, endurance testing, performance monitoring, soak testing and testing under real-time scenarios.
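As a small sketch of how an optimized configuration matrix and the automation fit together, the example below parameterizes one check across a reduced device list (hypothetical devices; pytest is used purely as an example runner, and the real execution would drive the app via a device cloud or a tool such as Appium):

```python
import pytest

# A reduced matrix, as would come out of a pairwise/orthogonal-array selection.
REDUCED_MATRIX = [
    {"device": "Pixel 7",    "os": "Android 14", "screen": "1080x2400"},
    {"device": "Galaxy S21", "os": "Android 13", "screen": "1080x2400"},
    {"device": "iPhone 14",  "os": "iOS 17",     "screen": "1170x2532"},
]

@pytest.fixture(params=REDUCED_MATRIX, ids=lambda cfg: cfg["device"])
def config(request):
    return request.param

def test_login_screen_renders(config):
    # A real test would launch the app on the given device and assert on the
    # UI; here we only show the parameterization that keeps coverage across
    # devices without scripting each configuration separately.
    assert config["os"] and config["screen"]
```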

An in-depth whitepaper, “Mobile is Changing the Face of Software Testing”, has also been written. I would love to hear from readers about their learnings when developing or testing mobile apps. Please feel free to write to me.

Amol Akotkar | Test Consultant | Zen Test Labs

Reducing dependence on automation engineers to manage test automation!

I have always wondered what it would take to separate test automation from automation engineers, considering that test automation has always been treated as the holy grail of testing. Enterprises that have managed to achieve high levels of automation in the testing process have enhanced productivity exponentially while improving coverage and thus reducing risk. This has translated into automation engineers holding design approaches close to their hearts and controlling scripting tightly. Given this dynamic, adoption of test automation has remained low over the years.

Test automation today has transitioned from a “Record and Playback” mode to a virtually “Scriptless” mode, thus enabling rapid, on-the-go test automation.

This has allowed enterprises to automate testing without being tied to tool-specific coding, making automation suites maintainable and resource independent. However, scriptless automation frameworks still have many missing links. Most of them demand extensive business user involvement, particularly to test the technology enablement, which can push time to market beyond acceptable limits. Extensive manual testing of the solution is one of the major causes: it hamstrings time to market because of the heavy dependence on business analysts (from business or IT) for test design and execution, a strong dependence on skilled and expensive technical resources for automation, and the need to manage spikes in demand for QA resources, which drives up QA costs.
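To make the “scriptless” idea concrete, here is a minimal keyword-driven sketch (illustrative only, not ZenFRAME): business users maintain rows of keywords and data, and an engine resolves each keyword to an underlying implementation, so test authors never touch tool-specific code.

```python
# Minimal keyword-driven engine sketch. The keywords, flow and URL are
# hypothetical; a real framework would drive the application under test.
KEYWORDS = {}

def keyword(name):
    def register(fn):
        KEYWORDS[name] = fn
        return fn
    return register

@keyword("OpenApplication")
def open_application(url):
    print(f"Opening {url}")

@keyword("Login")
def login(user, password):
    print(f"Logging in as {user}")

@keyword("VerifyBalance")
def verify_balance(account, expected):
    print(f"Verifying {account} balance equals {expected}")

# A "scriptless" test: plain rows a business analyst could maintain in a sheet.
test_flow = [
    ("OpenApplication", ["https://bank.example.com"]),
    ("Login",           ["demo_user", "demo_password"]),
    ("VerifyBalance",   ["Savings", "1000.00"]),
]

for kw, args in test_flow:
    KEYWORDS[kw](*args)
```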

Considering these dynamics, the next stage in the evolution of test automation is moving in the direction of business process model based test automation, which aims at synchronizing Operations, Product Management and Quality functions.

At Zen Test Labs we are innovating with multiple products in this space. Our flagship test automation framework, ZenFRAME, is one such example. ZenFRAME improves BA and business tester productivity while reducing dependence on technology teams by up to 40%. Its GUI enables non-technical users to create automated test cases faster, resulting in close to 33% less creation time. Read our whitepaper to learn how you can implement a business process model in your QA environment. I would love to hear thoughts from everyone…

Ravikiran Indore | Sr. Consultant | Zen Test Labs

Top 6 solutions for software testing failures

Software testing is still not valued at its worth. Although it is a critical investment, companies avoid spending on testing because they do not see the ROI of testing and/or a quantifiable cost of quality. The most common complaints against testing that we repeatedly hear are:

  • It is a necessary evil that stalls a project the closer it gets to a release
  • It is too costly and time consuming, without any guaranteed outcome
  • Regression testing is often not effective at identifying new defects

Having worked on a number of testing projects over the past 12 years, I understand why there is such a strong tendency to look at testing with a skeptical eye. I would like to share what we have learnt over time. The top six points, in our view, to improve the effectiveness of manual testing are:

6. Reducing effort and time in Test Documentation

A lot of teams spend unnecessary time during the planning phase detailing test scenarios that are rarely referred to after two or three rounds of testing. This increases maintenance overheads and reduces flexibility and coverage in the long run, resulting in inefficient testing. After the initial 6-8 months, a large percentage of test scenarios are outdated and require just as much effort to update. Instead of detailing each and every step of every test scenario, one can cover it with test conditions and the expected results.
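For example, a lean, condition-based record (field names illustrative) can replace a fully scripted scenario while preserving its intent:

```python
# Condition + expected result instead of a fully scripted, step-by-step scenario.
lean_test_conditions = [
    {
        "condition": "Transfer amount exceeds available balance",
        "expected_result": "Transfer is rejected with an insufficient-funds message",
    },
    {
        "condition": "Transfer amount is within available balance",
        "expected_result": "Transfer succeeds and both balances are updated",
    },
]
```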

5. Focusing on breadth and depth of testing

Many a time, when execution is not prioritized, depth of testing takes the lead over breadth. By aiming to cover more breadth, we align testing with business objectives; the team aims to be effective first and then efficient. Breadth refers to covering the positive, critical cases (across the application) that are frequently used by end users; depth refers to covering all the test cases for a module.

4. Testing, a continuous activity

Many companies look at testing as a one-time investment. They outsource it or execute it in-house once at the start of product development and then rarely test during the maintenance phases. The primary reason is invariably budget driven, and it goes on to harm the quality of the product when newer versions are not tested. For every minor release, one should ensure all the regression test cases are executed, and for every major release, all the high and medium priority test cases are executed at least once.

3. Remembering the objective of testing

The key objective of testing is to break the system, not to prove that the system works as per the requirements. This has a direct impact on testing effectiveness and the number of defects one will find. It is often observed that many senior testers habitually set out to prove that the system is working as per the requirements, which goes against the primary objective of testing.

2. Strategizing test optimization

Coverage is important, but not at the cost of redundant test cases. Test optimization is an intelligent way to ensure test coverage in less time, which is why testing teams need to collaborate more with development teams. Understanding the high-level design and structure of the application makes testing more effective. One of the main principles followed in development is reuse, and we can apply the same principle when testing code that is reused: optimize by testing the class/object once and then test only its integration on other screens/modules. If the test cases are reusable, maintainable and scalable, that is an additional advantage for rolling out on time and under budget.

1. Focusing on the Business for which you are testing

Testing cannot be done in isolation. Business priorities and challenges are equally, and in most cases more, important than testing needs. One thing I have learnt is that testing cannot drive business decisions; business drives testing most of the time. Aligning testing to business requirements results in a disciplined, high-quality, ready-to-market product.

These are some of the solutions that helped me overcome testing failures. Do share yours if you have other solutions or methods.

Mukesh Mulchandani | CTO | Zen Test Labs

Verifying 800 Million data sets in record time!

I recently was fortunate to be a part of a unique project at Zen Test Labs. This was a post-merger scenario wherein the acquirer (bank) had to consolidate the customer information systems of the two banks into a single system. This meant mapping the acquired bank’s product, service and customer portfolio, to a new and modified version of the acquirer’s products and services.

Among many other factors, ensuring seamless service to existing customers of the acquired bank implied that those customers should not experience an undue increase in service charges. Processing customer data using the enhanced systems required that service fees stay within the threshold the customer would expect in the normal course of business. Testing for “Go Live” was tricky since, for each acquired customer, the bank had to compare the go-live results with that customer’s historical data. With hundreds of thousands of customers and millions of transactions in a month, manual verification was a gigantic task, potentially impossible to accomplish.

Zen Test Labs creatively addressed this situation by leveraging its data migration testing framework and extending it to cover customer-specific scenarios. Each data component of the source and target data files was mapped, migration rules were applied, and both were integrated into the testing framework. A utility was then designed to pick each record from the source, apply the migration logic, and check whether the corresponding value in the target file was within the tolerance level defined by that logic (a sketch of this comparison appears after the list below). During execution, the selected components from the imported source and target data were compared and flagged if they did not meet the tolerance levels. Once all the records were compared, the utility reported:

  1. All transactions migrated as per the logic
  2. All transactions which did not meet the tolerance criteria
  3. Transactions in the target database which did not have any relation with the migration process
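A minimal sketch of the comparison logic described above (everything here is hypothetical: the real utility, its field names, the migration rule and the tolerance are not shown):

```python
TOLERANCE = 0.05   # acceptable relative deviation for a migrated fee (hypothetical)

def expected_target_fee(source_record):
    """Apply the (hypothetical) migration rule to a source record."""
    return round(source_record["monthly_fee"] * 1.02, 2)   # e.g. a 2% repricing

def compare(source_records, target_records):
    """Bucket every source record and report unrelated target records."""
    target_by_key = {r["customer_id"]: r for r in target_records}
    migrated_ok, out_of_tolerance = [], []
    for src in source_records:
        tgt = target_by_key.pop(src["customer_id"], None)
        expected = expected_target_fee(src)
        within = tgt is not None and abs(tgt["monthly_fee"] - expected) <= TOLERANCE * max(expected, 0.01)
        (migrated_ok if within else out_of_tolerance).append(src["customer_id"])
    unrelated = list(target_by_key)   # target records with no source counterpart
    # Every source record must land in exactly one bucket.
    assert len(migrated_ok) + len(out_of_tolerance) == len(source_records)
    return migrated_ok, out_of_tolerance, unrelated
```

The assertion mirrors the “thumb rule” mentioned below: the pass and fail counts must add up to the number of source records.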

Testing of the framework and the utility itself followed a three-layered approach:

  1. Utility testing using dummy data for source, target and the mapping
  2. Sampling of output files and manual verification with real data
  3. Verification against “thumb rules”; for example, the total number of pass and fail records should equal the count of primary keys in the source data.

Overall, I found this project very challenging and interesting. Leveraging the data migration testing framework, we created a comprehensive utility in approximately three weeks. The quality and performance of the utility were so sharp that it compared one data component with 600,000 to 700,000 records in 10 to 12 minutes. The total number of data values verified in this project was over 800 million in a span of 30 days, which is as good as verifying at least one data value for the entire population of the European Union! With our output files we also provided a great deal of data-profiled information about the migrated customers to the bank, which was used to understand their behavioral patterns and the performance of the products after migration.

Ravikiran Indore | Sr. Consultant | Zen Test Labs

Automation lessons learnt: Funding automation projects & the role of change management

One of the many reasons test automation is often compromised is in situations where the business funds technology projects on a per-project basis. The key reason is that the business benefits of automation, primarily time to market, are realized only in subsequent releases of the application, never in the release in which automation is undertaken. Even when the business agrees to fund an automation project, the order of magnitude of benefits is small due to the potentially low levels of automation feasible within the project scope. The benefits accumulate only over a period of time from increasing automation levels, and therefore the return on investment is realized over a longer duration. In order to reap benefits from automation, the business needs to continually invest in it and maintain a long-term orientation to ROI. These are typical characteristics of any change initiative. Test automation initiatives funded by individual business units can therefore learn from the vast expanse of knowledge pertaining to other organizational change initiatives and do not need to reinvent the wheel.

I have captured my experience of change initiatives as applied to automation in the visual below. It shows key components required to not only succeed at a pilot project but also create a cascading positive spiral where the benefits accumulate over time.

[Visual: key components of an automation change initiative and how they build a cascading positive spiral]

Like any other change initiative, the key components form a chain in which the initiative is only as strong as its weakest link. A successful pilot accompanied by the right communication can act as a feeder for the next project, and as long as all key components act in unison, incremental benefits from each project can lead to significant cumulative benefits. The problem is that the first cycle tends to be demanding, and it needs continuity of champions until the framework is institutionalized. A failure in the early stages can have devastating effects, with the associated stigma presenting greater roadblocks to subsequent attempts at automation. This is where senior management support from business and IT is crucial. A champion driving each automation cycle to success is central to the overall success of automation!

What do you think of the role of change management in automation projects? Have you had difficulty funding automation projects? Please feel free to share your experiences.

Aparna Katre | Director Strategy | Zen Test Labs