Testing the Mobile Apps explosion

It won’t be long before it becomes A for Android, B for BlackBerry, C for Cupcake, D for Donut, E for Éclair, F for Froyo, G for Gingerbread. These are words that roughly half the planet’s population (approx. 3.2 billion people) is already well versed in, and another 700 million people will be over the next three years! If you haven’t guessed it by now… I am referring to the explosion of mobile devices into our lives.

At the core of this explosion in mobile devices (by which I mean smartphones and tablets) is innovation in the field of processors. As the processing speeds of these devices have increased dramatically, so has users’ demand to run complex applications. Business users want the ability to manage their personal and professional lives through a single interface, and they want apps that let them do so. Add the speed at which devices, processors and operating systems evolve, and it is not a pretty picture for app manufacturers.

So, what does all of this mean to you if you are an app manufacturer, or an enterprise trying to create mobile apps for your workforce or customer base?

Some of the areas of impact include:

  • A constant need to keep your app updated for the latest OS upgrades and devices in the market
  • Building highly secure applications that give users and administrators peace of mind
  • Building apps that are light on device resources (for optimum performance)
  • Constantly upgrading and enhancing your application to keep users engaged
  • Rolling out apps at a speed that would put Formula 1 drivers to shame!

Well, just joking about that last one, but those of you who work in this space know what I mean!

Having managed the quality assurance programs of multiple Fortune 500 companies over the years, and having set up a Mobile Testing Lab fairly early on in this space, I want to share the basic methodology you can use to mitigate risk when developing and deploying your mobile apps.

Mobile Configuration Optimization
Choose an optimal number of configurations to test your app on, using statistical techniques such as classification trees and orthogonal arrays. A minimal sketch of one such technique follows.
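To make the idea concrete, here is a minimal sketch of pairwise (all-pairs) selection, one common way to pick a small covering set of configurations. The parameters and values below are hypothetical examples, not a real device matrix:

```python
from itertools import combinations, product

# Hypothetical test parameters; a real matrix would list actual OS versions,
# devices, networks, screen sizes, etc.
parameters = {
    "os":      ["Android", "BlackBerry", "iOS"],
    "network": ["3G", "WiFi"],
    "screen":  ["small", "large"],
}

names = list(parameters)
all_configs = [dict(zip(names, vals)) for vals in product(*parameters.values())]

def pairs_of(config):
    # Every (parameter, value) pair this configuration covers.
    return {frozenset({(a, config[a]), (b, config[b])})
            for a, b in combinations(names, 2)}

uncovered = set().union(*(pairs_of(c) for c in all_configs))

# Greedy selection: repeatedly pick the configuration covering the most
# still-uncovered pairs, until every pair is covered at least once.
selected = []
while uncovered:
    best = max(all_configs, key=lambda c: len(pairs_of(c) & uncovered))
    selected.append(best)
    uncovered -= pairs_of(best)

print(f"{len(selected)} of {len(all_configs)} configurations cover all pairs")
```

Even in this toy example, roughly half the full matrix of twelve configurations covers every pairwise interaction; on a real device matrix the savings are far larger.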
Mobile Test Automation
Automate as much of the core testing as possible right from the get-go. We have seen a 50-70% reduction in testing effort while ensuring complete coverage across devices. Automation built on the right design principles also leads to highly reusable scripts, as the sketch below illustrates.
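One such design principle is to write test steps against a tool-agnostic interface, so a single script runs on any device a driver supports. This is a hypothetical sketch, not any specific framework’s API:

```python
from abc import ABC, abstractmethod

class AppDriver(ABC):
    """Tool-agnostic driver; concrete subclasses would wrap a real automation tool."""
    @abstractmethod
    def tap(self, element_id: str) -> None: ...
    @abstractmethod
    def type_text(self, element_id: str, text: str) -> None: ...
    @abstractmethod
    def read_text(self, element_id: str) -> str: ...

def login_test(driver: AppDriver, user: str, password: str) -> bool:
    """One reusable script: the steps never mention a tool or a device."""
    driver.type_text("username", user)
    driver.type_text("password", password)
    driver.tap("login_button")
    return driver.read_text("status") == "Welcome"

class FakeDriver(AppDriver):
    """In-memory stand-in, just to show the script running without a device."""
    def __init__(self):
        self.screen = {"status": ""}
    def tap(self, element_id):
        self.screen["status"] = "Welcome"
    def type_text(self, element_id, text):
        self.screen[element_id] = text
    def read_text(self, element_id):
        return self.screen[element_id]

assert login_test(FakeDriver(), "user", "secret")
```

Because the script depends only on the interface, porting to a new device or tool means writing one new driver, not rewriting every test.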
Mobile Performance Testing
A holistic approach to performance testing should cover areas such as volume testing, endurance testing, performance monitoring, soak testing and testing under real-time scenarios.

An in-depth whitepaper has also been written on how mobile is changing the face of software testing. I would love to hear from readers about their learnings from developing or testing mobile apps. Please feel free to write to me.

Amol Akotkar | Test Consultant | Zen Test Labs

Reducing dependence on automation engineers to manage test automation!

I have always wondered what it would be like to separate test automation from automation engineers, considering that test automation has always been treated as the holy grail of testing. Enterprises that have achieved high levels of automation in the testing process have enhanced productivity exponentially while improving coverage and thus reducing risk. This has translated into automation engineers holding design approaches close to their hearts and controlling scripting tightly. Given this dynamic, adoption of test automation has remained low over the years.

Test automation today has transitioned from a “Record and Playback” mode to a virtually “scriptless” mode, enabling rapid, on-the-go test automation.

This has allowed enterprises to automate testing without worrying about tool-specific coding, making automation suites maintainable and resource-independent. However, scriptless automation frameworks still have missing links. Most of them demand extensive business-user involvement, particularly to test the technology enablement, and time to market can stretch beyond what is acceptable. A major cause is extensive manual testing of the solution, which creates a heavy dependence on business analysts (from business or IT) for test design and execution. There is also a strong dependence on skilled and expensive technical resources for automation, and spikes in demand for QA resources must be managed, which drives up QA costs.

Considering these dynamics, the next stage in the evolution of test automation is Business Process Model based test automation, which aims to synchronize the Operations, Product Management and Quality functions.

At Zen Test Labs we are innovating with multiple products in this space. Our flagship test automation framework, ZenFRAME, is one such example. ZenFRAME improves the productivity of BAs and business testers while reducing dependence on technology teams by up to 40%. Its GUI enables non-technical users to create automated test cases faster, cutting creation time by close to 33%. Read our whitepaper to learn how you can implement a Business Process Model in your QA environment. Would love to hear thoughts from everyone…

Ravikiran Indore | Sr. Consultant | Zen Test Labs

Verifying 800 Million data sets in record time!

I was recently fortunate to be part of a unique project at Zen Test Labs. This was a post-merger scenario in which the acquirer (a bank) had to consolidate the customer information systems of the two banks into a single system. This meant mapping the acquired bank’s product, service and customer portfolio to a new and modified version of the acquirer’s products and services.

Among many other factors, ensuring seamless service to the acquired bank’s existing customers meant that those customers should not face undue increases in service charges. Processing customer data on the enhanced systems required that service fees stay within the threshold a customer would expect in the normal course of business. Testing for “Go Live” was tricky: for each acquired customer, the bank had to compare the “Go Live” results against that customer’s historical data. With hundreds of thousands of customers and millions of transactions a month, manual verification was a gigantic, practically impossible task.

Zen Test Labs creatively addressed this situation by leveraging its Data Migration Testing framework and extending it to include customer-specific scenarios. Each data component of the source and target data files was mapped, rules were applied, and everything was integrated into the testing framework. A utility was then designed to pick each record from the source, apply the migration logic, and check whether the corresponding value of the record in the target file was within the tolerance level defined by that logic. During execution, the selected components of the imported source and target data were compared and flagged if they did not meet the tolerance levels (a simplified sketch of this loop follows the list below). Once all the records were compared, the utility reported:

  1. All transactions migrated as per the logic
  2. All transactions which did not meet the tolerance criteria
  3. Transactions in the target database which did not have any relation with the migration process
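Here is a simplified sketch of such a comparison utility, not Zen Test Labs’ actual implementation. The field names, the migration rule and the 5% tolerance are all assumptions for illustration:

```python
import csv

TOLERANCE = 0.05  # assumed: target fee may deviate 5% from the expected value

def expected_target_fee(source_fee: float) -> float:
    """Hypothetical migration rule: the acquirer adds a 2% platform charge."""
    return source_fee * 1.02

def compare(source_path: str, target_path: str):
    migrated_ok, out_of_tolerance, unrelated = [], [], []
    with open(source_path, newline="") as f:
        source = {row["customer_id"]: float(row["fee"]) for row in csv.DictReader(f)}
    with open(target_path, newline="") as f:
        for row in csv.DictReader(f):
            cid, fee = row["customer_id"], float(row["fee"])
            if cid not in source:
                unrelated.append(cid)         # 3. no relation to the migration
                continue
            expected = expected_target_fee(source.pop(cid))
            if abs(fee - expected) <= TOLERANCE * expected:
                migrated_ok.append(cid)       # 1. migrated as per the logic
            else:
                out_of_tolerance.append(cid)  # 2. outside the tolerance criteria
    return migrated_ok, out_of_tolerance, unrelated
```

The three returned lists correspond directly to the three report categories above.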

Testing the framework and utility itself followed a three-layered approach:

  1. Utility testing using dummy data for the source, the target and the mapping
  2. Sampling of output files and manual verification with real data
  3. Verification against “thumb rules”; for example, the number of Pass records plus the number of Fail records should equal the count of primary keys in the source data (see the check below)
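That thumb rule from point 3, written as an executable sanity check (the names here are assumed for illustration):

```python
def thumb_rule_holds(pass_count: int, fail_count: int, source_keys: set) -> bool:
    # Every source primary key must end up counted exactly once, as Pass or Fail.
    return pass_count + fail_count == len(source_keys)

assert thumb_rule_holds(2, 1, {"C001", "C002", "C003"})
```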

Overall, I found this project very challenging and interesting. Leveraging the data migration testing framework, we created a comprehensive utility in approximately three weeks. The utility was so fast that it compared one data component across 600,000 to 700,000 records in 10 to 12 minutes. In total, over 800 million data values were verified in a span of 30 days, which is as good as verifying at least one data value for every person in the European Union! With our output files we also provided the bank a great deal of data-profile information on migrated customers, which was used to understand their behavioral patterns and the performance of the products after migration.

Ravikiran Indore |Sr Consultant |Zen Test Labs

How to estimate the number of test iterations

A common question that comes up when I conduct our proprietary, path-breaking testing training program, MOST (Mind of a Software Tester), is how to estimate the number of test iterations. In my view, a good way to do this is to compute the development team’s bug insertion rate and bug fix rate. Once you have these, you can easily estimate the number of iterations needed.

Let me give you an example:

Suppose you have been asked to estimate how many more test iterations are required, and you are at the end of round one. You can find the usual bug insert rate for your organization’s developers when they fix bugs, as well as the usual bug fix rate (the number of bugs that typically get fixed for every 100 bugs you report). Say the bug fix rate is 50% and the bug insert rate is 10% (it is usually not this high), and 100 bugs are open at the end of iteration one. In round two, 50 bugs will remain open and 10 more will be introduced, leaving 60 bugs at the end of round two. In round three, 30 bugs will be fixed and 6 introduced, leaving 36. Keep doing this calculation until you arrive at zero or one bug; that tells you the number of iterations.
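The same calculation, expressed as a short loop (the 50% fix rate and 10% insert rate are the assumed organizational averages from the example above):

```python
def remaining_iterations(open_bugs: float, fix_rate: float = 0.5,
                         insert_rate: float = 0.1) -> int:
    iterations = 0
    while open_bugs > 1:
        # After each round: the unfixed bugs stay open, and new ones are inserted.
        open_bugs = open_bugs * (1 - fix_rate) + open_bugs * insert_rate
        iterations += 1
    return iterations

# 100 -> 60 -> 36 -> 21.6 -> ... : ten more rounds are needed after round one.
print(remaining_iterations(100))  # 10
```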

Consider the following as well:

  • You may be asked to estimate the number of iterations at the beginning of the project rather than at the end of round one. In that case, estimate the number of bugs expected at the end of round one and perform the same calculation.
  • Consider finding averages in your organization for different rounds; your bug fix and insert rates may well be higher in the initial rounds.
  • You can apply further math to this basic idea and tailor it to your organization. For instance, Lloyd Roden of Grove Consultants recommends using nested rates as well.
  • Note that this is not a purely statistical method, but in our experience it is a simple, practical way to estimate the number of iterations, rather than relying on a SWAG (Scientific Wild-Ass Guess).

My team is currently running a poll on LinkedIn to gauge how others are going about test size estimation, and I am due to publish a whitepaper on this topic shortly. I welcome all of you to join the discussion here: http://linkd.in/rsty6o

Let me know your views.

Krishna Iyer | CEO | Zen Test Labs