Pizzas, Sodas, Music and Testing…

A couple of weeks back we held our first crowdsourced testing event, the "Zentestathon", and it turned out to be memorable. Testing focused on a module of an LMS application. Like all things that happen for the first time, it produced good results and valuable lessons, and I wanted to share a few that could help with your crowdsourcing strategy.

Schedule

To keep the focus on testing and on finding as many defects as possible, we discovered that maintaining a tight schedule is critical to the success of the program. The breakdown below is what we found generated the best results:

  • Application training – 10% of the time
  • Testers exploring the application – 15% of the time
  • Understanding business rules and requirements – 10% of the time
  • Testing – 60% of the time
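As a quick sketch, the split above can be turned into a concrete timetable for an event of any length. The four-hour duration below is purely an illustrative assumption; note that the listed shares sum to 95%, leaving a small buffer.

```python
# Hypothetical helper: turn the percentage split above into minutes
# for a testathon of a given length. The 4-hour duration is an
# illustrative assumption, not a figure from the event itself.

SCHEDULE = {
    "Application training": 10,
    "Exploring the application": 15,
    "Business rules and requirements": 10,
    "Testing": 60,
}  # percentages; the remaining 5% is left as buffer

def timetable(total_minutes):
    """Return each activity's time slot in minutes."""
    return {activity: total_minutes * pct // 100
            for activity, pct in SCHEDULE.items()}

slots = timetable(total_minutes=240)  # a 4-hour event
# Testing gets 240 * 60 // 100 = 144 minutes
```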

Other Observations

Some other observations:

  • We realized that most testers were skilled at context-driven testing.
  • Testers were able to adapt to the application and think of end-to-end scenarios with minimal guidance.
  • They added tremendous value through their experience, their mindset and their ability to extrapolate from the application in front of them.
  • In an extremely short time, they came up with various combinations of valid and invalid scenarios, which improved test coverage.
  • Critical defects made up 40% of what was found; with so many testers exercising different scenarios, strong test coverage was virtually guaranteed.
  • "The more the merrier" applies well to this form of testing, since scenario coverage improves with every additional tester.
  • The crowd let us simulate end-user scenarios across varying internet bandwidths and devices.

Through this initiative, we validated our hunch, and what we had heard from a lot of people in the industry: crowdsourced testing works best with applications that have a large global user base, like mobile apps or games. We believe it would be time-consuming and strenuous to involve external testers in crowd testing a product that requires a lot of internal communication across groups. All in all, this form of testing makes for an interesting venture and definitely has an exciting future ahead. We continue to watch and learn from what companies do globally in this space. What have your experiences with crowd testing been?

Vijeethkumar Chathu |  Test Manager | Zen Test Labs


First dog fooding, now crowd sourcing…next crowd feeding?

When dogfooding was introduced by Microsoft manager Paul Maritz in 1988, it caught on like wildfire in the software space. Conceptually, dogfooding had existed in various forms before this point, but Microsoft was one of the early adopters when it came to incorporating it into their product development cycles. Dogfooding essentially meant that internal users became early adopters of all new technology. Typically used with pre-release or beta versions of products, it gave product teams a bird's-eye view of how their products would work in "real-world" situations. Forcing those who build products to use them runs counter to the entire process of testing, as more often than not they are blind to usability issues or are too advanced as users of the product. Hence, while a lot of companies still use dogfooding to minimize the risk of critical failure, there is an increasing trend to leverage large user bases for testing too.

This is where crowdsourced testing has started to kick in. Testing companies now provide platforms where product companies can test their products at very low cost, typically paying a rate per bug detected. In turn, these companies open the platform to a community of testers who register to test voluntarily or as part of a competition, and who get paid per bug they detect. While this kind of testing has opened up an unexplored talent pool (unbiased, cross-geographic and large) at low cost, the need to maintain independent testing, whether in-house or outsourced, remains. In addition, since there is no direct control over this crowd of testers, it remains an undependable source.

The ideal way forward would be a single platform that integrates in-house testers (dogfooders), outsourced testers and crowd testers. I refer to this concept as crowd feeding. Each product should have a crack team of testers drawn from these three channels, nurtured over a period of time, with significant understanding of the application they are testing. This is akin to creating an elite panel of testers from the three channels that grows in experience over time.

The reason I believe all three channels are critical to successful testing:

  1. In-house testers/dogfooders – Advanced users with in-depth knowledge of the product
  2. Outsourced testers – Intermediate/advanced independent users with in-depth knowledge of testing
  3. Crowdsourced testers – A cross-section of low-cost testers with a diverse mix of "real-world" situational experience

It would be interesting to see how crowdsourced testing and dogfooding evolve over this year.

Hari Raghunathan | AVP | Zen Test Labs

Why automating regression cycles is a no-brainer

It is surprising how many customers find regression cycles painful, a drain on their productivity, or a distraction from critical projects. What surprises me more is that most of them have extremely mature and stable applications and testing processes. Yet when regression cycles come around, they scramble to execute the entire suite and face the dilemma of having to deploy before completing the cycle.

Manual regression testing can be a mundane and boring activity for most test teams! Going through screens, repeating the same steps and not finding bugs puts a tester's concentration to the test. It also risks critical bugs escaping due to the way these suites are structured. Over years of executing numerous regression cycles, I have seen that while automating regression is a great way to address the issue, other factors need to be considered to make regression cycles a cakewalk. Here are some of the steps my teams find useful while automating regression suites:

  • Find out how many of the existing test cases are automatable
  • Find the coverage gap to identify the ideal number of test cases
  • Run a test optimization drive (based on scientific methods) to arrive at the optimum number of test cases while ensuring maximum coverage
  • Do not jump straight into QTP (explore other low-cost options, including Selenium)
  • Use a test automation framework that enables business users to develop, manage and execute automated regression suites
  • Train business users to automate on the fly, and use the automation engineer only for maintenance
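The first two steps above can be sketched in a few lines, assuming a hypothetical list of test cases tagged with whether each is automatable and which requirement it covers. All the names and fields here are invented for illustration, not from any real tool.

```python
# Hypothetical triage of a regression suite: count automatable cases
# and find requirements with no test coverage at all. Field names
# ("covers", "automatable") are illustrative assumptions.

test_cases = [
    {"id": "TC-01", "covers": "login",   "automatable": True},
    {"id": "TC-02", "covers": "login",   "automatable": True},
    {"id": "TC-03", "covers": "reports", "automatable": False},  # needs a manual check
]
requirements = {"login", "reports", "payments"}

# Step 1: how many existing cases can be automated?
automatable = [tc for tc in test_cases if tc["automatable"]]

# Step 2: the coverage gap = requirements no case touches
coverage_gap = requirements - {tc["covers"] for tc in test_cases}

print(f"{len(automatable)}/{len(test_cases)} cases automatable")
print("uncovered requirements:", coverage_gap)
```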

I acknowledge that one can never achieve 100% test automation, but the way I look at it, if you can achieve a significant amount (70–80%) you will end up with a dramatic reduction in the time taken to execute regression cycles. Freeing dedicated testers from this process also truly empowers your test teams to focus on critical projects.
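To see why 70–80% automation already pays off, here is a back-of-the-envelope calculation. The 10-day manual cycle and the 5% relative runtime for automated cases are assumed figures for illustration, not measurements.

```python
# Illustrative arithmetic only; the 10-day cycle and the 5% runtime
# for automated cases are assumptions, not measured figures.

manual_cycle_days = 10.0
automated_share = 0.8      # 80% of cases automated
automation_speedup = 0.05  # an automated run takes 5% of the manual time

new_cycle = (manual_cycle_days * (1 - automated_share)              # still manual
             + manual_cycle_days * automated_share * automation_speedup)
# 10*0.2 + 10*0.8*0.05 = 2.0 + 0.4 = 2.4 days, roughly a 4x reduction
```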

Hari Raghunathan | AVP | Zen Test Labs

Upcoming testing trends?

I was at STARWEST, California earlier this month, and it was interesting to hear some of the industry experts talk about the current challenges software testers face. Even though I could not make it to every session, given that we were presenting the last keynote, the ones I did attend were pretty good.

Most of the talk I heard was around how testing and testers today find themselves in a state of flux.

Development cycles are getting shorter, and the ability to test rapidly in an agile mode is what companies want going forward. There was some talk about how testers need to develop programming skills to stay ahead of the curve. I heard a few speakers say that you can't dedicate time specifically to testing, a few mentioned that you need to throw the product out to users and let them test, while others talked about how crowds could be sourced to find defects. I think this entire line of thinking was coming from one section of the software testing industry; i.e., those developing apps for mobile, web or cloud.

Take the case of our business: software testing in banks is an extremely critical activity. Not only does it involve large volumes of financial data, it also invariably carries a lot of regulation and compliance. If I told my customer, "I will use a crowd to test your application, which you can then give your users to validate, and all of this will save you 20 days", they'd just go elsewhere. To a bank, the risk of losing 20 million is far greater than the value of saving 20 days. Having said that, here's what I feel: going forward, software testing will evolve into two areas, agile testing and traditional testing.

In the agile mode, most companies will be looking at optimizing, automating and crowd testing in a big way. The ideal scenario here would be a system that tells developers whether they are breaking the code while developing, and where they are breaking it. Companies will have a crack team of testers who can test and fix on the fly, and the crowd will be used to catch anything that got through.
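A toy sketch of that ideal: run a suite of quick checks on every change and report exactly which ones broke. The check functions here are invented placeholders; in practice this would be a CI job or pre-commit hook running your real test suite.

```python
# Toy "am I breaking the build, and where?" check. The individual
# checks are hypothetical placeholders for real unit tests.

def check_login():    return True
def check_reports():  return True
def check_payments(): return False   # pretend this change broke payments

CHECKS = [check_login, check_reports, check_payments]

def broken_checks():
    """Run every check and return the names of those that fail."""
    return [c.__name__ for c in CHECKS if not c()]

failures = broken_checks()
if failures:
    print("Change breaks:", ", ".join(failures))  # tells you *where*
```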

Traditional testing will evolve to be a process wherein regression suites will be automated, business users will manually test releases with an automation engineer automating on the fly. Hybrid test automation frameworks will drive the change in this direction. Areas like performance testing and security will continue to remain specialized areas.

Be it the agile or the traditional mode, testers will not be able to survive without at least one of three skillsets: test automation, domain knowledge and, the arguable one, programming.

What do you think?

Hari Raghunathan | AVP | Zen Test Labs

Follow me on Twitter: hariraghunathan