When dogfooding was introduced by Microsoft manager Paul Maritz in 1988, it caught on like wildfire in the software space. Conceptually, dogfooding had existed in various forms until that point, but Microsoft was among the early adopters in incorporating it into their product development cycles. Dogfooding essentially meant that internal users became early adopters of all new technology. Typically used with pre-release or beta versions of products, it gave product teams a bird's-eye view of how their products would work in "real-world" situations. Yet forcing those who build a product to also test it runs counter to the intent of testing: more often than not, builders are blind to usability problems or are too advanced as users of the product. Hence, while many companies still use dogfooding to minimize the risk of critical failure, there is an increasing trend to leverage large user bases for testing as well.
This is where crowdsourced testing has started to kick in. Testing companies now provide platforms where product companies can test their products at very low cost, typically paying a rate per bug detected. In turn, these companies open the platform to a community of testers who register to test voluntarily or as part of a competition, and who are paid per bug they detect. While this kind of testing has opened up a previously unexplored talent pool (unbiased, cross-geographic, and large) at low cost, the need to maintain independent testing, either in-house or outsourced, remains. In addition, since there is no direct control over this crowd of testers, it continues to be an undependable source.
The ideal way forward would be a single platform that integrates in-house testers or dogfooders, outsourced testers, and crowd testers. I refer to this concept as crowd feeding. Each product should have a crack team of testers drawn from across these three channels, nurtured over a period of time, with a significant understanding of the application they are testing. This is akin to creating an elite panel of testers from the three channels that grows in experience over time.
The reason all three channels are critical to successful testing:
- In-house testers/dogfooders – advanced users with in-depth knowledge of the product
- Outsourced testers – intermediate/advanced independent users with in-depth knowledge of testing
- Crowdsourced testers – a cross-section of low-cost testers with a diverse mix of "real-world" situational experience
It will be interesting to see how crowdsourced testing and dogfooding evolve over the coming year.
Hari Raghunathan | AVP | Zen Test Labs