OWASP Testing 101 (Part 2)

In my previous post, I wrote about Broken Authentication, Session Management and Cross-Site Scripting. Today, I will continue with some more checkpoints to keep in mind while performing OWASP testing.

Insecure Direct Object References
This involves modifying URL parameter values that are used directly to retrieve database records belonging to other users. If an ID or parameter in the URL is modified and the page refreshed, the application should not fetch a record belonging to another user. This is the script we followed to test the vulnerability:

  1. Log into an application
  2. Navigate to the page where the value of a parameter is used directly to retrieve a database record, e.g. an invoice page with URL http://foo.bar/somepage?invoice=12345
  3. Modify the URL with a different invoice number belonging to another user, e.g. http://foo.bar/somepage?invoice=7985, and hit enter
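The manual steps above can be sketched as a small automated probe. The cookie value and the status-code interpretation below are assumptions for illustration, not part of any specific tool:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_probe(base_url, invoice_id, session_cookie):
    """Build a request for an invoice we do NOT own, re-using our own session."""
    req = Request(f"{base_url}?{urlencode({'invoice': invoice_id})}")
    req.add_header("Cookie", session_cookie)
    return req

def classify_response(status_code):
    # A 200 on another user's record suggests an IDOR flaw;
    # 401/403/404 is the expected denial.
    if status_code == 200:
        return "potentially vulnerable"
    if status_code in (401, 403, 404):
        return "access denied (expected)"
    return "inconclusive"
```

Sending the probe with a second user's session and classifying the status code automates the refresh-and-observe step described above.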

Security Misconfiguration
Security misconfiguration occurs due to poor configuration of an application (at the server or application level), which makes it vulnerable to malicious attacks. The application might be vulnerable to changes in website settings, unauthorized access or other unintended actions that divulge informative data or user details. This is how we tested the application for security misconfiguration:

Verify 404 Error message:

  1. Launch an application
  2. Manipulate the URL by deleting the directory structure and directly entering the page name
  3. Verify that a generic “Server Error in ‘/’ Application” message is displayed. The application should not return extra information about the page or directory listings.

Intentionally crash the application using any of the following options where applicable and verify the resulting error page:

  1. Change the DB configuration by providing invalid credentials OR
  2. Type only the domain name in the URL and hit enter
  3. Verify that a generic error message is displayed
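A minimal sketch of the "no extra information" check: scan the error page body for tell-tale strings. The marker list is illustrative and depends on the server stack being tested:

```python
LEAK_MARKERS = (
    "Index of /",    # Apache-style directory listing
    "Stack trace:",  # framework stack trace
    "at System.",    # .NET stack frames in the page
    "ORA-",          # raw Oracle error codes
)

def leaked_details(error_page_body):
    """Return any tell-tale strings found in an error response body.
    An empty result means the page reveals nothing obvious."""
    return [marker for marker in LEAK_MARKERS if marker in error_page_body]
```

Running this against every error page captured during the crash tests above flags responses that expose internals.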

Sensitive Data Exposure
Even if an application is password protected, sensitive data such as credit card details, tax IDs and financial details should be encrypted or hashed in the database and masked when displayed on the front end. TLS/SSL should be used for transactions involving this type of data. This is how we tested the application for sensitive data exposure:

  1. Log into the application
  2. Navigate to My Profile / Password Reset page / My Account page
  3. Check the password, credit card number and SSN fields
  4. Launch the application with HTTP in the URL
  5. Check if the application is redirected to HTTPS
  6. The web application should be SSL Enabled and the URL should redirect to HTTPS
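The redirect check in steps 4-6 boils down to inspecting the status code and Location header of the plain-HTTP response; a sketch of that logic, independent of any particular HTTP client:

```python
def redirects_to_https(status_code, location_header):
    """True only if the plain-HTTP request was answered with a redirect
    whose Location header points at an https:// URL."""
    return status_code in (301, 302, 307, 308) and \
        (location_header or "").startswith("https://")
```

Feeding this the response to `http://` requests for each entry page confirms that the site forces HTTPS everywhere, not just on the login page.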

Make the following checks for sensitive data:

  • It should be masked in the application
  • It should not be cached
  • Auto complete should be disabled for forms containing sensitive data
  • CC/Account Number, Expiry/CVV Number etc., shouldn’t be exposed as clear text. Only the last four digits should be visible E.g. – **********1234
  • Account information (Account No., Routing No.) should be masked and stored in the database.  Account details should be masked on the receipt screen. E.g. – **********1234
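The masking rule in the last two points can be expressed as a small helper; the padding character and visible-digit count are illustrative defaults:

```python
def mask_account_number(number, visible=4, pad_char="*"):
    """Mask all but the last `visible` digits, e.g. **********1234."""
    digits = str(number)
    if len(digits) <= visible:
        return digits  # too short to mask meaningfully
    return pad_char * (len(digits) - visible) + digits[-visible:]
```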

Missing Function Level Access Control
This is to verify user level access control of an application. Non-admin users should not be able to access screens that can only be accessed by admin users.

  1. Create two users, one with an admin role and another with a non-admin role
  2. Log in as admin and verify that the application grants functional and access privileges to the admin user
  3. Log in as a non-admin user and verify that the restricted module is not accessible
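The non-admin check in step 3 can be automated by forced-browsing a list of restricted URLs with the non-admin session. The paths and the `fetch_status` callable below are hypothetical placeholders:

```python
ADMIN_ONLY_PATHS = ["/admin/users", "/admin/reports"]  # hypothetical restricted pages

def forced_browsing_findings(fetch_status, paths=ADMIN_ONLY_PATHS):
    """Call `fetch_status(path)` with a NON-admin session for each restricted
    path; any path that answers 200 is an access-control finding."""
    return [path for path in paths if fetch_status(path) == 200]
```

In a real run, `fetch_status` would issue an HTTP request with the non-admin user's cookies and return the response status.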

This project helped me experience a different flavor of testing and made me aware of the fact that applications are very vulnerable to malicious attacks and fraudulent users. If applications are not tested for security, then important user data and information is in danger of being compromised. Earlier, I used to test applications believing that functionality was the most important aspect, but now I have realized that for a robust and secure application, both functional as well as OWASP (security) testing are important.

Vasim Khan | Zen Test Labs


Automating the Business Analyst

Business Analysts (BAs) play a pivotal role in the success of technology projects. BAs are expected to assume roles that go well beyond just defining and tracking requirements. Some of these roles include Business Planner, Systems Analyst, Project Manager, Subject Matter Expert, Data Analyst, Application Analyst, Tester…well, this list can go on! The biggest issue with these ever-changing roles is that the attributes and dispositions expected of a Business Analyst are so wide that it doesn’t feel like the roles are being carried out by the same person.

Given this dynamic, our experience has been that BA’s play a critical role in ensuring end quality of a product. However, the role of BA’s does not end here. Invariably they are the ones involved in ensuring that products work the way they should post implementation; i.e., in Business as Usual modes. Thus BA’s play a crucial role in designing test programs and managing them in the long run to ensure defects are caught in time and addressed.

At Zen Test Labs, we have long been advocates of easing the lives of BAs when it comes to their roles in testing. It is not desirable to have BAs test, but given that it is inevitable, it makes sense to automate large parts of the process so that BAs can seamlessly create and execute tests. We have put together a whitepaper that talks about marrying a Business Process Model Based Test approach to Scriptless Test Automation, thus ensuring that testing is synchronized across business and technology operations.

Download the whitepaper today to learn more about how you can automate the business analyst!

How Big is Big Data?

There has been a lot of buzz around big data lately. The volume of data we’re handling is growing exponentially with the popularity of social media, digital pictures, videos, and data from sources like sensors, legacy documents, weather and traffic systems to name a few. Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone, states a report from IBM. According to a report by analyst firm IDG, 70% of enterprises have either deployed or are planning to deploy big data projects and programs this year due to the increase in the amount of data they need to manage.

I’d like to ask you, what is the maximum database size you have seen so far? A professional in database and related subjects may be able to answer this question. But if you do not know anything about databases, it might not be possible for you to answer this.

Let us take the example of the Aadhaar card. UIDAI (the Unique Identification Authority of India) has issued about 25 crore (250 million) Aadhaar cards in India so far. Each record is around 5 MB (photo, fingerprints and scans), so the existing database size would be

5 MB × 25,00,00,000 = 1,250 TB ≈ 1.25 PB (1,000 TB ≈ 1 petabyte)

On average, UIDAI issues about 1 million Aadhaar cards each day, so the database grows by roughly 5 TB per day. The current population of India in 2014 is 1,270,272,105 (1.27 billion), so the minimum database size required to cover everyone would be over 6 PB. Is this database big enough? Probably not.
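The back-of-the-envelope arithmetic can be checked in a few lines, using the approximate figures quoted above:

```python
MB_PER_CARD = 5                # photo, fingerprints, scans
CARDS_ISSUED = 250_000_000     # 25 crore issued so far
CARDS_PER_DAY = 1_000_000      # average daily issuance
POPULATION = 1_270_272_105     # India, 2014

existing_tb = MB_PER_CARD * CARDS_ISSUED / 1_000_000          # MB -> TB
growth_tb_per_day = MB_PER_CARD * CARDS_PER_DAY / 1_000_000   # TB/day
full_rollout_pb = MB_PER_CARD * POPULATION / 1_000_000_000    # MB -> PB
```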

Facebook has 680 million monthly active users. Is this big?

Google receives 6,000 million searches per day. Is this big? Maybe.

Storing 6,000 million records is not a big thing; you can use a conventional database like Oracle to store that many records. But it can get more interesting than that. What if I asked you to store the 6,000 million search phrases entered into Google every day for two years, and at the end of it create a report on the 25 most searched keywords related to “cricket”? This might sound insane: 6,000M × 365 × 2 = 4,380 billion records! Even if you are able to store that many records, how can you analyze this data and create reports?

That is where big data technologies help. Big data does not rely on an RDBMS, SQL queries or conventional databases. Instead, it uses tools like Hadoop, Hive and MapReduce. MapReduce is a programming paradigm that allows massive job-execution scalability across thousands of servers or clusters of servers. Hadoop is by far the most popular implementation of MapReduce. It aggregates multiple sources of data for large-scale processing and can also read data from a database to run processor-intensive machine learning jobs. Hive is a SQL-like bridge that lets conventional BI applications run queries against a Hadoop cluster; it has increased Hadoop’s reach by making it more familiar to BI users.
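The map/shuffle/reduce flow can be sketched in miniature. This in-memory version only illustrates the paradigm that Hadoop distributes across a cluster; the phrases and keyword report are toy data:

```python
from collections import defaultdict
from itertools import chain

def mapper(search_phrase):
    # map: emit a (keyword, 1) pair for every word in the phrase
    return [(word.lower(), 1) for word in search_phrase.split()]

def shuffle(pairs):
    # shuffle: group values by key, as the framework does across nodes
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_counts(groups):
    # reduce: sum the counts per keyword
    return {key: sum(values) for key, values in groups.items()}

phrases = ["cricket score", "live cricket streaming", "weather today"]
counts = reduce_counts(shuffle(chain.from_iterable(mapper(p) for p in phrases)))
top_keywords = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
```

At cluster scale, the same three functions run in parallel on splits of the 4,380 billion records, which is exactly what makes the "top 25 keywords" report tractable.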

While Big Data represents all kinds of opportunities for businesses, collecting, cleaning and storing it can be a nightmare. Not only is it difficult to know whether the data is being transmitted properly, but also that the best possible data is being used. Here are some key points to keep in mind while testing big data:

  • Test every entry point in the system (feeds, database, internal messaging, and front end transactions) to provide rapid localization of data issues between entry points.
  • Compare source data with the data landed on Hadoop system to ensure they match.
  • Verify the right data is extracted and loaded into the correct HDFS location.
  • Verify the output data: validate that processed data remains the same even when executed in a distributed environment.
  • Verify the batch processes designed for data transformation.
  • Verify larger volumes of data, faster.
  • Automate testing efforts.
  • Test across different platforms.
  • Treat test data management as the key to effective testing.
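For the source-versus-landed comparison in the list above, one common tactic is to fingerprint each record on both sides and diff the sets. A stdlib sketch, with the delimiter and row shape purely illustrative:

```python
import hashlib

def row_fingerprint(row):
    """Stable hash of one record, comparable across source and target."""
    return hashlib.sha256("|".join(map(str, row)).encode("utf-8")).hexdigest()

def missing_after_load(source_rows, landed_rows):
    """Rows present in the source feed but absent after the Hadoop load."""
    landed = {row_fingerprint(r) for r in landed_rows}
    return [r for r in source_rows if row_fingerprint(r) not in landed]
```

Hashing keeps the comparison cheap even when individual records are wide, and the same fingerprints can be recomputed at each entry point to localize where data went missing.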

The list above is not static and will keep growing as big data keeps getting bigger by the day. Big data amplifies testing concerns intrinsic to databases of any size as well as poses some new challenges. Therefore, a testing strategy is critical for success with big data. Companies that get this right will be able to realize the power of big data for business expansion and growth.

Manoj Pandey | Zen Test Labs

OWASP Testing 101 (Part 1)

I recently performed OWASP testing on an application and wanted to share my experience. The Open Web Application Security Project (OWASP) is a worldwide not-for-profit charitable organization focused on improving the security of software. Their mission is to make software security visible, so that individuals and organizations worldwide can make informed decisions about true software security risks.

Before I talk about my project, here’s a brief description of security testing: It is a process intended to reveal flaws in the security mechanisms of an application that protect data and maintain functionality as intended. It is a type of non-functional testing. The main requirements for security testing are:

  • Confidentiality
  • Integrity
  • Authentication
  • Availability
  • Authorization
  • Non-repudiation

In our project, we used the following checkpoints while performing OWASP testing:

Broken Authentication
User Authentication confirms the identity of the user. But the authentication mechanism may be vulnerable due to flawed credential management functions such as password change, forgot password etc. We used the following checkpoints to safeguard against this:

  • Password Strength: Passwords should have restrictions including a minimum length and a minimum mix of letters, numbers and special characters. E.g. Abcd@#12
  • Password Use: The number of login attempts and failed login attempts should be logged. E.g. users can enter invalid credentials a maximum of 5 times before the account is locked
  • Password Storage: Passwords should be masked while being entered and stored only in hashed or encrypted form. E.g. displayed as ‘*********’ and stored as ‘adas^*da432%324fsdf#’
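The storage checkpoint can be illustrated with salted PBKDF2 hashing from Python's standard library. This is a sketch of the principle, not a prescription for any particular stack, and the iteration count is an illustrative assumption:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor

def hash_password(password, salt=None):
    """Return (salt, digest); only these, never the plain text, are stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)  # constant-time compare
```

A fresh random salt per user defeats rainbow tables, and the constant-time comparison avoids leaking information through timing.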

Session Management
Session IDs should not be exposed in the URL. They should be long, complicated and not easily guessable. The application URL may contain session IDs when a user is logged into the application. Other users should not be able to use these session IDs to log into the application. Also, other users should not be able to copy and paste the URL into another browser and access the application without logging in.

A new session ID should always be created for every authentication. Logging out or clicking on the back button should not navigate the user to the application’s internal page which requires user authentication. The application can also be tested using session timeouts and session expiry details after closing the browser.
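Two of these session checks reduce to simple predicates; the length threshold below is an illustrative assumption, not a standard:

```python
def session_rotated(pre_login_id, post_login_id):
    """A fresh session ID must be issued at authentication; re-using the
    pre-login ID leaves the application open to session fixation."""
    return bool(post_login_id) and post_login_id != pre_login_id

def looks_guessable(session_id, min_length=16):
    # very short or purely numeric IDs are easy to brute-force or predict
    return len(session_id) < min_length or session_id.isdigit()
```

Capturing the session cookie before and after login, then running both checks, automates the core of the session management test above.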

Cross-Site Scripting
Cross-site scripting occurs when malicious scripts are injected into otherwise trusted websites. Once the end user executes the script by accessing the application, the malicious code can access the user’s sensitive data, cookies or session IDs.

Cross-Site Scripting (XSS) has three types:

  • Reflected Cross-Site Scripting: The user is tricked into clicking a malicious link, submitting a crafted form or browsing to a malicious site. This is by far the most common type of vulnerability.
  • Stored Cross-Site Scripting: This is a permanent type of attack since the malicious script is stored in the database. The script is retrieved when the user fetches the record from the database.
  • DOM-based Cross-Site Scripting: In this attack, the malicious payload executes within the DOM environment, causing the client-side code to run in an unexpected manner.

Here’s how we tested XSS for our project:

  1. Log into the application
  2. Navigate to a page with an input field that accepts a large number of characters e.g. Fields accepting custom messages
  3. Enter the following script and click on the save/submit button
    Ex. 1: <script>alert("hacked")</script>
    Ex. 2: <html><body text="green">
    <script>alert('You just found an XSS vulnerability')</script>
  4. The application should not display a pop-up message. Instead, it should display a valid error message such as “Enter a valid string”.
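The pass/fail decision in step 4 can be sketched as a reflection check: if the raw payload appears unencoded in the response, the field is likely vulnerable. The payload mirrors Ex. 1 above:

```python
import html

PAYLOAD = '<script>alert("hacked")</script>'

def reflected_unescaped(page_body, payload=PAYLOAD):
    """True if the raw payload comes back in the response unencoded.
    A safe page reflects it HTML-escaped, e.g. &lt;script&gt;..."""
    return payload in page_body
```

Comparing the response against `html.escape(PAYLOAD)` shows what a correctly encoding application should emit instead.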

I have more to say about the basic concepts of security testing. I will talk about Insecure Direct Object References, Security Misconfiguration, Sensitive Data Exposure and Missing Function Level Access Control in my next blog.

Vasim Khan | Zen Test Labs

Mistakes Still Prevalent in the 3 Decade Old Testing Industry

‘I make one mistake only once’ – the dream statement of most people, processes and organizations.

The software testing industry, though more than three decades old, is still stuck in a spiral of mistakes that should have been overcome before they resulted in grave mishaps. Although a lot has been written about the ‘classic mistakes in software testing’, I want to highlight the mistakes that are still being overlooked at each phase of testing:

1. Requirements Phase:
We miss acquiring articulate information. We don’t explore the requirements using a requirements traceability matrix (RTM), the ideal system for maintaining the relationship of requirements to the design, development, testing and release of software. An RTM also charts out the testable, non-testable and frozen requirements – those that have been finalized and signed off.

2. Test Plan
Anybody who has been in software testing knows the importance a test plan holds. It details the scope of testing, the schedule, test deliverables, the risks involved and release criteria. However, a number of things can go wrong: unclear out-of-scope items and misplaced assumptions can result in misdiagnosed estimates. All too often, under business pressure to release, there is a tendency to plan fewer testing cycles, which adversely affects the quality of testing.

Test design and test data design are also an integral part of the test plan. Faults in them can collapse the test plan: for example, if the test design is too detailed or too brief, if a wrong template is used or if the expected results are missing. Test data design can cause a major glitch if the data is not centralized, rationalized or automatable, or if it is hard coded. Getting these pointers right makes the process efficient and delivers high-quality testing.

3. Test Environment
Failing to establish a separate test environment results in missed critical defects and an inability to cover business flows. Missing preconditions in the environment further prevent the creation of a production-like environment, letting vital bugs slip through.

4. Test Execution
Mistakes still occurring at the test execution stage include not optimizing test cases for execution (resulting in unnecessary effort that translates to delayed release times), a lack of smoke testing and not prioritizing test cases. Prioritizing test cases guarantees maximum coverage and depth in testing, both of which are compromised if it is not done.

These are some of the points that I think are crucial. My next post will cover defect analysis and bug reporting.

Zen Test Labs

Building a Test Centre of Excellence: Experiences, Insights and Failures

As organizations mature in their Testing Processes, the perennial quest to achieve ultimate excellence has led them towards attempts to establish the “Test Centre of Excellence” better known as TCoE. Many such initiatives have been plagued with issues ranging from partial implementations to complete abandonment midway. Additionally, most TCoE initiatives find heavy resistance and inertia within teams as it is perceived as a threat to their independence and way of doing things.

At the heart of some of these issues lies poor alignment to business goals, poor ROI analysis prior to investing, poor communication and incorrect choice of models to centralize amongst many others. Drawing from their experience of consulting with organizations on TCoE initiatives and building one for their own, Krishna and Mukesh have written a whitepaper to share insights, experiences and lessons learnt from both successes and failures.

Download the whitepaper to learn how to go about creating your own TCoE while overcoming the common and not so common challenges you will face along the way. Draw on their experience to troubleshoot some of your unique problems.

Communication- A Key Skill to Excel at Testing

Enabling better communication is not a one-time activity; it requires continuous effort across the company. Good internal and external communication is extremely important to a business’s success. To work together effectively, there must be clear and coherent communication among all departments.

Here are a few scenarios wherein communication gaps may arise and lead to poor quality:

1. Continuously Changing Requirements:
At times, requirement changes are implemented directly without updating the specification document. In such cases, there is a chance that the changed requirements remain untested or are tested incorrectly.
Any change in the requirements should be communicated to all stakeholders, and the specification document should be updated in a timely manner.

2. Configurations:
Lack of clarity from the stakeholders on the configurations to be tested can lead to wasted effort and extra work. Configuration testing can be expensive and time consuming. Investment in hardware and software is required to test the different permutations and combinations. There is also the cost of test execution, test reporting and managing the infrastructure.
Increasing communication between development, QA and stakeholders can help deal with these challenges.

3. Team Size:
When team sizes are large, some members may miss changes in requirements or may not receive updates on project activities. This can lead to severe problems or even project failure. Each team member should be kept abreast of project activities through a log or other means.

4. Changes in  Application Behavior not Communicated:
Continuous changes in application behavior may lead to requirements being tested incorrectly. All functionality implemented in the application should be frozen during testing; if any changes are made, they should be communicated to the testing team in a timely manner.

5. Unclear Requirements:
Complex requirements that contain insufficient data may be difficult to understand and therefore, may lead to improper testing. The functional/technical specification documents should be clear and easy to understand; they should contain a glossary, screenshots and examples wherever necessary.

The path to project success is through ensuring that small communication problems are eliminated completely before they build up, so that the message is delivered correctly and completely. Instead of discovering problems, we should figure out how to stop them from appearing in the first place.

Poonam Rathi | Test Consultant | Zen Test Labs