When we give top-down estimates for performance testing scope, we normally consider only the application type and the kind of protocol. Once the application is ready, we often discover that the performance testing effort needed is much higher because of factors such as environment availability, unstable code, or an incompatible tool. It may even reach the point of reselecting the tool or rewriting the performance test strategy. This can lead to surprises for customers, who normally expect results delivered according to their project schedule.
To avoid such situations, we can consider the following five points and set customer expectations from the beginning:
- Protocol: Even if the application under test is web-based, we need to examine the protocols it uses in detail. We cannot assume that HTTP or HTTPS is the default for a web-based application; it could be a combination of protocols. If the application is being performance tested for the first time, it is better to include a proof-of-concept (POC) phase in the performance testing lifecycle to freeze the protocol clearly before proceeding further.
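A protocol POC can start with something as simple as classifying the schemes that appear in captured request URLs before moving on to full traffic capture. The sketch below is a minimal, hypothetical illustration; the URLs are placeholders, and a real POC would record live traffic (for example through a proxy) to identify the complete protocol mix.

```python
from urllib.parse import urlparse

# Map common URL schemes to the protocol a test script must emulate.
# This is a simple heuristic sketch; a real POC captures live traffic.
KNOWN = {
    "http": "HTTP",
    "https": "HTTPS",
    "ws": "WebSocket",
    "wss": "WebSocket (TLS)",
}

def guess_protocol(url: str) -> str:
    """Classify a captured URL by scheme, flagging anything unfamiliar."""
    scheme = urlparse(url).scheme.lower()
    return KNOWN.get(scheme, "unknown - needs traffic capture in the POC")

# Hypothetical captured endpoints:
print(guess_protocol("https://shop.example.com/cart"))  # HTTPS
print(guess_protocol("wss://chat.example.com/socket"))  # WebSocket (TLS)
```

Anything classified as unknown here is exactly the case the POC phase exists to resolve before the tool and strategy are frozen.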
- Testing tool: If the customer already has a performance testing tool, we should consider it as the first choice. If the application has already been tested with that tool, we may not need any new tools. If the application is new, we may end up using a different tool because of compatibility issues, so a POC is highly recommended to confirm tool compatibility whenever the application is being developed for the first time.
- Application type: Web applications generally fall into two major types. A business process might need only one user role, as in a simple online shopping scenario. In a workflow application, a business process might require multiple roles to complete all the activities involved. Scripting, test data preparation, application data preparation, test execution, and so on can therefore differ significantly, so we need to ask specifically what kind of application is being given for testing.
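The workflow case can be illustrated with a minimal sketch: two simulated virtual-user roles (a submitter and an approver) must both run before the business process completes, which is why multi-role applications need more scripting and coordinated test data. The role names and queue hand-off below are illustrative assumptions, not any particular tool's API.

```python
import queue
import threading

approvals = queue.Queue()  # hand-off between roles, mimicking workflow state

def submitter(n: int) -> None:
    # Role 1: creates work items, as a shopper or requester would.
    for i in range(n):
        approvals.put(f"request-{i}")

def approver(n: int, done: list) -> None:
    # Role 2: picks up each item and completes the workflow.
    for _ in range(n):
        done.append(approvals.get())

done: list = []
t1 = threading.Thread(target=submitter, args=(3,))
t2 = threading.Thread(target=approver, args=(3, done))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(done))  # 3 - both roles were required to finish the process
```

A single-role scenario would need only the first script; the workflow scenario needs both, each with its own credentials and test data, which is where the extra effort comes from.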
- Architecture: The architecture can change the overall design of the performance testing. For example, with a cluster-based architecture it may be possible to narrow down the causes of a bottleneck quickly by testing nodes separately. With a conventional two- or three-tier architecture running on a couple of powerful servers, the test strategy needs to be different to verify scalability.
- Load generation: If the requirement is to validate network latency by injecting load from multiple locations around the world, we may incur high setup costs and long lead times. If the requirement is only to verify different transactions from the client perspective, we may avoid the higher setup costs, travel, multi-location licenses, and so on.
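For the simpler client-perspective case, response-time measurement can be sketched with nothing more than a timer around each transaction. The `transaction()` stub below is a hypothetical stand-in for a real client call such as an HTTP request; the percentile calculation is the kind of summary most load tools report.

```python
import statistics
import time

def transaction() -> None:
    # Placeholder for a real client-side call (e.g. an HTTP request);
    # a fixed sleep stands in so the sketch is self-contained.
    time.sleep(0.01)

samples = []
for _ in range(20):
    start = time.perf_counter()
    transaction()
    samples.append(time.perf_counter() - start)

p90 = statistics.quantiles(samples, n=10)[-1]  # rough 90th percentile
print(f"median={statistics.median(samples)*1000:.1f} ms, "
      f"p90={p90*1000:.1f} ms")
```

Multi-location latency validation, by contrast, requires injectors deployed in each geography, which is where the extra cost and setup time come from.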