Performance Testing Process
Performance testing of an application is carried out against its NFRs (Non-Functional Requirements). These are normally gathered in a discussion/meeting with the client/Business Analysts during the project planning phase and are generally expressed as SLAs: page or transaction response time, Hits per Second, Throughput, etc.
Planning a Performance Test
The following questions need to be answered by the client/BA while planning a performance test –
1) Business processes to be tested
2) Anticipated user load on the application
3) Division of the user load among the business processes (based on current logs)
4) Architecture of the application
5) Environment for performance testing
For example, suppose point 1 identifies three business processes to be recorded for performance testing: Business Process 1, Business Process 2 and Business Process 3. If point 2 puts the anticipated load at 200 virtual users and the split across these processes is 50%, 30% and 20% respectively, the actual virtual user load for the three business processes will be 100, 60 and 40 respectively.
Designing Performance Test Scenarios
Once performance test planning is complete, the performance testing team designs test cases based on the business processes provided by the client/BA. These include the step-by-step functionality with expected results and the data requirement for each page. This helps the performance tester automate the scripts and anticipate the master/external data requirements for the automated scripts.
Scripting Performance Test Scenarios
After test case design is complete, the performance tester records the business processes using the testing tool (LoadRunner) and then enhances the scripts (transactions, think time, parameterization, correlation). These scripts are reviewed by a technical lead/performance architect.
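For illustration, a minimal sketch of an enhanced web (HTTP/HTML) Vuser Action section is shown below. The URLs, transaction names, parameter name and correlation boundaries are purely illustrative assumptions, not taken from any real application.

/* Action.c - minimal sketch of an enhanced Vuser script (web HTTP/HTML protocol).
   All URLs, names and boundaries below are illustrative assumptions. */
Action()
{
	/* Correlation: capture a dynamic session id from the login response
	   (boundaries are placeholders for the real server response). */
	web_reg_save_param("SessionId", "LB=sessionId=", "RB=\"", LAST);

	lr_start_transaction("BP1_01_Login");

	web_url("Login",
		"URL=http://appserver/login",
		"Resource=0",
		"Mode=HTML",
		LAST);

	lr_end_transaction("BP1_01_Login", LR_AUTO);

	lr_think_time(5);   /* simulate user think time between steps */

	lr_start_transaction("BP1_02_SubmitOrder");

	/* Parameterization: {UserName} is read from a parameter (data) file. */
	web_submit_data("SubmitOrder",
		"Action=http://appserver/order?session={SessionId}",
		"Method=POST",
		"Mode=HTML",
		ITEMDATA,
		"Name=user", "Value={UserName}", ENDITEM,
		"Name=qty",  "Value=1",          ENDITEM,
		LAST);

	lr_end_transaction("BP1_02_SubmitOrder", LR_AUTO);

	return 0;
}

Wrapping each logical step in lr_start_transaction/lr_end_transaction is what lets the Controller and Analyzer report response times against the SLAs agreed during planning.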
Test Data Generation
The testing/development team generates the data required for test execution.
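As a sketch of how that data typically reaches the scripts, the generated records can be saved as a delimited parameter file in VuGen and referenced by parameter name. The file name, column names and values below are purely illustrative.

/* UserDetails.dat - illustrative parameter file; the first row holds the
   column (parameter) names, subsequent rows hold the generated test data:

       UserName,Password
       perfuser01,Welcome1
       perfuser02,Welcome1

   Inside the script the columns are used as {UserName} / {Password}, or can
   be read explicitly for logging/debugging: */
lr_output_message("Iteration running with user: %s",
		lr_eval_string("{UserName}"));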
Test Scenario Design using LoadRunner Controller
The performance tester designs the scenario/mix, schedules the Vuser groups and configures the monitors. For configuring operating system counters, the testing team can use two approaches –
1) UNIX/Solaris monitor configuration using the rstatd daemon (please refer to the Solaris Counters section)
2) Windows monitor configuration using perfmon (please refer to the Windows Counters section)
Test Execution and Monitoring using LoadRunner Controller
After all test scenarios are designed, the tester executes the test run and monitors the online graphs, e.g. server resource graphs, Running Vusers, Hits per Second, Throughput, Transaction Response Time, Connections per Second, etc. The server logs can also be monitored during test execution.
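The transaction graphs are only as meaningful as the pass/fail status fed into them, so scripts commonly verify the server response before closing a transaction. A minimal sketch follows; the confirmation text and URL are purely illustrative assumptions.

/* Register a text check for the next response; if the text is not found the
   step fails, and LR_AUTO then records the transaction as failed, keeping the
   Transaction Response Time and pass/fail graphs honest. */
web_reg_find("Text=Order Confirmation", LAST);

lr_start_transaction("BP1_03_ConfirmOrder");

web_url("ConfirmOrder",
	"URL=http://appserver/order/confirm",
	"Resource=0",
	"Mode=HTML",
	LAST);

lr_end_transaction("BP1_03_ConfirmOrder", LR_AUTO);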
Test Result Analysis using LoadRunner Analyzer
After test execution the results are collated and, using the LR interactive graphs, the report can be published as an HTML, Excel and/or Word document. This report contains the test summary and graphs as per the requirement. (Please refer to the Analyzer Graphs section.)