Load Testing

Load Testing // SVT // SnV // Performance Testing Software

Notes

In any load test, what we aim to do is examine the load on a system over a period of time (say one hour) when the system is operating in steady state. This is often some 'peak' load scenario, as we are most often interested in questions such as:

  1. How will the system handle the expected peak daily, weekly and monthly processing loads?
  2. How much load will the system be able to sustain? What is the breaking point?
  3. Are there any particular components of the application that are problematic from a performance perspective?

To establish a steady-state processing load we 'ramp up' the number of logged-in users to bring the number of connections (network, database, etc.) to the level required for the run. Once all users are at this synchronisation point (like athletes at the starting line in a race) we release the run by allowing a proportion of the users to commence work. The number that start to do actual work is usually smaller than the number of logged-in users.

We expect the work being done to generate significantly more load than that generated simply by a virtual user being logged in. There is usually a cost associated with having a logged-in user (memory utilization, open sockets, database connections, objects and processes in the application server, and so on). Sometimes this can be significant, though we generally expect it to be small in the overall system utilization picture: the 'real' work being done should drown out the 'noise' of logged-in users waiting to do work.

The load on the system, then, is the sum of the actual tasks being performed at any particular point in time, and this is the basis of the load 'scenario' for any test. So while the number of users doing work is important, the actual amount of work being done is usually much more so, because this is what consumes most of the system resources (CPU, memory, network bandwidth, etc.).
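As a rough illustration of this ramp-up and release mechanism, the sketch below models logged-in virtual users meeting at a synchronisation point before a subset of them begins work. It is a minimal, self-contained Python sketch; the user counts, intervals and task times are assumed figures for illustration only, not measurements from any particular system or tool.

  import random
  import threading
  import time

  USERS = 20           # assumed number of logged-in virtual users
  WORKERS = 10         # assumed subset of users that actually perform tasks
  RAMP_INTERVAL = 0.5  # seconds between successive user log-ins during ramp-up
  RUN_DURATION = 10.0  # seconds of steady-state load for this illustration

  start_line = threading.Barrier(USERS)  # the 'starting line' synchronisation point

  def virtual_user(user_id: int, does_work: bool) -> None:
      time.sleep(random.uniform(0.1, 0.3))   # stand-in for the cost of logging in
      start_line.wait()                      # wait until every user is logged in
      if not does_work:
          time.sleep(RUN_DURATION)           # idle user: stays logged in, holding a session
          return
      end = time.time() + RUN_DURATION
      while time.time() < end:
          time.sleep(random.uniform(0.2, 0.5))  # stand-in for one business task

  threads = []
  for i in range(USERS):
      t = threading.Thread(target=virtual_user, args=(i, i < WORKERS))
      t.start()
      threads.append(t)
      time.sleep(RAMP_INTERVAL)  # ramp up: users join gradually rather than all at once

  for t in threads:
      t.join()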

Any load scenario then consists of two aspects:

  1. The work/tasks to be done (which we specify by task and frequency [e.g. number per hour]), and
  2. The number of users this work is spread over.

The mechanics of implementing the 'scenario' involve distributing the work to be done over both the time available and the number of users available to do the work.
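For example, if the scenario calls for a given number of tasks per hour spread over a given number of users, the pacing interval for each user follows directly. The figures below (600 tasks per hour across 50 users) are assumed purely for illustration:

  def pacing_seconds(tasks_per_hour: float, users: int) -> float:
      # Interval at which each user must start a new task so that the group
      # as a whole achieves the target hourly rate.
      tasks_per_user_per_hour = tasks_per_hour / users
      return 3600.0 / tasks_per_user_per_hour

  # 600 tasks/hour over 50 users -> 12 tasks per user per hour,
  # i.e. each user starts a new task every 300 seconds.
  print(pacing_seconds(600, 50))  # 300.0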

Load Testing Reports

The most significant component of the response time for any report will usually be the actual generation of the report itself. In a well-designed modern network, available bandwidth should be such that the latency for transferring the report document is relatively small, and much smaller than the time taken to generate the report. Of course, this will not be so for pre-generated reports, where the real work has been done during some previous batch window. I believe this latter is the situation with the pre-billing reports.
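A rough back-of-the-envelope check makes the point; the report size and bandwidth below are assumed figures, not taken from the system under discussion:

  def transfer_seconds(report_size_mb: float, bandwidth_mbps: float) -> float:
      # Time to move the finished report document across the network.
      return (report_size_mb * 8) / bandwidth_mbps

  # A 5 MB report on a 100 Mb/s link transfers in about 0.4 seconds,
  # which is small next to a generation step measured in seconds or minutes.
  print(round(transfer_seconds(5, 100), 2))  # 0.4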

Other References

  1. Testing Stubs
  2. Open Source Tools
  3. Performance Testing Tools
  4. LoadRunner FAQ
