Archive for the ‘Performance testing’ category

Performance testing: considerations and best practices

November 16, 2008

Your test environment should be capable of simulating production environment conditions. To do this, keep the following considerations in mind during the test cycle:

  • Do not place too much stress on the client.
  • Create baselines for your test setup.
  • Allow for think time in your test script.
  • Consider test duration.
  • Minimize redundant requests during testing.
  • Consider simultaneous versus concurrent users.
  • Set an appropriate warm-up time.

The next sections describe each of these considerations.



ST&P announced Rockstars of Testing

October 29, 2008

“Software Test and Performance” magazine announced the results of its readers’ vote on automation tools in various categories.

Some of the results:

Data/test performance

HP LOADRUNNER took the most votes in the data/test performance category, which groups products in terms of their ability to apply test data to an application or system and evaluate how
it handles the processing.

Functional test

Once again, HP’s QUICKTEST PROFESSIONAL takes the top prize for functional testing.

Test/QA Management

HP is once again on top with TESTDIRECTOR FOR QUALITY CENTER, which testers voted their favorite tool for test and QA management both this year and last.

Defect/Issue Management

Last year there were two winners in the defect and issue management category: HP TestDirector for Quality Center and the Mozilla Foundation’s Bugzilla were tied. This year, TESTDIRECTOR alone took the top spot and Bugzilla moved to third, edged out by Microsoft’s Visual Studio Team Edition for Software Testers.

To read more, you have to download the November issue of ST&P.

Well, that’s good PR for HP. Good tools at “good” prices 😉

Some best practices in performance testing

October 27, 2008

Defining the number of Virtual Users.

The number of Virtual Users must be close to the number of real users once the application is in production, with a realistic think time applied between pages. Avoid testing with too few Virtual Users and a reduced think time. It could be assumed that the result would be the same, as the number of requests played per second is identical. However, this is not the case, for the following reasons:

  • The memory burden on the server will be different: Each user session uses a certain amount of memory. If the number of user sessions is underestimated, the server will be running under more favorable conditions than in real-life and the results will be distorted.

  • The number of sockets open simultaneously on the server will be different. An underestimation of user numbers means the maximum threshold for open server sockets cannot be tested.

  • The resource pools (JDBC connection pools) will not be operating under realistic conditions. An inappropriate pool size setting might not be detected during the test.
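
To make the difference concrete, here is a rough back-of-the-envelope sketch in Python (all numbers are illustrative, not from any particular project): two configurations can produce the same request rate while putting a very different session and socket burden on the server.

```python
# Compare two load configurations with the same request rate but very
# different concurrency on the server. All figures are made up.

def request_rate(users, think_time_s, response_time_s):
    """Approximate requests/sec: each user completes one request
    every (think time + response time) seconds."""
    return users / (think_time_s + response_time_s)

# Realistic setup: 500 users with a 9.5 s think time, 0.5 s response time
realistic = request_rate(500, 9.5, 0.5)   # 50 req/s, 500 open sessions

# "Shortcut" setup: 50 users with a 0.5 s think time
shortcut = request_rate(50, 0.5, 0.5)     # 50 req/s, only 50 open sessions

# Same request rate, but 10x fewer sessions, sockets, and pooled
# connections are exercised in the shortcut configuration.
print(realistic, shortcut)
```

The request rates match, but memory per session, open sockets, and pool pressure do not, which is exactly why the shortcut distorts the results.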

Using different user accounts and values

Use variables to dynamically modify key values such as user account logins or certain form parameters (such as productID in an e-business application). The main idea is to bypass the various server caches, for the following reasons:

  • Playing the same requests with the same values produces unrealistically high performance, due to the use of various caches: preloading into memory cache, connection pools, system swap…

  • On the other hand, completely disabling the caches (when available) will produce unrealistically poor performance.
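
A minimal sketch of this kind of parameterization, with made-up account names and product IDs standing in for real test data:

```python
# Data-driven parameterization: each virtual-user iteration pulls a
# different login and productID so server-side caches are exercised
# realistically. The account and product values are placeholders.
from itertools import cycle

accounts = ["alice", "bob", "carol"]
product_ids = [101, 102, 103, 104]

account_pool = cycle(accounts)
product_pool = cycle(product_ids)

def next_request_params():
    """Return fresh parameters for the next simulated page request."""
    return {"login": next(account_pool), "productID": next(product_pool)}

# Four consecutive iterations all get different key values
params = [next_request_params() for _ in range(4)]
print(params[0], params[1])
```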

“Warm up” the server before you start

  • After a re-start, don’t hesitate to “warm up” the server with a few calls before generating a sudden, high load which, in addition to being unrealistic, may cause the server to crash. Sending a short, light load beforehand allows certain resources, such as connection pools or thread pools, to be pre-allocated.

  • Run the test for a significant length of time in order to iron out any outliers.

  • Make sure the Load Generators are not overloaded; CPU and memory usage are displayed in real time throughout the test.
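
The warm-up idea can be sketched in a few lines; `make_request` here is a hypothetical stand-in for whatever call your tool actually issues:

```python
# A light warm-up phase: a few spaced-out calls before the real load so
# resources such as connection pools and thread pools are pre-allocated
# rather than created under a sudden spike.
import time

def warm_up(make_request, calls=10, pause_s=0.5):
    """Issue a small number of spaced-out requests before the test."""
    for _ in range(calls):
        make_request()
        time.sleep(pause_s)

# Record what would be sent (no real server involved in this sketch)
issued = []
warm_up(lambda: issued.append("GET /"), calls=3, pause_s=0.0)
print(len(issued))  # 3 light requests before the main load starts
```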

Stop any Virtual Users containing errors

When a Virtual User receives an error, it should normally stop running. If this does not happen, it could continue playing requests that have no meaning. For example, if the user login fails, there is little point sending further browsing or search requests to the application as it will only distort the response time statistics for those pages.

Each Virtual User type may be configured to stop running in case of error or failed assertion.
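
A sketch of the stop-on-error rule, with `login` and `browse` as hypothetical steps:

```python
# If a step fails, the virtual user aborts instead of sending
# meaningless follow-up requests that would distort the statistics.

def run_virtual_user(steps):
    """Execute named steps in order; abort the iteration on the first
    failure and report which steps actually ran."""
    executed = []
    for name, action in steps:
        if not action():
            return executed, f"stopped: {name} failed"
        executed.append(name)
    return executed, "ok"

# A failed login means no browse/search requests are ever sent
failing_login = [("login", lambda: False), ("browse", lambda: True)]
executed, status = run_virtual_user(failing_login)
print(executed, status)
```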

Make Scenarios and Transaction Definitions Granular

Where possible, break scenarios into several smaller scenarios to focus the tests. Make sure transaction definitions are granular enough to be able to pinpoint performance issues to specific GUI actions.
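
One simple way to get granular timings, sketched here as a generic Python context manager rather than any particular tool’s API:

```python
# Time each GUI action as its own named transaction so a slow step can
# be pinpointed instead of being buried in one big scenario timing.
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def transaction(name):
    """Record the elapsed time of the enclosed block under `name`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

with transaction("open_search_page"):
    time.sleep(0.01)          # stand-in for the real GUI action
with transaction("submit_search"):
    time.sleep(0.02)

print(sorted(timings))        # each action is reported separately
```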

Preserve Environment During Recording and Running Load Tests

Make sure that the environment when running a load test is in the same state as it was when the test was recorded. Changes to the operating environment might require tests to be rerecorded.

Validate test data before you start

If you have an Excel file with user accounts, verify them beforehand to make sure they are all valid. Some of them may be disabled, which will cause unnecessary errors on the login page. It’s also worth having a script that verifies the user accounts automatically; you can set it to run before each test run.
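
A minimal sketch of such a validation step, using made-up account records:

```python
# Pre-run test-data validation: filter out disabled accounts before the
# load test so they don't produce spurious login errors. The account
# records below are invented examples.

accounts = [
    {"login": "alice", "enabled": True},
    {"login": "bob",   "enabled": False},  # would fail on the login page
    {"login": "carol", "enabled": True},
]

def validate_accounts(records):
    """Split records into usable accounts and ones to report or fix."""
    good = [r["login"] for r in records if r["enabled"]]
    bad = [r["login"] for r in records if not r["enabled"]]
    return good, bad

good, bad = validate_accounts(accounts)
print(good, bad)
```

In practice the check would attempt a real login per account; the filtering logic stays the same.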

Use different workloads to recognize system behavior

It’s beneficial to run several workloads that can reveal application thresholds and bottlenecks. This can be done using simple increasing models with varying paces and maximum numbers of users.

Make sure that the system can handle a smaller number of users than the maximum.

Sometimes a customer wants to load the system with, for example, 1000 virtual users. In that case, it’s reasonable to run a test with 500 virtual users beforehand to show that the system can handle half of the maximum.
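
One possible way to derive such a plan from the customer’s target (the 25% smoke step and 50% pre-test are illustrative choices, not a rule):

```python
# Derive a short sequence of workload sizes from the target user count
# so thresholds and bottlenecks show up before the full-load run.

def workload_plan(max_users):
    """Return user counts to test in order: smoke, half-max, full target."""
    return [max(1, max_users // 4), max_users // 2, max_users]

print(workload_plan(1000))  # [250, 500, 1000]
```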

Of course, this is far from a complete, full-fledged list of best practices from the world of performance testing.

How to simulate HTTP Basic Authentication in Borland SilkPerformer

September 27, 2008
How Basic Authentication works in SilkPerformer’s BDL: Base64-encode “username:password”, prefix the result with “Basic ”, and send it in the Authorization header. The variables sUserPass and nUserPassLen must be declared, for example:

transaction TMain
var
  sUserPass    : string(200);
  nUserPassLen : number;
begin
  WebBase64Encode(sUserPass, 200, nUserPassLen, "username:password");
  sUserPass := "Basic " + sUserPass;
  WebHeaderAdd("Authorization", sUserPass);
end TMain;
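
For comparison, the same header can be built in a few lines of Python, which makes the mechanics explicit: Base64-encode “username:password” and put it in the Authorization header.

```python
# Build an HTTP Basic Authentication header by hand.
import base64

def basic_auth_header(username, password):
    """Return the Authorization header for HTTP Basic Authentication."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": "Basic " + token}

print(basic_auth_header("username", "password"))
# {'Authorization': 'Basic dXNlcm5hbWU6cGFzc3dvcmQ='}
```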

Article: Defining performance requirements

July 16, 2008

I came across a nice, thorough article dedicated to gauging performance requirements. I believe it’s worth reading.

You can get it here: , Page 18.

Approaches to Performance Testing. Part 2

April 23, 2008

Approaches to Performance Testing. Part 1

18. Load/Volume Testing. Concept

  • In the testing literature, the term “load testing” is usually defined as the process of exercising the system under test by feeding it the largest tasks it can operate with.
  • Load testing is used to determine whether the system is capable of handling various anticipated activities performed concurrently by different users.
  • Load testing gives confidence that the system will serve its customers efficiently under normal conditions.
  • Usually, load tests generate about 80% of the traffic (amount of load) a system can potentially handle.
  • Having large, realistic datasets is extremely important.

19. Load/Volume Testing. Goals

  • Expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc.
  • Ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.

20. Load/Volume Testing. While Executing

  • During the execution of the load test, the goal is to check whether the system is performing well for the specified load or not.
  • To achieve this, system performance should be captured at periodic intervals of the load test.
  • Performance parameters like response time, throughput, memory usage, and so forth should be measured and recorded.
  • This will give a clear picture of the health of the system.
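
A sketch of reducing periodically captured response-time samples to summary numbers (the samples here are invented; a real test would query the system under test or an OS monitor):

```python
# Reduce response-time samples captured at periodic intervals during a
# load test to the usual health numbers.
import statistics

def summarize(samples):
    """Summarize periodic response-time samples (seconds)."""
    return {
        "min": min(samples),
        "max": max(samples),
        "avg": round(statistics.mean(samples), 3),
    }

# response times sampled once per interval; one interval shows an outlier
samples = [0.42, 0.45, 0.44, 0.90, 0.43]
print(summarize(samples))
```

Throughput and memory usage would be sampled and summarized the same way, giving the periodic picture of system health the slide describes.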

21. Stress Testing. Concept

  • Stress testing goes one step beyond load testing and identifies the system’s capability to handle the peak load.
  • In stress testing, think time is not important as the system is stressed with more concurrent users beyond the expected load.
  • Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing).
  • The main purpose behind this madness is to make sure that the system fails and recovers gracefully; this quality is known as recoverability.

22. Stress Testing. Examples

  • Double the baseline number for concurrent users/HTTP connections.
  • Randomly shut down and restart ports on the network switches/routers that connect the servers (via SNMP commands for example).
  • Take the database offline, then restart it.
  • Rebuild a RAID array while the system is running.
  • Run processes that consume resources (CPU, memory, disk, network) on the Web and database servers.
  • This list can be enhanced with your favorite ways of breaking systems.


Approaches to Performance Testing. Part 1

April 16, 2008

1. Diversity of Approaches to Performance Testing

  • There are many variations within the broad framework of performance testing.
  • There is no universal or consistent set of terminology, and many organizations have their own terms, such as “work load testing” and “sweet spot testing”.

2. Amount of Load that is put onto the server

  • It can come from two different areas:
    • the number of connections (or virtual users) that are hitting the server simultaneously
    • the amount of think-time each virtual user has between requests to the server
  • The more users hitting the server, the more load will be generated.
  • The shorter the think-time between requests from each user, the greater the load will be on the server.
  • Keep in mind that as you put more load on the server, the throughput will climb, to a point.
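
The “to a point” part can be sketched with a toy saturation model (the capacity figure is an assumption, not a measurement):

```python
# Below the server's capacity, throughput tracks the offered load;
# past saturation it flattens no matter how much load is offered.

CAPACITY = 120.0  # max requests/sec the server can complete (assumed)

def throughput(offered_load):
    """Observed throughput (req/s) for a given offered load (req/s)."""
    return min(offered_load, CAPACITY)

loads = [30, 60, 90, 120, 150, 180]
print([throughput(x) for x in loads])  # climbs, then flattens at 120
```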

3. Baseline/Performance Testing. Concept

  • Baseline — a range of measurements that represent acceptable performance under typical operating conditions.
  • Testers have a baseline for how the system behaves under normal conditions.
  • Baseline can then be used in regression tests to gauge how well a new version of the software performs.
  • Baseline provides a reference point that makes it easier to spot problems when they occur.

4. Benchmark Testing. Concept

  • The key to benchmark testing is to have consistently reproducible results.
  • Benchmark tests should be used to determine if any performance regressions are in the application.
  • Benchmark tests are great for gathering repeatable results in a relatively short period of time.
  • The best way to benchmark is to change one and only one parameter between tests.

5. Benchmark Testing. “Flat” and “Ramp-Up” Run Modes

  • In the “Flat” run mode, all of the users are loaded at once and then run for a predetermined amount of time.
  • In the “Ramp-Up” run mode, users are loaded step by step.
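
The two run modes can be sketched as simple user-count schedules (step counts and user numbers are illustrative):

```python
# Active-user counts per step for the two benchmark run modes:
# "flat" starts everyone at once, "ramp-up" adds users gradually.

def flat(users, steps):
    """All users active from the first step onward."""
    return [users] * steps

def ramp_up(users, steps):
    """Users added linearly until the target is reached."""
    return [round(users * (i + 1) / steps) for i in range(steps)]

print(flat(100, 4))      # [100, 100, 100, 100]
print(ramp_up(100, 4))   # [25, 50, 75, 100]
```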