Approaches to Performance Testing. Part 2

(Continued from “Approaches to Performance Testing. Part 1”.)

18. Load/Volume Testing. Concept

  • In the testing literature, the term “load testing” is usually defined as the process of exercising the system under test by feeding it the largest tasks it can operate with.
  • Load testing is used to determine whether the system is capable of handling various anticipated activities performed concurrently by different users.
  • Load testing gives the customer confidence that the system can be used efficiently under normal conditions.
  • Load tests usually generate about 80% of the traffic (amount of load) a system can potentially handle.
  • Having large, realistic datasets is extremely important for load testing.
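As a minimal sketch of the idea, the following drives a system at 80% of its assumed capacity with concurrent simulated users. The capacity figure and the stand-in request handler are assumptions for illustration; a real load test would call the actual service:

```python
import concurrent.futures
import time

def handle_request() -> float:
    """Stand-in for the system under test; returns the response time."""
    start = time.perf_counter()
    sum(i * i for i in range(10_000))  # simulated work
    return time.perf_counter() - start

MAX_CONCURRENT_USERS = 50   # assumed capacity of the system
LOAD_FACTOR = 0.8           # load tests target ~80% of capacity
users = int(MAX_CONCURRENT_USERS * LOAD_FACTOR)

# Fire one request per simulated user, all concurrently.
with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
    response_times = list(pool.map(lambda _: handle_request(), range(users)))

print(f"{users} concurrent users, worst response: {max(response_times):.4f}s")
```

In a real harness, `handle_request` would issue an HTTP call and the capacity figure would come from prior capacity testing.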

19. Load/Volume Testing. Goals

  • Expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc.
  • Ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.

20. Load/Volume Testing. While Executing

  • During the execution of the load test, the goal is to check whether the system performs well under the specified load.
  • To achieve this, system performance should be captured at periodic intervals of the load test.
  • Performance parameters like response time, throughput, memory usage, and so forth should be measured and recorded.
  • This will give a clear picture of the health of the system.
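A sketch of summarizing such periodically captured measurements (the sample numbers below are illustrative, not real data):

```python
import statistics

# Response times (seconds) captured at periodic intervals during the load
# test; in a real run these come from the monitoring harness.
samples = [0.12, 0.15, 0.11, 0.34, 0.18, 0.22, 0.13, 0.95, 0.16, 0.14]
test_duration_s = 60.0
requests_completed = 4_800

avg = statistics.mean(samples)
p95 = sorted(samples)[int(0.95 * (len(samples) - 1))]  # 95th percentile
throughput = requests_completed / test_duration_s      # requests per second

print(f"avg={avg:.3f}s  p95={p95:.3f}s  throughput={throughput:.0f} req/s")
```

Percentiles matter here because a healthy average can hide a long tail of slow responses, as the 0.95 s outlier above shows.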

21. Stress Testing. Concept

  • Stress testing goes one step beyond load testing and identifies the system’s capability to handle the peak load.
  • In stress testing, think time is not important as the system is stressed with more concurrent users beyond the expected load.
  • Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing).
  • The main purpose behind this madness is to make sure that the system fails and recovers gracefully; this quality is known as recoverability.

22. Stress Testing. Examples

  • Double the baseline number for concurrent users/HTTP connections.
  • Randomly shut down and restart ports on the network switches/routers that connect the servers (via SNMP commands for example).
  • Take the database offline, then restart it.
  • Rebuild a RAID array while the system is running.
  • Run processes that consume resources (CPU, memory, disk, network) on the Web and database servers.
  • This list can be enhanced with your favorite ways of breaking systems.
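The “run processes that consume resources” example can be sketched as a CPU burner. This is only an illustration; because of CPython’s GIL, a real CPU stress test would use OS processes or a dedicated tool such as stress(1):

```python
import threading
import time

def burn_cpu(seconds: float) -> None:
    """Busy-loop until the deadline to consume CPU time."""
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        pass

duration = 1.0  # assumed stress window; a real test would run far longer
threads = [threading.Thread(target=burn_cpu, args=(duration,))
           for _ in range(4)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
print(f"burned CPU in 4 threads for {elapsed:.1f}s")
```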

23. Stress Testing. Goals

  • However, stress testing does not break the system purely for the pleasure of breaking it, but instead it allows testers to observe how the system reacts to failure.
  • Does it save its state or does it crash suddenly?
  • Does it just hang and freeze or does it fail gracefully?
  • On restart, is it able to recover from the last good state?
  • Does it print out meaningful error messages to the user, or does it merely display incomprehensible hex codes?
  • Is the security of the system compromised because of unexpected failures?
  • And the list goes on.
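The “recover from the last good state” question can be sketched with a toy checkpointing service. All names here are illustrative, and a real system would persist its checkpoint to durable storage:

```python
import json
import os
import tempfile

class Service:
    """Toy service that checkpoints its state so a restarted instance
    can recover from the last good state."""

    def __init__(self, checkpoint_path: str):
        self.checkpoint_path = checkpoint_path
        self.counter = 0

    def do_work(self) -> None:
        self.counter += 1
        # Save the last good state before anything risky happens.
        with open(self.checkpoint_path, "w") as f:
            json.dump({"counter": self.counter}, f)

    def recover(self) -> None:
        with open(self.checkpoint_path) as f:
            self.counter = json.load(f)["counter"]

path = os.path.join(tempfile.gettempdir(), "svc_checkpoint.json")
svc = Service(path)
for _ in range(3):
    svc.do_work()

crashed = Service(path)  # "restart": a fresh instance with empty state
crashed.recover()        # restore from the last good state
print(crashed.counter)   # → 3
```

A stress test would kill the process mid-run and then assert exactly this: that the restarted instance resumes from the checkpoint rather than from scratch.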

24. Configuration Testing. Concept

  • Configuration testing is integrated with performance testing to identify how response time and throughput vary as the infrastructure configuration varies, and to determine reliability and failure rates.
  • Configuration tests are conducted to determine the impact of adding or modifying resources.
  • Verifies whether a system works the same, or at least in a similar manner, across different platforms, Database Management Systems, Network Operating Systems, network cards, disk drives, memory and central processing unit settings, and execution or running of other applications concurrently.
  • The term “compatibility testing” is used synonymously with configuration testing, since compatibility issues are the matter of interest here.

25. Scalability Testing. Concept

  • Scalability testing, part of the battery of non-functional tests, measures a software application’s capability to scale up or scale out in terms of any of its non-functional capabilities: the user load supported, the number of transactions, the data volume, and so on.
  • The purpose of scalability testing is to determine whether the application automatically scales to meet the growing user load.

26. Scalability Testing. Scale Up / Scale Vertically

  • To scale vertically (or scale up) means to add resources to a single node in a system, typically involving the addition of CPUs or memory to a single computer.
  • A server that is twice as fast typically costs more than twice as much.
  • Making fuller use of the resources already on a node can also be called “scaling up”, for example increasing the number of Apache daemon processes currently running.

27. Scalability Testing. Scale Out / Scale Horizontally

  • To scale horizontally (or scale out) means to add more nodes to a system, such as adding a new computer to a distributed software application.
  • An example might be scaling out from one web server system to three (organizing a cluster system).

28. Scalability Testing. Ideal Scalability
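Ideal scalability means throughput grows linearly with the resources added: doubling the nodes doubles the throughput. A minimal sketch comparing illustrative (made-up) measurements against that linear ideal:

```python
# Illustrative throughput measurements: requests/second per cluster size.
measured = {1: 1000, 2: 1900, 4: 3400, 8: 5600}
baseline = measured[1]

for nodes, actual in measured.items():
    ideal = baseline * nodes     # ideal: perfectly linear scaling
    efficiency = actual / ideal  # 1.0 means ideal scalability
    print(f"{nodes} node(s): {actual} req/s, efficiency {efficiency:.0%}")
```

Real systems fall below the ideal line as node counts grow, because of coordination overhead between nodes; scalability testing quantifies exactly how far below.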

29. Scalability Testing. Tradeoffs between Scale Up and Scale Out models

  • A larger number of computers means increased management complexity, a more complex programming model, and issues such as throughput and latency between nodes; moreover, some applications do not lend themselves to a distributed computing model.
  • However, the price differential between the two models is increasingly in favor of “scale out” computing for those applications that fit its paradigm.
  • Supercomputers are scaled out!

30. Testing which is driven by what we want to measure.

  • Response time testing
  • Throughput testing
  • Availability testing
  • Measurement of resource utilization
  • Capacity testing
  • Measurement of delays (latency)
  • Measurement of losses in networks
  • Error rate measurement

31. Testing which is based on source or type of load.

  • Usage-based testing
  • Standard benchmark testing
  • Load variation testing
  • Ramp-up testing
  • Component-specific testing
  • Calibration testing

32. Testing which focuses on the impact of changes.

  • System change impact assessment
  • Infrastructure impact assessment
  • Baseline testing
  • Volume testing
  • Parallel testing
  • Live patch and change testing
  • Extreme configuration testing

33. Testing which seeks to stress the system or find its limits.

  • Scalability testing
  • Bottleneck identification and problem isolation testing
  • Duration or endurance testing
  • Hot spot testing
  • Spike and bounce testing
  • Breakpoint testing
  • Rendezvous testing
  • Feature interaction / interference testing
  • Deadlock testing
  • Synchronization testing
  • User scenario, bad day or soap opera testing
  • Disaster recovery testing
  • Risk-based testing
  • Hazard and threat identification
  • Environmental testing
  • Compatibility and configuration testing
