Some worth-reading links that outline pitfalls when dealing with SPAN ports and TAPs:
http://www.lovemytool.com/blog/2007/08/span-ports-or-t.html
http://en.wikipedia.org/wiki/Network_tap
Your test environment should be capable of simulating production environment conditions. To do this, keep the following considerations in mind during the test cycle:
The next sections describe each of these considerations.
“Software Test and Performance” magazine announced its readers’ voting results of automation tools in different categories.
Some of the results:
Data/test performance
HP LOADRUNNER took the most votes in the data/test performance category, which groups products in terms of their ability to apply test data to an application or system and evaluate how it handles the processing.
Functional test
Once again, HP’s QUICKTEST PROFESSIONAL takes the top prize for functional testing.
Test/QA Management
HP is once again on top with TESTDIRECTOR FOR QUALITY CENTER, which testers voted this year and last their favorite tool for test and QA management.
Defect/Issue Management
Last year there were two winners in the defect and issue management category: HP TestDirector for Quality Center and the Mozilla Foundation’s Bugzilla were tied. This year, TESTDIRECTOR alone took the top spot and Bugzilla moved to third, edged out by Microsoft’s Visual Studio Team Edition for Software Testers.
To read more, download the November issue of ST&P.
Well, that’s a good PR for HP. Good tools for “good” prices 😉
I love autumn for the variety of reddish colors it brings us. There are days when the trees are covered with a picturesque mosaic. On one of those days I decided to grab my bike and camera and head out to the nearest forest.
Here are the pictures:
The number of Virtual Users must be close to the number of real users once the application is in production, with a realistic think time applied between pages. Avoid testing with too few Virtual Users and a reduced think time. It could be assumed that the result would be the same, as the number of requests played per second is identical. However, this is not the case, for the following reasons:
The memory burden on the server will be different: each user session uses a certain amount of memory. If the number of user sessions is underestimated, the server will be running under more favorable conditions than in real life, and the results will be distorted.
The number of sockets open simultaneously on the server will be different. An underestimation of user numbers means the maximum threshold for open server sockets cannot be tested.
The resource pools (JDBC connection pools) will not be operating under realistic conditions. An inappropriate pool size setting might not be detected during the test.
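The point about session-level resources can be sketched in plain Python (not tied to any particular load tool; page names and think-time values are illustrative). Each virtual user runs in its own thread and sleeps between pages, so all sessions stay concurrently open on the server the way real users' sessions do:

```python
import threading
import time
import random

def user_session(user_id, pages, think_time_range, results):
    """Simulate one virtual user browsing a sequence of pages.

    The randomized think time between pages keeps the session open,
    so server-side memory, sockets and pool slots stay allocated
    just as they would for a real user.
    """
    for page in pages:
        # placeholder for an actual HTTP request to `page`
        results.append((user_id, page))
        time.sleep(random.uniform(*think_time_range))

def run_load(n_users, pages, think_time_range=(0.01, 0.02)):
    """Run n_users concurrent virtual users over the same page flow."""
    results = []
    threads = [
        threading.Thread(target=user_session,
                         args=(i, pages, think_time_range, results))
        for i in range(n_users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# 10 concurrent users, each visiting 3 pages with think time in between
hits = run_load(10, ["/home", "/search", "/checkout"])
```

Cutting the user count and the think time while keeping requests-per-second constant would change exactly the quantities this sketch models: far fewer threads (sessions) would exist at any moment.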
Use variables to dynamically modify key values such as user account logins or certain form parameters (such as productID in an e-business application). The main idea of this is to bypass the use of the various server caches, for the following reasons:
Playing the same requests with the same values produces an unrealistically high performance, due to the use of various caches: preloading into memory cache, connection pools, system swap…
On the other hand, completely disabling the caches (when available) will produce an unrealistically poor performance.
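A minimal sketch of such parameterization in Python (account names and product IDs are invented for illustration; a real test would draw them from a data file): each virtual user gets its own login and a varied product ID, so requests do not all hit the same cache entries.

```python
import itertools
import random

# Illustrative data pools; in a real test these would come from a CSV
# of valid accounts and a catalogue of product IDs.
ACCOUNTS = [f"user{i:03d}" for i in range(100)]
PRODUCT_IDS = list(range(1000, 1100))

account_pool = itertools.cycle(ACCOUNTS)  # each virtual user gets its own login

def build_request():
    """Build one parameterized request instead of replaying fixed values."""
    return {
        "login": next(account_pool),              # unique account per session
        "productID": random.choice(PRODUCT_IDS),  # varied key defeats caching
    }

requests = [build_request() for _ in range(10)]
```

The caches still operate, as they would in production, but they no longer serve every request from the same hot entry.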
After a re-start, don’t hesitate to “warm up” the server with a few calls before generating a sudden, high load which, in addition to being unrealistic, may cause the server to crash. Sending a short, light load beforehand allows certain resources, such as connection pools or thread pools, to be pre-allocated.
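A warm-up phase can be as simple as the following sketch (the `/health` page, call count and pacing are assumptions; any cheap, representative request works):

```python
import time

def warm_up(send_request, n_calls=20, pace_s=0.05):
    """Send a short, light load after a restart so connection pools,
    thread pools and caches are pre-allocated before the real test."""
    for _ in range(n_calls):
        send_request("/health")   # any cheap, representative page
        time.sleep(pace_s)        # gentle pacing, not a sudden burst

# hypothetical transport; a real harness would issue an HTTP call here
calls = []
warm_up(calls.append, n_calls=5, pace_s=0.0)
```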
Run the test for a significant length of time in order to iron out any outliers.
Make sure the Load Generators are not overloaded; CPU and memory usage are displayed in real time throughout the test.
When a Virtual User receives an error, it should normally stop running. If this does not happen, it could continue playing requests that have no meaning. For example, if the user login fails, there is little point sending further browsing or search requests to the application as it will only distort the response time statistics for those pages.
Each Virtual User type may be configured to stop running in case of error or failed assertion.
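The stop-on-error behavior can be sketched like this (step names and the exception type are illustrative): the virtual user aborts its remaining steps at the first failure instead of playing meaningless follow-up requests.

```python
class StopVirtualUser(Exception):
    """Raised to abort a virtual user's remaining requests."""

def run_virtual_user(steps):
    """Run a user's scripted steps; stop at the first failure so that
    later requests (e.g. browsing after a failed login) do not pollute
    the response-time statistics."""
    executed = []
    try:
        for name, action in steps:
            ok = action()
            executed.append(name)
            if not ok:
                raise StopVirtualUser(name)
    except StopVirtualUser:
        pass  # this user is done; the rest of the load keeps running
    return executed

# login fails, so search and checkout are never played
steps = [("login", lambda: False),
         ("search", lambda: True),
         ("checkout", lambda: True)]
done = run_virtual_user(steps)
```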
Where possible, break scenarios into several smaller scenarios to focus the tests. Make sure transaction definitions are granular enough to be able to pinpoint performance issues to specific GUI actions.
Make sure that the environment when running a load test is in the same state as it was when the test was recorded. Changes to the operating environment might require tests to be rerecorded.
If you have an Excel file with user accounts, verify them beforehand to make sure they are all valid. Some of them may be disabled and will cause unnecessary errors on the login page. It is also worth having a script that verifies the user accounts automatically; you can set it to run before each test run.
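Such a pre-run verification script might look like this sketch (the CSV columns and the login check are assumptions; a real script would export the spreadsheet to CSV and POST each pair to the login page):

```python
import csv
import io

def verify_accounts(rows, try_login):
    """Check every account from the data file before a run; return the
    usable logins and the ones that would fail at the login page."""
    good, bad = [], []
    for row in rows:
        ok = try_login(row["login"], row["password"])
        (good if ok else bad).append(row["login"])
    return good, bad

# illustrative CSV exported from the accounts spreadsheet
data = io.StringIO("login,password\nalice,a1\nbob,b2\ncarol,c3\n")
rows = list(csv.DictReader(data))

# hypothetical login check; here "bob" stands in for a disabled account
disabled = {"bob"}
good, bad = verify_accounts(rows, lambda user, pwd: user not in disabled)
```

Running this before each test run means a disabled account is caught up front instead of showing up as spurious login errors in the results.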
It’s beneficial to run several workloads that can reveal application thresholds and bottlenecks. This can be done using simple increasing load models with varying paces and maximum numbers of users.
Sometimes a customer wants to load the system with, for example, 1000 virtual users. In this case it’s reasonable to run one test with 500 virtual users beforehand to show that the system can handle half of the maximum amount.
Of course, this is far from a complete, full-fledged list of best practices from the world of performance testing.
Sources:
http://www.neotys.com/documents/support/htmldoc2.0.x/ch10.html#bestpractices.objectives
I came across a nice, thorough article dedicated to gauging performance requirements. I believe it’s worth reading.
You can get it here: http://www.stpmag.com/issues/stp-2008-01.pdf , Page 18.
My first impression of Finnish people was pretty good. Later on, it got even better 😉
Well, during the first days Helsinki didn’t impress me much, until I got downtown. After that, I fell in love with it. It is very convenient, cozy and distinctive, and it is easy to get around.
Since I love biking and everything related to it, it was lovely to see so many people riding bikes and commuting. There is a mature infrastructure for this: Helsinki has more than one thousand kilometers of bike paths. Most people wear a helmet while riding.
I didn’t waste my free time just sitting at home after work; I visited a lot of places. Fortunately, Helsinki has plenty of them. The government pays a lot of attention to tourism development: you can easily get free maps, tourist guides and all the necessary information.
Moreover, I learnt some phrases like “Hyvää päivää”, “Kiitos” and more. I used them when I met our Finnish colleagues. Actually, they know some Russian too 😉
Everybody speaks English to some extent, which is cool. I didn’t have any difficulties communicating. Even elderly people know it 😉
Approaches to Performance Testing. Part 1
1. Diversity of Approaches to Performance Testing
2. Amount of Load that is put onto the server
3. Baseline/Performance Testing. Concept
4. Benchmark Testing. Concept
5. Benchmark Testing. “Flat” and “Ramp-Up”. Run Modes
18. Load/Volume Testing. Concept
19. Load/Volume Testing. Goals
20. Load/Volume Testing. While Executing
21. Stress Testing. Concept
22. Stress Testing. Examples