4 Guidelines to Keep in Mind while Performance Testing

These days, users expect more and more responsiveness online. You simply can’t afford to, for example, have a user wait over 10 seconds for a webservice call to return. Proper performance testing has therefore become unavoidable.

When creating or running performance tests, there are some things to keep in mind to avoid common pitfalls (because bad performance testing is worse than no performance testing at all). Although there are a huge number of things to keep in mind, in this blog post we’ll focus on four of them: The metrics, the environment, the quality of the tests and the warm-up phase.

Though the following examples mostly involve web performance testing, the practices apply to any kind of performance test.

1. Define your metrics clearly

The first criterion to keep in mind while doing performance testing is that you should always have proper logging and metrics. Define your metrics clearly and with your goals already in mind: average response time vs. peak response time, requests per second, and so on.

If you want to do proper performance testing, you are going to need proper metrics. Without proper metrics, it will be harder to know what to look out for or, even worse, what it actually is that you want to find out. So take your time to think them through before you start on your tests. Metrics can be grouped into response metrics and volume metrics.

Examples of response metrics are:

  • Average response time;
  • Peak response time;
  • Error rate;

Examples of volume metrics are:

  • Concurrent users;
  • Requests per second;
  • Throughput;

Define where you want to put your focus: Either on custom metrics, based upon your company’s goals, or on your product’s service-level agreement (SLA). Then configure your tests based upon the metrics you want to check.

Remember to persist your results as well. Without persisting the data, you won’t be able to analyse the results or visualise the effect of improvements (e.g. after a refactoring).
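
To make this concrete, here is a minimal sketch that computes a few of the metrics above from recorded request results, checks them against SLA thresholds and appends them to a CSV file. The RequestResult record, the threshold values and the file layout are illustrative assumptions, not the output of any particular tool.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Instant;
import java.util.List;

// Minimal sketch: RequestResult, the SLA thresholds and the CSV layout are
// illustrative assumptions, not part of any specific tool.
public class MetricsReport {

    // One record per executed request: how long it took and whether it failed.
    record RequestResult(long durationMillis, boolean failed) {}

    public static void main(String[] args) throws IOException {
        // In a real test these would come from your load-test run.
        List<RequestResult> results = List.of(
                new RequestResult(120, false),
                new RequestResult(310, false),
                new RequestResult(95, true));
        long testDurationSeconds = 60; // wall-clock length of the run

        // Response metrics
        double avgResponse = results.stream().mapToLong(RequestResult::durationMillis).average().orElse(0);
        long peakResponse = results.stream().mapToLong(RequestResult::durationMillis).max().orElse(0);
        double errorRate = (double) results.stream().filter(RequestResult::failed).count() / results.size();

        // Volume metric
        double requestsPerSecond = (double) results.size() / testDurationSeconds;

        // Compare against (hypothetical) SLA thresholds defined up front.
        boolean withinSla = avgResponse <= 500 && peakResponse <= 2000 && errorRate <= 0.01;

        // Persist the run so you can compare before/after a refactoring.
        String line = String.format("%s,%.1f,%d,%.3f,%.1f,%b%n",
                Instant.now(), avgResponse, peakResponse, errorRate, requestsPerSecond, withinSla);
        Files.writeString(Path.of("perf-results.csv"), line,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}

Appending one line per run makes it trivial to compare runs over time and to see whether a change actually improved the numbers.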

2. Set up a highly similar test environment

Another thing to keep in mind while doing performance testing is that your test environment should be as similar as possible to your production environment. Otherwise your tests can produce misleading results, such as flagging problems that would never occur in production.

An example: You have a production server with 16GB of RAM and 8 cores. As a test environment you use an old laptop with 2GB of RAM and 2 cores. While running the performance tests, the application may not have enough memory available and start swapping. After the test phase, you analyse the results and see that this swapping caused some delay. So you start refactoring, you run the tests again and you’re sure to get better results the next day. All set! Or so you thought …

When you deploy to production the next day, you don’t see any performance increase. So, what happened? Well, the swapping was caused by the lack of memory in the test environment and was never an issue on the production server in the first place.

There are many more examples for this topic (even entire books), but you get the point: Make sure that your test environment is as similar as possible to your production environment.

This, by the way, applies to all variables: bandwidth, RAM, CPU, operating system, the amount of data that is processed, and so on.
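
A small sanity check at the start of a test run can already help. The sketch below simply prints a few of these variables from within the JVM so they can be compared against production; it uses only standard JDK calls. Bandwidth and dataset size can’t be read this way and have to be recorded separately.

// Minimal sketch: print the characteristics of the machine a test runs on,
// so you can compare them against production before trusting the results.
public class EnvironmentCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("CPU cores:     " + rt.availableProcessors());
        System.out.println("Max heap (MB): " + rt.maxMemory() / (1024 * 1024));
        System.out.println("OS:            " + System.getProperty("os.name") + " " + System.getProperty("os.version"));
        System.out.println("Java version:  " + System.getProperty("java.version"));
        // Bandwidth and the amount of data processed can't be read from the JVM;
        // record them manually (or via your infrastructure tooling) alongside the results.
    }
}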

3. Execute tests that are relevant

The third thing to keep in mind while doing performance testing is that your test scenarios should be well thought out. There are two major forms of performance testing: Macro and micro benchmarking.

Macro benchmarking is the process of testing the whole application or parts of the application as a whole (e.g. testing on REST endpoints). Micro benchmarking, on the other hand, is focused only on the performance of small parts of the application logic (e.g. internal method calls). Here we focus only on macro performance tests because you can fill books with everything related to micro benchmarking.

  1. To ensure that your tests do not influence the results themselves, it is important that your test scenarios are non-interfering (i.e. one scenario should not affect the outcome of another).
  2. Your tests should cover real-world scenarios. If not, you are optimising parts of the application that don’t actually need it (e.g. the compiler, parts of the caching layer, and so on). In these cases, you are creating “custom hotspots” in the application’s optimisation and/or you may end up with misleading results.

    A good example is when you see that the update of profiles takes half a second, while you discard the fact that the application crashes whenever 1000+ users simultaneously use your website ... Though it is good to increase the performance of the update, you simply can’t afford to miss the “1000+ simultaneous users” scenario.
  3. Provide some kind of randomness in your tests, because in the real world, too, not all users will perform the same kind of action over and over again. Simply put: Avoid custom hotspots (a sketch of such a randomised test follows below).
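
To make this a bit more concrete, here is a minimal sketch of such a randomised macro test using only the JDK’s built-in HttpClient. The base URL, the endpoints and the scenario weights are made-up assumptions; in practice you would more likely use a dedicated load-testing tool (JMeter, Gatling, …), but the idea stays the same: concurrent users executing a realistic, randomised mix of requests.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

// Minimal sketch of a macro benchmark with randomised scenarios.
// Base URL, endpoints and weights are illustrative assumptions.
public class RandomisedLoadTest {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();
    private static final String BASE_URL = "http://localhost:8080";

    // Real-world scenario mix: mostly reads, some searches and updates.
    private static final List<String> SCENARIOS = List.of(
            "/products", "/products", "/products", // ~60% browse
            "/search?q=shoes",                      // ~20% search
            "/profile/update");                     // ~20% update

    public static void main(String[] args) throws InterruptedException {
        int concurrentUsers = 50;
        int requestsPerUser = 100;

        ExecutorService users = Executors.newFixedThreadPool(concurrentUsers);
        for (int u = 0; u < concurrentUsers; u++) {
            users.submit(() -> {
                for (int i = 0; i < requestsPerUser; i++) {
                    // Pick a random scenario so no single endpoint becomes a custom hotspot.
                    String path = SCENARIOS.get(ThreadLocalRandom.current().nextInt(SCENARIOS.size()));
                    HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL + path)).GET().build();
                    try {
                        long start = System.nanoTime();
                        HttpResponse<Void> response = CLIENT.send(request, HttpResponse.BodyHandlers.discarding());
                        long millis = (System.nanoTime() - start) / 1_000_000;
                        // Hand the sample (status + duration) to your metrics collection here.
                        System.out.println(response.statusCode() + " " + path + " " + millis + "ms");
                    } catch (Exception e) {
                        // Count this as an error in your error-rate metric.
                    }
                }
            });
        }
        users.shutdown();
        users.awaitTermination(10, TimeUnit.MINUTES);
    }
}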

4. “Warm up” your app and environment

The last thing to keep in mind is to allow for warm-up time. Warm-up time is needed to bring your environment into a state similar to production, that is, an environment that has already been optimised for usage similar to future usage.

For example, when you have configured caching, make sure that some caches are already filled if that would also be the case in production. Otherwise you could gather misleading results that are only related to the startup phase of your app.

For Java applications, warm-up time is needed because of, for example, the just-in-time (JIT) compiler in the JVM. The JIT compiler optimises bytecode at runtime, and those optimisations are based on how the application is actually used. This means that a freshly started application performs worse than an application that has already been optimised.

The thing to keep in mind when doing any performance test, besides doing the warm-up, is that you should always avoid creating custom hotspots. For example, by calling one specific webservice a thousand times, you create your own custom hotspot around the code that calls this service. In short, it’s better to run a small, representative load test that warms up the application (i.e. without any I/O!).
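
As an illustration, the sketch below runs such an in-process warm-up phase before measuring. ProfileService and its updateProfile method are hypothetical stand-ins for your own application logic; for serious measurements you would use a dedicated harness such as JMH, but the structure (discard the warm-up, measure afterwards) is the same.

import java.util.concurrent.ThreadLocalRandom;

// Minimal warm-up sketch. ProfileService and updateProfile are hypothetical
// stand-ins for your own application logic; the point is to exercise the hot
// code paths in-process (no I/O) so that the JIT compiler and any caches are
// in a production-like state before the timings start to count.
public class WarmUpExample {

    static class ProfileService {
        String updateProfile(int userId, String name) {
            // Imagine real business logic here (validation, mapping, caching, ...).
            return "user-" + userId + ":" + name.trim().toLowerCase();
        }
    }

    public static void main(String[] args) {
        ProfileService service = new ProfileService();
        long sink = 0; // consume the results so the JIT can't optimise the calls away

        // Warm-up phase: results are thrown away, we only want the JVM to optimise.
        for (int i = 0; i < 10_000; i++) {
            sink += service.updateProfile(ThreadLocalRandom.current().nextInt(1_000), "  Some Name  ").length();
        }

        // Measured phase: only these timings count.
        long start = System.nanoTime();
        for (int i = 0; i < 10_000; i++) {
            sink += service.updateProfile(ThreadLocalRandom.current().nextInt(1_000), "  Some Name  ").length();
        }
        long avgNanos = (System.nanoTime() - start) / 10_000;

        System.out.println("Average call time after warm-up: " + avgNanos + " ns (checksum: " + sink + ")");
    }
}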

Conclusion

In this blog post we highlighted four criteria, out of many, that you should keep in mind while doing healthy performance testing: Define your metrics, set up a similar test environment, execute relevant tests and warm up your application and environment. With this in mind, you have a solid baseline for proper performance testing.

Tags: Performancetesting

June 28, 2017 by Maarten Vandeperre