September 7, 2021
Louise - Tester
This is our second blog post on performance testing. In the previous article, we reviewed the different types of performance testing and how those might apply to the application that you are testing. This article will go into more detail about how you can approach performance testing, the pitfalls to avoid, and what to consider when choosing the best tooling for you.
Benchmarks vs non-functional requirements
Arguably, the best way to approach performance testing would be with a set of clearly defined non-functional requirements. These would define how the system should behave under stress and load, what percentage of system interactions should succeed under these conditions, and how long these calls should take.
However, you won’t always have these requirements, or they won’t always reflect the real world, so you’ll need an alternative approach. One way is to create benchmarks of your application’s performance by monitoring the way the system currently works. Once you have this information you can define your thresholds and ensure that the application doesn’t fall below these baseline performance metrics.
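As a sketch of this benchmarking idea, the helper below (hypothetical names, Python standard library only) computes a 95th-percentile latency from monitored samples and checks that new results stay within a tolerance of the recorded baseline:

```python
import statistics

def p95(latencies_ms):
    """Return the 95th-percentile latency from a list of samples."""
    # quantiles with n=20 returns 19 cut points; index 18 is the 95th percentile
    return statistics.quantiles(latencies_ms, n=20)[18]

def within_baseline(samples_ms, baseline_p95_ms, tolerance=1.10):
    """Pass if the observed p95 stays within 10% of the recorded baseline."""
    return p95(samples_ms) <= baseline_p95_ms * tolerance

# Example: latencies (in ms) gathered by monitoring the current system
baseline = [120, 130, 125, 140, 150, 135, 128, 145, 132, 138,
            122, 148, 129, 141, 126, 133, 147, 131, 137, 144]
print(within_baseline(baseline, baseline_p95_ms=150))
```

The threshold and tolerance here are placeholders; in practice you would derive them from your own monitoring data.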
Manual vs automated performance testing
If you think about what is involved in performance testing it can be difficult to imagine how you might carry out such activities. If you’re trying to replicate 1,000 users all logging onto your application at the same time, how might you go about simulating this? Or what if the numbers are bigger, 10,000 or even 100,000 users?
It is important to remember that this is only one type of performance testing. It can be less daunting to consider other aspects of performance, such as the responsiveness or stability of your application.
As an individual tester, you can quite easily do some manual exploratory testing to determine the responsiveness and stability of the application for a single user. After all, how the user perceives the application is a key objective of performance testing.
You can then scale this up by getting other people to perform the same test at the same time and see if the system continues to perform in the same way. This clearly has limitations, though - a normal engineering team will run out of individuals long before they reach the sort of stress limits most applications are designed for.
This is where automated performance testing steps in; instead of lots of people, one person or a small team can develop test scripts with the help of an automation tool. You might still have a large hardware requirement, but we can discuss that in the next section.
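To illustrate the idea, here is a minimal Python sketch, using only the standard library, that fires a batch of simulated logins concurrently. The `login` function is a placeholder: in a real test it would call your application, for example over HTTP.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def login(user_id):
    """Placeholder for one simulated user's login call.

    In a real test this would hit your application and time the response."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for network and server time
    return user_id, time.perf_counter() - start

def simulate_users(n_users):
    """Fire n_users logins concurrently and collect their response times."""
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        return list(pool.map(login, range(n_users)))

results = simulate_users(50)
print(f"{len(results)} simulated logins, "
      f"slowest: {max(t for _, t in results):.3f}s")
```

A dedicated tool does the same thing at much larger scale, with scheduling, distribution across machines, and reporting built in.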
What to think about when choosing a performance testing tool
1. Does the tool support the types of testing that you want to perform?
- Load testing - Does the tool allow you to specify the number of users that you want to simulate? Does it support the scales that you need?
- Stress testing - Does the tool allow for a much larger number of users than you expect to support, so that you can find the load that stresses your system?
- Scalability testing - Does the tool allow you to specify how you might increase the number of users applied in the test so that it mimics the use that your system might expect?
- Endurance testing - Does the tool allow you to run tests for extended amounts of time? Does the tool have a limit for how long you can run this for?
- Spike testing - Can the tool offer ramping up and down of users?
- Volume testing - Does the tool offer interaction with your application? Does it allow you to easily set up and tear down the data you need for your test? Do you need to set up the data separately?
2. Is the tool easy to use?
- Does the tool fit your current engineering skillset or are you going to need to learn how to use the tool?
- Does it allow you to use a scripting language?
- Are you familiar with that language?
- Does the tool offer record and playback, and does it present the results in an easily editable way?
3. Does the tool have good documentation and support?
- Is there an active community of users that may be able to help with any issues that you have?
- Does the tool have detailed, easy to understand documentation and tutorials?
- Does the tool offer any other user support?
4. Do you need to consider pipelines?
- Do you want to run the tests as part of your CI/CD pipeline?
- Does the tool easily support or integrate with your existing pipeline tool?
5. Do you want to use an on-premises solution or a cloud solution?
- Do you have the hardware to support performance testing already?
- Do you have a need to run your tests frequently or only occasionally?
- If you only want to run the tests once per release, then do you need to have your own dedicated hardware for this?
- Would it be more cost effective to use a cloud solution?
6. How much is the solution going to cost?
- A cloud service may look expensive, but the cost may actually be less than providing your own hardware for this testing.
- Think about how frequently you are going to be running your tests and with what level of loads to work out the cost of each approach.
7. What do you want to get out of the testing?
- Are you happy with a report that gives a pass or fail, or do you want more detail?
- Do you want graphical representation of your results over time?
- Is a simple pass/fail for each suite sufficient, or is a full breakdown per test required?
- Do reports need to be human-readable, or read by other tools?
- Does the output format need to be settable?
8. Do you need to consider any special security requirements?
- Do you have security requirements that may prevent you from using cloud services and require you to use an on-premises solution?
- Can the tool handle security controls, for instance account logins?
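To make the ramping criteria above (spike and scalability testing) concrete, here is a small hypothetical helper that generates a spike-test schedule of user counts: ramp up to a peak, hold it, then ramp back down.

```python
def ramp_schedule(start, peak, step, hold_steps=1):
    """Return user counts for a spike test: ramp up, hold the peak, ramp down."""
    up = list(range(start, peak + 1, step))
    hold = [peak] * hold_steps
    down = list(range(peak - step, start - 1, -step))
    return up + hold + down

# e.g. ramp from 0 to 100 users in steps of 25, hold the peak, then back down
print(ramp_schedule(0, 100, 25))
```

Most load testing tools let you describe this kind of profile directly in configuration; the question is whether the shapes you need are supported.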
How to make performance testing successful
There are several common mistakes that are often made when undertaking performance testing. Here are some things to consider to help you avoid those mistakes.
1. Test the right environment - Ensure that the environment that you choose is representative of your production environment. If you test against a development environment that can’t run at the same scale as production, then the results won’t give you a real idea of how the system will behave. Alternatively, if you test on your production environment, you will be directly measuring the performance of your system, but you risk degrading the experience for your users.
2. Consider quality over quantity - You can’t test everything, so think about the tests that are most valuable. This isn’t just a consideration for performance testing, but for all testing activities. Focus on the simplest tests that provide the most value first.
3. Test early - Don’t leave performance testing to the end of your project. If you conduct performance testing alongside development, you will find problems sooner and can resolve them faster. If you leave performance testing until most of the main development has been completed, any major performance issues that need to be resolved could be very expensive to fix. Imagine your team had decided on a specific architecture for your application, and once the product was established you found a performance issue caused by a shortcoming of that architecture. At that point you may have to choose between re-architecting the product and accepting that it will always perform poorly.
Additionally, if you leave performance testing to the end, then you probably won’t have time to complete endurance testing.
4. Tests are still code - Treat your performance testing scripts as you would any other code. Make test scripts reusable, don’t hard-code values and think about reusability, quality and future maintenance.
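As a minimal sketch of the "don’t hard-code values" advice, a test script can read its parameters from the environment, so the same script serves a quick smoke run and a full load run unchanged. The variable names below are illustrative, not from any particular tool.

```python
import os

# Read test parameters from the environment instead of hard-coding them,
# falling back to safe defaults suitable for a local smoke run.
TARGET_URL = os.environ.get("PERF_TARGET_URL", "http://localhost:8080")
USER_COUNT = int(os.environ.get("PERF_USERS", "10"))
DURATION_S = int(os.environ.get("PERF_DURATION_S", "60"))

print(f"Testing {TARGET_URL} with {USER_COUNT} users for {DURATION_S}s")
```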
There is a lot to consider when starting performance testing, but starting early, and finding the right tool, will help to make your performance testing journey successful.
At WORTH we believe that knowledge sharing should be free, enabling and impactful. Want further insight into our thoughts and ideas? Sign up to our newsletter.