02.11.2022

Testing automation: How to make a strategy and avoid potential hazards of implementation

I've been working in IT for more than 14 years, and for the last 6 years I've been a team lead in a testing team. My responsibilities include developing testing strategies from the earliest stages of a project. Like everyone else, we are trying to implement autotests to speed up the process, increase test coverage, and generally make life and work easier for ourselves.
Author: Ellina Azadova

I think this article, which is based on the experience of our team, will be useful to those who are just starting to build their own testing automation process, and for those who are dissatisfied with what they're getting out of testing so far. I’ll say right away that I won’t go into technical details or list specific tools: this requires an individual approach depending on the application and the specific team. I will focus instead on how to develop an overall approach and strategy. I will share the main ideas and pitfalls that we have already encountered in our team, and I will try to help you avoid these mistakes in the future. Some things may seem obvious to those who already have automation experience, but sometimes it's worth going back to the basics and remembering simple rules that are often forgotten over time. Let's begin.

How to start forming a strategy

I think everyone has received messages from users about bugs that autotests failed to detect. Testers are human too: we don't always fully understand the nuances of the business we're developing a product for, and we can't cover every possible scenario. Here's an example from one of our test environments. The tester followed a link and checked that a new page opened, and nothing more than that. As a result, a page that said "You aren't authorized to view this page" also passed the test, even though that was not a successful scenario.
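The pitfall above can be sketched in a few lines. This is a minimal, illustrative example, not code from a real framework; the page content and the "Order history" heading are assumptions made for the sketch.

```python
def weak_check(page_html: str) -> bool:
    # Only verifies that *some* page opened -- the check from the story.
    return len(page_html) > 0

def strong_check(page_html: str) -> bool:
    # Also verifies we did not land on an error page and that the
    # content we actually expect is present.
    if "You aren't authorized to view this page" in page_html:
        return False
    return "Order history" in page_html  # expected heading (assumed)

error_page = "<h1>You aren't authorized to view this page</h1>"
print(weak_check(error_page))    # True  -- the bug slips through
print(strong_check(error_page))  # False -- the bug is caught
```

The point is not the specific strings but the habit: assert on what a successful scenario looks like, not merely that something was returned.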

As a result, the problem still exists, yet the test was successful. In any case, the process can be improved, and autotests are a powerful tool for doing so.

It's best to start testing automation by applying the standard test pyramid to the work at hand. The base of this pyramid is the unit and integration tests written by developers.

Automation should take into account how often a particular type of testing runs, how necessary it is, and what risks it covers. Smoke tests are therefore the next to be automated. Then the team moves on to functional and regression tests. After that, you can bring automated testing to the Continuous Delivery level, all in good time.

Step 1. Choosing the functionality for automation

If there are already test cases written in advance, that's good, and we'll build an analysis based on them. If there are no test cases, it's time to write them.

Let's pay attention to the following points.

Is it possible, in principle, to automate certain scenarios, and is it advisable? For example, a record appears in the database half an hour, or an hour, after being added. Does it make sense for an autotest to wait that long? We can wait, in principle, but will that speed up the testing process as a whole? After all, speeding up the process is practically the main purpose of automation. Automating such a scenario is worth it only if we want to free our Manual QA from having to be involved at all.
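When the delay is seconds rather than hours, the usual compromise is to poll with a bounded timeout instead of sleeping for a fixed period. A minimal sketch, with a simulated database lookup standing in for the real one (all names here are illustrative):

```python
import time

def wait_for_record(fetch, timeout_s=5.0, poll_s=0.5):
    """Poll `fetch` until it returns a record or the timeout expires.
    `fetch` is any zero-argument callable returning the record or None."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        record = fetch()
        if record is not None:
            return record
        time.sleep(poll_s)
    raise TimeoutError("record did not appear within the timeout")

# Simulated lookup: the record "appears" on the third call.
calls = {"n": 0}
def fake_fetch():
    calls["n"] += 1
    return {"id": 42} if calls["n"] >= 3 else None

record = wait_for_record(fake_fetch, timeout_s=2.0, poll_s=0.01)
print(record)
```

If the real delay is measured in hours, no timeout value saves you: that scenario is a candidate to leave manual, or to split so the autotest verifies the part that completes quickly.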

And if the scenario is simple, and the check is a one-time thing, is it necessary to spend time automating it? Theoretically — yes, especially if the client demands automating "absolutely everything!" But keep in mind that this inevitably entails additional costs. Are you ready for these extra costs in the current project? Maybe we should take a closer look at something more important?

Finally, if each time the data for the test needs to be carefully selected manually, is there any point in automation? It's not always possible for complex financial systems to make a universal request for information. Is it a good idea in your case to manually generate data every time you run an autotest?

There are also tests where a human is faster and more likely to notice an error. Does it make sense to script such checks at all?

Will the resources spent on automation pay off? At first glance, automating a specific test looks simple, but once we start working, we may find that in the current implementation, or under certain conditions, it will require significant resources. Think about whether you have the time, and whether the client is willing to pay for it.

Do I need to automate simple tests? Why not! The alternative is digging through 500 lines of JSON by hand. Once, the day before a release, I had to compare JSON against data scattered across an Excel file. I won't say the test was too difficult, but even at maximum concentration it took me seven hours. Of course, after that incident we automated the process!

And complex ones, with mathematical calculations? Definitely! This will ensure the necessary accuracy in calculations, and eliminate the human factor.

You can add to this list!
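To illustrate the JSON-versus-spreadsheet anecdote above: the seven-hour manual comparison boils down to indexing both sides by a key and reporting mismatched fields. A hedged sketch using CSV in place of Excel; the column names and sample values are invented for the example.

```python
import csv
import io
import json

# The "spreadsheet" side (CSV stands in for Excel in this sketch).
csv_rows = list(csv.DictReader(io.StringIO("id,amount\n1,10.5\n2,8.0\n")))
by_id = {row["id"]: row for row in csv_rows}

# The JSON side, e.g. an exported report or API response.
json_records = json.loads('[{"id": "1", "amount": "10.5"},'
                          ' {"id": "2", "amount": "7.0"}]')

# Report every (id, field) pair where the two sides disagree.
mismatches = [
    (rec["id"], field)
    for rec in json_records
    for field in rec
    if by_id.get(rec["id"], {}).get(field) != rec[field]
]
print(mismatches)
```

A script like this turns hours of line-by-line comparison into seconds, and the list of mismatches doubles as the bug report.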

Step 2. Let's make sure that the existing test cases are ready for automation

What does "ready for automation" mean? Let's start with the design.

In many test management systems, you can add an attribute to a test that identifies whether it needs to be automated (along with the reason), or whether it's already automated. In my experience this is convenient, as it makes it easier to filter tests and measure coverage.

Writing clear and detailed test cases, and maintaining good documentation in general, is a real art. It's good practice to have test cases reviewed by a colleague from the testing team, or by a lead or business analyst. An outside view is always useful: such a person can not only make sure we haven't missed anything, but also look at the project from the point of view of a BA. This approach confirms that we have covered all the requirements and user scenarios.

When further checking test cases, I recommend paying attention to the following points:

  • Are the test cases written by manual testers? They usually are. It's great if manual testers have a general idea of automation: it lets them judge whether automating a specific scenario is possible and worthwhile, and make a meaningful decision. I have often seen manual testers forget to set the automation attribute, so their test cases disappeared from the filters. Or, out of habit, they marked every test case for automation. When in doubt, you can always consult an experienced automation engineer.
  • Are the test cases written by automation engineers? In that case, the test cases may be written in purely technical language, and the user scenario won't be clear. For example, I once came across a test consisting of several weighty queries: "execute query 1", "execute query 2". If a query returned data, is that good or bad? When is the test considered passed? Perhaps it was discussed once, but six months later no one remembers. Without digging into the queries themselves, it can be hard to figure out what exactly is being checked. Make sure you understand the scenarios, or ask for clarification.
  • Detailed scenarios. Scenarios should be described in enough detail that, when it comes to automation, it's obvious what to do, where to click, and what to check. For example, a scenario for working with a document might end with the step "save the document". That step is ambiguous, because the action can be performed in several different ways, and choosing one of them can radically change the behavior of the autotest, so it ends up checking something other than what you intended. We don't want that, right? Try not to force the next person to guess what you meant.
  • Test relevance. Test cases must be kept up to date. Yes, it's difficult, but there's no way around it. It's also good practice to tell automation engineers about upcoming changes before their autotests fail. Say we know the UI will change, or that an extra window will pop up somewhere. A manual tester barely notices: they've already heard about the change, they know everything is fine, they click OK and move on. An autotest, meanwhile, expects nothing of the sort and will start sounding the alarm.

Step 3. Decide on the data that the autotests will use

Often autotests themselves generate data for verification, and delete this data after execution.
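The create-then-clean-up pattern is usually wrapped so the cleanup runs even when the test fails. A minimal sketch, with a dictionary standing in for a real database and all names invented for the example:

```python
from contextlib import contextmanager

store = {}  # stands in for persistent storage

@contextmanager
def temporary_user(user_id):
    """Create a record for the test and remove it afterwards."""
    store[user_id] = {"id": user_id, "created_by": "autotest"}
    try:
        yield store[user_id]
    finally:
        store.pop(user_id, None)  # always clean up, even on failure

with temporary_user("qa-001") as user:
    # Run the checks against data we created and fully control.
    assert user["created_by"] == "autotest"

print("qa-001" in store)  # False -- nothing left behind
```

In a real suite this would typically be a pytest fixture rather than a hand-rolled context manager, but the shape of the guarantee is the same.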

There are pros and cons to this: on the one hand, we don't interfere with anyone. We create what we need ourselves, test the functionality on data we're sure of, and clean up after ourselves. This also guarantees the coverage we need. On the other hand, it's important to test on the data the user actually works with, or data as close to it as possible. If we can't create such data for some reason, we use what already exists, but then we don't delete the data after the tests.

Why is that? "Real data" has certain features: for example, the data can be imported into a system, formed in a different way, have more complex logic — something that will be difficult to repeat for the purity of the scenario.

Another nuance is that data changes frequently. If this is your case, let's consider a different approach.

At some point, we started adding data criteria to the test cases. They can be simple: for example, take a user who registered in the system more than a year ago. One solution is to query the database with the same criteria, which is especially convenient for autotests.

This approach works well when a test needs to run on different environments where the data varies. But there is also a downside: if the criteria are complex (say, a user who registered a year ago and has not bought goods from a certain category in the last 3 months), getting the data this way may be difficult or even impossible. In that case, it will probably be easier to select the data manually and hardcode it.
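The query-by-criteria approach might look like the following sketch, which uses an in-memory SQLite database; the schema, dates, and criteria are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, registered_at TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [(1, "2019-03-02"), (2, "2099-01-01")],  # one old user, one "recent"
)

# Pick any user registered more than a year ago, instead of hardcoding
# an id that differs between environments.
row = conn.execute(
    "SELECT id FROM users "
    "WHERE registered_at < date('now', '-1 year') "
    "ORDER BY registered_at LIMIT 1"
).fetchone()
print(row)
```

The same test then runs unchanged on any environment that contains at least one user matching the criteria.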

In order not to interfere with each other during testing, use different environments, or separate the data for autotests and manual testing. Then, when checking a certain scenario, you won't encounter the problem of accidental data changes.

Step 4. Optimize checks

When optimizing autotests, don't lose sight of an important point — the quality of the checks. We strive to make autotests faster, and this is their obvious advantage compared to manual testing. However, make sure that necessary coverage is provided at the same time.

I've seen from my own experience that attempts to speed up checks can hurt quality. For example, you can send requests not through the user interface, but directly through the API. There's plenty of room to maneuver there, and the work will certainly go faster. But with this approach the UI isn't checked at all, and that is fraught with consequences: users won't be calling the API, and problems with the interface will be noticed immediately, becoming an outright blocker for user tasks. Still, if the idea of going through the API has occurred to you, it may be time to separate and recombine your checks. Then think the coverage through calmly, and optimize the process to your heart's content.
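One way to "separate and recombine" is to run the bulk of the coverage through the API while keeping a small set of UI checks for what users actually see. A hedged sketch; every function here is a stand-in for a real API client or UI driver, not an actual library call:

```python
def create_order_via_api(item):
    # Fast path: no browser involved; imagine an HTTP POST here.
    return {"item": item, "status": "created"}

def order_visible_in_ui(order):
    # Stand-in for a real UI assertion (e.g. a Selenium check).
    return order["status"] == "created"

# The bulk of the coverage runs through the API...
orders = [create_order_via_api(item) for item in ("book", "pen", "mug")]

# ...while a small set of representative UI checks guards the
# interface that users actually see.
ui_ok = order_visible_in_ui(orders[0])
print(len(orders), ui_ok)
```

The split keeps the suite fast without leaving the UI entirely unguarded; the hard part is choosing which scenarios deserve the slow, representative UI checks.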

In place of a conclusion. A short list of recommendations

Let's summarize and repeat what should be remembered when starting testing automation in the project:

  • make sure that you have identified and flagged all the scenarios that are worth automating;
  • combine checks, and try to provide sufficient coverage in terms of functionality and data;
  • don't lose sight of custom use cases.

And the main thing to remember: the purpose of any tests is to find problems, in order to release a quality product. Let the autotests be "green," and may the users be happy.
