Improving the maintenance of your regression suite

The execution reports of a robust regression suite show how important it is and the value it brings to the product, even more so when the tests are automated and run continuously. A regression plan is made up of all the tests aimed at continuously validating the application's implementations and business flows; as the application grows, so does the regression plan, while other plans, such as the smoke tests that check the main business flows, remain unchanged as long as those flows exist.

In order to have a sound testing process, functionality must first be covered at the code level through unit and API tests, while the business tests (smoke, regression) are carried out at a later stage to keep increasing the value of the tests run against the application.

However, the high value of regression tests often comes with a high cost in maintenance time. Every QA automation engineer has spent hours and days digging through test execution logs and investigating code, trying to figure out whether a test failure is due to a possible bug, a programming error, or a false negative or positive. In this article we cover the main guidelines for tackling the causes that make our regression plan difficult to maintain.

(Re-)Consider test automation strategy

One of the main reasons we generally struggle to maintain a regression suite is that our tests tend to perform too many actions and checks at once as they run through user flows in our application, many of them unnecessarily. This class of so-called end-to-end tests is, by definition, difficult to maintain, slow and fragile. They mostly represent a black hole of time in regression maintenance.

Minimizing the number of e2e tests and maximizing the number of functional tests should always be the starting point for any regression suite.

Functional tests (feature tests) are the opposite of e2e tests: they allow us to verify that a single implementation or feature of the application works, plain and simple. Since an application is made up of hundreds of small implementations, testing each of them separately will always guarantee coverage, easier maintenance and reliable results on the state of the application.

Consider the following example:

Scenario: User downloads a bill

Given the user logs into the application

And the user creates a new bill

And the user opens created bill

When the user presses download option

Then the bill is downloaded

This is an example of a 'flaky' test. The objective of the test is to check the application's 'Download' functionality, and a bill is required as a precondition. Do we really need to create a new bill in order to download it? What happens if the bill creation implementation fails? Not only would the dedicated bill creation test fail, but this test would also fail because the bill could not be created, even though creating the bill is not the target of the test; downloading it is. We must reduce as much as possible all the preconditions that are unnecessary for what we want to check.

Therefore, the previous example could be adjusted as follows:

Scenario: User downloads a bill

Given the user logs into the application

And the user opens bill 'X1023'

When the user presses download option

Then the bill is downloaded

Given that in our hypothetical scenario it is still necessary to open the bill in order to download it, we must open it before pressing the option; but in this case the bill already exists in the system where we run the test, so we minimize the possibility that a precondition causes our test to fail.
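As an illustration, here is a minimal sketch of what the 'opens bill' step could look like with Cucumber and Selenium in Java, assuming the usual dependency-injection setup; the page structure, the locator and the bill ID are hypothetical and only serve to show that the step relies on data that already exists in the test environment.

import io.cucumber.java.en.Given;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class BillSteps {

    private final WebDriver driver;

    public BillSteps(WebDriver driver) {
        this.driver = driver;
    }

    // Opens a bill that already exists in the test environment instead of
    // creating a new one as a precondition of the download check.
    @Given("the user opens bill {string}")
    public void theUserOpensBill(String billId) {
        // Hypothetical locator: assumes each bill row exposes an id such as "bill-X1023"
        driver.findElement(By.id("bill-" + billId)).click();
    }
}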

Support your tests with API calls to your advantage

Another possibility that tends to be forgotten is using API calls to support our UI tests; on other occasions it is avoided outright under the argument of "keeping UI tests faithful to the user experience". Obviously UI tests should interact with the application as much as possible just as a real user would, but care must be taken when handling preconditions, as they are a potential source of unwanted errors that compromise the actual validation of our test. Besides, our regression suite will already contain tests whose final validation checks (true to the user experience) what would merely be preconditions in other tests.

Relying on API calls in our UI tests not only makes the execution of the tests much faster, it also does not alter the behavior of the application: behind every form that interacts with the backend there is always a call to an endpoint, passing it certain information. We can call that same endpoint directly, sending it the information we would otherwise submit through the form.

Suppose the following scenario: in order to verify a user's account in the system, the account must be new, so we cannot rely on a user account already stored in the system as in the previous example.

Scenario: User verifies a created account

Given the user opens the sign up form

And the user submits the sign up form

And the user logs into the application with created user

When the user verifies the account

Then the user sees a message of verification completed

To create a new account, the user must access the registration form, fill in all its fields and submit the information to create the new user. Knowing the information the form sends to the endpoint, we can define the test as follows:

Scenario: User verifies a created account

Given a random account is created with the following data

| field     | value             |
| firstname | John              |
| lastname  | Doe               |
| email     | [email protected] |
| password  | Test123456        |

And the user logs into the application with created user

When the user verifies the account

Then the user sees a message of verification completed

The Given step calls the same endpoint used by the registration form, with the same information that would be sent through the form. At the end of the step, the test obtains the generated email with which the user can access the system.
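As a rough sketch of how that Given step could be backed by an API call, the snippet below uses Java's built-in HTTP client; the /api/users endpoint, the payload fields and the expected 201 status are assumptions for illustration and would have to match the application's real registration endpoint.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AccountApi {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Creates an account through the same endpoint the sign-up form uses,
    // skipping the UI while leaving the application's behavior unchanged.
    public static String createAccount(String firstName, String lastName,
                                       String email, String password) throws Exception {
        String body = String.format(
            "{\"firstname\":\"%s\",\"lastname\":\"%s\",\"email\":\"%s\",\"password\":\"%s\"}",
            firstName, lastName, email, password);

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://test.example.com/api/users")) // hypothetical endpoint
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 201) {
            throw new IllegalStateException("Account creation failed: " + response.statusCode());
        }
        // Return the email so the test can log in with the created user
        return email;
    }
}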

Although implementing this API call in the automation framework may seem complex at first sight, the benefits of using the application's API to support UI testing greatly outweigh the initial investment of preparing the framework with this capability. It also opens up the possibility of carrying out backend tests in the same framework, something that is undoubtedly beneficial for guaranteeing the correct functioning of the system's API.
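For instance, once the HTTP plumbing is in place, a pure backend check can live next to the UI tests. This is only a sketch using JUnit and a hypothetical /api/users/{id} endpoint, not the application's real API.

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

public class UsersApiTest {

    private final HttpClient client = HttpClient.newHttpClient();

    // Backend-only check: no browser involved, just the system API.
    @Test
    void userEndpointAnswersForAnExistingUser() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://test.example.com/api/users/123")) // hypothetical endpoint
            .GET()
            .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
    }
}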

Speak with developers for a consistent locator strategy: ID over XPath

Undoubtedly, communication between the QA team and the development team is key to achieving reliable and robust tests. Among the different ways of selecting the UI elements of an application, the following stand out above all:

  • ID
  • XPath

IDs are unique identifiers of the UI elements of the application. If defined correctly, IDs are immutable names, easily accessible and supported by the main software testing tools such as Selenium. On the other hand, they require explicit definition and maintenance by the development team.

XPath (XML Path Language) is a language for building expressions that traverse and process an XML document. The DOM tree of an application can be addressed as an XML document, and tools like Selenium allow us to select the application's elements through that structure, without any explicit intervention from the development team, letting us automate tests against those elements quickly.

However, the use of XPath also has its drawbacks. Consider the example below:

<form>
   <div>
      <span>Email</span>
      <input type="text">
   </div>
   <div>
      <span>Password</span>
      <input type="password">
   </div>
   <div>
      <input type="checkbox"><span>I agree to the <a href="https://www.test.com/terms" target="_blank">privacy policy</a> of this website.</span>
   </div>
   <div>
      <input type="checkbox"><span>I want to subscribe to the newsletter.</span>
   </div>
   <button type="submit">Sign up</button>
</form>

Having this structure, our selectors could be obtained through XPath like this:

@FindBy(xpath = "//span[text() = 'Email']/input")
 public static WebElement email;

@FindBy(xpath = "//span[text() = 'Password']/input")
 public static WebElement password;

@FindBy(xpath = "//input[@type = 'checkbox'][last() - 1]")
 public static WebElement termsCheckbox;

@FindBy(xpath = "//input[@type = 'checkbox'][last()]")
 public static WebElement subsCheckbox;

@FindBy(xpath = "//button[text() = 'Sign up']")
 public static WebElement submitButton;

Although these selectors are functional, they can easily be "broken" by any change introduced in the structure of the form, whether changing the components that make up the span > input structure, adding new elements such as checkboxes, or changing the visible text in the front end of the app. Any of these changes will cause the test to fail and add maintenance time spent adjusting the selectors.

<form>
   <div>
      <span>Email</span>
      <input type="text" id="email">
   </div>
   <div>
      <span>Password</span>
      <input type="password" id="password">
   </div>
   <div>
      <input type="checkbox" id="terms"><span>I agree to the <a href="https://www.test.com/terms" target="_blank">privacy policy</a> of this website.</span>
   </div>
   <div>
      <input type="checkbox" id="newsletter"><span>I want to subscribe to the newsletter.</span>
   </div>
   <button type="submit" id="submit">Sign up</button>
</form>

@FindBy(id = "email")
 public static WebElement email;

@FindBy(id = "password")
 public static WebElement password;

@FindBy(id = "terms")
 public static WebElement termsCheckbox;

@FindBy(id = "newsletter")
 public static WebElement subsCheckbox;

@FindBy(id = "submit")
 public static WebElement submitButton;

Thanks to the IDs introduced by the development team, the selectors are now much more robust and solid, no longer broken by layout changes such as editing the form's elements or adding new ones.

We must ask the development team to define identifiers (IDs) for all those elements with which we interact in our tests.

Use a dedicated environment and dataset to run your tests

Last but not least, it is worth highlighting the enormous benefits of using a dedicated environment for regression test executions, together with a specific dataset for them:

Dedicated environment

  • Consistent executions when testing in a controlled environment.
  • A dedicated environment allows you to isolate code and verify application behavior while ensuring that no other activities or sources interfere with the results.
  • The possibility of faithfully mirroring a production environment for the tests, which gives confidence that the results obtained reflect reality.

Dataset

  • Control over the data used in the tests for each execution.
  • A predefined dataset means the tests do not have to create data in order to run their validations, which reduces the chance that the results are distorted by an error during test-data creation.

These elements form the baseline on which any regression suite should be run. Additionally, both the environment and the dataset can be provisioned through automated jobs in services such as Jenkins or GitLab, providing a 'clean' state before each execution of the regression tests.
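As a complement to those pipeline jobs, the reset can also be triggered from inside the framework itself. The following is a minimal sketch using a Cucumber @BeforeAll hook and a hypothetical test-data reset endpoint, purely for illustration.

import io.cucumber.java.BeforeAll;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EnvironmentHooks {

    // Restores the dedicated dataset before the regression run starts,
    // so every execution begins from the same known state.
    @BeforeAll
    public static void resetDataset() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://test.example.com/api/test-data/reset")) // hypothetical reset endpoint
            .POST(HttpRequest.BodyPublishers.noBody())
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() != 200) {
            throw new IllegalStateException("Dataset reset failed: " + response.statusCode());
        }
    }
}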

Conclusion

Regression maintenance is essential to verify the state of the product; however, it is a task that can easily become very costly in QA time, reducing the capacity for other tasks such as defining and refining new tests or automating new tests that would in turn be added to the regression plan. For this reason, we must adopt good practices that help us limit the time invested in maintaining the regression suite.

Author

  • Kilian Jiménez

    QA Automation engineer with over 5 years of experience in both manual and automated testing, on web and mobile. Passionate about AI/ML & testing processes.
