Decrease UI tests execution time with Kubernetes (AKS)

Clément Joye
Feb 1, 2021


Integrate Kubernetes into your build pipeline while keeping costs to a minimum.

Photo by Sven Brandsma on Unsplash

Introduction

Test automation can be compared in some ways to a “KAPLA” tower. It is smooth and easy at the beginning, when there are not so many bricks, but the more you build, the trickier it gets to maintain without making compromises. Typically, the bigger your system gets, the more time it takes to execute your automated tests. As a result, the development feedback loop provided by your tests unfortunately grows longer as well…

UI test execution time increases over time

Many of us test automation engineers have experienced this issue while maintaining and adding new UI tests (or system tests more generally) to a test suite: test execution time keeps increasing over time.

For a long time I thought this was inexorable, and when test execution time started to go through the roof I would take a few measures to mitigate it. Here are a few things one should consider first:

  • Running headless, in the case of UI tests,
  • Parallelizing, where applicable,
  • Optimizing the test paths,
  • Impact analysis to limit the number of test cases to run.

However, some of these options can be very time consuming to implement, and ultimately optimizing may simply not be enough.

For parallelization, you could start by running tests in parallel on the same host, but you would quickly find this approach limited by the resources of the machine you use.

When you reach that point, you will probably turn towards running your tests on distributed servers to spread the load, but this comes at a price and is not always trivial to set up.

This is where Kubernetes comes into action.

Why are we here exactly?

I have been experimenting with Docker and Kubernetes (K8s) in different projects for a couple of years now, and I really enjoy working with them because they make deployment so simple. During that time I also found out that K8s is really practical for load testing and system testing alike.

The main purpose of this article is to show how you can leverage K8s to distribute the load of your system tests simply and efficiently, without spending a fortune on a CI/CD subscription with concurrency options (CircleCI, etc.) or keeping a permanent cluster running.

With the method described here, it is possible to keep the cost as low as $10 per month if implemented properly.

NB: It is worth mentioning a project on GitHub called Zalenium, but that solution still requires an existing K8s cluster, which was out of the question in my case since keeping one around can be expensive. I also wanted to show that a PowerShell script can be very effective for this type of task.

The topic and the tech stack are quite broad, so we will narrow things down as much as possible in order to avoid going sideways. Our tech stack for this example will be the following, but you can obviously substitute any of these according to your own preferences:

  • Azure Kubernetes Service (aka AKS)
  • Azure DevOps Pipelines (CI/CD SaaS)
  • PowerShell Core (create, run, monitor, dispose our K8s cluster)
  • Cypress (UI test framework)

You do not need to know these tools to follow this article, but it will be useful when we get to the code examples.

Put very simply, this is what we want to achieve:

Multiple pods running our tests in parallel against a single web app

For the sake of keeping things simple, this is how we will architect our solution here, but a more advanced and optimal setup would be the following:

Everything is containerized and can scale out as needed

Describing the platforms and frameworks used

The goal for our setup is to be as cost efficient and simple as possible, and the reasoning behind the tech stack above follows from that:

AKS is simply the service I have been using the most, but this obviously works with other services as well. The key here is to create the cluster on the fly in order to limit the cost. Cluster creation is quite fast and only takes a few minutes on AKS.
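As a rough sketch of this on-the-fly creation (the resource group, cluster name, region, node count and VM size below are placeholders, not the exact values from my project), the cluster side essentially wraps a few Azure CLI calls:

```powershell
# Placeholder names: adjust resource group, cluster name and sizes to your needs.
$resourceGroup = "rg-ui-tests"
$clusterName   = "aks-ui-tests"

# Create the resource group, then a small cluster sized for this test run only.
az group create --name $resourceGroup --location westeurope
az aks create `
    --resource-group $resourceGroup `
    --name $clusterName `
    --node-count 3 `
    --node-vm-size Standard_DS2_v2 `
    --generate-ssh-keys

# Fetch the credentials so kubectl can talk to the new cluster.
az aks get-credentials --resource-group $resourceGroup --name $clusterName --overwrite-existing
```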

Azure DevOps is where our pipeline will be executed, but any other CI/CD platform is fine too.

PowerShell Core is cross platform and is already available on most of the VMs used by the different build platforms (Azure DevOps, GitHub, etc.), where Azure CLI and kubectl also come pre-installed; we will need both. We will use all of these to accomplish the following tasks:

  • Manage the cluster creation,
  • Deploy the resources and tests,
  • Monitor the execution,
  • Gather the results from the tests,
  • Return the results back to the pipeline.

Cypress will obviously run our UI tests. Since it is based on JavaScript, there is no need to compile the code or build a new Docker image for each new commit. Instead, we can easily inject any test or configuration files directly into the cluster before the execution takes place.

It is also possible to do this with Selenium, provided you use a language binding that does not need to be compiled (Python or JavaScript, for example).
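To illustrate why no image rebuild is needed, here is a minimal local sketch using the official cypress/included image (the tag is just an example). In the cluster, the same image simply receives the spec files from the shared volume instead of a local mount:

```powershell
# Run the same image the cluster will use, with the local project mounted in.
# No build step: the spec files and cypress.json are simply made visible to the container,
# and the image's entrypoint (cypress run) executes them headlessly.
docker run --rm -v "${PWD}:/e2e" -w /e2e cypress/included:6.2.1
```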

All in all, this is how the different platforms and frameworks are organized:

How the different technologies used fit together

Folder structure


Here is how the solution is decomposed:

  • The config folder contains a JSON file with all the data needed (cluster parameters, parameter substitutions, endpoints…). Depending on the stage we're in, we might want different parameters (local, dev, test…); a small loading sketch follows this list.
  • The cypress folder contains all our UI tests and support functions.
  • The k8s folder gathers all the yaml templates that will be used to deploy our UI tests properly and retrieve the results.
  • The powershell folder has our scripts, needed to create, deploy, run and dispose of our UI tests.
  • Optionally, if your Cypress tests rely on external libraries that are not contained in the official image, you might need to build an image yourself and host it in a Docker registry.
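As an illustration of how the config and k8s folders are consumed (the file names, placeholder tokens and JSON keys below are hypothetical, not the exact ones from the repository), rendering one pod manifest per spec file can be as simple as this:

```powershell
# Load the stage-specific configuration (hypothetical file name and keys).
$config = Get-Content -Path "./config/test.json" -Raw | ConvertFrom-Json

# Read a pod template and render one manifest per spec file: one pod per test file.
$template = Get-Content -Path "./k8s/test-pod.template.yaml" -Raw
New-Item -ItemType Directory -Path "./k8s/generated" -Force | Out-Null

foreach ($spec in Get-ChildItem -Path "./cypress/integration" -Filter *.spec.js) {
    $manifest = $template `
        -replace "__TEST_FILE__", $spec.Name `
        -replace "__IMAGE__", $config.cypressImage

    Set-Content -Path "./k8s/generated/$($spec.BaseName).yaml" -Value $manifest
}
```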

PowerShell Core Scripting: The orchestrator

Our PowerShell script is the entity that will manage every action related to the UI tests within our pipeline. Below is a representation of all the steps it goes through.

The sequence of steps followed by the PowerShell script

Our scripts are organized into different parts, each one having a very distinct role.

PowerShell script architecture

Our main script sequences the different tasks by calling the different controllers, and the controllers rely on our two services (AKS and kubectl) whenever needed.
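As a rough skeleton (the function names are illustrative, not the actual ones from the repository), the main script reads more or less like this:

```powershell
param(
    [ValidateSet("All", "Create", "Run", "Dispose")]
    [string] $Mode = "All",
    [string] $ConfigPath = "./config/test.json"
)

# Hypothetical controller functions, dot-sourced from the powershell folder.
. ./powershell/controllers.ps1

$config = Read-Configuration -Path $ConfigPath

if ($Mode -in @("All", "Create")) { New-TestCluster -Config $config }

if ($Mode -in @("All", "Run")) {
    Publish-TestResources -Config $config   # PV/PVC, helper pod, test pods
    Wait-TestCompletion   -Config $config   # poll pod status
    Export-TestReports    -Config $config   # copy reports back to the build VM
}

if ($Mode -in @("All", "Dispose")) { Remove-TestCluster -Config $config }
```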

Typically, here are the purposes fulfilled by each one of them:

  • The configuration controller picks up our configuration file and sets up the data in memory.
  • The cluster controller creates our cluster, with its node count and VM size, and takes care of disposing of it.
  • The run controller deploys the UI test pods along with the other resources needed, and queries the cluster for pod status to know when to stop (see the polling sketch after this list).
  • The report controller retrieves the results generated by the different pods and copies them onto the local build VM.
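For the run controller, the monitoring part can be as simple as polling the pod phases with kubectl; a minimal sketch, assuming the test pods carry an app=ui-tests label:

```powershell
# Poll the test pods until they have all succeeded, or at least one of them has failed.
do {
    Start-Sleep -Seconds 15
    $phases = kubectl get pods -l app=ui-tests -o "jsonpath={.items[*].status.phase}"
    $phases = $phases -split '\s+'

    $failed    = $phases -contains "Failed"
    $completed = ($phases | Where-Object { $_ -ne "Succeeded" }).Count -eq 0
} until ($failed -or $completed)

if ($failed) { Write-Host "At least one test pod failed." }
```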

Cluster architecture and interaction with the host VM

Before showing some code, let us first illustrate how our K8s cluster is deployed, the different events happening there, and the interaction back with the host VM.

Cluster architecture, events, and interaction with the host VM
  1. Initialization of the configuration in memory, creation of the yaml templates for our UI test pods, and substitution of the required variables. Our strategy here consists of creating one pod per test file.
  2. Cluster creation in AKS based on our configuration file (node count, VM size, resource group and cluster name). If a cluster is already available, we can skip this step.
  3. Deployment of the infrastructure resources needed to expose the test files and to store the reports so we can retrieve them later on. For that we will use a Persistent Volume (PV) / Persistent Volume Claim (PVC) and a simple helper pod.
  4. Export of our test files to the cluster to make them accessible (see the sketch after this list).
  5. Deployment of our test pods, mounting our PVC(s) so they can access the test files.
  6. Execution of the tests by the test pods until one of them fails or all of them have completed. Each pod creates an individual report for the series of tests it ran.
  7. Once the pods have completed (failed or passed), use the helper pod created earlier to copy all the generated reports back to our local host.
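Steps 4 and 7 essentially boil down to kubectl cp calls against the helper pod mounting the shared volume; a minimal sketch, assuming the helper pod is named file-exchange and the volume is mounted under /data:

```powershell
# Step 4: push the test files into the shared volume through the helper pod.
kubectl cp ./cypress file-exchange:/data/cypress

# ... the test pods run, each writing its own report under /data/reports ...

# Step 7: pull all generated reports back to the build VM once the pods are done.
kubectl cp file-exchange:/data/reports ./reports
```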

Pipeline

The pipeline itself is pretty straightforward and can be used in different ways depending on the context. The simplest form runs the whole flow sequentially: cluster creation, test execution and cluster disposal, as shown below:

However, the PowerShell script is flexible enough to execute only specific parts.

If you are building a Docker image of your system under test at the same time and plan to deploy it in your K8s cluster as well, then you can start the cluster creation asynchronously and build your Docker image in parallel. This will save you precious minutes.

Likewise, there is no need to wait until the cluster is completely disposed of; this step can also be executed asynchronously.

The PowerShell script I built supports different modes (All, Create, Run, Dispose) in order to do just that.
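Asynchronous creation and disposal are straightforward with the Azure CLI, since most long-running AKS operations accept a --no-wait flag. A minimal sketch, reusing the placeholder names from earlier:

```powershell
# Kick off cluster creation without blocking, so the docker build can run in parallel.
az aks create --resource-group $resourceGroup --name $clusterName `
    --node-count 3 --node-vm-size Standard_DS2_v2 --generate-ssh-keys --no-wait

# ... build and push the image of the system under test here ...

# Block only when we actually need the cluster.
az aks wait --resource-group $resourceGroup --name $clusterName --created

# At the end of the pipeline, disposal does not need to be awaited either.
az aks delete --resource-group $resourceGroup --name $clusterName --yes --no-wait
```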

Optimized pipeline sequence

Results

Once the pipeline has run and the results have been exported and published, we can see the results as if the tests had been executed directly on the host VM. Cypress offers numerous export and logging possibilities, which we will not detail here, but it is important to understand that reporting is no more limited than when running the tests on the host VM.

Conclusion

I hope that this article helped you understand the possibilities that Kubernetes and Docker offer in terms of test execution time.

In the end, part of this solution simply relies on bridging to AKS from your PowerShell script, so it should not be too complicated to adapt it to other Kubernetes cloud providers, provided you can write your own service to perform all the required actions.

All the files (except the UI tests, which were part of a customer project) are available on my GitHub, and you are free to use or reuse them as you see fit in your own context (under the GNU GPL v3 license).

If properly implemented, this solution can drastically reduce the execution time of your system tests. As a reminder, a cluster takes about 4–5 minutes to create (on AKS at least), and the gain in test execution time will depend on different factors such as:

  • The number of nodes
  • The VM size in your cluster
  • Whether your system tests can run in parallel
  • The compute capacity of your SUT, or the possibility to scale it out



Written by Clément Joye

IT professional with hands on automation, test and development. I’m always on the lookout for new paths and love to build solutions and systems from scratch.
