Automated Testing of APIs

Richard Brown

20 September 2021 - 6 min read

Why Automated Testing?

In the last few years, test automation has matured from being the cool new trend to something that every self-respecting software team uses. New roles have sprung up around it, such as ‘Automation Engineer’ and ‘Software Developer in Test’.

Why has automated testing become such a major part of the software development lifecycle? Its main selling point is around the automation of regression testing. As software releases have become more frequent, the overhead of manual regression testing has become unsustainable. Even a relatively small application can have a regression pack that takes hours or even days to run.

Releasing multiple times per week can mean either having several people regression testing full-time or cutting corners on the testing (and therefore running the risk of shipping regression bugs on a regular basis). Neither approach is recommended.

An automated regression pack can cut the regression testing time down to minutes. Many software products now release to production several times a day, to get features into the hands of users as fast as possible. It would not be possible to do this with any kind of confidence were it not for test automation.

Automated tests do come with costs, though: the time taken to write them, and then to maintain them as the application evolves, must be taken into account.

Why Focus on APIs?

Traditionally, test automation focused on user interfaces. UI automation in web applications generally meant Selenium. Selenium is an open-source framework that can be used to write code in a variety of languages (such as Java, JavaScript, Python and C#) to programmatically interact with a web browser.

As successful as Selenium has been, there remain numerous horror stories shared by developers and testers. Automated UI tests can be extremely brittle: a test that passes one minute can fail the next.

The rise of APIs in the last decade has brought with it a compelling alternative to UI automation: automated API tests. These tests target the API (or backend) of a system directly, rather than interacting with it via the user interface. There are a few reasons why API automation can make sense:

  • Most applications are like icebergs; the bulk of their code is hidden behind an API in the backend and, therefore, API tests can cover most of the complex business logic.
  • API tests don’t suffer from the same brittleness as UI tests; they have none of the timing issues prevalent in UI tests and therefore repeated test runs against the same version of the API will generally give you the same result.
  • API tests are much faster to run than UI tests (25 times faster on one of our projects).
  • APIs are a collection of independent, stateless actions, which means that there is the flexibility to run tests in any order.

There are trade-offs involved: by focusing entirely on API tests you will of course not catch any UI bugs, so a balance is usually advisable – something like an 80:20 split in favour of API tests is a decent rule of thumb.

API Automation Tools

There are various tools out there to support API testing, such as Postman and SoapUI/ReadyAPI. We decided to write our own test framework rather than use an off-the-shelf product, the reasons for which are discussed below.

Writing a Test Framework

Writing an API automation framework is not something to be taken lightly; writing a library to be used by non-programmers is hard. However, the benefits of writing API tests as test methods in code, using a framework over which we have total control, were big enough to outweigh the challenges. There were additional benefits in empowering the QA team to suggest and even implement features and improvements in the framework, giving them total ownership of API testing.

There were a few things that we had to achieve with the framework to support the kind of tests that we needed to write:

  • Various forms of authentication, using JWTs, cookies and API keys
    • This included re-using ‘login sessions’ across multiple tests for performance reasons
  • Support for a ‘test script’ approach, so the tests could be written top-down like a script
    • This made the framework much more approachable for our QA team
  • Support for calling multiple APIs and using multiple sets of user credentials, even within the same test
    • This enabled end-to-end tests involving multiple users and APIs to be written
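The login-session re-use mentioned above can be sketched as a simple token cache: log in once per role, then hand the cached token to every subsequent call. The `Role` enum and the login delegate below are illustrative stand-ins, not our framework's actual API.

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical sketch of re-using login sessions across tests:
// a token is fetched once per role and cached for later calls.
public enum Role { Admin, ProductMaintainer }

public class SessionCache
{
    private readonly ConcurrentDictionary<Role, string> _tokens = new();
    private readonly Func<Role, string> _login;

    public SessionCache(Func<Role, string> login) => _login = login;

    // Returns a cached token for the role, logging in only on first use.
    public string GetToken(Role role) => _tokens.GetOrAdd(role, _login);
}
```

Each test then asks the cache for a token rather than performing a fresh login, which is where the performance win comes from.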

A typical test method looks something like the code below (written in C#). ‘Roles’ are defined in configuration and associated with a set of user credentials; each API call can then be made as a different user if needed. As a rule, data created in the test should be deleted when it ends, to maintain as clean a database as possible.
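A role-to-credentials mapping of the kind described might look something like the configuration fragment below. The property names and structure are illustrative assumptions, not our framework's actual schema:

```json
{
  "Roles": {
    "Admin": {
      "Username": "admin@example.com",
      "Password": "<from secret store>",
      "AuthType": "Jwt"
    },
    "ProductMaintainer": {
      "Username": "maintainer@example.com",
      "Password": "<from secret store>",
      "AuthType": "Jwt"
    }
  }
}
```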

public void Edit_Product_Changes_The_Product_Name()
{
    // Create and save a product
    var productSetupResult = SetupWithSave<Product>(Resources.Product, Roles.Admin);
    var productId = int.Parse(productSetupResult.Response.Content);

    // Register the product for deletion when the test finishes
    DeleteWhenTestEnds(productId, Resources.Product, Roles.Admin);

    // Edit the product
    var product = productSetupResult.Entity;
    product.Id = productId;
    const string newProductName = "Test product";
    product.Name = newProductName;
    var editResponse = Edit(productId, product, Resources.Product, Roles.ProductMaintainer);

    // Assert the edit response
    Assert.AreEqual(HttpStatusCode.OK, editResponse.StatusCode);

    // Get the edited product and assert the name is correct
    var editedProductResponse = Get<Product>(productId, Resources.Product, Roles.ProductMaintainer);
    Assert.AreEqual(newProductName, editedProductResponse.Entity.Name);
}
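The deferred deletion used by `DeleteWhenTestEnds` can be implemented with a cleanup stack that the test teardown drains in reverse order. The sketch below is a minimal illustration under that assumption, not our framework's actual implementation:

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of deferred cleanup: deletions are recorded during
// the test and executed in reverse order when the test finishes.
public class CleanupRegistry
{
    private readonly Stack<Action> _cleanups = new();

    // Called by helpers such as a DeleteWhenTestEnds method to record a deletion.
    public void Register(Action cleanup) => _cleanups.Push(cleanup);

    // Called from the test teardown; last-registered entities are
    // deleted first, so dependent records go before their parents.
    public void RunAll()
    {
        while (_cleanups.Count > 0)
            _cleanups.Pop().Invoke();
    }
}
```

Running cleanups last-in-first-out matters when test data has foreign-key relationships: a child record registered after its parent is deleted before it.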

It’s important to note that we do still use tools like Postman where appropriate; a one-size-fits-all approach is not recommended.

Upskilling our QA Team

Once we decided to write our own test framework, the main challenge was upskilling our QA team, as most of them had little or no coding experience. By coincidence we had recently started a training programme for graduate software developers (which has now grown into our Academy), so we were able to use a lot of the training materials we had written for the graduate programme to teach C# programming and automated testing skills.

As well as programming skills, concepts like authentication, HTTP methods and build pipelines were important, so the team gained a general upskilling across a variety of technical areas. This has led to improved knowledge around other forms of technical testing that target APIs, such as performance testing and security testing.

Conclusion

Writing automated tests that target the API of an application, rather than its user interface, can lead to more stable and more performant tests, with minimal trade-off in terms of test coverage. The upskilling of QA engineers is an additional benefit worth considering.


Richard Brown is the head of engineering at Audacia, responsible for steering the technical direction of the company and maintaining standards across development and testing.