Run in Livebook

Creating testable HTTP API client code in Elixir


As part of my day job I’ve had to create a couple of HTTP API clients, and after some experimentation I’ve ended up with a code structure that I like and that I feel makes testing code that uses a JSON API client easier.

Making actual HTTP requests to 3rd-party services while running the tests can be difficult because of credential availability on the machine that runs the tests, lack of a testing environment to make calls against, slowness, and so on.

One solution is to mock the outgoing requests on the client while making sure that mocking the calls is as easy as possible - but there’s always some pain involved. There are other valid approaches to this kind of testing like exvcr but I feel like mocking balances out code coverage, usability and ease of setup fairly well.

The libraries to use are:

  • Mox for mocking function calls
  • Knigge for simplifying the plumbing around mocking
  • Finch for making the actual HTTP requests
  • Jason for parsing JSON text responses
  • Plug for some convenience functions when composing URLs


By using the illustrious Repo Contest™ as an excuse, this post will show how to structure, use and test a client for the Github API. The endpoint to be used will be /orgs/{org}/repos. The code snippets also feature typespec definitions; these are not required for the code to work, but I feel they can be quite useful for documentation and editor auto-completion purposes, so I tend to put them in.

To make things more interesting this post is also written as a livebook which means that readers should be able to tinker with it and run it (click on the badge up top).


Let’s start by installing the dependencies first.

Mix.install([
  {:mox, "~> 1.0"},
  {:knigge, "~> 1.4"},
  {:finch, "~> 0.9"},
  {:jason, "~> 1.2"},
  {:plug, "~> 1.12"}
])

To be able to mock the API client one has to create a behaviour module. With the help of some knigge magic, this module is also going to double as the public interface for the client.

Using a behaviour in theory allows for multiple implementations of the same interface. In practice, however, usually just one real implementation is needed, which is why I prefer to place the behaviour module and the implementation module in a single file.

In a mix project the file would be under lib/github_api.ex or lib/my_project/github_api.ex depending on the type of project.

Test setup

For the tests to work in Livebook one needs to set the correct Mix environment, otherwise ExUnit, Knigge and Mox will not work together as expected.

This is normally handled for you when working within a mix-generated Elixir project; you can safely ignore the following Mix.env/1 function call in that case.

Mix.env(:test)


Let’s start small by defining the callback and some dummy implementation code for the API behaviour that returns a single string. This is done to make it easier to verify with a quick test that the mock works and the function calls are indeed intercepted.

defmodule MyProject.GithubApi do
  # The otp_app config key is used to look up the implementation dynamically.
  # By default the lookup happens at runtime for the TEST env build and at
  # compile time for all other builds.
  use Knigge, otp_app: :my_project, default: MyProject.GithubApiImpl

  # The function to implement. As we're not sure yet what the
  # return type should be, we can leave it at any() for now
  @callback get_repos_for_org(org_name :: String.t()) :: any()
end

defmodule MyProject.GithubApiImpl do
  @behaviour MyProject.GithubApi

  @impl true
  def get_repos_for_org(_org_name) do
    "hello"
  end
end

To get some immediate feedback that we’re on the right track here, let’s quickly call the GithubApi.get_repos_for_org/1 function and see that the Knigge plumbing works as expected.


As you can see, knigge allows us to call the function directly from the module that defines the behaviour, which is pretty convenient and makes the call-site code easier to follow.
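To make the convenience concrete, here is a rough hand-written sketch of the plumbing that Knigge generates for us (the Sketch.* module names are made up for illustration; the real macro additionally handles compile-time lookup and missing-implementation warnings):

```elixir
# A sketch of the delegation Knigge spares us from writing by hand.
defmodule Sketch.GithubApi do
  @callback get_repos_for_org(String.t()) :: any()

  # Without Knigge we would maintain one delegating function per callback,
  # resolving the implementation module from the application environment.
  def get_repos_for_org(org_name) do
    impl().get_repos_for_org(org_name)
  end

  defp impl do
    # Falls back to the default implementation when no override is configured,
    # mirroring the otp_app/default options we passed to `use Knigge`.
    Application.get_env(:my_project, __MODULE__, Sketch.GithubApiImpl)
  end
end

defmodule Sketch.GithubApiImpl do
  @behaviour Sketch.GithubApi

  @impl true
  def get_repos_for_org(_org_name), do: "hello from impl"
end

Sketch.GithubApi.get_repos_for_org("any-org")
```

Callers only ever see the behaviour module; swapping the implementation (e.g. for a mock) is just a matter of changing the application environment.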

Let’s also set up the name of the mock implementation module to be used in the tests. This usually goes in the test/test_helper.exs file.

# inside test_helper.exs

# This configures Mox to use the MockGithubApi module as the API implementation in tests.
# Using a short and descriptive name for the mock module works best and makes life easier.
Mox.defmock(MockGithubApi, for: MyProject.GithubApi)

# we also need to instruct knigge to use the mock in all tests by using the same project name as
# we did when configuring knigge in the behaviour module with the otp_app: option.
# This can also be moved inside config/test.exs as `config :my_project, MyProject.GithubApi, MockGithubApi`
Application.put_env(:my_project, MyProject.GithubApi, MockGithubApi)

Let’s define a test now that can be kept around as a sanity check that the mock setup is working correctly. To do that let’s create the file test/github_api_test.exs and add some test code that verifies the setup.

defmodule MyProject.GithubApiTest do
  use ExUnit.Case, async: true
  import Mox

  alias MyProject.GithubApi

  # This will ensure all expected mocks have been verified by the time each test is done.
  # See the Mox documentation for nuances.
  setup :verify_on_exit!

  test "verify mock works" do
    MockGithubApi
    |> expect(:get_repos_for_org, fn _ -> "hello from mock" end)

    # make sure that the interception works as expected (sanity check)
    # If we got it wrong we will get back "hello", not "hello from mock"
    assert "hello from mock" == GithubApi.get_repos_for_org("some-org")
  end
end

And let’s run the test:

Modelling the response

By looking at the Github docs for the /orgs/{org}/repos API call we can get a taste of what the response looks like. But there’s nothing like the real thing, so let’s see it in practice by making a request with curl (usually available on macOS and Linux) for a list of repos under the “elixir-lang” organization:

{response, 0 = _exit_code} =
  System.cmd("curl", ["-s", ""])


As you can see there’s quite a bit of info in there, so for the purposes of this post let’s pretend we’re only interested in each repo’s name.

We’re going to define a new struct that models the response under lib/github_api/get_repos_response.ex, or alternatively lib/my_project/github_api/get_repos_response.ex if that matches your project setup better.

defmodule MyProject.GithubApi.GetReposResponse do
  # we're defining a repo here as a simple map with just one key. Should we need more,
  # it might make more sense to create a dedicated struct for it
  @type repo :: %{
          name: String.t()
        }

  @type t :: %__MODULE__{
          repos: [repo()]
        }

  defstruct repos: []

  @spec new([map()]) :: t()
  def new(json_response) do
    repos =
      Enum.map(json_response, fn repo ->
        %{name: Map.get(repo, "name")}
      end)

    %__MODULE__{repos: repos}
  end
end

Writing the actual implementation

We can now go back to the API module and use the proper signature:

defmodule MyProject.GithubApi do
  use Knigge, otp_app: :my_project, default: MyProject.GithubApiImpl

  # 1. add an alias for the response module
  alias __MODULE__.GetReposResponse

  # 2. specify the return signature separately to avoid cramming everything together
  @type repo_response :: {:ok, GetReposResponse.t()} | {:error, any()}

  # 3. change the return type signature for the callback
  @callback get_repos_for_org(org_name :: String.t()) :: repo_response()
end

Which means we’re now at a point where we can write the actual implementation. We’ll be using Finch for actually making the HTTP requests, so we need to start its Supervisor first.

Finch requires that it’s started as part of a supervision tree. Normally we would be adding a line similar to {Finch, name: GithubApi} inside the application.ex file.

{:ok, _pid} = Finch.start_link(name: GithubApi)
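For reference, outside of Livebook that Finch child would live in the application’s supervision tree; a minimal sketch of a typical application.ex (module and supervisor names assumed here, not taken from a real project) could look like:

```elixir
defmodule MyProject.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # Finch's supervisor, registered under the same name that the
      # client implementation later passes to Finch.request/2
      {Finch, name: GithubApi}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyProject.Supervisor)
  end
end
```

The name given to Finch here must match the name used at the request call sites.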

Recall from above that the implementation lives in the same file as the API behaviour definition module; I only split them here for the sake of this post’s flow.

defmodule MyProject.GithubApiImpl do
  @behaviour MyProject.GithubApi

  alias MyProject.GithubApi.GetReposResponse
  alias Plug.Conn.Query

  @impl true
  def get_repos_for_org(org_name) do
    # no reason in returning too many results for this example code
    query = Query.encode(page: 1, per_page: 10)

    "/orgs/#{org_name}/repos?#{query}"
    |> build_request()
    |> Finch.request(GithubApi)
    |> parse_as_json()
    |> case do
      {:ok, json} ->
        {:ok,}

      error ->
        error
    end
  end

  defp build_request(path) do
    # this is where authorization and/or other headers would be added
    # which are usually common among requests for a particular API
    request_url = "{path}", request_url, [
      {"Content-Type", "application/json"}
    ])
  end

  # for JSON-based APIs this triplet of functions for parsing Finch responses
  # is usually all it takes (and is copy/pasted across projects verbatim for the most part)
  defp parse_as_json({:ok, %Finch.Response{status: 200, body: body}}) do
    Jason.decode(body)
  end

  defp parse_as_json({:ok, %Finch.Response{status: error_code, body: body}}) do
    {:error, {:http, error_code, body}}
  end

  defp parse_as_json({:error, _exception} = error), do: error
end

Let’s try it out to see that we’re on the right track.

# Mox is already running interference from above, so for this bit the simplest workaround
# is to call the *Impl module directly (none of this in your project code though!)
MyProject.GithubApiImpl.get_repos_for_org("elixir-lang")

All right, looks like it works. So how do we write any meaningful tests for this?

Devising tests

Since we’re intercepting requests to the MyProject.GithubApi module functions, it’s clear that we won’t be able to test the plumbing code for building the request and handling the HTTP response unless it’s moved to a separate module. In my opinion, however, that’s only a few lines of rather straightforward code that I’m comfortable leaving untested, and the tradeoff against a more complicated test setup is worth it.

OK, so we know what we cannot test. What can we test though?

  • we can test the parsing code. It’s not much in this case, but it could be in others, so we’ll give it a go
  • the real value, though, is in the ease of testing code that makes use of our API calls. Your code can remain free of runtime module lookups, defdelegates and all that, while at the same time allowing genuinely useful tests.

Let’s start with testing the response parsing as a warm-up. Create a new file under test/github_api/get_repos_response_test.exs:

defmodule MyProject.GithubApi.GetReposResponseTest do
  use ExUnit.Case, async: true

  alias MyProject.GithubApi.GetReposResponse

  @valid_json_response [
    %{"name" => "repo A", "some_ignored_key" => "foo"},
    %{"name" => "repo B", "another_ignored_key" => "bar"}
  ]

  describe "new/1" do
    test "can parse a valid json response" do
      assert %{
               repos: [
                 %{name: "repo A"},
                 %{name: "repo B"}
               ]
             } =
    end
  end
end
… and let’s run that:

Perfect. Time for the good stuff.

Using the client

Up until this point we’ve written code for accessing Github APIs but haven’t yet used it anywhere. Let’s change that by introducing the Repo Contest, a no-holds-barred affair that pits Github organizations against each other, using the indisputable evidence provided by Github APIs, for the right to be nominated ‘the best org ever’.

Let’s start off with the business logic for the contest and create a new file lib/repo_contest.ex to add the following:

defmodule MyProject.RepoContest do
  @moduledoc """
  Functions for facilitating a Repo Contest (TM)
  """

  alias MyProject.GithubApi

  @doc """
  Decides if an organization can participate in the Repo Contest.
  """
  def can_participate?(organization) do
    # an organization needs at least one public repository
    # to be considered for participation
    case GithubApi.get_repos_for_org(organization) do
      {:error, _reason} -> false
      {:ok, %{repos: []}} -> false
      {:ok, _} -> true
    end
  end

  @doc """
  Declares a winner (or a draw) between 2 competing organizations
  in the Repo Contest. The winner is declared based on the number of
  public repos. If the counts are the same it's a draw.
  """
  def head_to_head(org_1, org_2) do
    count_1 = org_1 |> GithubApi.get_repos_for_org() |> count_repos()
    count_2 = org_2 |> GithubApi.get_repos_for_org() |> count_repos()

    cond do
      count_1 > count_2 -> org_1
      count_1 < count_2 -> org_2
      true -> :draw
    end
  end

  defp count_repos({:error, _reason}), do: 0
  defp count_repos({:ok, %{repos: repos}}), do: length(repos)
end

As you can see, even the logic for participation eligibility can be a little tricky, not to mention the head_to_head function internals. Let’s write some tests under test/repo_contest_test.exs to make sure the functions work as expected.

defmodule MyProject.RepoContestTest do
  use ExUnit.Case, async: true

  import Mox

  alias MyProject.GithubApi.GetReposResponse
  alias MyProject.RepoContest

  setup :verify_on_exit!

  describe "can_participate?/1" do
    test "cannot participate if there are no repos" do
      # as before, we need to define the API response through the mock,
      # and the way we structured the code should make it easy enough to do.
      MockGithubApi
      |> expect(:get_repos_for_org, fn _ ->
        {:ok, %GetReposResponse{repos: []}}
      end)

      # and now we can call the function under test without having to manually change
      # any parts of our code to accommodate mocking/testing
      refute RepoContest.can_participate?("foo-org")
    end

    test "cannot participate if requests to Github fail" do
      # another test just to emphasize the point
      # 1. prepare the mock
      # 2. run code that invokes the mocked functions
      # 3. assert on the results
      MockGithubApi
      |> expect(:get_repos_for_org, fn _ ->
        {:error, "boom"}
      end)

      can_participate? = RepoContest.can_participate?("foo-org")

      refute can_participate?
    end

    test "can participate if any repos are present" do
      # as is customary, the implementation of this test
      # is left as an exercise to the reader :)
    end
  end
end
Again, let’s run the tests:

The logic of the head_to_head function involves making 2 calls to the API. We could have easily parallelized these calls through Task.async or some other means, but that might mean the API calls would be made in some undefined order. The way to make the tests more robust, then, is to pattern match on the parameters of the substitute callback so that the results are always consistent.

Continuing inside test/repo_contest_test.exs

defmodule MyProject.RepoContestTest do
  use ExUnit.Case, async: true

  import Mox

  alias MyProject.GithubApi.GetReposResponse
  alias MyProject.RepoContest

  setup :verify_on_exit!

  # ... previous describe/2 block from above omitted ...

  describe "head_to_head/2" do
    test "when the first org has more repos than the other, the first org wins" do
      # in all cases where we set up our mocks above we used '_' to ignore the
      # parameters that the mock was called with.
      # Here though we'll be explicitly matching on the org names
      # to make sure that we get the expected results. Also note a new argument
      # with the value of '2' passed to the expect/4 function,
      # which is the number of times we expect the :get_repos_for_org call to be made
      MockGithubApi
      |> expect(:get_repos_for_org, 2, fn
        "first-org" ->
          {:ok, %GetReposResponse{repos: [%{name: "repoA"}, %{name: "repoB"}]}}

        "second-org" ->
          {:ok, %GetReposResponse{repos: [%{name: "repoC"}]}}
      end)

      assert "first-org" == RepoContest.head_to_head("first-org", "second-org")
    end

    # All of the following tests are again left as an exercise for the reader.
    # I've always wanted to say that but with Livebook it even feels... justified :)

    test "when the request for the first org's repos fails it counts as 0 public repos" do
      # implementation missing
    end

    test "when the second org has more repos than the other, the second org wins" do
      # implementation missing
    end

    test "when both orgs have the same number of repos it's a draw" do
      # implementation missing
    end
  end
end
Run the new tests:

…and that concludes all the code in this post!


I hope that the examples above made the approach clear, but let me quickly recap.

To build and test an HTTP client:

  • We started with a behaviour module that served as the entry point for calls to the HTTP service endpoint.
  • We then used knigge to route the calls to the appropriate module without having to introduce a facade module.
  • We only mocked the API calls; all other functions that depended on these API calls could then be tested without any changes to their internals.

As stated at the start of this post, I find this setup pretty convenient and I hope it proves the same for you too, dear reader. Until next time!

This article reads much better thanks to my colleagues Sebastian S. and Frerich R., who offered their suggestions before publishing.