Custom client-side bindings for Phoenix LiveView
At BetterDoc we’ve been using Phoenix LiveView for a little over a year now, and it’s gradually replacing big parts of our software. If you’ve been using LiveView as well, chances are you’ve come to appreciate the programming model and all the thoughtful conveniences on offer, not least of which are the client-side bindings. These bindings can act as an entry point for client-to-server communication or help sprinkle in a bit of client-side reactivity, and all of this works without us actually having to write any ‘proper’ JavaScript.
Developing a feature for elixir-ls
Thanks to BetterDoc’s 20% time for personal development[1], I recently had the opportunity to add a little feature to elixir-ls, the language server that provides the smarts behind autocompletion (among other things) for the vscode-elixir plugin.
This is a post to jot down what I learned while figuring out how to develop the feature, and the workflow I ended up using. Maybe it helps the next person who wants to scratch an itch but doesn’t know where to start ;)
How the plugin works
The protocol
Language servers that cater to a specific language tend to be written in the same language they provide the smarts for[2]. For VSCode (and other editors) to be able to communicate with a language server in a manner that is not tied to a particular language, the server needs to implement an API on top of the rest of its code according to the LSP protocol specification, a JSON-RPC based protocol.
The flow is of the request-response persuasion: as the user takes actions inside the editor, the editor makes requests to the language server, providing it with details about what just happened (file opened, text typed etc.), and the language server responds with the appropriate suggestions (if any).
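To make the shape of that conversation concrete, here is a minimal, hand-written illustration of such an exchange (the payloads are made up for this example; a real captured request appears later in this post):

```
--> {"jsonrpc":"2.0","id":1,"method":"textDocument/hover","params":{"textDocument":{"uri":"file:///tmp/foo.ex"},"position":{"line":0,"character":3}}}
<-- {"jsonrpc":"2.0","id":1,"result":{"contents":"documentation for whatever sits at line 0, character 3"}}
```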
vscode-elixir plugin specifics
The plugin is packaged as two distinct pieces of functionality:
- a thin layer written in TypeScript that implements the VSCode-specific plugin callbacks: server startup and teardown, registering the appropriate file types, returning a client that propagates JSON-RPC messages to the server, and so on
- the actual language server, which is started as a separate OS process: an instance of the BEAM VM running the language server’s Elixir code
The TypeScript layer, which runs inside the editor process, then opens a pipe to the language server process (IPC) to exchange the JSON-RPC messages of the LSP protocol.
Preparing the development environment
The Contributing section of the plugin’s README file has instructions on how to package and launch the plugin, but I TLDRd[3] since the process looked a bit involved and I was looking for a faster feedback loop.
On macOS, VSCode seems to install plugins under `~/.vscode/extensions/`. Looking inside `~/.vscode/extensions/jakebecker.elixir-ls-0.9.0/elixir-ls-release`, there’s a bunch of `.ez` files (packaged BEAM modules) and a few script files, `launch.sh` and `launch.bat` among them (go Windows!). The last line inside the script reads:

```shell
exec elixir --erl "+sbwt none +sbwtdcpu none +sbwtdio none" -e "$ELS_SCRIPT"
```
`$ELS_SCRIPT` evaluates to `ElixirLS.LanguageServer.CLI.main()` (it’s set in the `real_language_server.sh` file), so now we know how the language server code is shipped and what the entry point into the language server is. Nice.

I cloned the language server code under `~/code/elixir-lsp/elixir-ls`, and since I needed the plugin to use the code from that directory instead of the pre-packaged `.ez` archives, I went ahead and made the following modifications to the `launch.sh` script[4]:

```diff
- exec elixir --erl "+sbwt none +sbwtdcpu none +sbwtdio none" -e "$ELS_SCRIPT"
+ cd ~/code/elixir-lsp/elixir-ls
+ exec elixir --erl "+sbwt none +sbwtdcpu none +sbwtdio none" -S mix run -e "$ELS_SCRIPT" --no-halt
```
Now the plugin would load my local code using mix, which means that after making a change the plugin can be reloaded from inside the editor using Command Palette (cmd+shift+p) -> Developer: Restart Extension Host, which will force mix to recompile the project before starting the language server again.

Tip: you can see the plugin info messages by opening the Output pane with Command Palette -> View: Toggle Output and then selecting `ElixirLS - my_project` from the dropdown (you need to have an `.ex` file open in your editor view). In case something breaks while making changes, you can use it to debug the issue.

At this point I knew the entry point, but I still didn’t know which specific code file I’d need to change. So I decided to take another shortcut.
Elixir devs tend to think about pattern matching when trying to match incoming messages to actions, and I decided to pursue that avenue: find out the message being sent by the editor when auto-completing code, and use that to discover the code responsible for handling `defmodule ...` (the feature I wanted to add). I went back into the launch script and made another change:

```diff
- exec elixir --erl "+sbwt none +sbwtdcpu none +sbwtdio none" -S mix run -e "$ELS_SCRIPT" --no-halt
+ exec tee -a ./elixir-lsp.log | elixir --erl "+sbwt none +sbwtdcpu none +sbwtdio none" -S mix run -e "$ELS_SCRIPT" --no-halt | tee -a ./elixir-lsp.log
```
*nix power! By using `tee` I could now intercept both incoming and outgoing messages and log them into a file before forwarding them on to the language server.

Discovering the correct code file
After creating a new project and a new `foo.ex` file, I typed `defmod` and looked at what messages were exchanged with `tail -f elixir-lsp.log`. Here’s what the request looks like:

```
Content-Length: 215

{"jsonrpc":"2.0","id":86,"method":"textDocument/completion","params":{"textDocument":{"uri":"file:///Users/xxx/code/my_project/lib/my_project/foo.ex"},"position":{"line":1,"character":6},"context":{"triggerKind":3}}}
```
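And the answer rides back over the same pipe. A completion response looks roughly like the following; this one is reconstructed from the LSP specification and abridged, not copied from my log:

```
Content-Length: ...

{"jsonrpc":"2.0","id":86,"result":{"isIncomplete":false,"items":[{"label":"defmodule","kind":15,"insertTextFormat":2,"insertText":"defmodule $1 do\n\t$0\nend"}]}}
```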
Looking for matches on the method `textDocument/completion` leads to the `ElixirLS.LanguageServer.Providers.Completion.completion/4` function. Now it’s only a matter of adding the necessary code[5]… et voilà :)
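In case you’d like to reproduce that search, a plain text search over the cloned repository is enough. Here’s a sketch (the exact file layout of elixir-ls may have changed between versions):

```shell
cd ~/code/elixir-lsp/elixir-ls
grep -rn "textDocument/completion" --include="*.ex" .
```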
Outro

That was all. I was particularly happy with how amenable to introspection[6] the elixir-ls setup is, which allows people totally unfamiliar with the project to zero in on the right place to add their contributions. I hope you enjoyed reading this little adventure!

Thanks goes out to my colleagues Alex L. Z. and Frerich R. for reviewing this post.
1. At BetterDoc every Friday is dedicated to advancing one’s knowledge of any subject of their choosing, as long as it is related to programming. At the end of the day we share what we learned with everyone else.
2. Which I guess makes sense, because the set of potential contributors to the language server for language X is probably a subset of the users of language X, and developing the server in X practically removes the language barrier for contributors.
3. I was eager to find the place to add my changes, and reading diligently through the server code and the docs was not what I wanted to start with, because that would have taken the wind out of me.
4. If you’re doing this on Windows, changing `launch.bat` alone was not enough for me: I had to delete the `.ez` files (but leave the elixir-sense archive in!) before my code would load.
5. As always, the trivial task of adding the appropriate code is left as an exercise to the reader.
6. Another cool feature I noticed was how `IO.inspect/2` messages get converted into JSON-RPC info messages by the authors of the plugin. There’s probably a lesson about process group leaders to be learned there, but for the moment I was just happy I could take advantage of it.
Our contribution to the Hex package manager
Two days ago, Hex v1.0 was released. This is a major achievement, and every reason to celebrate! We’re delighted that we were able to contribute to this success by implementing a new feature which is now readily available!
Creating testable HTTP API client code in Elixir
Intro
As part of my day job I’ve had to create a couple of HTTP API clients, and after some experimentation I’ve ended up with a code structure that I like and that I feel makes testing code that uses a JSON API client easier.
Making actual HTTP requests to 3rd-party services while running the tests can be difficult: credentials may not be available on the machine that runs the tests, there may be no testing environment to call, the requests can be slow, and so on.
One solution is to mock the outgoing requests on the client while making sure that mocking the calls is as easy as possible, though there’s always some pain involved. There are other valid approaches to this kind of testing, like exvcr, but I feel that mocking balances out code coverage, usability and ease of setup fairly well.
The libraries to use are:
- Mox for mocking function calls
- Knigge for simplifying the plumbing around mocking
- Finch for making the actual HTTP requests
- Jason for parsing JSON text responses
- Plug for some convenience methods when composing URLs
Overview
By using the illustrious Repo Contest™ as an excuse, this post will show how to structure, use and test a client for Github APIs. The endpoint to be used will be /orgs/{org}/repos. The code snippets also feature typespec definitions; although not required for the code to work, I feel they can be quite useful for documentation and editor auto-completion purposes, so I tend to put them in.
To make things more interesting this post is also written as a livebook which means that readers should be able to tinker with it and run it (click on the badge up top).
Code
Let’s start by installing the dependencies first.
```elixir
Mix.install([
  {:mox, "~> 1.0"},
  {:knigge, "~> 1.4"},
  {:finch, "~> 0.9"},
  {:jason, "~> 1.2"},
  {:plug, "~> 1.12"}
])
```
To be able to mock the API client one has to create a `behaviour` module. With the help of some `knigge` magic, this module is also going to double as the public interface for the client.

Using a behaviour in theory allows for multiple implementations of the same interface. In practice, however, usually just one real implementation is needed; that’s why I prefer to place the behaviour module and the implementation module in a single file.
In a mix project the file would be under `lib/github_api.ex` or `lib/my_project/github_api.ex`, depending on the type of project.

Test setup
For the tests to work in livebook one needs to set the correct `Mix` environment, otherwise `ExUnit`, `Knigge` and `Mox` will not work together as expected.

This is normally handled for you when working within a mix-generated Elixir project; you can safely ignore the following `Mix.env/1` function call in that case.

```elixir
Mix.env(:test)
```
Let’s start small by defining the callback and some dummy implementation code for the API behaviour that returns a single string. This is done to make it easier to verify with a quick test that the mock works and the function calls are indeed intercepted.
```elixir
defmodule MyProject.GithubApi do
  # The otp_app config key is used to look up the implementation dynamically.
  # By default the lookup happens during runtime for the TEST env build and at
  # compile time for all other builds.
  use Knigge, otp_app: :my_project, default: MyProject.GithubApiImpl

  # The function to implement. As we're not sure yet what the
  # return type should be, we can leave it at any() for now.
  @callback get_repos_for_org(org_name :: String.t()) :: any()
end

defmodule MyProject.GithubApiImpl do
  @behaviour MyProject.GithubApi

  def get_repos_for_org(_org_name) do
    "hello"
  end
end
```
To get some immediate feedback that we’re on the right track here, let’s quickly call the `GithubApi.get_repos_for_org/1` function and see that the `Knigge` plumbing works as expected.

```elixir
MyProject.GithubApi.get_repos_for_org("some-org")
```
As you can see, `knigge` allows us to call the function directly from the module that defines the `behaviour`, which is pretty convenient and makes the call-site code easier to follow.

Let’s also set up the name of the mock implementation module to be used in the tests. This usually goes in the `test/test_helper.exs` file.

```elixir
# inside test_helper.exs
ExUnit.start()

# This configures Mox to use the MockGithubApi module as the API implementation in tests.
# Using a short and descriptive name for the mock module works best and makes life easier.
Mox.defmock(MockGithubApi, for: MyProject.GithubApi)

# We also need to instruct knigge to use the mock in all tests, by using the same project name
# as we did when configuring knigge in the behaviour module with the otp_app: option.
# This can also be moved inside config/test.exs as
# `config :my_project, MyProject.GithubApi, MockGithubApi`
Application.put_env(:my_project, MyProject.GithubApi, MockGithubApi)
```
Let’s now define a test that can be kept around as a sanity check that the mock setup is working correctly. To do that, let’s create the file `test/github_api_test.exs` and add some test code that verifies the setup.

```elixir
defmodule MyProject.GithubApiTest do
  use ExUnit.Case, async: true

  import Mox

  alias MyProject.GithubApi

  # This will ensure all expected mocks have been verified by the time each test is done.
  # See the Mox documentation https://hexdocs.pm/mox/Mox.html for nuances.
  setup :verify_on_exit!

  test "verify mock works" do
    MockGithubApi
    |> expect(:get_repos_for_org, fn _ -> "hello from mock" end)

    # Make sure that the interception works as expected (sanity check).
    # If we got it wrong we will get back "hello", not "hello from mock".
    assert "hello from mock" == GithubApi.get_repos_for_org("some-org")
  end
end
```
And let’s run the test:
```elixir
ExUnit.run()
```
Modelling the response
By looking at the Github docs for the /orgs/{org}/repos API call we can get a taste of what the response looks like. But there’s nothing like the real thing, so let’s see it in practice by making a request with `curl` (usually available on macOS and Linux) for a list of repos under the “elixir-lang” organization:

```elixir
{response, 0 = _exit_code} =
  System.cmd("curl", ["-s", "https://api.github.com/orgs/elixir-lang/repos?per_page=2"])

Jason.decode!(response)
```
As you can see there’s quite a bit of info in there, so for the purposes of this post let’s pretend we’re only interested in each repo’s `name`.

We’re going to define a new struct that models the response under `lib/github_api/get_repos_response.ex`, or alternatively `lib/my_project/github_api/get_repos_response.ex` if that matches your project setup better.

```elixir
defmodule MyProject.GithubApi.GetReposResponse do
  # We're defining a repo here as a simple map with just one key. Should we need more,
  # it might make more sense to create a dedicated struct for it.
  @type repo :: %{
          name: String.t()
        }

  @type t :: %__MODULE__{
          repos: [repo()]
        }

  defstruct repos: []

  def new(json_response) do
    repos = Enum.map(json_response, fn repo -> %{name: Map.get(repo, "name")} end)

    %__MODULE__{
      repos: repos
    }
  end
end
```
Writing the actual implementation
We can now go back to the API module and use the proper signature:

```elixir
defmodule MyProject.GithubApi do
  use Knigge, otp_app: :my_project, default: MyProject.GithubApiImpl

  # 1. add an alias for the response module
  alias __MODULE__.GetReposResponse

  # 2. specify the return signature separately to avoid cramming everything together
  @type repo_response :: {:ok, GetReposResponse.t()} | {:error, any()}

  # 3. change the return type signature for the callback
  @callback get_repos_for_org(org_name :: String.t()) :: repo_response()
end
```
Which means we’re now at a point where we can write the actual implementation. We’ll be using `Finch` for actually making the HTTP requests, so we need to start its `Supervisor` first.

Finch requires that it’s started as part of a supervision tree. Normally we would be adding a line similar to `{Finch, name: GithubApi}` inside the `application.ex` file; in this livebook we can simply start it directly:

```elixir
{:ok, _pid} = Finch.start_link(name: GithubApi)
```
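For reference, here is a minimal sketch of what that would look like in a regular project; the `MyProject.Application` module and its supervisor options are assumptions made up for this illustration:

```elixir
defmodule MyProject.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # start Finch under the name our API client expects
      {Finch, name: GithubApi}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyProject.Supervisor)
  end
end
```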
Recall from above that the implementation lives in the same file as the API behaviour definition module; I only split them here for the sake of this post’s flow.
```elixir
defmodule MyProject.GithubApiImpl do
  @behaviour MyProject.GithubApi

  alias MyProject.GithubApi.GetReposResponse
  alias Plug.Conn.Query

  def get_repos_for_org(org_name) do
    # no reason to return too many results for this example code
    query = Query.encode(page: 1, per_page: 10)

    "/orgs/#{org_name}/repos?#{query}"
    |> build_request()
    |> Finch.request(GithubApi)
    |> parse_as_json()
    |> case do
      {:ok, json} -> {:ok, GetReposResponse.new(json)}
      error -> error
    end
  end

  defp build_request(path) do
    # this is where authorization and/or other headers would be added,
    # which are usually common among requests for a particular API
    request_url = "https://api.github.com#{path}"

    Finch.build(:get, request_url, [
      {"Content-Type", "application/json"}
    ])
  end

  # for JSON-based APIs this triplet of functions for parsing Finch responses
  # is usually all it takes (and is copy/pasted across projects verbatim for the most part)
  defp parse_as_json({:ok, %Finch.Response{status: 200, body: body}}) do
    Jason.decode(body)
  end

  defp parse_as_json({:ok, %Finch.Response{status: error_code, body: body}}) do
    {:error, {:http, error_code, body}}
  end

  defp parse_as_json({:error, _exception} = error), do: error
end
```
Let’s try it out to see that we’re on the right track.
```elixir
# Mox is already running interference from above, so for this bit the simplest workaround
# is to call the *Impl module directly (none of this in your project code though!)
MyProject.GithubApiImpl.get_repos_for_org("elixir-lang")
```
All right, looks like it works. So how do we write any meaningful tests for this?
Devising tests
Since we’re intercepting requests to the `MyProject.GithubApi` module functions, it’s clear that unless the plumbing code for building the request and handling the HTTP response is moved to a separate module, we won’t be able to test it. In my opinion, however, that’s only a few lines of rather straightforward code that I’m comfortable leaving untested, and the tradeoff against a more complicated test setup is worth it.

OK, so we know what we cannot test. What can we test though?
- We can test the `GetReposResponse.new/1` parsing code. It’s not much in this case, but in others it could be, so we’ll give it a go.
- But the actual value is in the ease of testing code that makes use of our API calls. Your code can remain clean of runtime module lookups, `defdelegate`s and all that stuff (see the sketch right after this list for the kind of plumbing we’re avoiding), while at the same time making actually useful tests possible.
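To make that second point concrete, here is a rough sketch of the hand-rolled plumbing that knigge saves us from writing. The `MyProject.ManualGithubApi` module below is hypothetical and not part of the example project:

```elixir
# A hypothetical behaviour module without knigge: the dispatch to the
# configured implementation (real module or mock) must be spelled out by hand.
defmodule MyProject.ManualGithubApi do
  @callback get_repos_for_org(org_name :: String.t()) :: any()

  def get_repos_for_org(org_name), do: impl().get_repos_for_org(org_name)

  # look up the implementation module at runtime, falling back to the real one
  defp impl do
    Application.get_env(:my_project, __MODULE__, MyProject.GithubApiImpl)
  end
end
```

With knigge, both the delegation and the lookup are generated for us, which is exactly what keeps the call sites clean.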
Let’s start with testing the response parsing as a warm-up. Create a new file under `test/github_api/get_repos_response_test.exs`:

```elixir
defmodule MyProject.GithubApi.GetReposResponseTest do
  use ExUnit.Case, async: true

  alias MyProject.GithubApi.GetReposResponse

  @valid_json_response [
    %{"name" => "repo A", "some_ignored_key" => "foo"},
    %{"name" => "repo B", "another_ignored_key" => "bar"}
  ]

  describe "new/1" do
    test "can parse a valid json response" do
      assert %{
               repos: [
                 %{name: "repo A"},
                 %{name: "repo B"}
               ]
             } = GetReposResponse.new(@valid_json_response)
    end
  end
end
```
… and let’s run that:

```elixir
ExUnit.run()
```
Perfect. Time for the good stuff.
Using the client
Up until this point we’ve written code for accessing Github APIs but haven’t yet used it anywhere. Let’s change that by introducing the Repo Contest, a no-holds-barred affair that pits Github organizations against each other, using the indisputable evidence provided by Github APIs, for the right to be nominated ‘the best org ever’.
Let’s start off with the business logic for the contest and create a new file `lib/repo_contest.ex` to add the following:

```elixir
defmodule MyProject.RepoContest do
  @moduledoc """
  Functions for facilitating a Repo Contest (TM)
  """

  alias MyProject.GithubApi

  @doc """
  Decide if an organization can participate in the Repo-Contest
  """
  def can_participate?(organization) do
    # an organization needs at least one public repository
    # to be considered for participation
    case GithubApi.get_repos_for_org(organization) do
      {:error, _reason} -> false
      {:ok, %{repos: []}} -> false
      {:ok, _} -> true
    end
  end

  @doc """
  Declares a winner (or a draw) between 2 competing organizations in the
  Repo Contest. The winner is declared based on the number of public repos.
  If the counts are the same it's a draw.
  """
  def head_to_head(org_1, org_2) do
    count_1 = org_1 |> GithubApi.get_repos_for_org() |> count_repos()
    count_2 = org_2 |> GithubApi.get_repos_for_org() |> count_repos()

    cond do
      count_1 > count_2 -> org_1
      count_1 < count_2 -> org_2
      true -> :draw
    end
  end

  defp count_repos({:error, _reason}), do: 0
  defp count_repos({:ok, %{repos: repos}}), do: length(repos)
end
```
As you can see, even the logic for participation eligibility can be a little tricky, not to mention the `head_to_head` function internals. Let’s write some tests under `test/repo_contest_test.exs` to make sure the functions work as expected:

```elixir
defmodule MyProject.RepoContestTest do
  use ExUnit.Case, async: true

  import Mox

  alias MyProject.GithubApi.GetReposResponse
  alias MyProject.RepoContest

  setup :verify_on_exit!

  describe "can_participate?/1" do
    test "cannot participate if there are no repos" do
      # As before, we need to define the API response through the mock,
      # and the way we structured the code should make it easy enough to do.
      MockGithubApi
      |> expect(:get_repos_for_org, fn _ -> {:ok, %GetReposResponse{repos: []}} end)

      # And now we can call the function to test without having to manually change
      # any parts of our code to accommodate mocking/testing.
      refute RepoContest.can_participate?("foo-org")
    end

    test "cannot participate if requests to Github fail" do
      # another test just to emphasize the point:
      # 1. prepare the mock
      # 2. run code that invokes the mocked functions
      # 3. assert on the results
      MockGithubApi
      |> expect(:get_repos_for_org, fn _ -> {:error, "boom"} end)

      can_participate? = RepoContest.can_participate?("foo-org")

      refute can_participate?
    end

    test "can participate if any repos are present" do
      # as is customary, the implementation of this test
      # is left as an exercise to the reader :)
    end
  end
end
```
Again, let’s run the tests:
```elixir
ExUnit.run()
```
The logic of the `head_to_head` function involves making 2 calls to the API. We could have easily parallelized these calls through `Task.async` or some other means, but that might mean the API calls are made in an undefined order. The way to make the tests more robust, then, is to pattern match on the parameters of the substitute callback, so that the results are always consistent.

Continuing inside `test/repo_contest_test.exs`:

```elixir
defmodule MyProject.RepoContestTest do
  use ExUnit.Case, async: true

  import Mox

  alias MyProject.GithubApi.GetReposResponse
  alias MyProject.RepoContest

  setup :verify_on_exit!

  #
  # ... previous describe/2 block from above omitted ...
  #

  describe "head_to_head/2" do
    test "when the first org has more repos than the other, the first org wins" do
      # In all the cases where we set up our mocks above, we used '_' to ignore
      # the parameters that the mock was called with.
      #
      # Here though we'll be explicitly matching on the org names to make sure
      # that we get the expected results. Also note a new parameter with the
      # value of '2' that we pass to the expect/4 function, which is the number
      # of times we expect the :get_repos_for_org call to be made.
      MockGithubApi
      |> expect(:get_repos_for_org, 2, fn
        "first-org" ->
          {:ok, %GetReposResponse{repos: [%{name: "repoA"}, %{name: "repoB"}]}}

        "second-org" ->
          {:ok, %GetReposResponse{repos: [%{name: "repoC"}]}}
      end)

      assert "first-org" == RepoContest.head_to_head("first-org", "second-org")
    end

    # All of the following tests are again left as an exercise for the reader.
    # I've always wanted to say that, but with Livebook it even feels... justified :)

    test "when the request for the first org's repos fails it counts as 0 public repos" do
      # implementation missing
    end

    test "when the second org has more repos than the other, the second org wins" do
      # implementation missing
    end

    test "when both orgs have the same number of repos it's a draw" do
      # implementation missing
    end
  end
end
```
Run the new tests:
```elixir
ExUnit.run()
```
…and that concludes all the code in this post!
Outro
I hope that the examples above made the approach clear, but let me quickly recap.
To build and test an HTTP client:
- We started with a behaviour module that serves as the entry point for the calls to the HTTP service endpoint.
- We then used knigge to route the calls to the appropriate module without having to introduce a facade module.
- We only mocked the API calls; all other functions that depend on these API calls can then be tested without any changes to their internals.
As stated at the start of this post, I find this setup pretty convenient and I hope it proves the same for you too, dear reader. Until next time!
This article reads much better after my colleagues Sebastian S. and Frerich R. offered their suggestions before publishing.
Hiring steps for developers at BetterDoc
There was a lot of discussion about how many interviews a candidate must go through to get hired.
This post describes the four hiring steps we have in place.
Insights from working with a complex but functional setup
Reflections on how a complicated technology setup can be made to work for the team instead of against them.
We're looking for an infrastructure engineer
Our team as well as our software landscape is growing, so it’s time for us to start looking for someone focusing on infrastructure. In this post we want to share a bit of our reasoning behind this.
How Docker Forced Me To Learn More About Linux
This post tells the story of some missing data and how it turned out to be related to how we built our Docker images.
To give you the proper context it will talk about:
- the problem
- the root cause
- how we fixed it
- what we learned from this
Grab a coffee and make yourself comfortable, we’re going iiiiiiiiin!
Our case identifiers
Coming up with a way to identify a case both internally and externally isn’t easy. This post shows how we solved this issue for us.
Friday Projects
Friday is innovation and learning time for the Product Development team — here’s an (incomplete) list of topics team members have tackled in the last months.