How We Automated Our Engineering Skills Test for Hundreds of Applicants

Brian Cooksey / June 21, 2018

Implement an algorithm that computes a minimum spanning tree for a weighted undirected graph.

Don’t worry, we aren’t going to ask you a question like that. At Zapier, we think putting candidates through a crummy interview process is a poor way to assess them and doesn’t help us make Zapier an appealing place to work. Instead, we want our interview process to reflect our culture and to evaluate people on the kind of work they will actually do.

To accomplish this, part of our interview process involves what we call a “skills test.” This interview focuses on assessing specific knowledge and skills needed to do the role. We present the candidate with one or more challenges that require those skills and use a rubric to determine how well they did. The goal is an unbiased evaluation of what a candidate can do.

In engineering roles, the skills test is a coding challenge. There are a lot of bad ways to do them. There are the on-the-spot whiteboard approaches that can be terrifying for candidates. There are the programmatic online tests that can be frustrating when they won’t accept what you wrote. Questions can range from the trivial “add these numbers together” to the data structure example at the top of this post. We looked at a variety of available options and did not like most of them. What we wanted was a solution that approximated what an engineer actually does day-to-day. So, we built our own little system.

Our approach is a test that simulates a real-world challenge an engineer could face on the job. We aim to have the challenge use part of our tech stack, in a language we use to code, and try to strike a balance between direction and openness that allows folks to show off what they can do. It’s a take-home test, so candidates are free to use any resources they normally would on the job. The goal of the test is to get beyond “can they solve the puzzle?” We want to know if they can problem solve and write good code.

To power this system, we need a few things. First, we need a project scoped so it can be completed in the desired timeframe. Then we need an automated way to set up a candidate with the challenge. Lastly, we need a way for the candidate to start the test on their own, when they are ready.

How We Design a Coding Challenge

Creating a challenge is a delicate task. Our aim is a project that candidates can complete in four hours. That duration is enough to have a non-trivial problem to solve, but not so big that candidates are overwhelmed by the time demand. A candidate has time to read the project spec, think up a solution, build the solution, and possibly do some polish at the end.

To get to that finely scoped project, we start with a scenario that relates to the kind of work we do day-to-day. We write out a spec with this format:

    High-level problem description (1-2 sentences)

    Deliverables section that explains what output we want

    Specifications section with a bullet list of requirements that the solution must fulfill. These are very specific things like “Should include at least one test” or “Should validate that the input is less than 500 bytes.”

    Instructions about time limit and what to do when finished

We have multiple engineers go over the spec to make sure it is clear. In a take-home format, the candidate can’t ask clarifying questions, so the list of requirements and the deliverable need to be precise. In cases where different approaches are acceptable, we explicitly say the candidate can decide how to handle it. As a catchall, we also say that if a candidate runs into something we didn’t cover, they should make a judgement call and document the assumption they made.

Once we have a solid spec, we test it to make sure the project is the right size. This happens in two rounds. First, those who wrote the spec try following it to see how quickly they can complete the project. As a rule of thumb, if an engineer with prior knowledge of what the project entails can knock it out in an hour, four hours is reasonable for a person seeing it for the first time. The second round confirms this guess by having one or two engineers who haven’t seen the spec before do the project as though they were candidates. Based on the results, we tweak the spec and retest until the project fits the time limit.

After the spec is finalized, we write setup instructions to accompany it. This is a document we give candidates before they take the test. It gives generic info on what the candidate needs to install or configure to be ready to take the test. We do this so that a candidate can spend all their time during the test on the project. It also gives candidates a chance to do a bit of research if they aren’t familiar with parts of the tech stack we ask them to use.

How We Conduct the Test

Each challenge we create lives in a GitHub repo. In that repo are two markdown files, one with the setup instructions and another with the project spec. We use a convention of naming the files SETUP_DOC.md and PROJECT_SPEC.md. The repos themselves follow a naming convention of “skill-test-” followed by the name of the challenge. The consistent formats let us do some nifty automation.

To send a challenge to a candidate, we input their GitHub username and the repo name of one of the challenges into a form. This form submission kicks off a Zap that does the following (sketched in code after the list):

  1. Creates a private repo with a random name
  2. Adds the candidate and up to four Zapier employees as collaborators on the repo
  3. Copies the setup instructions into the repo
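
The Zap itself is assembled from stock Zapier steps rather than custom code, but if it helps to picture what happens under the hood, here is a rough sketch of the equivalent GitHub REST API calls in Python. The org name, token environment variable, and function name are hypothetical; treat this as an approximation of the three steps above, not our exact implementation.

```python
import base64
import os
import secrets

import requests

GITHUB_API = "https://api.github.com"
ORG = "example-org"  # hypothetical; stands in for the hiring org
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}


def create_candidate_repo(candidate, challenge_repo, reviewers):
    """Create a private repo with a random name, add collaborators,
    and copy the setup instructions from the challenge repo."""
    # 1. Create a private repo with a random name
    repo_name = f"test-{secrets.token_hex(4)}"
    requests.post(
        f"{GITHUB_API}/orgs/{ORG}/repos",
        headers=HEADERS,
        json={"name": repo_name, "private": True},
    ).raise_for_status()

    # 2. Add the candidate and the internal reviewers as collaborators
    for username in [candidate, *reviewers]:
        requests.put(
            f"{GITHUB_API}/repos/{ORG}/{repo_name}/collaborators/{username}",
            headers=HEADERS,
        ).raise_for_status()

    # 3. Copy SETUP_DOC.md from the challenge repo into the new repo
    setup = requests.get(
        f"{GITHUB_API}/repos/{ORG}/{challenge_repo}/contents/SETUP_DOC.md",
        headers=HEADERS,
    )
    setup.raise_for_status()
    content = base64.b64decode(setup.json()["content"])  # contents API returns base64
    requests.put(
        f"{GITHUB_API}/repos/{ORG}/{repo_name}/contents/SETUP_DOC.md",
        headers=HEADERS,
        json={
            "message": "Add setup instructions",
            "content": base64.b64encode(content).decode(),
        },
    ).raise_for_status()

    return repo_name
```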

At this point, the candidate has access to the repo and can browse the setup file. They have as much time as they want to get their machine configured. Many candidates wait until a free evening or weekend to continue. When ready, they follow a link in the setup file which takes them to a form.

To start the test, the candidate submits that form with their unique repo name. This kicks off another Zap (again, sketched below) which:

  1. Copies the project spec into their repo
  2. Notifies us in Slack that the test has started
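
As before, this is a Zap built from stock steps, but a comparable sketch in Python (same hypothetical org and token as above, plus an assumed Slack incoming webhook URL) looks something like this:

```python
import base64
import os

import requests

GITHUB_API = "https://api.github.com"
ORG = "example-org"  # hypothetical org name
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}


def start_test(candidate_repo, challenge_repo, slack_webhook_url):
    """Copy the project spec into the candidate's repo and notify Slack."""
    # 1. Copy PROJECT_SPEC.md from the challenge repo into the candidate's repo
    spec = requests.get(
        f"{GITHUB_API}/repos/{ORG}/{challenge_repo}/contents/PROJECT_SPEC.md",
        headers=HEADERS,
    )
    spec.raise_for_status()
    content = base64.b64decode(spec.json()["content"])
    requests.put(
        f"{GITHUB_API}/repos/{ORG}/{candidate_repo}/contents/PROJECT_SPEC.md",
        headers=HEADERS,
        json={
            "message": "Add project spec",
            "content": base64.b64encode(content).decode(),
        },
    ).raise_for_status()

    # 2. Notify the hiring channel via a Slack incoming webhook
    requests.post(
        slack_webhook_url,
        json={"text": f"Skills test started: https://github.com/{ORG}/{candidate_repo}"},
    ).raise_for_status()
```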

With spec in hand, the test has officially begun. The candidate has four hours to spin up a PR with their solution. After the final commits are in, they have time to go back and add a README or details in the PR description to explain what they built and how to run it.

We watch the Slack channel that gets notified and queue up grading of the tests as they come in. We have a rubric we run through that helps us grade as fairly and consistently as possible. Wherever we can, we make the criteria in the rubric specific, observable facts we can gather from the code.

After grading is complete, we typically schedule a follow-up call to talk about the project. This discussion gives us a chance to explore the thought process behind the code that the candidate wrote. We dig into assumptions made, things to improve with more time, and ask clarifying questions.

Our Path to Automated Skills Tests

The automated skills test is the result of two years of iteration on our interview process. Back in the day, we did not ask for any code. Sure, it was the least stressful option for candidates, but not seeing any examples of the type of work you are going to ask somebody to do 30+ hours a week is not a good way to evaluate skill.

Next we introduced a 45-minute pair-programming session. We would hop on a video call with a candidate, provide the spec, and then cut them loose to code up a solution. They were free to ask any questions at any time. They could work solo with video and audio turned off or ask us to work alongside them via screen sharing.

The live version had some benefits. Candidates could ask questions, like they would in the real world. It was also a smaller ask on their time. The downside was that it was very stressful. If the candidate didn’t jump into coding after a few minutes of digesting the spec, they were not likely to complete it. It also resulted in rushed code that wasn’t necessarily indicative of what they could accomplish on the job.

The automated test gives candidates a little more breathing room. They still have to be efficient, but they have time to plan and to polish code. They can show what they can really do. They also show they can accurately read and digest a spec, as the spec is a bit larger now that there is more time. Overall, it’s a better signal of strong candidates.

An important step in this iteration process is to ask candidates for feedback. When we are trying out a new test, we ask candidates in follow-up calls if they have suggestions on how to improve it. We probe to find out if the test was enjoyable, if the spec was clear, and if the time felt reasonable. This feedback has been great for honing our tests. Anything we can do to help candidates succeed and enjoy the interview process as much as possible is a win.

So far, we’ve been really happy with the automated test and plan to continue using it for the foreseeable future. Perhaps you can see it for yourself by applying for one of our open engineering roles.

