Support Bliss: Inbox zero in Help Scout
Like most operations at Zapier, our process for measuring support has gone through several iterations. With over 150,000 users who can create over 600,000 potential integrations (such as saving Gmail attachments to Box or creating a lead in RelateIQ from a Wufoo form), we face a unique challenge that requires data-driven decisions to deliver the best customer support experience. Today, I'd like to share the story behind how we measure support, what we measure, and how those measurements drive our decisions.
New Support Recap Steers Decisions
When I joined Zapier in the fall of 2012, zero measurement was in place for our support volume and performance. We'd occasionally glance at reports generated by Help Scout, our support tool, but there was nothing beyond those charts and graphs.
By year's end, we realized we needed something more systematic to guide our decisions. The result is what we call the Support Recap, a Google Doc that pulls information from the Help Scout reports, giving that data a template and a permanent home. Since we implemented them, the recaps have been instrumental in shaping a number of large initiatives our team has undertaken.
In the second quarter of last year, for example, we noticed a steady climb in support volume. Though this matched our intuition, seeing the sometimes shocking numbers gave us the confidence to put team resources toward managing that problem. In this case, it meant having the entire team hunker down for a week to write 278 pages of documentation, which had been nearly non-existent before.
The new recap also helped us evaluate the decision afterwards: support volume immediately declined.
Chart from 2013 Zapier Annual Report
Hively Shows Us Happy Customers
In hindsight, our next iteration in measuring support was shockingly simple: we needed to know how happy our customers were! Much like with the support recap, we had an intuition that we were meeting our users' needs, but we wanted data to be more certain.
Enter Hively, a service that adds a snippet to your email signature so customers can rate their interactions with support. We added Hively to our support emails near the end of 2013, and now users can give us feedback with a single click on a very simple question: How did I do?
We add this to every email we send so we get user feedback at each point of interaction, not just when their support ticket has reached a conclusion.
The Recap Gets Revamped
After another round of reflection earlier this year, we decided to redo the Support Recap once again to better align it with the type of data we wanted to track. The original recap closely mirrored Help Scout's default reporting, and though that certainly had value, we repurposed the information to better understand the data at our fingertips. So what changed?
Decreased focus on conversations - A conversation here refers to an email thread added to our support inbox. Conversations are a valuable signal, but the way we use Help Scout makes them a noisy metric. We often found ourselves saying "conversations moved in this direction, but…", so though we still track that number, it's no longer the baseline for support volume it once was.
Decreased focus on individual contributions - The first support recap put a breakdown of support volume by team member front and center, complete with comparisons to the previous month. Some level of individual breakdown is instructive (for example, seeing our developers take on an increasing load of the more technical tickets), but it wasn't the table we wanted to dominate the recap.
Increased focus on replies and response time - In combination with the previous two changes, replies and response time are now featured more prominently. Replies give us a good measure of overall support volume, since a conversation with 10 replies is obviously more work than a conversation with one reply. Used alongside Help Scout's count of "resolved" conversations, replies also measure our efficiency as a support team: we want to get people solutions with as little back-and-forth as possible, and replies let us track both. As for response time, it's always been something we look at, and now a bucketed measure of how many users are getting replies within X hours is front and center (see the sketch after this list).
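To make those metrics concrete, here's a minimal sketch of how they could be computed from a conversation export. The file name, column layout, and bucket boundaries are all hypothetical stand-ins, not Help Scout's actual export format.

```python
import csv
from collections import Counter

# Hypothetical export: one row per conversation, with a reply count,
# a resolution status, and minutes to first response.
# (File name and column names are illustrative, not Help Scout's schema.)
with open("conversations.csv", newline="") as f:
    rows = list(csv.DictReader(f))

total_replies = sum(int(r["replies"]) for r in rows)
resolved = [r for r in rows if r["status"] == "resolved"]

# Efficiency: fewer replies per resolved conversation means less
# back-and-forth to get a user to a solution.
replies_per_resolution = total_replies / len(resolved)
print(f"Replies per resolved conversation: {replies_per_resolution:.1f}")

# Bucketed response time: how many users heard back within X hours.
# (Bucket edges here are arbitrary placeholders.)
buckets = Counter()
for r in rows:
    hours = float(r["first_response_minutes"]) / 60
    if hours <= 1:
        buckets["within 1h"] += 1
    elif hours <= 4:
        buckets["within 4h"] += 1
    elif hours <= 24:
        buckets["within 24h"] += 1
    else:
        buckets["over 24h"] += 1

for label in ("within 1h", "within 4h", "within 24h", "over 24h"):
    print(f"{label}: {buckets[label] / len(rows):.0%} of conversations")
```

The same two numbers drop straight into the recap: one line for efficiency, one small table for the response-time buckets.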
The new and improved Support Recap looks like this:
The Future of Support Measurement
While we're happy with the improvements we've made to support measurement this past year, there's always more to do. Here are a couple of areas where we're looking to get more or better data:
Time-specific data - Help Scout gives us data in aggregate, which is good for getting an overall picture. In the future, we'd like to drill into more specific timeframes, not only to see how well we do when we're actively monitoring support during the day, but also to make sure that users who contact us during off-hours aren't getting a terrible experience that gets lost in the volume of daytime conversations.
Leveraging Hively data - Through Hively's Help Scout integration, we can pretty easily tie that feedback to specific conversations. This gives us a ton of opportunity to measure user satisfaction in a more granular way and then act on it. Right now, we see user satisfaction at a high level; with some more investigation, we could more easily identify the recurring issues that lead to unhappy users (the sketch after this list illustrates one rough approach to both of these ideas).
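As a rough illustration of both ideas, here's a sketch that segments first-response times by hour of day and ties Hively ratings back to conversation tags. Everything in it is assumed: the CSV exports, the column names, the semicolon-separated tags, and the 1-to-5 rating scale are placeholders rather than the real Help Scout or Hively formats.

```python
import csv
from collections import defaultdict
from datetime import datetime
from statistics import median

# Hypothetical exports; file and column names are illustrative only.
with open("conversations.csv", newline="") as f:
    conversations = list(csv.DictReader(f))
with open("hively_ratings.csv", newline="") as f:
    ratings = list(csv.DictReader(f))

# 1. Time-specific data: median first response bucketed by the hour a
#    conversation arrived, to spot a worse off-hours experience.
by_hour = defaultdict(list)
for c in conversations:
    created = datetime.fromisoformat(c["created_at"])
    by_hour[created.hour].append(float(c["first_response_minutes"]))

for hour in sorted(by_hour):
    print(f"{hour:02d}:00  median first response: {median(by_hour[hour]):.0f} min")

# 2. Hively data: tie each rating to its conversation's tags to surface
#    recurring issues behind unhappy users (rating scale assumed 1-5,
#    tags assumed semicolon-separated).
tags_by_id = {c["id"]: c["tags"].split(";") for c in conversations}
unhappy_tags = defaultdict(int)
for r in ratings:
    if int(r["rating"]) <= 2:
        for tag in tags_by_id.get(r["conversation_id"], []):
            unhappy_tags[tag] += 1

for tag, count in sorted(unhappy_tags.items(), key=lambda kv: -kv[1]):
    print(f"{tag}: {count} unhappy ratings")
```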
That's how we measure support at Zapier. Does it match up with how your team does the same? I'd love to hear how you handle support measurement, especially any differences from our workflow, in the comments below!