We learn from our support how our customers use the application and what the pitfalls are. From every case we proactively try to find a feature that no one asked for but that has great value. We change the application so that the user falls into the pit of success.
This is how we came up with Recursive-Fakes. By watching how users write their tests, we saw that a lot of their time is spent writing a fake, running the test, and seeing it fail because the fake was part of a call chain [cat().tail().wag()], then doing this recursively (hence the name) until all calls are satisfied.
We had supported the call-chain scenario for quite some time, but we didn't understand the pain our users go through to set up their tests. After Recursive-Fakes was created, the user falls into the pit of success and gets it right the first time. Once a method is faked, all the chains from that method are faked too. This feature was so strong that it became the default, and both of the other frameworks have partially implemented it too.
In order to do this, we have to leave our comfort zone in support. We cannot just close a case once the customer is satisfied, even though we would really love to. We have to take responsibility that it won't happen again: that users will fall into the pit of success and not need to look it up in the documentation or search the forums. To do this we must think of a feature that will accomplish this and feed it to our backlog. This takes loads of creativity, and although it is difficult, we must do it to excel.
There is a problem with measuring support. On one hand, as the product gets better, we should be getting less and less support. On the other hand, because more people are using the product, we should expect more support cases. So I would like to see a trend of our product getting better, but we have to make sure that there are enough people in support.
The current important metrics are:
- Cases that one support rep can handle in a week
This is a derivative of the time it takes to fix an issue. This metric tells us whether we need to add another support rep.
We expect this to rise as we get better at answering issues and as our product evolves and becomes more maintainable.
- Cases open per week
Are we getting more support cases?
Other measurements for the quality of support are:
- Time to fix issue
Our customers want their issues solved, so we should measure the time it takes us to solve them.
We expect this metric to drop as our product becomes better.
- Time to first treatment
We know how agonizing it is to send in a support issue and not get any answer. We care about our customers and strive to answer them as quickly as possible.
- Number of pit-of-success features
This is how we can tell whether our support is effective at turning cases into customer features, and not just answering the same issues over and over.
Although it seems easy, these metrics are actually hard to extract from most issue-tracking systems. Still, they can be used for integrity management: each rep can 'commit' to the number of cases they will handle, the time to first treatment, and the number of pit-of-success features they can create.
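If your tracker can export raw case records, the metrics above are straightforward to compute yourself. A small sketch, assuming a hypothetical record with rep, opened, first_response, and closed timestamps (field names are my own, not from any particular tracking system):

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean


@dataclass
class Case:
    """One closed support case; field names are illustrative."""
    rep: str
    opened: datetime
    first_response: datetime
    closed: datetime


def support_metrics(cases):
    """Compute time-to-fix, time-to-first-treatment, and per-rep
    caseload from a list of closed cases."""
    hours = lambda delta: delta.total_seconds() / 3600
    return {
        "avg_hours_to_fix": mean(hours(c.closed - c.opened) for c in cases),
        "avg_hours_to_first_treatment": mean(
            hours(c.first_response - c.opened) for c in cases),
        "cases_per_rep": {
            rep: sum(1 for c in cases if c.rep == rep)
            for rep in {c.rep for c in cases}},
    }


cases = [
    Case("alice", datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 10),
         datetime(2024, 1, 2, 9)),
    Case("bob", datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 12),
         datetime(2024, 1, 1, 21)),
]
metrics = support_metrics(cases)
```

The per-rep caseload divided by the number of weeks in the export window gives the cases-per-rep-per-week figure; the pit-of-success feature count has to be tagged manually when a case produces a backlog item.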
What metrics do you use?