

When we talk about “leader roulette” at Church, it’s usually a commentary on the inconsistency from bishop to bishop (and other leaders, too, but bishops are the most common).

Decades ago, I was in charge of a quality department in a call center. Ensuring that all our quality monitors evaluated calls the same way was a never-ending quest. While some aspects of a customer service interaction are objective (did you use the greeting, use the customer’s name, give accurate information), other aspects are more subjective (was your tone friendly, did you fully solve the problem, did you prevent future calls on this issue). We found that some call monitors were simply more severe than others, while others were too lax. We would even find an occasional call monitor who wasn’t doing her or his job at all, instead just randomly marking the form with average scores and hoping not to get caught. A few even had prejudices against specific employees or accents. Some made innocent mistakes about product knowledge, incorrectly marking a call as wrong when they themselves were the ones who were mistaken.

The goal was that most of our call evaluations would be consistent and accurate. Every week, the leadership team would score five calls together, discussing and coming to consensus on a score for each. Then all team members had to independently score those same calls, and those with a pattern of outlier scores would receive coaching. There were also “all monitor” sessions to review and discuss calls with team members, giving them a chance to work through their own thought process.
