Connect the Dots between Enterprise Software Training and Customer Success

Bill Cushard from ServiceRocket Interviews Frontleaf's Tom Krackeler
There are many important issues in the enterprise software training business, including how to develop great training content through solid instructional design, how to use learning technologies (eLearning, LMSs, MOOCs, etc.), and how to measure the effectiveness of training. Of these, training professionals are (in my opinion) weakest at measuring effectiveness.
Sure, we have frameworks that have been around for many years, institutes dedicated to helping us learn and apply those models in our organizations, conference tracks dedicated to measuring the success of training programs, and competency models that highlight analytics skills. But the gap between the demand to measure the impact of our training programs on outcomes and our ability to do it well is critical and must be addressed.
What better way to explore the link between training and customer success than to speak with a leader in the customer success analytics space? Recently, I had the opportunity to sit down with Tom Krackeler, Co-Founder and CEO of Frontleaf. We had an enriching conversation about how analytics can show the link between training programs and customer outcomes (popularly known as customer success). It was a privilege to get Tom's insight, and I wanted to share it with training professionals who are working to ensure their training programs have the biggest possible impact on customer outcomes.
Join me in our discussion about how to connect the dots between training activities, customer usage, and results.
Bill Cushard (Bill):
Training professionals are not known for being great at analytics, but they all want to ensure training has a meaningful impact on the desired results of the organization. How should training professionals look at data as they design and deliver customer training programs?
Tom Krackeler (Tom):
Training professionals see and hear about the impact of their work all the time. Well-trained customers adopt their company's product more quickly and take advantage of its advanced capabilities. The same customers give them feedback through learner satisfaction surveys. There is every reason to believe that good training helps customers get better results, and that this translates directly into higher customer retention and more upsell/cross-sell opportunities.
However, in the companies I've worked with, training teams don't get full credit for their impact on revenue. Sometimes they are viewed narrowly through the lens of a Services P&L, or worse, are considered to be a cost center. Why? Because they don't have the data that proves their full revenue contribution. Or they have the raw data, but are unable to connect the dots between today's training activities and tomorrow's customer renewal or upsell.
So training professionals should begin to look at data analysis as a tool to measure the true downstream revenue impact of their training programs, and to let that guide their upfront curriculum design. In other words, don't use data just to show that customers are satisfied with training offerings and are learning the material. Prove that good training changes customer behaviors in a way that lets them achieve stronger results with your product, and that those results (1) deliver higher renewal and upsell rates, and (2) can in fact be attributed to the training program.
Bill:
I am not a quant or a data scientist, so big data is an intimidating concept. Where should training professionals begin so that it is something attainable?
Tom:
Begin with tracking which customers attend each training class or consume self-service training content. Companies like ServiceRocket make this all very easy, so you can then proceed to figuring out what effect training activities have on usage behaviors and customer retention.
But not so fast. There is a blocking-and-tackling issue that has derailed countless companies in their attempts to generate insights from their customer data. In order to measure the business impact of your training program, you need to begin with customer data integrity.
This means establishing a single unique ID that maps the customers (both companies and individuals) stored in your training solution to their records in your CRM and your own software application (not to mention the systems for customer support, forums, knowledge base, etc.). Not a sexy concept, but it's an absolute must-do, and something a surprising number of companies struggle with.
It also means striking the right balance between making training content super easy for your customers to access, while at the same time keeping the ability (through single-sign-on or other means of authentication) to track exactly who is accessing it. Remember, you can't tie anonymous training visitors back to actual customers and their outcomes.
This might be a good time to take someone from your IT or Engineering team out to lunch — you might need their help getting these things right!
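As a concrete illustration, the cross-system mapping Tom describes can start as simply as joining exported records on a normalized email address. This is only a sketch with made-up record formats and IDs, not any particular system's schema:

```python
# A minimal sketch of cross-system customer ID mapping: join exported
# records on a normalized email address. Field names and IDs here are
# hypothetical, not a real training/CRM/product schema.

def build_customer_index(training_records, crm_records, product_records):
    """Map each normalized email to the IDs it carries in every system."""
    index = {}
    for source, records in [("training", training_records),
                            ("crm", crm_records),
                            ("product", product_records)]:
        for rec in records:
            email = rec["email"].strip().lower()  # normalize before joining
            index.setdefault(email, {})[source + "_id"] = rec["id"]
    return index

# The same person exported from three systems under three different IDs
training = [{"id": "T-001", "email": "Ana@Example.com"}]
crm = [{"id": "0063000", "email": "ana@example.com"}]
product = [{"id": 42, "email": "ana@example.com "}]

index = build_customer_index(training, crm, product)
print(index["ana@example.com"])
# {'training_id': 'T-001', 'crm_id': '0063000', 'product_id': 42}
```

In practice this join often lives inside an analytics tool or customer data platform, but the principle is the same: one canonical key per customer, consistent across every system.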
Bill:
If I want to show evidence that training is having a meaningful impact, what data points should I focus on/collect?
Tom:
It all comes down to knowing what makes your customers successful and determining whether training is moving the needle on those behaviors. Start by picking a customer performance metric for each of your training classes or resources. In other words, identify exactly what customer behaviors it's intended to influence or what customer result it should accelerate.
For each training class, state this goal in the form of an objective measure, such as:
- increasing usage of a particular feature set;
- reducing the time it takes a customer to reach a milestone;
- increasing the percentage of users above an activity threshold; or best of all
- a specific measurement for the very reason the customer bought your product in the first place.
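To illustrate turning raw usage data into goal metrics like these, here is a small sketch; the field names and records are hypothetical, standing in for whatever your product actually logs:

```python
# Sketch of two of the objective measures above (activity threshold and
# time-to-milestone), computed from hypothetical per-user usage records.
from datetime import date

users = [
    {"user": "ana", "signup": date(2024, 1, 1), "milestone": date(2024, 1, 8), "weekly_actions": 34},
    {"user": "ben", "signup": date(2024, 1, 3), "milestone": date(2024, 1, 25), "weekly_actions": 5},
    {"user": "carla", "signup": date(2024, 1, 5), "milestone": None, "weekly_actions": 12},
]

# Percentage of users above an activity threshold
THRESHOLD = 10
pct_active = 100 * sum(u["weekly_actions"] >= THRESHOLD for u in users) / len(users)

# Average days to reach a milestone, among users who reached it
times = [(u["milestone"] - u["signup"]).days for u in users if u["milestone"]]
avg_days_to_milestone = sum(times) / len(times)

print(f"{pct_active:.0f}% above threshold; {avg_days_to_milestone:.1f} days to milestone")
# 67% above threshold; 14.5 days to milestone
```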
Maybe you don't have access to a wide variety of customer engagement metrics. If so, just start with whatever basic usage or results measures you have in place. Bottom line: if you are completely blind to your customers' usage activities, you'll be out of luck trying to figure out how your training programs changed them.
The real power lies in showing the before/after impact of attending training classes or consuming training content on your chosen customer success metrics. You can also calculate the difference in customer performance between groups of customers who participated in training and those who did not.
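A minimal sketch of that trained-versus-untrained comparison, using made-up customer data and a generic `metric` field standing in for whichever measure you chose:

```python
# Compare average performance between customers who took training and
# those who didn't. Names and metric values are invented for illustration.
customers = [
    {"name": "Acme", "trained": True, "metric": 82},
    {"name": "Globex", "trained": True, "metric": 74},
    {"name": "Initech", "trained": False, "metric": 51},
    {"name": "Umbrella", "trained": False, "metric": 60},
]

def mean_metric(group):
    vals = [c["metric"] for c in group]
    return sum(vals) / len(vals)

trained = [c for c in customers if c["trained"]]
untrained = [c for c in customers if not c["trained"]]

lift = mean_metric(trained) - mean_metric(untrained)
print(f"Trained cohort outperforms by {lift:.1f} points")
# Trained cohort outperforms by 22.5 points
```

The same structure works for a before/after comparison: replace the two groups with the same customers' metrics before and after the training date.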
When you're ready to pull everything together, Customer Success analytics solutions like Frontleaf can help you track all of these customer metrics, and then connect the dots between training activities, customer usage and results, and business outcomes like increased retention and upsell.
Bill:
I understand that smart use of customer data can help me figure out whether my training initiatives really mitigate customer churn. But my sixth sense tells me it's not always quite so easy. What are common things that can go wrong when undertaking this?
Tom:
What are you talking about? Nothing ever goes wrong when it comes to software! Okay, scratch that. There are a few things you need to be on the lookout for, even if you do a good job of capturing and organizing all the kinds of customer data we've been talking about.
First, be prepared for a training class that gets terrific customer survey results to turn out to have zero impact on the customer behavior it's meant to influence. I guarantee this will happen (probably more often than is comfortable). Silver lining: now that you are measuring it, you can decide what to do about it.
Also, it's easy to be tricked into seeing training as a revenue rainmaker by external factors that have nothing to do with your training efforts. There is a fancy term for this: a confounding variable. For instance, suppose your Enterprise customers have a much higher retention rate than your SMB customers. The Enterprise customers may also be much more likely to take advantage of your training offering. Does this mean that training is directly responsible for their higher retention? Perhaps not, and that's where it's important to have good analysts at your company or good analytics software at the ready, so you can draw the right insights from the data you're collecting.
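One simple guard against this kind of segment confounder is to compare trained and untrained customers within each segment separately. A sketch, with hypothetical data:

```python
# Within-segment comparison to avoid the Enterprise-vs-SMB confounder:
# compute the training lift separately inside each segment. Data is made up.
from collections import defaultdict

customers = [
    {"segment": "Enterprise", "trained": True, "retained": True},
    {"segment": "Enterprise", "trained": False, "retained": True},
    {"segment": "Enterprise", "trained": True, "retained": True},
    {"segment": "SMB", "trained": True, "retained": True},
    {"segment": "SMB", "trained": False, "retained": False},
    {"segment": "SMB", "trained": False, "retained": False},
]

def retention_rate(group):
    return sum(c["retained"] for c in group) / len(group) if group else 0.0

by_segment = defaultdict(list)
for c in customers:
    by_segment[c["segment"]].append(c)

for segment, group in by_segment.items():
    trained = retention_rate([c for c in group if c["trained"]])
    untrained = retention_rate([c for c in group if not c["trained"]])
    print(f"{segment}: trained {trained:.0%} vs untrained {untrained:.0%}")
```

If the lift shows up inside each segment, not just in the blended numbers, you have much stronger evidence that training (and not customer size) is driving the difference.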
Last, and most importantly, the thing that most frequently derails data analysis projects is poor data management practices. The old "garbage in, garbage out" adage applies. I can't tell you how many smart, "on top of it" technology companies struggle with getting a single view of their customers and their activity data. Duplicates, missing records, nonsensical data formats — you name it. When companies are growing fast, they tend to evolve customer data management on the fly and switch CRMs and other business apps. It's almost inevitable to have a few "data casualties" along the way.
Bill:
What is an example of a company that uses data analytics to show a training program is having the desired effect?
Tom:
Mindtouch jumps to mind first. Mindtouch is a platform for delivering customer-facing content that helps companies accelerate sales cycles and drive Customer Success. Mindtouch optimizes their self-service training in a couple of interesting ways. First, they determine where they have gaps in their training content by analyzing their customers' aggregate search patterns and tying in each article's bounce rate.
The Mindtouch team then goes the extra step of overlaying their CRM data, so that they can identify the content resources that are the strongest drivers of new sales and customer renewals. That way, they know what help content to promote at each stage of the customer lifecycle, based on the type of customer and what training content would benefit it most.
Bill:
What is the top area that training teams should focus on in order to make the most impact on their company's growth?
Tom:
A training team can make the biggest impact on growth by building their curriculum (and measuring their effectiveness) around driving the actual usage behaviors and customer results that influence retention and expansion. Of course, there are plenty of non-usage reasons that customers churn, but the training team is perfectly positioned to influence the most important one: the customer's ability to achieve their desired outcome with your product.
Yes, this means placing a little less emphasis on operating margins for paid training and the results of learner satisfaction surveys. I know that can be uncomfortable. But as technology companies mature, the revenue impact of small improvements in customer retention will dwarf the typical growth in paid training revenue.
To briefly recap this insightful discussion with Tom: it's time for training professionals to step away from a laser focus on learner satisfaction surveys as the primary data we collect and start measuring the impact our training programs have on customer outcomes. That is certainly what customers want: to know that if they take your training, their teams (and companies overall) will achieve some desired outcome. With the right tools, like the one Frontleaf has built, heads of education services and customer success can now collect training activity data, customer usage results, and key customer metrics, then produce insight into how training is affecting customer success.