Overcoming Analysis Paralysis

Measurement remains a significant challenge for learning executives, but companies that can incorporate key practices into their assessment methodologies stand a good chance to see improvement in their measurement initiatives.

With economic conditions unclear, CLOs face a dilemma regarding the measurement of their learning initiatives. They must demonstrate impact on the business, but to command, or even request, the resources needed to document impact, they must first have documented relevance. Even under this constant pressure, research shows positive movement in the use of metrics and measurement.

Every other month, IDC surveys Chief Learning Officer magazine’s Business Intelligence Board (BIB) on an array of topics to gauge the issues, opportunities and attitudes that make up the role of a senior learning executive. This article examines CLOs’ thoughts on the topic of learning measurement.

The Economic Crisis Response
When the survey was fielded in January, the economic crisis was acute. Five months had passed since the credit crisis caused bank failures. The new president was about to be sworn in and the economic stimulus plan was still being drafted. Even so, nearly 25 percent of respondents felt their companies were performing better than they had expected. More importantly, about 25 percent felt the learning organization had a significant role in helping execute their enterprises’ response to the crisis.

Unfortunately, that means nearly 75 percent of learning organizations had something less than a significant role. And almost one-third of learning organizations had a minor role in the enterprise’s response to the economic crisis. This finding clearly illustrates a chicken-and-egg dilemma for CLOs on demonstrating impact.

Measurably Disappointed
Training professionals like to talk about the importance of measurement, though most aren’t particularly satisfied with their organizations’ approaches to, and success with, measurement. But it’s getting better.

There is little disagreement among learning professionals about the value of measurement. When done properly, measurement can demonstrate training’s impact on the company’s top and bottom lines. Key metrics may include employee performance, speed to proficiency, customer satisfaction, improved sales numbers and more. As the BIB survey shows, the challenge lies in gaining access to these key metrics and finding the time and resources to conduct measurement.

In the 2008 survey, a majority of enterprises indicated a high level of dissatisfaction with the extent of training measurement that occurs in their organizations. This year, that feeling moderated somewhat, with about the same percentage of respondents feeling satisfied as dissatisfied.

The relationship between satisfaction with measurement and the role the learning organization can play in enterprise strategy is clear. About 60 percent of companies that were very dissatisfied with the state of measurement in their organizations play only a minor role in their companies’ responses to the economic crisis. On the other hand, about 60 percent of organizations that were very satisfied play a significant role in their organizations’ responses. Learning organizations that can demonstrate their impact are better positioned to influence the enterprise when it matters most.

The big issues working against satisfaction seem to be a combination of resources and motivation. Specifically, CLOs believe their ability to deploy effective measurement systems is limited by:
• Level of resources.
• Level of leadership support.
• Level of interest.
• Availability of technology.
• Level of funding.
• Culture of indifference.
• No means to automate processes.
• No dialogue with the executive level about which metrics it wants to see.
• Lack of understanding of how to implement measurement.

A Stagnant State of Affairs
With the competing pressures and the chicken-and-egg tension between measurement and relevance, it is no wonder there has been little change in the use of measurement. The BIB survey shows little change during the past three years in the amount of measurement. About 80 percent of companies do some form of measurement, with a bit more than half of those using a manual process, and a bit less than half using some form of automated system.

The processes for measurement also remain very similar to those used in 2007. A look further back shows a similar mix as far back as 2004. There has been a slight increase in the percentage of respondents that indicate they use their LMSs to develop metrics, suggesting the efforts of LMS vendors to enhance their metric and analytic capabilities have paid off somewhat.

Consistent with that interpretation, another area of change is the percentage of enterprises that believe their technology-based learning platforms give them a greater ability to make correlations between training and performance. Learning systems allow respondents to automatically track who has completed training and integrate that information with a performance management system to run reports comparing performance levels of those who have been trained with those who have not.
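The comparison described above can be sketched in a few lines. This is a minimal illustration, not a real LMS or performance-management API: the employee IDs, ratings and field names are all hypothetical, and a production report would draw both data sets from system exports rather than literals.

```python
# Hypothetical sketch: comparing average performance ratings of trained
# vs. untrained employees, assuming an LMS export of course completions
# and a performance-management export of ratings. All IDs and values
# below are illustrative placeholders.

lms_completions = {"emp1", "emp2", "emp3"}   # employee IDs who finished the course

performance = {                              # employee ID -> performance rating
    "emp1": 4.2, "emp2": 3.9, "emp3": 4.5,
    "emp4": 3.1, "emp5": 3.4,
}

def average(scores):
    """Mean of a list of ratings; 0.0 for an empty group."""
    return sum(scores) / len(scores) if scores else 0.0

# Split ratings into trained and untrained groups using the LMS records.
trained = [r for emp, r in performance.items() if emp in lms_completions]
untrained = [r for emp, r in performance.items() if emp not in lms_completions]

print(f"trained avg:   {average(trained):.2f}")
print(f"untrained avg: {average(untrained):.2f}")
```

A real implementation would also control for factors such as role and tenure before attributing any rating gap to training, but the core join-and-compare step is this simple.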

One respondent described how the LMS is being used effectively in this manner. “In addition to the standard smiley sheets, our LMS automatically sends out an email 90 days after the training with a survey which asks a series of questions. We have found that this provides much more valid feedback.”

Organizations are increasingly taking advantage of these built-in capabilities of learning systems, and the belief that technology can help develop correlations between training and performance is increasing slowly.

Correlation of Training to Outcomes
Similarly, there is a meaningful change in the number of enterprises correlating training assessments to changes in the organization. There are real increases in the percentage of companies that correlate training with employee performance, customer satisfaction and business performance.

Other research suggests organizations that can consistently tie training to specific changes tend to train less overall. They focus training efforts on the most appropriate people and the most appropriate topics, and eliminate training that is seen as useless.

Additional Reason for Optimism
Despite the obvious challenges, the news isn’t all bad. As Figure 1 demonstrates, more than a third of respondents were either satisfied or very satisfied with the extent of measurement going on at their companies. This group typically has wider support for measurement and enough manpower to consistently tackle measurement initiatives.

Even though some organizations are satisfied, more than 90 percent of all organizations (satisfied and dissatisfied) have plans for some additional measurement initiatives in the next 24 months. With great progress in recent years correlating training to employee performance, in the future, CLOs will be most interested in making training correlations to business performance and customer satisfaction.

Ending the Stalemate
For reasons already cited, measurement will remain a significant challenge for learning executives going forward, but it continues to be on the agenda. There are steps CLOs can take to help break the chicken-and-egg tension between measurement and relevance. Three of the most significant practices are:

1. Define success early. By defining with stakeholders up-front what success will look like, learning professionals are better able to identify and benchmark key metrics for measurement and make post-training results easier to quantify.
2. Establish metrics at the project or business-unit level. Successful measurement programs typically start off as smaller initiatives that focus on the project or business-unit levels, where there are fewer obstacles.
3. Set expectations up-front. Help stakeholders understand the commitment required to see assessment projects through to the end. This way, less resistance will be encountered during the measurement phase.

Companies that can incorporate even some of these practices into their assessment methodologies stand a good chance of breaking their analysis paralysis and seeing a marked improvement in their measurement initiatives.