Times are tough in the economy at the time of writing, and that is putting inevitable pressure on L&D budgets. We all know how important learning is to organisations, and the discourse over the last few years has been that learning is critical to digital transformation and competitiveness. And yet, we still see those budgets being cut.
So how do we deal with this contradiction?
Typically, during times of hardship, businesses don’t cut the parts of their operation that make money, or that help the business grow or save money. They trawl through the data they have to find the right places to cut, so they maximise their chances of coming out the other side of the downturn ahead. Sadly, when most organisations look at the data on their investment in learning initiatives, they see a cost centre, not a growth centre.
So, L&D folks, if you can’t demonstrate impact, outcomes or value today, you might be facing a lean year or two. You might also struggle to raise the profile of L&D within the business and to demonstrate the impact of learning on company culture overall, which in turn puts future L&D budgets at risk.
Traditionally, L&D providers have used one-time satisfaction scores to evaluate the perceived quality of their training, but at Cognitive Union, we argue that measurement needs to go further than this in order to give an accurate indication of success.
Measure the impact, not just the satisfaction
One of the purposes of investing in L&D is to improve staff capability, which, ultimately, should have an impact on the company’s bottom line.
So why are satisfaction surveys, or ‘happy sheets’, still used, when they don’t provide any meaningful data on the impact of the learning on the business?
While it’s nice to know whether someone liked the facilitator, or to know how much they enjoyed their time in the workshop, these questions remain: Are they going to apply the learning? Are they going to actually use what they learned in their role? Will it make any difference at all?
At Cognitive Union, through our learning philosophy framework, we take an in-depth approach to measuring both satisfaction and impact, and I thought it might be useful to share some of our thinking here.
Simple principles for measuring the impact of learning
There are four factors in the learning process we consider when it comes to building an accurate picture of impact: training-specific KPIs, scrap learning percentage, ongoing evaluation, and observed behaviour change.
1. Scope the project and set the KPIs
Before any training begins, both the business and the learning provider need to agree on the overall scope and objectives of the programme, and clearly define which KPIs will best measure the impact. Where possible, it’s also important to provide mechanisms for measuring those KPIs, such as a dedicated team member to observe learner behaviour/activity before and after training.
For instance, for a company wanting to increase the percentage of its sales revenue that comes from ecommerce platforms, one of the goals would be a clear percentage increase, and KPIs might include conversion rates on whichever platform the company uses most. Of course, the training would not be the only contributor to this, so these would form part of an overall list of programme objectives and KPIs.
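To make that concrete, here is a minimal sketch of the kind of before-and-after KPI comparison described above. All figures, period lengths, and names are illustrative assumptions, not client data or a prescribed method.

```python
# A minimal sketch of a before/after comparison for the ecommerce
# conversion-rate KPI above. All figures are illustrative assumptions.

def conversion_rate(orders: int, sessions: int) -> float:
    """Conversion rate: percentage of sessions that end in an order."""
    return 100 * orders / sessions

before = conversion_rate(orders=420, sessions=21_000)  # pre-programme baseline
after = conversion_rate(orders=610, sessions=23_500)   # post-programme period

print(f"Conversion rate: {before:.2f}% -> {after:.2f}% ({after - before:+.2f} pts)")
# Training is only one contributor to a KPI like this, so any movement
# should be read as part of the overall programme objectives and KPI list.
```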
Naturally, as a result of a focused, long-term training programme that is developed through an understanding of the business context, these KPIs should start to improve. With less easily measured KPIs, however, such as the ability to make better data-driven decisions, performance must be observed, recorded, and reported to the provider.
2. Measure the level of scrap learning
Calculating scrap learning is a simple yet valuable concept, in which we ask learners: “Based on what you have learned today, what percentage will you apply to your job?” The learner then selects from 10%, 20%, 30%, up to 100%.
If the result is 80%, then the remaining 20% will be the scrap learning percentage.
This is a valuable metric to track because the lower the scrap percentage, the more impact the learning programme has had. It’s also a valuable tool for identifying learning patterns across multi-day programmes.
For example, if the scrap percentage decreases throughout the programme, this shows that delegates are finding the programme increasingly applicable as it progresses. The one caveat here is that the learners themselves are making that judgement – it’s not an observation of how much they actually apply. But it’s a step in the right direction.
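As a simple illustration, here is a minimal sketch of the day-by-day tracking just described. The responses are made-up examples of learners’ self-reported “percentage I will apply” answers, not real data.

```python
# A minimal sketch of tracking scrap learning across a multi-day programme.
# Responses are learners' self-reported "will apply" percentages
# (10, 20, ... 100); the values below are made-up examples.

def scrap_percentage(applied: list[int]) -> float:
    """Scrap learning: 100 minus the average self-reported 'will apply' %."""
    return 100 - sum(applied) / len(applied)

programme = {
    "Day 1": [50, 60, 70, 60],
    "Day 2": [70, 80, 70, 80],
    "Day 3": [80, 90, 90, 80],
}

for day, responses in programme.items():
    print(f"{day}: scrap learning = {scrap_percentage(responses):.0f}%")
# A scrap percentage that falls day by day suggests delegates are finding
# the programme increasingly applicable as it progresses.
```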
3. Evaluate the learning throughout
The final method of measuring learning impact that we use successfully with clients is evaluation surveys.
For this, we conduct surveys that ask questions that lead to a Net Promoter Score (NPS®) and other quantitative evaluation metrics, including average confidence increase, workshop score, and facilitator score.
Originally developed by Fred Reichheld at Bain & Company in 2003, NPS® is a measurement system that asks customers – in this case learners – to rate, on a scale of 0-10, how likely they are to recommend a brand – or in this case, a training course – to a friend or colleague.
The system identifies ‘detractors’ who score six or below, ‘passives’ who score seven or eight, and ‘promoters’ who score nine or ten.
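The calculation itself is simple: the standard NPS® formula subtracts the percentage of detractors from the percentage of promoters. A minimal sketch, with illustrative ratings:

```python
# The standard NPS calculation applied to 0-10 "would you recommend
# this course?" ratings. The ratings below are illustrative examples.

def net_promoter_score(ratings: list[int]) -> float:
    """NPS = % promoters (scores 9-10) minus % detractors (scores 0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 9, 8, 8, 7, 6, 9, 10, 5]
print(f"NPS: {net_promoter_score(ratings):+.0f}")
# 5 promoters, 2 detractors, 10 respondents -> NPS of +30.
```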
Using this system is not all that common in learning, but it forms an invaluable part of the evaluation process: it is a good indicator of overall satisfaction, and therefore crucial for building a complete picture of the impact of the learning.
In addition to using a broader set of survey metrics, it’s also important to create multiple data points to build a more holistic picture over time. So instead of conducting just one survey post-training, surveys should be conducted at four stages:
- Prior to, or at the beginning of, training: this determines baseline levels of knowledge and confidence for future comparison.
- Immediately upon completion of training: to assess the immediate impact of learning.
- Approximately three months after completion: to assess the short-term impact of learning.
- Approximately six months after completion: to assess the longer-term impact of learning.
These three- and six-month evaluations can be extremely telling, highlighting the fact that learning impact a) often lasts longer than L&D leaders might expect, and b) is not always felt immediately after training, but can emerge and be applied at a later date.
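To show how those four data points combine, here is a minimal sketch of tracking average confidence across the survey stages. The stage names, the 1-10 confidence scale, and all scores are assumptions for illustration only.

```python
# A minimal sketch of comparing average confidence across the four survey
# stages. The 1-10 scale and all scores are illustrative assumptions.

from statistics import mean

surveys = {
    "baseline": [4, 5, 3, 4, 5],
    "post-training": [7, 8, 6, 7, 7],
    "three months": [7, 8, 7, 8, 8],
    "six months": [8, 8, 7, 9, 8],
}

baseline = mean(surveys["baseline"])
for stage, scores in surveys.items():
    uplift = mean(scores) - baseline
    print(f"{stage}: avg confidence {mean(scores):.1f} (uplift {uplift:+.1f})")
# An uplift that holds or grows at three and six months is evidence that
# impact can persist, or even emerge, well after the training itself.
```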
4. Measure the behaviour or capability change through observation
It could be argued that this is the most effective way to measure the impact of learning, as it’s the only method based on real-life application in business situations. It can be hard to do well, though, as it requires genuine support from line managers.
More on this, and how it is effectively carried out, in a later post.
Using a holistic approach to measurement, one that covers the quality, applicability, and impact of learning on a business, can help L&D leaders gain clarity on the value of the programmes they provide, and help them secure larger budgets for future programmes.
At Cognitive Union, we use our 25 years of combined experience to provide our clients with consultancy and training in digital, marketing, data, and culture transformation. If you need help understanding your brand’s relationship with retailers, contact our team today.