A Six Minute Read
Executives at a large industrial company were concerned about a rising injury rate, particularly among the company's contractors. After months of investigation and modelling, a five-point improvement plan was beta-tested at the company's most hazardous location. Over the following 18 months, dozens of internal Black Belts and external consultants introduced these initiatives across the entire corporation.
During the implementation, and in an unprecedented 48 months following (and – knock wood – still counting), there were no reports of significant injuries among contractors, and only one involving an employee at an affected site – in an area outside the scope of the effort. By the most important measure, the program was a complete success: Workers went home to their families at the end of their shifts in the same condition in which they had arrived.
That said, the team was constantly stressed. When asked, “How is the program doing?”, all we could point to was an exceptional zero-harm rate. It was difficult to tie specific actions directly to a day-to-day improvement in the “safety culture”. We knew, overall, that things were improving – but only because of the absence of the negative outcome.
When diagnosing issues for clients, there are two common complaints I hear from individuals. One is a lack of timely, direct feedback (not the lack of an annual performance review – something very different); the other is a lack of quality inputs. (“Quality inputs” can range from physical inputs, such as raw materials or work-in-process, to intangible inputs – accurate data, met deadlines, availability of decision-makers at a meeting.)
These are both serious concerns, and both are caused by a swirling cocktail of larger-scale management missteps. On the face of it, these appear to be two very different dilemmas, but both can be relieved by attacking a single core issue: A poor measurement system.
The lack of feedback is a felt need not because managers necessarily want face-time with their bosses, but because they need to know and understand how well they are performing. Is the work they’re doing contributing to the company’s goals, or is it bureaucratic claptrap? Is it high quality, or a dog’s breakfast? Without a feedback mechanism, it can be impossible to know this, which creates both tension and that second problem.
Not knowing if the work you are doing is good – or even good enough – increases the likelihood of passing on something that does not meet customers’ (or colleagues’) needs. In some cases, it may be too much of something – an overly detailed analysis when a quick summary would suffice – and in others, it may not be enough (think of the accountant who has to double-check standard cost on requisitions). And, sometimes, inputs are just wrong – as anyone who’s ever had to “adjust” something to fit knows all too well.
So how can better measurements help?
Lagging Versus Leading Measures
Think about the team measuring the success of its safety initiative above – or of any project, where the chief measure of success comes after implementation is complete. The metrics that are in place may be ideal indicators of success, but they are not predictors of success. (When consultants drone on about “leading” and “lagging” measures, this is what we are talking about.)
A lagging measure only signifies what has already happened. That can be useful, but most of the time it’s as helpful as a TV meteorologist reporting what today’s high temperature was (“See, Martha! I told you I didn’t need my sweater vest!”).
Leading measures are helpful because they either confirm that work in process is being done correctly, or they highlight concerns before they are passed on to someone else and become problems.
The ultimate measures for a pastry chef, for example, are the taste and texture of a buttercream cake; these are lagging indicators because they can’t be checked until after the cake has baked and cooled. The risk here is obvious – if quality is found wanting, tremendous resources have been wasted. The ingredients, the chef’s time – none of that can be recovered. To minimize that, she utilizes in-process measures – testing the quality of the ingredients, putting a toothpick in the cake, keeping the light on in the oven, and so on – to both predict the success of the endeavor and to quickly identify any problems.
By creating in-process measures, we gain additional benefits. If we take the time to “check our work”, we can easily see if we are performing well – i.e., instant, direct feedback. This minimizes – really, eliminates – the possibility of passing shoddy work on to our customers (internal or external). Looked at from the flip side, if our colleagues have methods to check the quality of their work, and adhere to those methods, the chance of us receiving poor-quality inputs drops, as well.
Creating a Simple Measurement System
The work involved in creating in-process measures need not be complicated.
First, identify specific deliverables. In our safety initiative, one of the major components was an improvement in the communication of hazards and risks before the job was even bid. We agreed that watching the long-term trends in this communication would be a bellwether.
Next, collaborate with customers (internal or external) to identify and prioritize the key attributes they need. “Priority” is essential – find the select few that are the most critical variables. It can be easy to go overboard, so don’t! According to the contractors who received these pre-job communications, there were two main components they most valued: Understanding why the job needed doing, and knowing how “fresh” the risk and hazard profiles were.
As a team we brainstormed – the third step – what would be the best indicators of success for those critical variables. To track the “understanding the why”, we created simple tallies around the inclusion of objectives in the communique; we also designed a “freshness indicator” that measured the age of included risk analyses.
The last part of a successful design is to set the target performance. Building off existing baselines is easiest. In our case, we were starting from scratch, so we found ourselves tweaking the goals early on.
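For readers who like to see the mechanics, the four steps above can be sketched in a few lines of code. This is a hypothetical illustration only – the function names, the 90-day freshness threshold, and the 90% inclusion target are assumptions for the example, not the team's actual tooling or targets:

```python
from datetime import date

def objectives_included(communique: str) -> bool:
    """Step 3's simple tally input: does the pre-job communique state
    why the job needs doing? (Crude keyword check, for illustration.)"""
    text = communique.lower()
    return "objective" in text or "why" in text

def risk_analysis_age_days(analysis_date: date, today: date) -> int:
    """The 'freshness indicator': age of the attached risk analysis, in days."""
    return (today - analysis_date).days

def meets_targets(communiques, analysis_dates, today,
                  max_age_days=90, min_inclusion_rate=0.9):
    """Step 4: compare both in-process measures against (assumed) targets
    before the work is passed on to the contractor."""
    inclusion_rate = sum(objectives_included(c) for c in communiques) / len(communiques)
    all_fresh = all(risk_analysis_age_days(d, today) <= max_age_days
                    for d in analysis_dates)
    return inclusion_rate >= min_inclusion_rate and all_fresh
```

The point is not the code itself but the shape of it: each critical variable gets one small, objective check, and the checks run before the handoff, not after.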
To achieve the maximum benefit of the internal measurements, everyone involved needs to know what “good” looks like – including how to recover when it’s missed. This can take the form of control charts, annotated examples, poka-yoke boxes, or anything that allows users to check the quality of their work before it moves on to the next stage.
New measures don’t have to be complex, or even “brilliant”. They do need to be simple, indicate performance, and head off any problems before they become irreversible. If it is important, you should be able to describe the difference between “good” and “bad”; and if you can describe the difference, you can find something about it you can measure and track.
When we can objectively measure our own work before sharing it, the feedback gives us confidence and improves the quality of our outputs. By ensuring that it meets our customers’ requirements, we avoid being the subject of those “everything I get is crap” complaints.
Now, to fix the “my boss never gives me feedback” problem…