A Six-Minute Read
This is not a tale about me. The events that happened, while true, happened to some other guy I know.
I was recently clearing some old, battered furniture from the garage; I also had recently chopped down a tree, and had a yardful of logs that had to go. (Scratch that – it wasn’t me, it was Some Other Guy I Know).
The county dump charged by weight – arriving vehicles drive across a scale at the entrance, and again at the exit, and pay a price-per-pound on the delta. My plan – rather, that other guy’s plan – was to load the furniture, dump it, and come back with the logs.
Surprise! The dump changed its pricing model – for passenger vehicles, it now charged a flat fee per trip. So instead of two small-ish loads, the “smart” move was to load as much into the bed of the truck as possible and make one trip.
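The change in incentive is simple arithmetic. Here is a minimal sketch of the two pricing models – the weights, rates, and fee are entirely made up, since the story gives no actual numbers – showing why one overloaded trip suddenly became the "smart" move:

```python
# Hypothetical numbers -- the story gives no actual weights or prices.
PRICE_PER_LB = 0.05       # old model: pay per pound on the scale delta
FLAT_FEE_PER_TRIP = 25.0  # new model: flat fee per passenger-vehicle trip

def per_pound_cost(load_weights_lb):
    """Old model: entrance weight minus exit weight, priced per pound.
    Total cost depends only on total weight, not on number of trips."""
    return sum(w * PRICE_PER_LB for w in load_weights_lb)

def flat_fee_cost(load_weights_lb):
    """New model: cost depends only on the number of trips."""
    return len(load_weights_lb) * FLAT_FEE_PER_TRIP

two_trips = [300, 400]  # furniture load, then the logs
one_trip = [700]        # everything crammed into one load

# Under per-pound pricing, splitting the load costs nothing extra.
print(per_pound_cost(two_trips), per_pound_cost(one_trip))  # 35.0 35.0

# Under the flat fee, the second trip doubles the bill.
print(flat_fee_cost(two_trips), flat_fee_cost(one_trip))    # 50.0 25.0
```

Optimizing that one number, of course, is what put a nine-foot branch through a rear window.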
Everything fit perfectly, but the driver decided to do the right thing and cover the load; so I, er, he, opened the gate, tied down the tarp, and slammed the gate shut. When the gate opened, the load shifted aft just a bit – including the nine-foot branch leaning in the eight-and-a-half foot bed. The closing gate pushed that branch back up against the truck’s rear window – applying just the teensiest amount of pressure that, upon hitting the first bump in the road, caused the window to shatter as if Tarantino’s thugs were standing in the driveway with sawed-off double-aughts.
By trying to minimize “dumping expense”, I made some bad decisions that ultimately cost a lot more money. Measurement systems have an important role to play in how we manage and make decisions – but a risk to consider is that any single metric will be used out of context, or simply as a blind proxy, and that’s just bad business.
On the other side of the world, a financial-services client was selling buggy software, and focused its improvement efforts on pre-release testing. Dashboards were created, making various measures of testing visible for all to see. So far, so good.
But, as with that overloaded truck, people tried to make the system work for them. Projects that were largely bulletproof went through “extra” testing in order to turn green on the dashboard – adding to the Test departments’ queues. Projects that could probably benefit from additional testing – surprise, surprise – moved on to Change Control once they passed the requisite metrics.
In short, people were working toward the measures, rather than using the measurement system as intended.
A fundamental step in any significant improvement effort is getting the metrics right. This is more than just identifying what to measure and how to measure it: “Getting measures right” includes adopting the right attitude and using measurement systems correctly.
Dashboards Aren’t Weapons
One thing to be careful of is using singular measurements as indicators of overall performance, or divorcing one system from another. Overloading the truck put one measure – “dump expense” – in the green, but created significant expense (in dollars and time) to recover.
We have all been in reviews where a manager pulls one number off the operating report or dashboard and takes everyone to the woodshed. What does one data point really tell us?
Think of a reading on another dashboard: say a speedometer reads “55 miles per hour”. Is that good or bad?
That is impossible to say, based on just that information: Is the vehicle on an empty freeway or in a school zone? Are the roads dry or icy? Is the car well-maintained, or are bald tires held onto the rims with duct tape? Is this even a car, or is it an A321 36,000 feet over the Atlantic?
Management dashboards are valuable only in context. Measurements work best when viewed as trends, and in overall systems.
Using the System Correctly
There are essentially four objectives behind a robust measurement system, and managers would do well to remember all four when reviewing progress.
So why measure?
- To Provide a Foundation for Identifying Problems
Although a single speedometer reading is not enough to judge the success or failure of a road trip, a sudden change may be an indicator of something. When a number is “missed”, the initial reaction should not be to punish whoever is presumed to be responsible; the first thing to do is to decide if this is something that needs to be fixed, is a symptom of a larger issue, or is otherwise explainable.
Another client performed business-development services for its franchisees, and built a number of dashboards reviewing established metrics of recruiting rates, enrollment costs, community sponsorship, and the like. While the employees had influence over those items, any one particular project could be held up by a number of factors outside of their control, ranging from undercapitalized franchisees to something as simple as a field manager not returning phone calls.
Seeing blips against these numbers was not an indication that the individual in the corporate office was not doing her job, but tripped an alert that said, “We need to take a deeper look here. Is this a temporary issue with this one franchisee? Is this a performance issue with the manager? Are we offering the wrong products?”
The measurement was the catalyst for that review – and knowing that it was good, objective data meant that if any deeper investigation was needed, it could provide the basis for that analysis.
- To Provide a Basis for Identifying Improvement Opportunities
Look at the data in as many interesting ways as possible. What are the situations that outperform the rest? Does productivity skyrocket on certain products? What can we do to make all runs like that one? Is “First Call Resolution” higher on one day of the week? Why was March the best sales month ever? We tend to get so caught up in looking for problems that we forget to look for chances to grow. A good measurement system allows for both.
- To Focus Capability Development Efforts
Similar to the above, but looking at the human element. Is everyone struggling in a certain area? That may be a chance to shore up people’s skills. Alternatively, is there one performer – or crew, or position – that seems to be ahead of the curve? This may provide a learning opportunity for everyone.
- To Drive Behaviors
As stated initially, this tends to be the default use of a measurement system. When we know how we are being assessed, we will do the things that “look good” – such as the software developers ordering extra, unneeded testing iterations because they don’t want the numbers to drop.

This is not to imply that this is wrong – far from it. Clearly understanding what exactly is expected, and how it will be determined, is the key to success. The danger comes from pursuing the metric at the expense of other factors.
The simple fact is, we are going to pursue the things that look good on the dashboard and avoid the rest. That something is being measured signals its relative priority in the organization. This makes it doubly important not only that we design the right measures, but also that we separate the concept of “process measures” from incentives. Having significant financial stakes (e.g., bonus plans) tied to pieces of a system reduces the likelihood that we will use the measures as an information system to troubleshoot and improve our overall performance.