• Thomas Meloche

Measuring Up! 4 Easy Metrics

Updated: Aug 11

After a working lunch with my business partner Mike Russell, we decided to walk over to the nearby GNC because Mike wanted to restock his supply of protein bars, which have become a staple of his diet. We walked into the store together, talking about our executive training class (PEER-XL), as he collected box after box of the delicious, nutritious protein superfood.

Inspired by Mike’s enthusiasm, I decided to purchase a few bars myself. I put them on the counter expecting to be quickly cashed out while Mike continued to hunt around the store.

The cashier asked me if it was OK to run the two purchases together as one transaction, saying we would still pay separately. I found the request strange but, trying to be amenable, agreed. So, instead of cashing out, I waited a few more minutes while Mike emptied the store of its supply of various flavored bars.

Once he was done, the employee at the cash register went to work. A mad typist at the keyboard, he entered all of our purchases and then carefully divided the total in two, so the single purchase could be paid with our two credit cards. Inside my head a phrase echoed: “Thank you for turning a simple transaction into a bizarre ritual.” I actually had that phrase printed on a button I would wear.

There were actually two employees at the cash register, and they worked in tandem to get our orders straight, even writing the two subtotals down by hand on an available scrap of paper. I did the mental math on my order, checked the paper scrap, and determined it was close enough.

After a few minutes of pounding keys madly at the cash register, they finally produced a joint receipt, split across our two credit cards. We both dutifully signed.

They thanked us profusely for letting them process the transaction this way. Finally I had to ask, “So why did you want to put them on one receipt?”

They happily responded, “They measure the average sale.”

“Say no more,” I said. “We get it—we are both methodologists.”

Okay. Probably a weird thing to say to them, as in their whole lives they have likely never met anyone who self-identified as a methodologist—much less two together in a protein-bar buying frenzy. But Mike and I are methodologists, we study process for a living, and we both instantly knew exactly what was going on.

Some “higher power” at GNC thought it would be a good idea to give employees an incentive to upsell. The approach they used was to reward growing the average bill at the GNC.

I am sure the goal was well intentioned, but it is a stupid metric full of potentially negative side effects, one of which we just experienced. Create a metric and the employees will jump through hoops to meet the metric—even if it inconveniences and confuses the customer!

I immediately flashed back to another recent purchase at another GNC. I went in and bought only one nutrition bar. The guy behind the counter did not seem particularly enthusiastic about my purchase. No wonder! My purchase wrecked his average for the morning.

A good definition of a bad metric at a retail establishment is a metric that:

  1. Causes bizarre and confusing behavior in front of the customer

  2. Makes the customer not actually want to make more purchases

  3. Makes the customer feel in some way unwanted

The GNC metric delivers all of these in spades. Of course, we see hundreds of metrics like this in almost every company we visit. Metrics that drive the wrong behavior.

A friend forwarded my blog post on this story to GNC. They wrote back asking for the store and the date of the purchases. I just laughed. Hell no. I don't want to cause trouble for the poor cashiers, who are likely trying to survive on minimum wage while following the crazy directives they received from management. I sympathize with them. I don't want to get them in trouble; in fact, I was happy to help.

The problem is managers who create bad metrics without ever spending time in stores. GNC should be happy to have 100 new customers visiting its store every day to buy just a single protein bar. Why? Because the habit you really want customers in is thinking of you every time they are hungry. The workers should be out on the sidewalk hustling to get 100 new $2 sales.

This is the problem with almost all metrics. They actually work against the real goals of the organization. Most likely somebody at corporate headquarters thinks this metric is working great.

Here is a secret. It isn't.

It is almost guaranteed your metrics are driving the exact opposite behavior from what you really need.

The Metrics Question


Large organizations live and die by the metrics they establish. Metrics drive bonuses, rewards, and punishments and impact all organizational behaviors.

Every time we work with a new group of executives we always hear the metrics question:

What new metrics do we need to support this change?

It is a loaded question. Metrics have extensive power to drive behavior. Metrics have the power to create and destroy change initiatives. Metrics have the power to make good people do very bad things.

Hiding the Bodies

Brazil was selected to host the World Cup in 2014. The problem was that Rio de Janeiro had a very high murder rate. This is bad for business. Something about a high murder rate discourages tourists.

The government established a metric for the Brazilian security forces—get the murder rate down!

The security forces rose to the challenge and over the course of a year Rio de Janeiro’s murder rate dropped dramatically. The initiative was a success. The metric was a success. The method was interesting.

HBO’s documentary ‘The Pacification of Rio’ reported that the murder rate dropped dramatically before the games because when the Brazilian security forces encountered a body, they simply made it disappear. No body, no murder to report. The murder rate drops.

The missing person rate increased, of course, but apparently World Cup tourists don’t pay much attention to that statistic. All of this leads to the old metrics adage: “What gets measured, gets done.”

The metrics trap is that metrics work. They really, really work. What you measure and reward actually gets turned into human behavior. If you establish a metric, even a great-sounding metric like “let’s decrease the murder rate,” your staff will do whatever it takes to get the desired result. Even if their behaviors are counterproductive to the real goal.

The security forces were being handsomely rewarded for their ability to decrease the murder rate. So they did whatever they could to decrease the number of murders reported. Do you honestly think your local police are all that different?

Metrics are nefarious.

It is hard to determine in advance exactly what creative, innovative, and intelligent people will do to meet a metric. This much you can know for certain: the more metrics you put in place, the more your people are hiding the bodies.

Bad Metrics

Most metrics are bad. They are bad because they work against the true goal. They are bad because they result in unexpected and even bizarre behavior. Noble intentions do not guarantee noble outcomes.

Having your security forces steal bodies off of the street before the local homicide investigators can get there is certainly an unexpected behavior. Having them bury the body in the forest or dump it in the ocean is categorically bizarre. Yet we see similar behaviors in almost every large company we visit!

Best of all, hiding the bodies almost guarantees the killer will kill again. Which works against what should be the real goal.

This is the problem with almost all commonly used metrics. If we closely observe the effects they produce we rapidly discover ways they work against the real goals. Of course, the numbers look great—people even get rewards and bonuses based on those numbers!

Trust only Movement

The true answer is to make an entire operation as transparent as possible. Then observe the actual behaviors. The behaviors matter. Metrics can always be gamed. The only way to really understand what is going on in your organization is to observe it in action—trust only movement.

Methodologists are deeply aware that metrics create counterproductive behaviors. Hence most transformations begin by eliminating most or all of the existing metrics. There are a few things you may still need to measure; let's start with these vital few:

1. User Satisfaction

2. Business Value

3. Cost of Change

4. Employee Satisfaction

We focus on a few simple and visible metrics. Then we watch the process in action to ensure we are actually getting the results we really want, not just the results the metrics measure. Abandon blind steering and managing by metrics. Only by abandoning most of your metrics will you unleash a totally new level of success.

The First Metric: User Satisfaction

“A metric is a trigger for a conversation.” Those seeking real improvements are never interested in stoplight metrics “red—yellow—green.” Instead they are interested in trending metrics. For example, are we making those who use our systems more or less happy this week? How?

Measure user satisfaction by directly asking the users for their impression of your products and services. This is a simple rating with a follow-on question. One of our favorite simple forms was suggested by Eric Ries in his Lean Startup blog. Continually ask a random sample of your users the following questions about your product, service, or feature.

How would you rate this service? (poor) 0 1 2 3 4 5 6 7 8 9 10 (great)

Would you recommend this to a friend? No or Yes

Why?

For products with large enough user bases, the average answers to these questions are plotted and publicly displayed daily to get a feel for the user satisfaction level and trends over time. A sudden sharp rise or fall is clearly evident in the metrics review ceremony, and we can investigate the cause. Perhaps a specific change correlates with the shift? If we trend negative, we may wish to wait and see if we recover. As Facebook discovers with every change it makes, most changes cause an initial negative user response that may rapidly improve—so watch and learn.
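As a rough sketch of this trending approach, the following Python computes the daily average rating and recommend rate from survey responses and flags sharp day-over-day shifts worth raising in a metrics review. The response data and the 2-point threshold are invented for illustration.

```python
from collections import defaultdict
from datetime import date
from statistics import mean

# Hypothetical survey responses: (day, rating 0-10, would_recommend)
responses = [
    (date(2020, 8, 1), 9, True),
    (date(2020, 8, 1), 7, True),
    (date(2020, 8, 2), 8, True),
    (date(2020, 8, 2), 3, False),
    (date(2020, 8, 3), 4, False),
    (date(2020, 8, 3), 5, False),
]

def daily_trend(responses):
    """Average rating and recommend rate per day, sorted by date."""
    by_day = defaultdict(list)
    for day, rating, recommend in responses:
        by_day[day].append((rating, recommend))
    trend = []
    for day in sorted(by_day):
        ratings = [r for r, _ in by_day[day]]
        recs = [rec for _, rec in by_day[day]]
        trend.append((day, mean(ratings), sum(recs) / len(recs)))
    return trend

def sharp_shifts(trend, threshold=2.0):
    """Flag day-over-day moves in the average rating worth investigating."""
    return [
        (prev_day, day, curr - prev)
        for (prev_day, prev, _), (day, curr, _) in zip(trend, trend[1:])
        if abs(curr - prev) >= threshold
    ]
```

Plotted daily, these two numbers give the trend line; the shift detector is merely a trigger for the conversation, not a verdict.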

Questions like the one above should be asked using the best scientific principles. As much as possible care should be taken to ensure the data is unbiased by the techniques used to collect it. Of course, if the data is being collected by the people being "managed by the metric" this will not happen.

Asking a question about satisfaction can cover not only the entire product or service but also the addition of new features within the existing line. If you roll out a new feature to a mobile app it is possible to ask this question just about the feature. Do users really like the feature? Are they actually using the feature? If not, why?

A second gauge of user satisfaction is to observe users in action. This is essential for digital interfaces to customers. We get a significant feel for user satisfaction simply by observing whether they successfully and repeatedly use the interface to interact with the product, service, or feature.


Interactions to observe include:

  • How frequently do users lose the ability to access the service (down time).

  • How frequently do users abandon our digital interfaces mid-transaction (abandoned transaction).

  • How frequently do users who abandon a transaction end up calling our call center instead.

With insights like these we get a peek at what our users actually do, not what we think they do or what they say they do. This data helps establish a foundation to improve our users’ experience and grow their satisfaction. Furthermore, we can automate data collection and reporting so the metrics function as an integral part of our product and are not easily subject to significant gaming.
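A minimal sketch of how such observations might be computed from an event log. The event names, log shape, and user data here are assumptions for illustration, not a real product API.

```python
# Hypothetical event log from a digital interface.
events = [
    {"user": "u1", "event": "start_checkout"},
    {"user": "u1", "event": "complete_checkout"},
    {"user": "u2", "event": "start_checkout"},
    {"user": "u2", "event": "abandon"},
    {"user": "u2", "event": "call_center_contact"},
    {"user": "u3", "event": "start_checkout"},
    {"user": "u3", "event": "abandon"},
]

def interaction_metrics(events):
    """Abandonment rate, and how often abandoners switch to the call center."""
    started = {e["user"] for e in events if e["event"] == "start_checkout"}
    abandoned = {e["user"] for e in events if e["event"] == "abandon"}
    called = {e["user"] for e in events if e["event"] == "call_center_contact"}
    return {
        "abandon_rate": len(abandoned) / len(started),
        # Of those who abandoned, how many ended up calling us instead?
        "abandon_to_call_rate": len(abandoned & called) / len(abandoned),
    }
```

Because the numbers come straight from instrumented behavior rather than self-reports, they are much harder to game than a survey.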


It turns out, by the way, that when people rigorously measure feature use, they always discover that most of the features you release are almost never used. Yes, we really mean over 50% of the features! You then need to decide why they aren’t being used. For many features, the best course of action is to remove them entirely from your system. The user interface required to support these unwanted features just gets in the way and slows users down. An objective measure of user satisfaction is a first-order metric. Feedback from real users is a primary control.

The Second Metric: Business Value

I attended a presentation by one of the top metric gurus at a leading agile tool vendor—agile being a popular approach to software development. The vendor's tool tracks tasks, estimates, and hours worked, and generates charts and reports. Because over 35,000 “projects” have been captured with the tool, the company did a great deal of analysis across its big data, looking for trends and making observations. The presentation, on “Agile Metrics,” supposedly provided us insights on ways to improve performance and productivity.

After observing for an hour I reached the conclusion that every metric they shared was what lean practitioners would call a “vanity metric.” Direct marketers would have laughed the entire presentation out of the room. The big agile tool company did not have one truly valuable observation to share.


To understand why I came to this conclusion, I will describe a different industry, digital direct marketing and advertising—my second profession. Imagine I had two teams (A & B) writing ads for a Facebook-based direct marketing campaign. I take all sorts of measures on how they work and how many ads they produce, capturing the following data for the first week:

Ads Produced and Deployed in One Week

Team A: 400 ads produced and deployed

Team B: 200 ads produced and deployed

Can I determine that team A was more productive than team B?

No.


You do not measure productivity in ad campaigns by counting ads produced and launched. You measure productivity by the business value the ads deliver—do they bring in profitable new customers or not?


But team A produced twice as many ads so they must be twice as productive, right?

No.

We have no knowledge about how the number of ads they wrote impacted the goals of the organization. We can make no meaningful productivity statements whatsoever from this data. Any organization’s goals always include adding value to the business, and we have no evidence that producing twice as many ads produced twice as many paid customers or twice as much revenue. We have no evidence that it produced any value at all!

What if I told you team B worked in pairs and team A worked as individuals. Could we conclude that working in pairs decreases productivity?

No. Because we have not established a reasonable measure of productivity we cannot use this data to evaluate any of the process steps. A reasonable measure of productivity must include how we impacted the bottom line of the business. The team writing 400 ads may have written ads that anger and lose our customers—they may actually be twice as good at destroying our business. We don’t know because we did not capture the key data required to evaluate our process.
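To make the point concrete, here is a sketch of judging the two teams by business value rather than ad count. All of the dollar figures and customer counts are invented for illustration; under these assumed numbers, the team that produced half the ads delivers far more value.

```python
# Hypothetical campaign results; every figure here is invented for illustration.
teams = {
    "A": {"ads": 400, "spend": 20000.0, "new_customers": 50, "revenue": 15000.0},
    "B": {"ads": 200, "spend": 20000.0, "new_customers": 200, "revenue": 90000.0},
}

def business_value(stats):
    """Judge a team by the value delivered, not by the volume of ads produced."""
    return {
        "cost_per_acquisition": stats["spend"] / stats["new_customers"],
        "net_value": stats["revenue"] - stats["spend"],
        "ads_produced": stats["ads"],  # vanity metric, kept only for contrast
    }
```

With these assumed numbers, team A's 400 ads lose money while team B's 200 ads generate a healthy return—which is exactly why ad count alone supports no productivity conclusion at all.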


What if I told you team A worked in a common work area and team B worked alone in private offices. Could we conclude that working in a common work area doubles our productivity?

No.

I hope by now the reason is obvious.

Now imagine a company sells tools for judging the productivity of ad writing teams. Say this company gathered data not just for one week but for 500 weeks, not just for one team but for 35,000 teams. Would that make the observations more accurate? No. More believable?

Yes, unfortunately.

As far as I could tell, not one other person in the audience considered the entire agile metrics presentation to be invalid—and it was entirely invalid. Perhaps I am lucky: I not only founded one of the world's most renowned agile software houses, I also work and teach direct marketing techniques with Perry Marshall and Associates. Direct marketers cannot tolerate any BS around how their campaigns actually perform. If they do, they go out of business, fast.


The Second Metric you always need measures how each element of work you deliver creates business value. If you are writing software, this calculation can frequently be automated and reported in much the same manner as digital direct marketers automate and report the success of their ad campaigns—tying individual ads all the way to revenue generated.

For many features it is possible to automatically calculate and display the value the feature brings to the business once the software is deployed. This can even be done for features that simply meet a government regulation. It is essential to calculate this value in one way or another for every feature if we are ever going to understand real productivity.


I suggest anyone doing metrics begin by taking a good course in direct marketing. It is time for metrics to grow up and join the adults. It is time for people to laugh a presentation like the one I saw out of the room.

The Third Metric: Cost of Change Delta


This metric I also describe as the forgotten metric. Some of the earliest and best writings on what would later be called agile were by Kent Beck and his associates. They wrote extensively about a key goal: flattening the software development cost curve. That is, keeping the cost of making a change near the beginning of a software development initiative close to the cost of making the same change later in the initiative.


A whole series of software engineering practices are required to flatten the cost curve. They are included in Kent’s Extreme Programming, and totally missing in another supposedly agile process called Scrum.


Flattening the cost curve is important because a vast majority of the cost of corporate IT systems is not during initial development but during long term system maintenance, support, and enhancement. Many IT systems last ten to twenty years or more. For these systems eighty to ninety percent of the cost of the system is likely to occur in maintenance and enhancements.


The Third Metric is a measure of the difference of a cost of a change over time. The Third Metric is Cost of Change Delta.

Here is a simple way to understand this metric. Go to any screen in your system and ask the software developers to estimate the effort to add a new field to the screen, to be able to create, read, update, and delete (CRUD) the data, and to deploy it all to production.

In traditional development processes, a request for a new data field early in development is very cheap, and the same change later in development is very expensive—perhaps hundreds of times more expensive than the initial change. This is why traditional development projects try to get all the requirements right at the beginning and try to get them all in quickly—it saves a great deal of money.

Let's look at two cases, Project A and Product B.

In Project A we observe that the estimate to create a new CRUD field on the user interface is $1,000 early in the project. Later, the cost of that same change is estimated at $150,000. The Cost of Change Delta is 150x.

In Product B we observe that the estimate for a new CRUD field on the user interface is $1,250 early in product development. Later, the cost of that same change is estimated at $1,500. The Cost of Change Delta is 1.2x.
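The arithmetic is a single ratio; the figures below mirror the hypothetical Project A and Product B estimates above.

```python
def cost_of_change_delta(early_cost, late_cost):
    """Ratio of the late cost of a change to the early cost of the same change."""
    return late_cost / early_cost

# Hypothetical estimates from the two cases above.
project_a = cost_of_change_delta(1_000, 150_000)  # 150x: steep cost curve
product_b = cost_of_change_delta(1_250, 1_500)    # 1.2x: nearly flat curve
```

Tracked across projects and over time, this one ratio tells you whether your cost curve is flattening or steepening.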

Project A’s cost of change delta is what is seen in many traditional IT departments, and Product B’s cost of change delta is what is seen at well-implemented new startups. The lower the cost of change delta, the more likely your business will survive future change.

Want to measure your ability to respond to change?

Track a cost of change delta over all of your projects and products.

This metric is essential for understanding the agility of your entire system. If your Cost of Change Delta is high you are not particularly agile, and competitive changes may simply put you out of business. It is likely your code base was created with little or no attention paid to key agile engineering practices.

The Fourth Metric: Employee Satisfaction


A shift toward enterprise agility is a significant change for anyone, but especially for large bureaucracies. Employees self-select to work in large bureaucracies precisely because they find them comforting. Changing work patterns solidified through decades of operant conditioning is extremely threatening.


For employees, the status quo is at least predictable and dependable. That stability alone provides them with a degree of emotional support. An agile transformation threatens everything. By the way, this is frequently true even in small companies. Therefore, it is especially important for organizations undertaking an agile transformation to have employee satisfaction feedback.


Employee satisfaction feedback helps us improve our processes and accelerate our rate of change.


We monitor it continually with some form of running survey, just as we do with customers. The goal of the running survey is to watch the trends and detect shifts over time. For example, we can use these simple questions as a starting point:


How happy are you at work? (unhappy) 0 1 2 3 4 5 6 7 8 9 10 (happy)

Would you recommend working here to a friend? No or Yes

Why?

Recognize that the biggest cause of unhappiness in an agile transformation is management. Specifically, management not letting the employees work in a truly agile manner:

  • Not letting workers design their own work environments

  • Forcing the workers to multitask

  • Creating unnecessary role distinctions

  • Requiring unnecessary documents, reports, and meetings

  • Creating lags in work while waiting for approvals

The happiness metric also comes with a parallel automated metric that is difficult to game—the current turnover rate. Agile takes team satisfaction seriously and builds in multiple techniques to achieve it, including:

  • Dedicating people to one team and one task at a time

  • Providing teams with a sense of purpose or mission

  • Encouraging and equipping teams to master multiple skills

  • Giving teams autonomy to deliver their work as they see fit

  • Discouraging overtime and encouraging vacations

  • Promoting authenticity through transparency

It is impossible to sustain the benefits of agile without achieving and maintaining employee satisfaction.

 


©2020 by Tom Meloche.