A good outcome does not necessarily mean a good decision was made, nor does a bad outcome necessarily mean a bad decision was made.
While I am a fervent proponent of “It’s performance that counts,” on the down side I know that circumstances unforeseeable by even the most vigilant can occur, that random events intervene, that even the most intelligent efforts do not always yield the desired results. On the up side, Lady Luck often intervenes, a rising tide lifts all boats, and we often confuse a bull market with brilliance.
Excellence lies in the quality of the response to bad outcomes, not in the guarantee of good outcomes. Yet I’ve noticed that we as a society tend to simplistically equate good outcomes with good decisions, and bad outcomes with incompetence and bad decision making.
We tend to make a one-to-one correlation between outcomes and competence. Yet we know that correlation is not causation: just because trends or outcomes are linked in direction or time does NOT mean that the causes behind them, the forces that drive them, also are linked.
100% of the people who take high school algebra eventually die. Does this mean that high school algebra classes lead to death? (I tried that on my dad as a kid. Didn’t even get to first base.)
The way data is presented can affect how we interpret it. Skewed presentation, even if inadvertent, can easily lead to bad decision making.
To evaluate performance correctly and intelligently, we need to include peer group observations and a long enough baseline to rule out statistical flukes. Finding the right peer group is no easy task; there are generally many choices, and I assure you that if the people being evaluated, CEOs on down, get to pick the peer group, they will pick the one that shows them in the best light!
Statistics can be manipulated; context is always vital. Example: The officers and the enlisted men at a remote base played a series of baseball games that the enlisted men handily won. At the conclusion of the season, a notice on the bulletin board read, “The officer baseball team had an excellent season, finishing in 2nd place. The enlisted men were not as fortunate, finishing next to last.”
When choosing peer groups for comparison evaluation, I recommend using several different baselines. Relative performance against each one will generally reveal a different and often important aspect.
Here again, context is always vital. Business Week recently ran a graphic showing the stock returns earned during the first seven years of tenure of the last five CEOs of General Electric. The returns ranged from -30.2% (current CEO Jeffrey Immelt, the worst showing) to +140.2% (prior CEO Jack Welch, the best). The returns were presumably uncompounded, but the graphic did not say.
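The cumulative-versus-annualized distinction matters when reading such a graphic. As a minimal sketch, assuming (as the text surmises, since the graphic did not say) that the cited figures are uncompounded cumulative totals over each CEO's first seven years, here is how they translate into compound annual rates:

```python
# Sketch: converting a cumulative total return over N years into the
# equivalent compound annual rate. ASSUMPTION: the Business Week figures
# are cumulative totals over seven years, which the graphic did not state.

def annualized(cumulative_pct: float, years: int) -> float:
    """Compound annual growth rate (percent) implied by a cumulative return."""
    growth = 1 + cumulative_pct / 100  # e.g. +140.2% -> 2.402x
    return (growth ** (1 / years) - 1) * 100

for name, total in [("Welch", 140.2), ("Immelt", -30.2)]:
    print(f"{name}: {total:+.1f}% over 7 years = {annualized(total, 7):+.1f}% per year")
```

The point is only that a headline cumulative number can look far more dramatic than the per-year rate it implies, in either direction.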
The obvious implication is that Welch was a great CEO and as for Immelt? Well, the numbers speak for themselves! But do the numbers tell the truth, the whole truth, and nothing but the truth? While seven years is probably a sufficient sample size, what is missing in this comparison is a baseline, a peer group for comparison purposes. How much did the overall stock market go up during Welch’s first seven years? How much did it go down during Immelt’s first seven years?
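The baseline comparison the questions above call for can be sketched in a few lines. The CEO returns are the ones cited from the Business Week graphic; the market figures below are hypothetical placeholders, not the actual market returns for those periods:

```python
# Sketch: judging a return against a baseline over the SAME period.
# CEO returns are from the Business Week graphic cited above; the
# market returns are HYPOTHETICAL placeholders for illustration only.

def excess_return(total_return_pct: float, baseline_return_pct: float) -> float:
    """Performance relative to a baseline, in percentage points."""
    return total_return_pct - baseline_return_pct

ceo_returns = {
    "Welch, first 7 years": 140.2,   # from the article
    "Immelt, first 7 years": -30.2,  # from the article
}

# Placeholder baselines: a strong bull market under Welch,
# a weak market under Immelt (illustrative numbers only).
market_returns = {
    "Welch, first 7 years": 100.0,
    "Immelt, first 7 years": -20.0,
}

for ceo, r in ceo_returns.items():
    ex = excess_return(r, market_returns[ceo])
    print(f"{ceo}: raw {r:+.1f}%, vs. market {ex:+.1f} pts")
```

Under these made-up baselines, both raw numbers shrink toward the market: the point is that the raw return alone cannot distinguish skill from the tide that lifted, or lowered, all boats.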
(I have no opinion on Immelt and I think Jack Welch was a terrific CEO. He shook the bureaucracy right out of GE with his no-nonsense ways and his straight talk, and insisted on transparency and accountability.)
The data as presented by Business Week (as happens frequently across the board) was chosen to underscore the theme of the article and did not include enough context to support the impression it gave.
So remember: Good outcomes do not always mean good decisions, and bad outcomes do not always mean bad decisions!