Wednesday, January 17, 2007

Policy Governance - Dr. Richard Biery on the Role of Measurement

An Article on Ends and Measuring Under Policy Governance®

Performance measurement is a hot business topic. The other day I was discussing with a management consultant the question of whether the board gives the CEO the measurements it wants used or instead, as in Policy Governance, the board expresses its desired ends or results (in terms of what good, for whom, and the allocation of costs among the components of results or recipients or both), and Management identifies the best measurements to use in its judgment, subject to any reasonable interpretation of the board’s words. It made much more sense to her, especially since that is what she was teaching CEOs, that the board should proactively establish the metrics it wants reported on. It is unfair to the CEO, she maintained, to set goals and then not establish the measures. After all, “what gets measured gets managed.” That certainly seems a compelling argument.

Why does Policy Governance give the CEO the freedom to establish the measures, provided those measures make a convincing case for policy compliance? What advantages does this present? To many in management science this seems backwards. We are enjoined in all the goals-and-objectives literature to set measurable goals at some level immediately below broad goals. Some writers stipulate that goals are broad and it is the objectives that should be measurable. In fact, there is no agreement in the management literature on this issue (which illustrates why such terms should not be used when explaining to the board how to give purposing instruction to the CEO – purposes are best stated as ends, which do have a clear definition).

Why indeed describe the ends and not their measurement? For several reasons. Not having to worry about measurability permits the board to concentrate on saying what it wants with as much precision as it feels is necessary to be reasonably understood. There may not be measures for what the board wants…yet. It is much more important for the board to express what it wants as precisely as it feels necessary, even if the measures are imprecise, than to select a precise measure of what is not precisely desired! Furthermore, the board is not necessarily the expert on the metrics or measurement of the ends it desires, and it is not expected to understand the measurement science available for those ends. Management is much more expert in the measurement arts and tools and can bring what it knows, or can develop, to the job of measuring the board-stipulated ends. Certainly the board can set ends in so precise a fashion that the measurement(s) become self-evident, but that does not nullify the foregoing argument.

Another reason to let Management develop the measures is that the science of measuring is constantly changing. If the board tried to keep up and keep its measurement requirements up to date, it would be busy indeed. It would also have serious arguments about the best measurements, arguments in many cases conducted from ignorance. The beauty of focusing on the ends descriptively (but as precisely as possible, to convey what the board is expecting) is that they float above the vicissitudes of measurement science without losing their meaning and import. (This is also an advantage of a focus on ends rather than means. Means for achieving ends change, and can change unexpectedly. Since sacred cows are much more likely to be means than ends, the board’s focus on ends solves the sacred-cow question.)

What kind of measures should Management strive to find or develop? Can it just pick anything that suggests the ends? The data challenge for Management is to present information and data that convince the board that the ends are being achieved. The better the data, the more convincing the case. This frees (and motivates) Management to continue to improve the measurement science relevant to the end in question. If the board stipulated both the end and the measurement, the measurement would be locked in until the board changed it, even though better measurements may have become available.

What about continuity? If measurement science is constantly improving, how can there be the continuity necessary to demonstrate progress? This argument is a red herring. All one needs to do is think back over all the improvements that have been achieved in measurement in a given area and then ask whether continuing to use the same old, outdated measurement would have helped continuity, or whether switching to the newer, more precise measurement harmed it. There are several reasons why improving measurement does not harm continuity. Two major ones are that the new measurement may be derivable from the old data, and that the issue is often simply not important. If, for example, one can now measure to the one-thousandth where the old measurements were to the one-hundredth, we celebrate and go on! Or if we can measure the effect of a psychiatric treatment or a social program better than we could before, so much the better. We adopt the newer measure.

Board work is conceptual work. The farther the board can wisely look into the future and describe its expectations in terms of the results it seeks, the more strategic it is. Being drawn into thinking about measurement will, perforce, pull the board back to the present, because by its very nature the available measurement science is a contemporaneous art – always improving, to be sure, but not out in the future where the board should be thinking. In other words, thinking about metrics and measurements drags the board backwards (and into details). It is best to let the measurements catch up with the board’s vision.