
22 January 2008

The perils of arbitrary and false precision

We find quite unhelpful the recent academic obsession with estimative language – largely an exercise in the introduction of a numerical system that offers an apparently comforting but entirely arbitrary, and therefore utterly false, precision. It seems, however, that we are in one of those cycles that come along in the intelligence community every few decades, in which the numerologists and other soothsayers attempt to reshape the profession to suit their own desire for a more “scientific” practice.

Let us be clear. There are times when quantitative analytic methodology is vital – but there are far more situations in which it is misapplied, misunderstood, and entirely out of place. The latter comprise the vast majority of scenarios in which analytic tradecraft is called upon, not least because of the highly unbounded and indeterminate nature of the problems with which we must grapple. Wherever a quantitative basis has not been established, the insertion of numerical percentages for predictive purposes is little more than a farcical exercise in arbitrary selection. Over time, one may attune a group sufficiently to calibrate its judgment of these percentages so as to create consistency within that shared hallucination. This, however, does not alter the underlying fallacy upon which such a house of cards is built. The fallacy is clearly shown in the number of cases in which a naive predictor proves a better estimate than the much-vaunted judgment of a group of experts. Thus even in finance, the most precise of arenas, built upon a foundation of numerical values, predictions are expressed alongside hedges – and the market is littered with those who have failed in the attempt to impose arbitrary figures on a highly indeterminate problem.
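
The point about the naive predictor admits a concrete illustration. What follows is our own minimal sketch, not drawn from any study or real estimate – the base rate, the “expert” model, and all of the data are synthetic assumptions. It scores, by Brier score (the mean squared error of stated probabilities against outcomes), a forecaster who simply quotes the historical base rate against an overconfident expert whose signal is only weakly better than chance:

import random

random.seed(42)

BASE_RATE = 0.3   # assumed underlying frequency of the event (synthetic)
N = 1000

outcomes = [1 if random.random() < BASE_RATE else 0 for _ in range(N)]

# The naive predictor simply quotes the historical base rate every time.
naive = [BASE_RATE] * N

def expert(outcome):
    # The "expert" reads each case correctly only 55% of the time...
    correct = random.random() < 0.55
    lean_high = (outcome == 1) == correct
    p = 0.55 if lean_high else 0.45
    # ...then exaggerates that weak signal toward the extremes.
    return min(max(p * 1.8 - 0.4, 0.01), 0.99)

experts = [expert(o) for o in outcomes]

def brier(forecasts, outcomes):
    # Mean squared error between stated probabilities and what happened.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

print(f"naive base-rate Brier score: {brier(naive, outcomes):.3f}")
print(f"overconfident expert Brier:  {brier(experts, outcomes):.3f}")

Run as written, the boring base-rate forecast scores near 0.21 and the expert near 0.25 (lower is better). The exaggeration step is the crux: the expert’s raw signal has some value, but pushing probabilities toward the extremes discards the calibration that makes the base rate so hard to beat.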

One of the greatest challenges in intelligence analysis is to understand the limits of prediction when going about the hard business of estimation. That understanding should shape the analyst’s focus on what ought to be examined for predictive possibility. These are, properly, the scope and nature of trends, drivers, and future scenario outcomes – not the capricious shadings of difference between mathematical expressions of probability.

Estimative language has not been expressed through probability percentages over the sixty-plus years of the intelligence community’s modern incarnation, for good and well-contemplated reasons. While the abstraction of the clean and sterile realm of mathematics is often a welcome change from the messy and hard realities of intelligence, that abstraction is too frequently used as a shield and an intellectual refuge by those unable or unwilling to embrace the challenge of actually doing intel.

Scientism in intelligence analysis is a particularly seductive heresy. It offers the false promise of greater insight, if only additional effort were applied more systematically, more rigorously, or with more and better data. But it has not been given unto us to see the future – no matter how carefully we might craft our equations. We may only chart the boundaries of its outlines, and discuss the implications within the uncertainty space so described.

We have no doubt that we will revisit this discussion in short order. For now, however, we would close with an excellent reminder of the vast gulf of difference that may be concealed within a change of a single order of magnitude in numerical expression. Originally produced for IBM, this admittedly dated video still serves to explain the staggering concepts of scale in a world of large numbers. (h/t to Thoughts Illustrated for pointing out its online incarnation.)

