
05 September 2008

Forecasting through games

There is a long history of modeling, simulation, and gaming within the intelligence community, dating back to the Prussian General Staff’s Kriegsspiel, wherein the intelligence of the day, such as it was, would be used to determine the enemy strength and disposition to be set for the initial conditions of the map board. (An early American adaptation of the wargame – itself arising out of the intense interest in military professionalization in the latter half of the 19th century – can now be found in the digital stacks. It is worth a glance for those inclined toward matters historical but lacking either access to the original text or the German language skills with which to comprehend it.)

One can trace a direct lineage from such explicit gaming structures through the modern evolution of many forms of exercise and drill. Such efforts are increasingly reflected in new training and education initiatives within the IC, such as the recently publicized virtual incarnations of several analytic exercises at DIA. The exercise materials themselves have a far longer history in more prosaic incarnations. The tanker war exercise at the heart of Vital Passage, for example, has been used to teach analysis of competing hypotheses (ACH) for years using nothing more than paper and pen. The new immersive formats are clearly of value in capturing the attention of those students who have not yet been caught by the more abstract means of envisioning crisis. They also serve as a good transition towards the application of the methodology in more complex, non-deterministic problems – particularly given the new emphasis on using assistive software to help track larger scale issues. (We unfortunately continue to encounter a number of younger analysts – products of the civilian university – who are unable to distinguish between ACH as an analytical methodology and the software used to automate that methodology. But that is another matter, and points to a failure of instruction at certain institutions rather than to flaws in computer-aided analysis or exercise.)
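
For those who have only ever met the methodology through its software incarnation, it is worth remembering just how little machinery ACH actually requires: a matrix of hypotheses against evidence, scored for consistency, with the ranking driven by disconfirmation rather than confirmation. A minimal sketch follows – ours alone, with an invented scenario loosely echoing the tanker war setting above, not any particular tool’s implementation:

```python
# Minimal sketch of the core ACH bookkeeping: score each item of evidence
# against each hypothesis, then rank hypotheses by how much evidence
# argues *against* them. Hypotheses, evidence, and scores are invented
# for illustration.

# Consistency scores: +1 consistent, 0 neutral/ambiguous, -1 inconsistent.
matrix = {
    "H1: routine naval exercise": {
        "surge in small boat activity": 0,
        "mine stocks moved to forward ports": -1,
        "sharp rise in state media war rhetoric": -1,
    },
    "H2: preparation to close the strait": {
        "surge in small boat activity": +1,
        "mine stocks moved to forward ports": +1,
        "sharp rise in state media war rhetoric": 0,
    },
}

def inconsistency(scores):
    """ACH ranks by counting evidence against a hypothesis; a pile of
    confirming evidence does not rehabilitate a disconfirmed one."""
    return sum(1 for s in scores.values() if s < 0)

# Least inconsistent hypothesis first.
for hypothesis, scores in sorted(matrix.items(), key=lambda kv: inconsistency(kv[1])):
    print(f"{inconsistency(scores)} item(s) against: {hypothesis}")
```

The point, which the software merely automates, is the discipline of the matrix itself – and that discipline survives perfectly well on paper and pen.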

For too long, though, gaming has stagnated, essentially unchanged from its earlier incarnations. It has been left to the jesters and the speculators to push the boundaries of the tool, pointing the way to new directions and new uses. The most provocative of these suggestions – as is frequently the case – came from a jester at the futurists’ court, and examined the potential utility of an alternate reality game structure as a recruiting and coordination mechanism for HUMINT operations involving unwitting participants.

A more immediate implementation has also now appeared, attempting to use massively multiplayer structures for long term analysis challenges. The Institute for the Future will launch its new project, Superstruct, on 22 September, which will attack a catastrophic future scenario using an alternate reality gaming architecture for distributed participation. It is a unique approach, described further through a FAQ here, and we can already see the benefits that the transparent and free-form ludic design brings to the table. (We would note this to be a distinct difference from other crowdsourced analytic projects that we have recently seen attempted.) We also have high confidence in the intellect and insights of the team that is executing the Superstruct project, having followed their work for some time, as well as having attended a fascinating discussion with other ARG designers from the original "I Love Bees" team at a Second Life salon hosted by The Electric Sheep Company a number of years ago.

We have long been on record as highly skeptical of efforts to use the intelligence community as the instrument by which to assess the uncertainties of future climate change, and have debated the issue with others of discernment who hold differing views. Yet the IC responded to the requirements levied upon it by Congress – as it always should. The resulting assessment, and the public testimony surrounding it, are a model of intelligence professionalism in the face of intense politicization. We find Dr. Fingar’s responses during questioning – clearly outlining the uncertainties of the scientific data, and the limitations of the IC’s resident expertise on the topic – a perfect teaching example of effective intelligence communication.

We think, however, that efforts such as Superstruct may be a better venue for exploring these questions, at least until the window of likely impact falls within the long range horizons of the intelligence community’s estimative views – be that fifteen, thirty, or fifty years hence. It is also a fascinatingly cross-account and interdisciplinary issue – as well as a frankly lower priority intelligence problem – and thus perfectly suited to experimentation with new analytic methodologies, novel analytic outreach, and new distributed production models.

We wish the project good fortune, and look forward to the after action assessment for any lessons learned that might be applied to future analytic tradecraft.

h/t Smart Mobs and the Business & Games Blog


25 February 2008

Wx-ing historical

We have addressed the recent fad of treating climate change as an intelligence issue several times over the course of this blog. We remain convinced that while weather intelligence – Wx – will always be a key factor in many accounts, climate change as a long term issue is simply beyond even the horizon that can be expected of the deepest of futures intelligence looks.

We are apparently in the minority in this view. We recently acquired a copy of an unclassified 1978 research paper from the Central Intelligence Agency’s National Foreign Assessment Center (since republished by the University Press of the Pacific in 2005) which examined this very issue under the title “Relating Climate Change to its Effects”. For those younger analysts unfamiliar with the misty back history of old bureaucratic battles, and therefore with the older acronyms, NFAC was the renamed Directorate of Intelligence (DI) under DCI Turner in 1977 – a designation which lasted only until reorganization under DCI Casey’s tenure in 1981.

We think perhaps this product might best have been buried with the old name. As a paper, it is almost entirely uninspiring – a mere 8 pages of substantive text, followed by hundreds of pages of tables and statistics that are the hard copy rendition of a contemporaneous data tape, mostly consisting of temperature and precipitation measurements assembled by a university contractor on behalf of USDA. These form the inputs to a simple climate model that was intended to provide long term predictions of weather effects given specific outcomes, such as global cooling – a key concern of the day. (To their credit, the designers did examine the potential for global warming as well – which speaks well of the analytic rigour of the DI under any name, even if the paper is mute testament to just how bad scientific and technical experts can be at communicating with their readership through written intelligence products.) A speculative product such as this can be expected to offer no real conclusions – rather, it simply serves as a possible set of boundaries within the uncertainty space of future scenarios. However, it might well have made more explicit the effects it claimed to consider within the range of those scenarios.
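
If the approach was anything like the obvious one – the paper does not reproduce its model in usable form – it amounted to little more than perturbing the historical baselines by a postulated scenario offset and reading off the downstream effects of interest to USDA. A sketch of that style of analysis, with invented numbers:

```python
# Sketch of the 1970s style of scenario analysis described above: perturb
# a historical temperature baseline by a postulated cooling or warming
# offset, and read off a crude agricultural effect (growing degree days).
# Baseline values and offsets are invented for illustration; the original
# paper's actual model is not reproduced here.

BASE_TEMP_C = [2.1, 4.0, 8.5, 13.2, 18.0, 22.4,
               24.9, 24.1, 19.8, 13.5, 7.2, 3.0]   # notional monthly means
DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def growing_degree_days(temps, base=10.0):
    """Accumulated warmth above a crop threshold: a standard proxy for
    growing-season effects."""
    return sum(max(t - base, 0.0) * d for t, d in zip(temps, DAYS))

for label, offset in [("global cooling", -1.0), ("baseline", 0.0), ("global warming", +1.0)]:
    gdd = growing_degree_days([t + offset for t in BASE_TEMP_C])
    print(f"{label:>14s}: {gdd:6.0f} GDD")
```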

But again, this serves to illustrate both the waste and the foolishness of attempting a futures intelligence estimate so far into the out years. We are not issued crystal balls when we are granted entry into the profession. It also serves to illustrate the perils of the arbitrary application of quantitative analysis as a fig leaf over unsustainable judgments. The model – no doubt painstakingly assembled and hard fought at the methodological level – is by its very nature the product of 1970s-era computer science. In the face of Moore’s Law, it is therefore over 20 generations obsolete.
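
The arithmetic behind that figure is simple enough, assuming the customary eighteen-month doubling period:

$$ n \;=\; \frac{30\ \text{years} \times 12\ \text{months/year}}{18\ \text{months/doubling}} \;=\; 20\ \text{doublings}, \qquad 2^{20} \approx 10^{6}. $$

A factor of roughly one million in raw capability separates that data tape from the machines of the present day.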

Let us hope that this little musing upon the history of the account gives at least slight pause to modern practitioners seeking to enshrine climate change as a permanent account for long range intelligence analysis. At a point in time when supercomputational problems that take longer than a few months’ run time simply are not run at all until the next generation of architecture delivers its inevitable order of magnitude or more, it is after all more than a bit presumptuous to assume that any community entity would be able to beat or even match the kind of big iron thrown at these problems in the civilian science world. It also raises the question of what better use such resources might be put to for other intelligence accounts – perhaps in the classic roles in which the IC has always employed supercomputing resources: cryptanalysis, automated signal processing, or even exploring the new boundaries of potential offered by quantum computing.

If nothing else, this bit of history has more firmly reinforced our opinion that climate change issues are a matter best left to the academics. Perhaps once the Long War has been won – and given the timescale we believe will be needed to accomplish this monumental, generational task – the community’s attention can turn to the matter once again. And if the current crop of speculative forecasts proves correct, at that point in time the issue may properly fall within the window of an actionable long range estimate.


16 January 2008

New technologies for facilities characterization

We recall – and not at all fondly – the early days of attempts to create architectural CAD renderings of target facilities. This was a clumsy process, which almost always required an engineer of some description to be involved, and frankly created an end product that most consumers saw as nothing more than a low resolution graphic. To be sure, there were always the bragging rights of adapting a new system to a classic target, but on any given day we preferred a good graphics artist with a keen eye for perspective and proportion. Kind of like architects themselves, really, given their preference for covers showing artists’ concepts of buildings rather than the old blueprint style plans. Newer generations of architectural software have apparently made this task easier, but have not been as enthusiastically explored.

But a new technology may change that, by bringing back a level of interaction with the system that the insulating layer of engineering specialization took from the CAD models. We are already on record as fans of the concepts behind multitouch style screens – especially in their as-yet fictional incarnations (courtesy of the futures studies folks). There appears to be a lot more to explore in this space, though, as new applications continually pop up that offer to reconceptualize the human-computer interface for a variety of tasks across the intelligence profession.

The latest incarnation that has come to our attention promises a “virtual” view of factories – and in particular, aggregate views of systems data for reactions that cannot be seen directly. One can readily imagine the utility such a system would have for those engineers and analysts attempting to assemble a composite view of a competitor’s industrial processes – or an adversary’s chemical or biological weapons production facility. The key to the technology’s innovation, in our view, is not the S&T solution, although this could indeed be valuable, but rather the engagement with a high fidelity visualization that the multitouch screen (and its follow-on evolution) could bring. We strongly believe that innovations like this are vital in opening up the more arcane collection and analytic disciplines to all source generalists – and more importantly, the intelligence end user – in ways that graphs and pictures of CAD renderings could not.

We look forward to the day – hopefully sometime very soon – when we might be able to host a briefing around such a table screen, discussing some hard target with a consumer who can literally get their hands (and head) around the issue. We suspect that in this our counterparts in the commercial sector might lead the way; but if nothing else, the use of such visualization techniques in competitive technical intelligence might provide an excellent example to reference when building out an acquisition justification elsewhere in the community.


h/t Smart Mobs


24 December 2007

Systems of systems analysis, with zombies

One of the harder concepts to teach within intelligence studies is the analysis of systems of systems, particularly given the complexities of real world PMESI examples. This is compounded by the natural tendency of students (and of instructors with experience on related accounts) to focus on the disruption of systems – terrorism, warfare, economic crisis, etc. – far more than on the aspects of resiliency that allow these systems to adapt even under severe strain. (From this originate our most serious criticisms of John Robb’s Global Guerrillas theory.)

Teaching systems of systems analysis in the foreign intelligence environment is one thing. Typically, those students are familiar with – or at least interested in learning about – the theories of political science, international relations, and macroeconomics that help to make sense of complex and dynamically adaptive structures. It is not such an easy task when teaching in the homeland security environment. That student population tends towards the far more practical aspects of narrowly focused and concrete problems – as one would expect from a group composed largely of cops, firefighters, and emergency medical staff. They also tend to reject what they do not perceive as inherently governmental – no matter how critical the impact of a given system’s failure might be. Absent a scenario in which they can more readily grasp the implications, it is a difficult if not impossible task to inculcate the perspective required to properly address the evolving all-hazards approaches that homeland security intelligence professionals must grapple with.

And herein lies the rub. Even the most carefully crafted teaching scenario can be challenged by those intent on avoiding the intellectual aspects of the exercise under the rubric of “experience”. Even recent major real world cases, such as Katrina or the California wildfires, are considered to be such anomalous Black Swan events that they are beyond the practical scope of most anticipated homeland security scenarios. This is to say nothing of a potential mass casualty or catastrophic event. This extremeness aversion (covered well within Nassim Nicholas Taleb’s discussions of Mediocristan) makes it exceptionally difficult to address major incident scenarios in a manner that truly engages the participants and forces them to consider the full spectrum of consequence management issues.

One potential answer that we have found comes from the realm of the jesters who occasionally visit the futurists’ table. Since the fundamental premise of national event training scenarios is often rejected by participants who cannot visualize the circumstances under which they personally would be involved in the response, no matter how plausible the simulated injects, we change the premise entirely. Rather than attempting to force a willing suspension of disbelief in a plausible, realistic scenario – regardless of whether the participants ought to believe it in the first place – we introduce a completely unrealistic scenario that borders on the ludicrous. However, it is precisely this over-the-top element that stimulates not only engaged participation but what rapidly evolves into a serious discussion within a more grounded analytical framework.

The scenario, of course, is a wide-scale zombie attack. Yes, as in the undead – not the computer type beloved of the boffins out there. This staple of countless bad movies requires little explanation – almost everyone is familiar with the rules of zombie behavior and infection. (For those who are not, a review of the “literature” is one of the more enjoyable homework assignments, no doubt.)

The underlying principle of why this scenario works to engage even the most reluctant of participants should be very familiar to intelligence professionals: it is identical in purpose to the analytical technique of divergence. It is also an excellent way to keep discussion at the unclassified level when working in mixed groups of professionals with varying degrees of access (thus preventing arguments based on “inside information” which may or may not actually support the point under contention), in a way that no scenario grounded in a real world event necessarily could.

The real key to making the scenario work for a good systems of systems discussion is the ability of the instructor (or of the facilitator, in breakout session groups) to tie the discussion back to PMESI effects. Done well, this can be quite an enjoyable exercise, conducted in a manner that avoids many of the traditional objections raised by those insistent on the limited focus of the classic inherently governmental perspective.

Fortunately, we recently found a work of fictional speculative “history” that presents an excellent look at the higher order effects of such a scenario: World War Z. While it goes far beyond the level we would typically focus on for a homeland security class or tabletop exercise, it is quite well executed and entertaining in its own right. Its interview-style structure gives it unique potential value to the educator, as most of the chapters can be used in whole or in part to introduce the scenario. For this, we actually recommend the audiobook version, with excerpts played as scenario injects or to introduce breakout discussion sessions.

As an instructional exercise, this is certainly a far more ludic activity than we traditionally see in the serious business of thinking about the unthinkable. Nonetheless, we consider it an excellent way to introduce some difficult high level concepts to audiences that might not otherwise want to engage with them. We think the results are far better than the limited appreciation retained after a dry lecture, or a hot but entirely off-topic debate over the plausibility of the underlying events of a different scenario.


26 November 2007

Traffic analysis

Hauntingly beautiful in its own right – the following is an excellent example of the power of visualization to make sense of complex patterns within extremely large datasets; in this case, aviation flight records.

There are equally haunting displays of the exchange of information, of light, and of motion in other areas that lie close to the art and science of the profession itself. In the words of the master (still accurate, more than two decades on):

“Program a map to display frequency of data exchange, every thousand megabytes a single pixel on a very large screen. Manhattan and Atlanta burn solid white. Then they start to pulse, the rate of traffic threatening to overload your simulation. Your map is about to go nova. Cool it down. Up your scale. Each pixel a million megabytes. At a hundred million megabytes per second, you begin to make out certain blocks in midtown Manhattan, outlines of hundred-year-old industrial parks ringing the old core of Atlanta. . .”

Sometimes we forget the unintentional glimpses into the very nature of the human activity that our unique profession affords us. In deep time, these may be the closest thing to the historical record of a whole range of otherwise unremarked aspects of our day.
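
The master’s passage is, we note, very nearly pseudocode: render traffic volume as per-pixel intensity, and when the map threatens to go nova, up the scale until structure re-emerges. Purely for our own amusement, a minimal sketch of that rescaling trick – with an invented traffic grid standing in for the real feeds:

```python
# The passage above is nearly an algorithm: map traffic volume to
# per-pixel intensity, and when the display saturates ("about to go
# nova"), raise the per-pixel scale until structure re-emerges.
# The traffic grid below is invented for illustration.

import random

random.seed(7)
# Notional traffic volumes in megabytes on a small grid; two hot cells
# stand in for Manhattan and Atlanta.
grid = [[random.expovariate(1 / 2e5) for _ in range(12)] for _ in range(6)]
grid[1][2] = grid[4][9] = 5e9

RAMP = " .:-=+*#%@"   # the last character "burns solid white"

def render(grid, mb_per_level):
    top = len(RAMP) - 1
    return "\n".join(
        "".join(RAMP[min(int(cell / mb_per_level), top)] for cell in row)
        for row in grid
    )

for scale in (1e3, 1e6, 1e9):   # "up your scale" until the map cools down
    print(f"--- {scale:.0e} MB per intensity level ---")
    print(render(grid, scale))
```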

UPDATE

It appears Zenpundit's thoughts have also drifted towards the visual, with the discussion Cognition of a Society of Visual Imagery, building upon a post at Glittering Eye. Perhaps his blog redesign (utilizing, as we understand it, the excellent talents of his wife's firm) has contributed to his greater appreciation of the aesthetic. Certainly, we, his readers, have benefited.


30 October 2007

Epidemiological intelligence reconsidered

Some time ago, persistent virtual worlds made news for an unusual incident in which a plague spread widely through a massively multiplayer fantasy game in an unanticipated fashion, due to the complexity of the system’s design. There was a great deal of speculation as to the ramifications of this event – some of it mirroring the longstanding discussions around self-replicating “gray goo” and other nanotechnological questions, some of it quite unique in its own right. The event even spawned serious scientific papers, and responses by other researchers.

The question of the potential utility of virtual worlds for examining epidemiological effects in simulation remains a fascinating area of study. It is a natural extension of other research attempting to track, in a similar fashion, the effects of viral ideas – memes – within simulated virtual populations for information operations / psychological operations studies. This is the sort of analysis which may dramatically alter the manner in which medical intelligence professionals approach their craft in the future. The watch desks of tomorrow, rather than being tied only to a series of open source intelligence portals and medical information database feeds, might well also be linked to a shared situational awareness simulation, with the ability to rapidly generate scenario projections based on new reporting or analytical inference. Certainly, there has been enough cross-boundary interest in the problem that we expect to see surprising innovation in the near future. Such innovation is long overdue given the need, especially in the face of recent exercises attempting to examine the impact of pandemic scenarios – exercises that were less than robustly designed and executed.
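
The projections themselves need not be exotic to be useful at the watch desk; even the classical compartmental models would serve for a first cut against new reporting. A minimal sketch of the standard SIR model – with parameters invented for illustration, where a real application would fit them to the reporting stream:

```python
# Minimal SIR (susceptible-infected-recovered) projection of the sort a
# medical intelligence watch desk might run against fresh reporting.
# Population, seed cases, beta, and gamma below are invented, not fitted.

def sir_projection(population, infected0, beta, gamma, days, dt=0.1):
    """Forward-Euler integration of the classical SIR equations:
    dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I."""
    n = float(population)
    s, i, r = n - infected0, float(infected0), 0.0
    trajectory = []
    steps_per_day = int(round(1 / dt))
    for day in range(1, days + 1):
        for _ in range(steps_per_day):
            new_infections = beta * s * i / n * dt
            new_recoveries = gamma * i * dt
            s, i, r = s - new_infections, i + new_infections - new_recoveries, r + new_recoveries
        trajectory.append((day, i))
    return trajectory

# Notional outbreak: city of one million, 50 seed cases, R0 = beta/gamma = 2.5.
for day, infected in sir_projection(1_000_000, 50, beta=0.5, gamma=0.2, days=120):
    if day % 20 == 0:
        print(f"day {day:3d}: ~{infected:10,.0f} currently infected")
```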

In its own way, this is by no means a new problem. Among the most interesting books we have ever had the pleasure to peruse on related subjects is Plagues, Poisons and Potions: Plague Spreading Conspiracies in the Western Alps c.1530-1640. We discovered this work once upon a time buried in the back shelves of the bookstore of the British Museum, and found it had striking relevance for those interested in non-state actors’ motivations in biological terrorism / biological warfare incidents. The book is based largely on primary source records from Swiss city governments throughout the Savoy, which were suffering through major disease outbreaks. A recurring series of cases is documented in which individuals deliberately attempted to spread disease to uninfected populations – motivated by cult conspiracy, simple hatred, or criminal profit. Given the evidence of other recent cases involving deliberate biological infection, it appears human nature has changed little in nearly five hundred years.

We also note in a related vein the Swedes’ take on the potential dual use implications of some medical intelligence programs. While we differ with their analysis in that we are confident in the benign nature of US and allied programs, we would definitely see reason for concern in investments in such activities by some states (or their non-governmental organization counterparts) with proliferation interests.

Epidemiological intelligence remains a fascinating discipline in which the contributions of a number of different professions converge in a manner that is very rare in the community. There remains much ground for formal study to advance both the analytic tradecraft and the literature of the discipline, and perhaps to inspire similar interdisciplinary approaches in other areas of the profession.


20 August 2007

The need for an intelligence crucible

We have long emphasized the vital need, in the education and training of new intelligence analysts and operators, to replicate as closely as possible the conditions of stress and uncertainty under which they will be forced to perform in the real world. Too often we see bright young things emerge from the academy, exquisitely poised to debate the ever-finer points of language and probability from the comfortable remove of a classroom or library – but who utterly fall apart under the pressure of a crisis situation, or even an unusually aggressive consumer.

We have frequently cited one of the better implementations of such an environment, once upon a time considered the finishing school within a particular institution (but now sadly discontinued due to a change in staff). The class immediately prior to the student’s capstone / thesis was dedicated to a key national intelligence issue, with a focus on the substantive elements of the account and the unique applications of analytic tradecraft against that class of target. Each analyst was further assigned a specific segment of the account, again as might be expected in a line production environment. The entire class formed a notional joint intelligence task force or joint intelligence operations center, reporting to a notional command structure (the course instructors) and responsible for providing 24/7 intelligence support (including real-time, surge requirements) to a selected group of consumers (course instructors, other cadre, and outside subject matter experts). The class was among the hardest tasks facing any student in that program. It was, however, largely responsible for assuring a level of competence, and of personal confidence, in students which few other learning processes could achieve. As a tradition it should certainly be revived swiftly in its older home – and as an institutional legacy, it should be swiftly emulated by newer intelligence academic programs.

We are happy to see a perhaps similar form being trialed at the Brunel Centre for Intelligence and Security Studies in the United Kingdom. Their Brunel Assessment Simulation Exercise, or BASE, is designed to provide a practical simulation of the Joint Intelligence Committee assessment process. The exercise was profiled by one of the Centre’s founders in the Winter 2006 issue of the International Journal of Intelligence and CounterIntelligence. We are uncertain how closely the course models the stresses of a real world environment, let alone the complexities of interagency liaison. However, regardless of its degree of fidelity, it is likely a far better adult learning vehicle than any number of dry lectures on the UK’s national intelligence machinery.

We sincerely hope that the new programs being stood up in the intelligence studies field will take a long, hard look at the institution of the Crucible, used so successfully in a variety of applications throughout the defense, security, and intelligence communities. The benefits of a holistically integrated, cross-disciplinary, high fidelity simulated practicum are too clear to deny, even if somewhat alien to the normally slower pace and more placid practices of the academy. The next generation needs its Hell Week, both to test its professional competence and to confirm its fitness and self-confidence to face the unforgiving exam of real world situations in which reputations – and lives – are at risk.
