Crowdsourcing OSINT
UPDATED below
Now that the deadline for submissions is passed, and thus our comments cannot unduly influence responses to the DNI’s Open Source Innovation Challenge, we wish to revisit some of the strategies that emerged in response to the exercise.
The 1995 Burundi exercise sought to use a direct privatization model, in which contractor resources and analysis were directly compared to the IC’s production. Subsequent production-level OSINT efforts have rarely involved such an either/or choice, but rather focused on augmenting community capabilities. The original model was largely collection-focused, as were most OSINT efforts of the day (and regrettably, far too many even now). The choice of a single firm (admittedly, one of the few in the USG-facing OSINT game back then) offered a degree of centralization of effort and commonality of response. In reality, that single firm pursued an acquisition strategy which leveraged a number of other commercial vendors’ products in specialized areas, from gray literature exploitation to commercial overhead imagery re-dissemination and analytic outreach.
Thus far, we have observed that responses to the latest challenge appear to have fallen into roughly three categories. The first were the highly competitive offerings by subject matter experts and related small teams with prior intra-group connectivity and affiliations, typically executed rapidly and in a low-profile manner. The second were the aspirational offerings, typically by individual practitioners, interdisciplinary academics, or smaller firms. These offerings often involved those without direct community expertise covering the identified target set, but who acknowledged a desire to participate in the field. Neither was an unexpected response.
The third class of response, however, is perhaps destined to be the most controversial. It has been described as a “crowdsourcing” approach. To date, we are aware of a single such effort, out of Mercyhurst College’s intelligence studies program, which has also been something of an outlier in the field. Now, the term crowdsourcing brings immediately to mind its alternative label, the LazyWeb – and we are also reminded of the comments by the bright folks over at Oracle’s think tank AppLabs, in which the subtext of such efforts to leverage the wisdom of crowds is revealed.
However, one can rarely call the highly motivated students at Mercyhurst lazy. And while application of this new aggregation model for open source acquisition – we would hesitate to call it production, at least as we currently know it – is indeed innovative, it raises as many questions as it might produce answers.
In this, the crowdsourcing model reminds us of an earlier effort at Mercyhurst to trial new intelligence production approaches using another Web 2.0 technology. We do sincerely hope this effort is more successful than the last. UPDATE: And so it came to pass, with the Mercyhurst effort taking a win alongside the submission from the commercial intelligence firm iJet, out of twenty-four total entries. Congratulations are in order - and the first round for the winners is on us.
The most critical issue that we see in crowdsourced OSINT strategies is the problem of denial and deception. One of the enduring tenets of OSINT tradecraft is that the sources consulted ought never know the use to which the information will be put. The divorcing of content from use context goes a long way towards reducing the problems created by sources which may attempt to influence rather than inform (at least in terms of deliberate active messaging tailored for IC audiences). Rigorous analysis must still be applied to identify and eliminate the effects of source bias and implicit messaging directed at other audiences, but it is far harder for an adversary to coordinate a passive deception campaign seeded into open sources if they are unaware of the OSINT effort, its key intelligence questions, and its collection methods.
Crowdsourcing seems particularly vulnerable to denial and deception given that it relies on explicit calls for participation. Further - beyond mere knowledge of the project topic and intended audience - the publication of the specific indicators sought by the project coordinators essentially provides a roadmap for potentially successful deception themes and associated messaging, as well as the essential elements of information to be protected by adversary operations security and other denial measures. While source validation measures may provide some defense against such deception, they are unlikely to defeat a well-crafted campaign executed through appropriate cover organizations and other agents of influence.
Timelines do play a role – short-deadline production efforts are less likely to attract deliberate deception. However, if crowdsourced OSINT becomes commonplace, it may serve an adversary’s interests to establish latent architectures which would enable rapid-response active measures campaigns designed to exploit the lack of time available for validation and other testing. One could particularly see such a structure evolving in advance of planned actions which an adversary foresees would provoke a high-profile international crisis. The information advantage that could be offered in such a situation, should such deception efforts influence a targeted decisionmaker’s response, would be priceless – especially in the critical first hours of a 3 a.m. moment that developed without earlier warning.
One could easily envision an experimental research series which would evaluate the potential susceptibility of crowdsourced OSINT to denial and deception. We hope to see some young researcher take up this effort in the near future.
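A sketch of what one run in such an experimental series might look like: a minimal Monte Carlo simulation comparing the accuracy of naive majority-vote aggregation when indicators are published openly (letting an adversary salt the crowd with deceptive reporting) versus quietly tasked collection. All function names and parameter values here are illustrative assumptions for the sake of the sketch, not empirical estimates or any established methodology.

```python
import random

def simulate_round(n_sources, deception_rate, validation_catch_rate, rng):
    """One collection round: each source reports accurately (True) or
    deceptively (False). Source validation filters out a fraction of the
    deceptive reports before aggregation."""
    reports = []
    for _ in range(n_sources):
        deceptive = rng.random() < deception_rate
        if deceptive and rng.random() < validation_catch_rate:
            continue  # validation catches and discards this report
        reports.append(not deceptive)  # True = accurate report survives
    if not reports:
        return False
    # Naive aggregation: the round succeeds if accurate reports hold a majority
    return sum(reports) > len(reports) / 2

def estimate_accuracy(n_rounds=2000, **kwargs):
    """Fraction of rounds in which aggregation yields the accurate picture."""
    rng = random.Random(42)  # fixed seed for reproducibility
    hits = sum(simulate_round(rng=rng, **kwargs) for _ in range(n_rounds))
    return hits / n_rounds

# Published indicators give the adversary a targeting roadmap, modeled here
# as a much higher share of deceptive sources answering the open call.
open_call = estimate_accuracy(n_sources=25, deception_rate=0.6,
                              validation_catch_rate=0.2)
quiet_tasking = estimate_accuracy(n_sources=25, deception_rate=0.05,
                                  validation_catch_rate=0.2)
print(f"open call accuracy:     {open_call:.2f}")
print(f"quiet tasking accuracy: {quiet_tasking:.2f}")
```

A real study would of course need far richer models of source behavior, validation tradecraft, and adversary coordination; the point of the sketch is only that the susceptibility question is cheaply testable in simulation before anyone fields it against live collection.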
We would also be remiss if we did not note that another contemporaneous effort – the Gray Goose project - has emerged to address a similar real-world OSINT problem using a very different production strategy, one that might be termed rapid community-of-interest formation relying upon self-affiliation of interested subject matter experts. This effort bears greater examination in depth, particularly as it deliberately – and hopefully more productively - channels behaviors we have previously observed in surge intelligence responses to other crisis events. It also appears, at least in theory, to be more resistant to denial and deception, but that is a discussion for another day.
While the DNI’s challenge has overall generated a great deal of discussion, it remains to be seen whether that energy translated into truly innovative finished OSINT products. We eagerly await further conversations on the topic at the conference later this week, along with what we hope will be a future overview-level assessment and compilation to be published under the DNI’s auspices. From the perspective of intelligence studies theory, it has been a most fascinating exercise to observe, and no doubt much will continue to come of it.
Labels: analytic outreach, analytic tradecraft, denial and deception, Intel x.0, OSINT, privatization of intelligence