Intelligence Failures Are Inevitable. Get Over It

The Iraq War marked the first time that secret intelligence was used in the public domain to justify America going to war. After no “weapons of mass destruction” were found, some partisans laid the blame squarely on the U.S. intelligence community, calling it an “intelligence failure.” Since then, this term has become a regular part of the vocabulary of national security debates and, with the addition of the leaks by Private Manning and Edward Snowden, the work of intelligence has evolved from something that takes place behind closed doors to something that seemingly hits the headlines every week. Yet many academics, reporters, officials, and average citizens neither fully understand what this complex term encompasses nor apply it properly.

To take one recent example, some questioned whether the failure to foresee Russia’s invasion of Ukraine was an intelligence failure, and a few Congressmen were quick to apply the label. One of the major issues in the academic study of intelligence, as opposed to examinations in politicized committee hearings or newspaper editorials, is that a full presentation of the facts of a situation (who knew what, who was told, when, in what sort of language, and what they did with it) is not made until years after the fact. We will not know for many years whether Crimea represents an intelligence failure. More information is available on the case of the Iraq War, though twelve years later the record still remains incomplete. Nonetheless, it provides an instructive case for exploring what intelligence failure is and is not.

Intelligence failure can be broadly defined as “a misunderstanding of the situation that leads a government (or its military forces) to take actions that are inappropriate and counterproductive to its own interests.” Intelligence failures are inevitable because there are limits to what intelligence can accomplish. Its proponents are guilty of overselling its capabilities, and its consumers and observers are guilty of misunderstanding them. To understand what intelligence failure is, it is necessary to understand its different sources, identify where failures occur in the “intelligence cycle,” and examine their causes and the other factors affecting them. Understanding the complex process of transforming information into intelligence also means understanding that it would be more shocking not to have intelligence failures. They are inevitable because intelligence exists to reduce the occurrence or effect of unanticipated events, not to eliminate them entirely as a possibility.

Known Unknowns and Unknown Unknowns

The first step in the intelligence cycle is to develop directions as to what information is necessary, known as “requirements,” and how it will be collected based upon the needs of the eventual consumers of the intelligence. Based upon this guidance, intelligence agencies begin collection. Collection is performed by all means available to intelligence agencies, including human intelligence, signals intercepts, imagery, and scientific and technical measurement, among others, and relates to one or more types of intelligence, including military, political, economic, or cultural intelligence. No stone is left unturned.

Understanding the complex process of transforming information into intelligence also means understanding that it would be more shocking not to have intelligence failures.

Collection failure can be seen as the “unavailability of information when and where needed” or part of Richard Betts’ “Pathologies of Communication” – the lack of timely collection of information. Collection failure occurs because the information necessary to respond successfully to the situation was not collected when needed.

In the case of the Iraq War, according to John Morrison, there was a failure to set a requirement for collecting the political intelligence needed to place technical information on Saddam’s non-conventional weapons programs in context. Collection was instead focused on gathering information about the existence of the programs themselves. This led to a failure to consider the question in light of Saddam’s “political system, fears, and intentions.” Collecting such information could have offered an explanation as to why Hussein would refuse to disavow WMD programs and cooperate with UN inspections despite not having active programs. Setting this requirement may have served to balance the dominant presumption that Iraq had WMD, the justification for the war that turned out to be false.

Planning and directing intelligence requirements and collection must still contend with the fact that, in the now-famous words of Donald Rumsfeld, “There are no ‘knowns.’ There are things we know that we know. There are known unknowns. That is to say there are things that we now know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know” (North Atlantic Treaty Organisation, 2002). The CIA’s Sherman Kent similarly expressed that there are “things which are knowable but happen to be unknown to us, and…things which are not known to anyone at all.” Collection failure can occur when necessary information, known to exist and known to be needed, is not available due to some limitation. Or it can occur because the need for such information was never considered, was dismissed, or because there were no indicators that it was necessary to gather it.

Reasoned Guesswork and Flawed Humanity

The third step in the intelligence cycle is the processing of collected information, followed by the fourth step, analysis and the creation of intelligence products such as reports, findings, and estimates. This is followed by the fifth step, dissemination: distributing the intelligence product to decision-makers in a proper format and in a timely manner.

The analysis stage begins with an evaluation of the truth of the collected material and the validity of the process which acquired it. For some forms, such as imagery and signals intelligence, this will be easy; for others, especially human intelligence, it is more onerous to establish. A collecting officer may wish to have his source believed, giving rise to the need for an independent appraisal of the information.

Accuracy in reporting or quoting the information from the source to the analyst must be ensured. It must be checked that the source has actual access to the information, or that there is an acceptable explanation of how it was obtained. It must be considered whether the source has some ulterior motive for providing the information or is involved in a counter-intelligence operation. The source’s previous track record must also be considered. If the veracity of the information is not thoroughly tested and established, everything that follows may be derived from false information: fruit of a poisoned tree.

Actual analysis of the information, after its validity has been established, involves appraising the value of the information in its own right, deciding how much weight should be assigned to it, and compiling it into “meaningful strands” based upon what it relates to. These strands, compiled from all available sources of related information, are then further used to develop estimates of a particular international situation or set of circumstances. They can range between rapid, low-level appraisals up to the collective wisdom of the entire intelligence community, as with National Intelligence Estimates. The product is then disseminated and/or briefed to decision-makers in an appropriate format.

If the circumstances were unambiguous, then intelligence services could hand over “indisputable facts” and leave them to it. However, there are few circumstances in international politics which possess such clarity that decision-makers can decide on their own with “raw intelligence.”

Collecting information is difficult enough; deciding what should be done in light of it and how it should be presented creates even more problems. Kent felt that estimates consist of “knowledge, reasoning and guesswork.” The pitfalls are apparent: analysts must piece together accurate appraisals, predictions, and advice from information of varying types and contestable accuracy or value, often in a short amount of time, in order to develop an intelligence product.

UK and U.S. analysis of Iraq’s nonconventional weapons relied heavily upon unreliable human intelligence sources which were not properly vetted and which, in some cases, have since proven to be fabrications. Information was also being quickly disseminated to policymakers without being properly validated or analyzed first. In his study of the major U.S. and UK post-mortems on Iraq, Robert Jervis cites many examples of analysis failure: the intelligence community’s judgments were stated with excessive certainty; no alternative explanations for Saddam’s behavior were offered, and there was a lack of imagination in developing such alternative views; dominant assumptions regarding the existence of WMD programs went unchallenged; and assessments were built upon previous judgments without carrying forward their uncertainties, among others. These examples illustrate just some of the places where analysis went wrong.

Analysis failure is a broad category. It includes the “Tendency to concentrate on ‘usual suspects’ for ideological or practical purposes,” “Opinion governed by ‘conventional wisdom’ without supporting evidence,” “‘mirror imaging,’ in which unfamiliar situations are judged upon the basis of the familiar,” and the “Paradox of Perception”: failure to properly balance pre-conceived notions based upon previous and historical experience against an unbiased look at the information, and failure to balance the sensitivity of warnings between insufficiency and alarmism. It also includes failure in “effectively communicating with Decision-Makers,” Betts’ other entry under “Pathologies of Communication.”

In sum, analysis failure can be located in the processing, analysis and production, or dissemination stages of the intelligence cycle. Analysis failure occurs when information is not properly validated; when it is improperly dismissed; when the wrong degree of emphasis is placed on collected information; when opinion developed from collected information is skewed by practical, ideological, or some other cognitive bias; or when intelligence products do not effectively communicate the necessary information to decision-makers, whether through an ineffective portrayal of the information collected or through untimely dissemination.

Writing in defense of the wrongly concluded 1962 NIE on the likelihood of Soviet nuclear missiles being stationed in Cuba, which some labeled a highly dangerous intelligence failure, Kent explained the process of estimating thus:

“If NIEs could be confined to statements of indisputable fact the task would be safe and easy. Of course the result could not then be called an estimate. By definition, estimating is an excursion out beyond established fact into the unknown, a venture in which the estimator gets such aid and comfort as he can from analogy, extrapolation, logic, and judgment. In the nature of things he will upon occasion end up with a conclusion which time will prove to be wrong. To recognize this as inevitable does not mean that we estimators are reconciled to our inadequacy; it only means we fully realize that we are engaged in a hazardous occupation.”

If the circumstances for which decision-makers require intelligence were unambiguous, then intelligence services could hand them “indisputable facts” and leave them to it. However, there are few circumstances in international politics which possess such clarity that decision-makers can decide on their own with “raw intelligence.” Today, given the vast amounts of imagery and signals information collected, the problem may be too much information rather than not enough.

Shulsky and Schmitt argue, “The heart of the problem of intelligence failure [is] the thought processes of the individual analyst.” Attempts to counteract the human element through systemic or procedural reforms, rather than fixing these flaws, may actually serve to build overconfidence afterwards through a belief that the problem has been solved and will not recur. However, history shows the same mistakes continue to be made. At bottom, intelligence analysis is still a flawed human process. So long as there are ambiguous situations, there will inevitably be analysis failures.

Sins of Strategic Intelligence

In the final step of the intelligence cycle, the product, having been disseminated, has been received by its consumers. They may have further questions or desire more information. This leads to the development of new requirements, leading back to the first stage of the intelligence cycle, which begins anew.

Decision-makers, a mix of high-level elected officials, executive appointees, and military officials, then use the intelligence produced to inform decisions. Making decisions vital to protecting national security is inherently difficult, with or without accurate intelligence. Historically, intelligence failures are most frequently located with decision-makers. Policymakers often commit one of Loch Johnson’s “seven sins of strategic intelligence”: ignoring intelligence that does not conform to their view of a situation. If intelligence is misunderstood or ignored, the policy decision-makers pursue may lead to intelligence failure.

Decision-maker failure is the “subordination of intelligence to policy.” It may influence the initial planning and direction stage of the intelligence cycle, or be exhibited in how, or whether, decision-makers use intelligence to make policy. It also occurs when pressure from decision-makers is brought to bear on the intelligence cycle. According to Betts, “In the best-known cases of intelligence failure, the most crucial mistakes have seldom been made by collectors of raw information, occasionally by professionals who produce finished analyses, but most often by the decision-makers who consume the products of intelligence services.”

Given the complex nature of the work of intelligence and all that can go wrong, it would be more extraordinary if intelligence failures did not occur.

Decision-maker failure often occurs as a result of “politicization,” one of the central problems of intelligence. In the words of Robert M. Gates as Director of Central Intelligence:

“Politicization can manifest itself in many ways, but in each case it boils down to the same essential elements: almost all agree that it involves deliberately distorting analysis or judgments to favor a preferred line of thinking irrespective of evidence. Most consider ‘classic’ politicization to be only that which occurs if products are forced to conform to policymakers’ views.”

Decision-maker failure often occurs because of some form of politicization by policymakers where intelligence is deliberately distorted, ignored, or selectively applied. According to Johnson, “No shortcoming of strategic intelligence is more often cited than the self-delusion of policymakers who brush aside or bend facts that fail to conform to their Weltanschauung [worldview].”

The case of Iraq provides a clear example of the politicization of intelligence. The Pentagon’s Office of Special Plans (OSP) was specifically established by the Bush administration to develop assessments based upon the assumption that Iraq had nonconventional weapons, and its products were used in preference to other Intelligence Community assessments that contradicted that hypothesis. A “Red Team” analysis conducted by the CIA’s Weapons Intelligence, Non-proliferation, and Arms Control Center (WINPAC), designed only as an initial “devil’s advocate” view in the face of evidence that Iraq did not have WMD, was also selectively used by the administration despite strong evidence from the rest of the Intelligence Community that its conclusion was wrong. Assessments by agencies such as the U.S. Department of Energy and the State Department’s Bureau of Intelligence and Research provided strong, clear arguments against the central argument made in the assessments preferred by the Bush administration, but they were pushed aside by decision-makers.

In Western democracies, decision-makers are either elected officials or senior civil servants. Politics is their job and it affects all aspects of it, including making security decisions based upon intelligence. Politics affects every form of government, from totalitarian dictatorships to communist collectives. Attempting to totally remove politics from war and national security is an impossible task. So long as politics are involved in security decisions, there will inevitably be intelligence failures caused by politics.

Bureaucracy, Deception, and Counterintelligence

Systemic factors which may lead to intelligence failure include internal bureaucratic obstacles and failure to share information or cooperate with other agencies. This can be described as a “lack of prompt and full sharing of intelligence information within intelligence agencies, between different agencies, and between federal, state and local government levels.” These systemic factors may occur at any phase of the intelligence cycle and lie within collection, analysis, or with decision-makers. If collection resources or collected information are not shared, the information cannot be properly analyzed and will not make its way to decision-makers, leading to intelligence failure.

The WMD Commission found that there was not enough cooperation or information sharing between agencies, specifically in regard to reports that called into question the credibility of the information provided by the well-known source “Curveball,” whose assertion that he had been involved in active WMD programs in Iraq became a central piece of evidence in the Bush administration’s push for war. Curveball (Rafid al-Janabi) himself has since admitted his evidence was a fabrication. If more than one agency had had access to these reports, more questions might have been asked. However, bureaucratic obstacles and security standards stood in the way. There was also a lack of information sharing between U.S. intelligence and law enforcement agencies, which were seen to have separate functions, and this may have led to vital information not being shared and considered.

External factors are always present. They include the intrinsic difficulty of identifying targets and the fact that targeted states or organizations cannot be expected to remain passive in the face of intelligence operations against them. Sometimes certain forms of information are difficult or impossible to collect, or, as Kent put it, there may be “something literally unknowable by any man alive.” Knowing the intent of an adversary before even they have formed it is an example of this impossibility. An intelligence failure for one state is often the result of an intelligence success by another. External factors most often affect collection, but they may also affect analysis and decision-makers at any stage of the intelligence cycle, especially if the particular intelligence failure is the active effort of a foreign intelligence service.

Saddam Hussein and his regime were not passive players in their own fate. There was a systematic denial and deception campaign by the regime, and its failure to cooperate with, and eventual ejection of, UN inspectors supported the belief that Iraq had something to hide regarding its nonconventional weapons programs. Wherever evidence was not available to prove or disprove hypotheses, Saddam and his regime could be blamed for blocking attempts to collect the necessary intelligence. In his debriefing by the FBI, Saddam admitted that he wanted to maintain the façade of possessing WMD to counter enemies, especially Iran. Of course, had Saddam been open and cooperated with UN inspections, the likelihood of war would have been greatly reduced. However, intelligence is “a game between hiders and finders, and the former usually has the easier job.”

Intelligence failure is a complex subject, and failures can stem from any or all of these factors. Intelligence, despite bottomless budgets, advanced capabilities, and the undoubted commitment of highly intelligent individuals, remains, at bottom, “informed guesswork.” Those who slap the label of intelligence failure on an event without understanding the full complexity of the work of intelligence do it a disservice. Not every unanticipated event or attack is the “result” of an intelligence failure. While intelligence works to reduce the number or probability of such events, it is a losing battle. To always know what is going to come to pass would not be intelligence; it would be omniscience. It is easy to see what signs were missed after an event has occurred. We must accept that intelligence failures are inevitable. Indeed, given the complex nature of the work of intelligence and all that can go wrong, it would be more extraordinary if intelligence failures did not occur.

[Photo: Flickr CC: The U.S. National Archives]



  1. Pat Filbert 12 March, 2015 at 03:04 Reply

    What you spent your entire article describing is not “an intelligence failure” but a “leadership failure.” Intelligence responds to the senior leader or decision makers requirements based on the mission that leader is focused on accomplishing.

    Stating that “an intelligence failure” occurred for anything in recent history is a misnomer, pure and simple. 9/11 is often touted as an “intelligence failure” because no one identified that extremists would hijack planes and then fly them into buildings as a type of cruise missile. The lack of information on when various countries detonated nuclear warheads, when Russia went into Crimea, etc., has a lot to do with a lack of ability to collect on those events, a lack of direction by leadership, and an inability to see things that simply have never occurred before that would require briefing the boss (9/11 as a case in point).

    There is a very inadequate number of collection sensors, all focused on a tier system of collection because they are low density/high demand. In other words, we will never have enough, and when some methods are outed we have to develop new ones and associated methods to use them for collection.

    Claiming that “intel failed us again” as the central idea means that intel is responsible for making decisions and that is both misleading and wrong. First rule of leadership: “everything is your fault.” Trying to blame intelligence is like saying, “well, we couldn’t get the troop transports or the fuel or the food there, that’s a ‘logistics failure’ that allowed us to lose.” In fact, it was a leadership failure; a failure to provide clear guidance, focus, and vision to the planners (encompassing all elements supporting the decision making process) resulting in poor leadership–singling out intel as the “one and only reason” is misleading and wrong since without prioritized requirements that the Commander or Senior Decision Maker sets, intel will never be able to provide the finished intelligence required.

    And let’s not forget, as you pointed out, egos and personalities got involved, resulting in a near dismissal of intel stating there was no WMD in Iraq because “…it didn’t fit what the President wanted.”

    • Chris Miller 12 March, 2015 at 17:44 Reply

      You are correct that virtually all of our post-9/11 misadventures have been ‘leadership’ or ‘policy failures’. We never hear much about those because the very people who are at fault (Congress; White House) are the ones who control investigations and the intelligence oversight function. As Mark Lowenthal points out, we hear of intelligence failures and policy successes, but never policy failures and intelligence successes.

      It is true that the finger has been wrongly pointed at intelligence, especially on the Iraq WMD debacle. There are occasions in our history where intelligence has ‘failed’, but, as I described, this comes in various forms and for various reasons. I covered this in the known unknowns and unknown unknowns and the external factors. These are ‘failures’ in a softer sense of the word, but only because some information is intrinsically very difficult to collect and some is actually impossible to collect. These are also ‘failures’, but they are more forgivable than ‘Decision-Maker Failure’ or politicization of intelligence. As Richard Betts has pointed out since 1978, the overwhelming majority of ‘intelligence failures’ are due to failure on the part of decision-makers (who are part of the intelligence cycle). Few are due to ‘analysis failure’ and even fewer due to ‘collection failure.’

      My point in this article was to show that there are many places in the intelligence cycle where things can and do go wrong, and we must expect that failures will take place in such a complex process, as in all complex processes. Of course the stakes are very high in this game (life and death), but if intelligence always knew everything that was going to happen before it happened and responded flawlessly, that would be omniscience, not intelligence.
