Embracing Imperfection: Israel-Hamas and the reality of intelligence estimation
DECEMBER 2023
“How did Israeli intelligence fail to stop major attack in Gaza?” (BBC)
“Hamas’s attack was an Israeli intelligence failure on multiple fronts” (The Economist)
“Hamas’s murderous attack will be remembered as Israeli intelligence failure for the ages” (The Guardian)
As the term “intelligence failure” resonates again through the corridors of media outlets and government agencies around the world, recent events in Israel and Gaza are a haunting reminder of the imperfections of our craft.
We’ve been here before. In 2003, the Iraq War exposed the vulnerability of intelligence to political pressures as decision-makers built a compelling case for military intervention based on flawed assessments. Two years earlier, US intelligence and security agencies noticed suspicious activities but failed to connect the dots before the events of September 11th. In 1982, British officials underestimated the Argentine military’s capabilities and intentions concerning the Falkland Islands. These are all examples of widely known “intelligence failures” of our time.
We don’t know how much Prime Minister Netanyahu knew before 7 October, but it was likely only fragments of a puzzle and not the complete picture. We must remember that intelligence is not, and cannot be, a precise science. No matter how sophisticated, intelligence assessments can only offer a reduced image of the threat or risk landscape. Analysts try to focus this image based on relevance, using a variety of sources to guide leaders through a cacophony of information towards informed and timely decisions. This is known as an “all-source” approach, where we stitch together what we know from human sources (HUMINT), intercepted signals and communications (SIGINT), and open-source information (OSINT) to create a reasonable estimate. Despite all this effort, we still only ever have a straw to peer through.
Faced with this uncertainty and with challenges to the reliability of information, analysts can use structured analytical techniques to minimise the risk of failure and produce more rigorous assessments. For instance, a SWOT analysis (Strengths, Weaknesses, Opportunities, Threats) helps us to weigh the relative strengths and vulnerabilities of a particular scenario or entity of concern. Analysis of Competing Hypotheses (ACH) tests alternative explanations against the known intelligence, with the emphasis on disproving hypotheses rather than confirming them, while plausibility cones help us to illustrate the range of outcomes branching out from the present. Other methods such as red teaming and devil’s advocacy can also reveal blind spots and biases, particularly in protective intelligence.
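As a loose illustration of the ACH logic, and not a formal implementation of the technique, the matrix can be thought of as a simple consistency-scoring exercise: each piece of evidence is rated against each hypothesis, and the hypothesis that attracts the fewest contradictions is the most resilient, not the one “proven”. The hypotheses, evidence items, and scores below are entirely hypothetical:

```python
# A minimal, hypothetical sketch of an ACH matrix. Scores record how
# consistent each evidence item is with each hypothesis:
# +1 consistent, 0 neutral, -1 inconsistent. Following Heuer's logic,
# we tally inconsistencies: the least-contradicted hypothesis survives
# scrutiny best, but is never "confirmed".

hypotheses = ["H1: routine training", "H2: deterrent posturing", "H3: attack rehearsal"]

# evidence item -> score per hypothesis (illustrative values only)
matrix = {
    "Unusual exercises near the border":    [+1,  0, +1],
    "Sources report leadership wants calm": [+1, +1, -1],
    "Spike in encrypted traffic":           [-1,  0, +1],
}

# Count contradicting evidence per hypothesis (lower is more resilient).
inconsistency = {h: 0 for h in hypotheses}
for scores in matrix.values():
    for h, score in zip(hypotheses, scores):
        if score < 0:
            inconsistency[h] += 1

# Rank hypotheses: fewest inconsistencies first.
for h in sorted(hypotheses, key=inconsistency.get):
    print(f"{h}: {inconsistency[h]} inconsistent item(s)")
```

Even this toy version shows the value of the method: the analyst’s attention is pulled towards the evidence that disconfirms a favoured explanation, which is exactly where bias tends to hide.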
But we also limit the value of intelligence if we narrow the question down to the “most likely” scenario. A probability yardstick, such as that of the UK’s Professional Head of Intelligence Assessment (PHIA), helps to put assessments into perspective. However, a well-reasoned consideration of “best-case” and “worst-case” scenarios, including the indicators and warnings that would signal a changing risk, will always separate a good assessment from a great one. There is little reason to believe that Israel, with one of the most capable intelligence functions in the world, was not sitting on a first-rate assessment of the “most likely” scenario before 7 October – but whether the military was prepared for the “worst case” and watching for a rainy day is another matter.
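For illustration, the helper below is a hypothetical sketch of how an analyst tool might translate a numeric estimate into the yardstick’s agreed language. The probability bands are paraphrased from the publicly available PHIA yardstick, which deliberately leaves gaps between terms, so this sketch simply snaps a value to the nearest band:

```python
# Hedged sketch: mapping a numeric probability estimate onto PHIA
# probability-yardstick language. Band boundaries are paraphrased from
# the UK's published yardstick; the gaps between bands are deliberate
# in the original, so this illustrative helper snaps to the nearest term.

PHIA_YARDSTICK = [
    (0.00, 0.05, "remote chance"),
    (0.10, 0.20, "highly unlikely"),
    (0.25, 0.35, "unlikely"),
    (0.40, 0.50, "realistic possibility"),
    (0.55, 0.75, "likely / probable"),
    (0.80, 0.90, "highly likely"),
    (0.95, 1.00, "almost certain"),
]

def yardstick_term(p: float) -> str:
    """Return the yardstick term whose band is closest to probability p."""
    def distance(band):
        lo, hi, _ = band
        return 0.0 if lo <= p <= hi else min(abs(p - lo), abs(p - hi))
    return min(PHIA_YARDSTICK, key=distance)[2]

print(yardstick_term(0.60))  # "likely / probable"
print(yardstick_term(0.07))  # "remote chance" (0.07 sits nearest that band)
```

The point of the yardstick is not false precision; it is that “likely” means the same thing to every reader of every report, which is half the battle in communicating uncertainty.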
Both the Israel-Hamas and Russia-Ukraine conflicts also underline the increasingly sophisticated role of visual media manipulation in today’s information environment. Analysts of all trades face the daunting task of navigating false narratives amplified by bots and coordinated networks, which makes close relationships with credible, independent sources all the more important. Meanwhile, our executives face more pressure than ever to recognise and act when geopolitical events are liable to trigger harmful speech, which so easily permeates the workplace.
Continuous improvement in our ability to diversify analytical methods and verify sources of information is only half the battle. In the private sector, it is even more vital to know the audience and frame intelligence assessments within the organisation’s context and goals. Verbal, written, and visual communication each have their strengths and weaknesses; if we do not present sufficient detail in a way that resonates with the consumer, an intelligence estimate can easily be ignored, even when it is onto something significant.
Those who consume intelligence also have a key role to play. Decision-makers should embrace a culture that learns from the past, values dissent, and encourages analysts to voice differing opinions. Constructive criticism and cognitive diversity are essential for producing well-rounded assessments and for challenging assumptions and groupthink. It is just as crucial to protect those who offer alternative ways of thinking from repercussions or backlash if an assessment that swayed a decision later proves incorrect.
Offering clear guidance on intelligence requirements and decision-making needs pays dividends. Part of this means giving analysts the right access to security managers, risk professionals, or executives, and facilitating two-way dialogue with realistic expectations. If we champion after-action reviews and share feedback on the utility of intelligence reports, we open the door to further improvement – and a better chance of noticing something further left of the next bang.
Embracing imperfection in intelligence is not a sign of weakness, but a recognition of the complexity of our mission and a commitment to continual improvement. That responsibility lies with both intelligence analysts and the decision-makers they serve.
Jack Nott-Bower MSyI, Associate Director and Head of Training and Consulting.