The second biggest trap in information risk management.

A model CISOs can use to measure and report the level of assurance being delivered by their technical security products and controls as part of a robust cyber defense.

💬
All models are wrong, but some are useful.
–George Box, as quoted by Douglas Hubbard

The Trap

In "CISOs are in the assurance business," I wrote about one of the biggest traps I've faced: the executive who asks, "are we secure today?" Over time, I worked closely with my stakeholders, within and outside the C-suite, and moved that conversation to, "against whom are we secure today?"

This paradigm shift was critical in redefining success for my program: from a binary judgment to one of degrees, grounded in a nuanced threat model which implied that a panoply of controls was needed to defend effectively against a variety of adversaries who employ a variety of tactics and are motivated by a variety of outcomes.[1]

Success, of course, begs to be measured. Soon, my stakeholders were asking for a consumable view into the health of the program. While I had a robust and ever-growing catalog of metrics[2], it was not conducive to a boardroom setting: it was too complex to be used as a discussion aid for a crisp, concise assessment of whether we knew where our gaps were and whether there were plans in place to close them. To bridge this divide between our security engineers and our executives, we[3] created a view into program health inspired by Maslow. It looked something like:[4]
[Image: controls pyramid]
Similar to Maslow's hierarchy of needs, it placed "deficiency needs" at the bottom of the pyramid. These were capabilities that were required to defend against multiple, diverse adversaries, and without which no modern information protection program could be objectively judged to be healthy[5].
[Image: Maslow's hierarchy of needs]

Moving up the pyramid, we transitioned into higher-order needs, capabilities we believed were necessary in order for us to defend against the specific threat actors[6] targeting our company, but that might not show up in a common framework like NIST or PCI. Another powerful parallel to Maslow is that, like his hierarchy of needs, there wasn't a specific, correct order or prioritization in which our controls pyramid needed to be built and fortified. This meant that, at any point in time, we could be taking a brick anywhere in the pyramid and investing to increase its level of maturity depending on our current assessment of need intensity.[7] Superimpose an easy-to-read visual indicator of the status of a particular brick and, voilà, a boardroom-ready one-pager was born.

The story does not end here, however. While the pyramid view was a great visual aid for briefings to senior staff, it did little to help our engineers, to whom CMM-style ratings felt more opaque and artificial than helpful in focusing their continual improvement efforts. It also had the unintended consequence of reinforcing a dangerous trope in information security: the "turnkey" solution. There is a familiar product cycle that plays out in our industry, where technology products[8] are offered as a quick and easy solution to a seemingly difficult, possibly nascent problem. While it may be true that a product can help, the conversations that occur – at both the leader and the engineer level – often distill down to,

"do we have a(n) {product or product category X}?"

This turns out to be an easy question to answer (yes or no), but like the "are we secure?" trap before, it obscures a more relevant question, which is,

"have we implemented an effective {product or product category X}, and is it delivering the outcomes we want against the intended adversaries today?"

To combat this effect, and to enable the people-and-focus factor that breeds long-term success in running a security product, our technical and governance teams employ a model that assesses product health with an eye toward change and adaptability. A security product in our shop can provide varying "levels of assurance"[9], from which a team's goals can be set; a portfolio of products can have their collective health measured and reported[10] in a simple, one-pager view that is fit for management consumption. Perhaps most importantly, I've discovered that this model has helped create a vocabulary that is easily adopted by governance and security product owning teams alike, irrespective of their alignment in an organization. So I'm sharing it with you here![11]

The Model

🗿Level 1: you're current!
🔌Level 2: you know it's up!
⚖️Level 3: your supply is meeting your demand!
📛Level 4: your admins are definitely your admins!
🔔Level 5: you are using all the features!
📟Level 6: you're logging!
⚙️Level 7: your trim tab is engaged!
🦾Level 8: it works in anger!


  1. The threat model is still one of the most requested iPad art pieces I've done. [Image: threat model quadrants] ↩︎

  2. Ask my team members about our epic, marathon-like second-Wednesday-of-the-month metrics review sessions. In an interesting meta-twist on the concept of measurement, the sessions themselves moved through levels of maturity, moving from ad-hoc to managed to optimized over time. ↩︎

  3. I'm pretty sure BA gets a shout-out (or the blame) for at least half the early prototyping with me on this one. Thanks, BA! ↩︎

  4. Labels and color-coding for illustration purposes only! ↩︎

  5. image from Wikipedia ↩︎

  6. and their TTPs ↩︎

  7. It's perfectly normal to be working on love, shelter, DLP, threat intelligence, and self-actualization at the same time. ↩︎

  8. Firewalls! Web application firewalls! CASB! Oh, my! ↩︎

  9. See this post for the Underwriters-Laboratories-inspired genesis of "assurance" as the point of focus for me as a CISO. ↩︎

  10. one of the rare times that a heatmap view might be the best visualization, with appropriate apologies to Hubbard ↩︎

  11. with a caveat: this is not intended to replace a broader, more recognizable industry framework like NIST or HITRUST or ISO, but rather to complement it as a tool to simplify and improve communication of goals and what-does-good-look-like targets between governance teams and technical product owning teams. ↩︎

🗿Level 1: you're current!

While it happens from time to time, having the newest release of an operating system[1] or relational database[2] does not often make or break the viability of the product itself; adoption of the latest release is sometimes urgent, but more frequently driven by end-of-life and end-of-support cycles (you do still always want to be eligible to receive patches, after all). Security products do not typically follow this pattern; think about your anti-virus/anti-malware suite and how rarely significant advances in protection (e.g. behavioral heuristics, DEP/ASLR enforcement, etc.) are backported into significantly older branches. In order to defend effectively against emerging threat actor techniques, being current is a basic requirement[3]. In order to provide Level-1 assurance, a product must be running the latest GA release, or N-1. Protip: it's good to define whether "N" in this context means major or point release when setting goals and negotiating service levels.
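To make that negotiation concrete, here is a minimal sketch, in Python with hypothetical version tuples, of how the N / N-1 rule might be encoded; whether "N" counts major or point releases is exactly the knob the protip says to pin down.

```python
# Hypothetical sketch: encoding the "latest GA or N-1" Level-1 rule.
# Version tuples and the major-vs-point choice are assumptions to be
# settled when setting goals and negotiating service levels.

def is_level_1(installed: tuple[int, int], ga_releases: list[tuple[int, int]],
               count_major_only: bool = True) -> bool:
    """Return True if the installed release is the latest GA release or N-1."""
    latest = max(ga_releases)
    if count_major_only:
        # "N" means major release: 7.x still passes when the latest GA is 8.x.
        return installed[0] >= latest[0] - 1
    # "N" means point release: only the two newest releases pass.
    return installed in sorted(ga_releases)[-2:]

# Example: product is on 7.2; vendor GA releases are 7.1, 7.2, and 8.0.
print(is_level_1((7, 2), [(7, 1), (7, 2), (8, 0)]))                           # True
print(is_level_1((7, 1), [(7, 1), (7, 2), (8, 0)], count_major_only=False))   # False
```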


  1. unless you were just dying to have ZFS and your OS vendor just released it ↩︎

  2. unless you were just dying to have native JSON as query output ↩︎

  3. a deficiency need, in Maslow terms ↩︎

🔌Level 2: you know it's up!

While it may be obvious when something like a firewall fails, many security tools are designed to be passive or fail "open"[1], or to otherwise limit their impact to user experience when they are unavailable. Having effective monitoring – and SOPs that are well-understood and reliably followed – is a sign that basic needs are being met and that major surprises will be avoided. In order to provide Level-2 assurance, a product must be monitored by staff that is trained and empowered to recover from a failure and restore service effectively.
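As a sketch of what that monitoring might look like, the snippet below (a hypothetical sensor and a 10-minute threshold, both assumptions to be negotiated) treats prolonged silence from a passive, fail-open sensor as a failure in its own right:

```python
# Hypothetical sketch: a heartbeat check for a passive, fail-open sensor.
# The point is that "up" must be actively verified, not inferred from the
# absence of complaints.

from datetime import datetime, timedelta, timezone

HEARTBEAT_MAX_AGE = timedelta(minutes=10)  # negotiated with the monitoring team

def sensor_is_healthy(last_event_seen: datetime) -> bool:
    """A passive sensor is presumed down if it has gone quiet for too long."""
    return datetime.now(timezone.utc) - last_event_seen < HEARTBEAT_MAX_AGE

# Example: a sensor that last emitted an event 45 minutes ago should page
# staff who are trained and empowered to restore service.
stale = datetime.now(timezone.utc) - timedelta(minutes=45)
if not sensor_is_healthy(stale):
    print("ALERT: sensor quiet for >10m; invoke the recovery SOP")
```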


  1. IDS/IPS or netflow monitors come to mind, as do "feeds" for things like IOCs or signature updates ↩︎

⚖️Level 3: your supply is meeting your demand!

One of the more humbling moments of my career was the day I told our CIO that we were reporting the health of a particular metric incorrectly: we had the right numerator (# of items covered by a control) but had managed to flub the denominator (# of items needed to be covered). The a-ha moment came when, over the course of a couple of weeks, multiple members of my team told me that they noticed this particular control absent on their PCs – too many to be a coincidence, or to support the percentage coverage that we thought we had. Our actual coverage turned out to be several percentage points lower, which meant that the likelihood of an adverse event was underestimated in our models. Getting the numerator and the denominator right, with a high degree of confidence, became an obsession. Beyond the value of restoring coverage to what was deemed acceptable, the more durable outcome of that event was reinforcing the need to clearly define and communicate what it meant to be covered. "What is in your denominator?" became a rallying cry. Do virtualized desktops count the same way as physical ones? Does a business-to-business VPN with a customer count as a network ingress/egress point? What constitutes a "privileged" account? In order to provide Level-3 assurance, a product must meet or exceed a negotiated coverage level, with clear definitions of all inputs and exclusions to the calculation.
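A minimal sketch of the discipline, with hypothetical asset records and exclusion reasons: every asset is either in the denominator or carries an explicit, negotiated reason why it is not.

```python
# Hypothetical sketch: a coverage metric whose denominator is an explicit,
# auditable definition rather than an assumption. Asset fields and the
# exclusion reasons are illustrative.

def coverage(assets: list[dict], has_control) -> tuple[float, list[dict]]:
    """Return (coverage ratio, excluded assets); every exclusion must carry a reason."""
    in_scope = [a for a in assets if not a.get("exclusion_reason")]
    excluded = [a for a in assets if a.get("exclusion_reason")]
    covered = [a for a in in_scope if has_control(a)]
    return len(covered) / len(in_scope), excluded

assets = [
    {"name": "pc-001", "dlp_agent": True},
    {"name": "pc-002", "dlp_agent": False},  # stays in the denominator: a real gap
    {"name": "vdi-007", "dlp_agent": False,
     "exclusion_reason": "non-persistent VDI, rebuilt nightly"},  # negotiated exclusion
]
ratio, excluded = coverage(assets, lambda a: a["dlp_agent"])
print(f"coverage: {ratio:.0%}, exclusions: {[a['name'] for a in excluded]}")
# coverage: 50%, exclusions: ['vdi-007']
```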

📛Level 4: your admins are definitely your admins!

This is a great example of a non-deficiency need that may be met before a lower-level need is fully satisfied. Simply put, in order to provide Level-4 assurance, administrative access to a product must be protected with multi-factor authentication.[1] From an assurance perspective, knowing that someone who has compromised an administrator's workstation, or managed to intercept their credentials in a lower-value system, cannot reuse those credentials to subtly tweak[2] its operation without being detected significantly raises everyone's confidence level that the product is healthy at any point in time. In this day and age, this is not only a requirement, but it's generally easy to meet – either through native product capabilities[3] or through a jump-server type solution[4].
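A toy sketch of a Level-4 check follows, with hypothetical account names and an mfa_enrolled() stub standing in for whatever directory, RADIUS, or IdP lookup the product actually authenticates against:

```python
# Hypothetical sketch: flagging product admin accounts not enrolled in MFA.
# The account list and the enrollment lookup are assumptions for illustration.

def mfa_enrolled(username: str) -> bool:
    # Assumption: a real IdP/RADIUS/LDAP query would go here.
    return username in {"alice", "bob"}

admins = ["alice", "bob", "svc-waf-admin"]
unprotected = [a for a in admins if not mfa_enrolled(a)]
if unprotected:
    print(f"Level-4 gap: admin accounts without MFA: {unprotected}")
```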


  1. correction to a common misconception: the factors are something only you have, something only you know, or something only you are; multi-factor combines at least two of them. The "only" makes a huge difference. ↩︎

  2. or wantonly destroy, I suppose ↩︎

  3. sometimes via RADIUS or LDAP lookups ↩︎

  4. handy link ↩︎

🔔Level 5: you are using all the features!

Back in my days as a UNIX sysadmin, I used a lot of open-source software to customize my company's firewalls. If our firewall didn't have a feature I needed, I found a project that implemented the feature[1] and added it to our firewall build. The late 1990s saw the rise of application-level gateways and demand for fine-grained, action-specific network security controls; simply allowing or disallowing access based on what layer 4[2] port it was using was no longer enough. Instead of switching firewalls altogether, I compiled FWTK and made it work with ipfilter. Later, I needed more features and replaced the http-gw portion of FWTK with squid and learned that, if I wanted to intercept encrypted traffic, I'd have to enable a feature that was disabled by default at compile time. This involved becoming very familiar with all the flags that you could pass to squid's configure script, e.g.:

$ ./configure --disable-icap-client --disable-wccp --disable-snmp --disable-eui --disable-htcp --enable-linux-netfilter --disable-ident-lookups --enable-ssl-crtd --with-openssl

Having intimate, expert knowledge[3] of what my products could and couldn't do became a passion and a necessity: I couldn't be very effective at my job if I didn't squeeze the value out of every feature; budgets were tight, and my time was my greatest asset. Knowing what my products could do occasionally made me a hero – sometimes, our security tools wound up solving interesting non-security problems.[4] As I transitioned into governing information risk, I wasn't surprised that there was a diversity of approaches to deploying products. However, I found myself returning time and time again to this particular behavior, of poring over install scripts and man pages, and finding it to be highly correlated to effective management of a security product's value. In order to provide Level-5 assurance, a product needs to go through at least biannual, coarse features-and-capabilities reviews, conducted jointly by the technical team running the product and the information risk team governing the program. Doing so ensures that needed features remain properly configured[5], and that emerging capabilities[6] are evaluated promptly.
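A sketch of the coarse review itself, with invented feature names: diff what is enabled today against the baseline the two teams last agreed on, and route the differences to the joint session.

```python
# Hypothetical sketch: a features-and-capabilities diff for the joint review.
# Feature names are illustrative; real inputs would come from the product's
# config export and the previously agreed baseline.

agreed_baseline = {"ssl-interception", "icap-client", "linux-netfilter"}
enabled_now     = {"linux-netfilter", "ssl-interception", "wccp"}

missing = agreed_baseline - enabled_now   # drifted, or never enabled
novel   = enabled_now - agreed_baseline   # new or unreviewed capability

print("re-enable or re-negotiate:", sorted(missing))  # ['icap-client']
print("evaluate promptly:", sorted(novel))            # ['wccp']
```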


  1. or, on rare occasions, wrote it myself... frightening. ↩︎

  2. I learned OSI as "All People Seem To Need Data Processing." Thanks, JPD, for that mnemonic. [Image: OSI layers] ↩︎

  3. a good litmus test was whether I was confident calling BS on level-1 product support (for commercial tools) or posting a problem to a mailing list (for open source) ↩︎

  4. one of the more notorious instances, in retrospect, was coopting a very fast portscanner written by a not-yet-notorious hacker type to find available JVMs for our middleware layer in the pre-F5, pre-WebSphere days ↩︎

  5. yes, configuration drift happens, deal with it ↩︎

  6. a happy byproduct of achieving and maintaining Level 1 ↩︎

📟Level 6: you're logging!

There's a familiar parable that every new information security analyst eventually encounters. Their firewall has been logging some pattern of blocked traffic, and then the pattern stops. What happened? They eventually reach the conclusion that either the attacker gave up... or they finally got what they wanted and have laddered up to the next stage of their campaign. The epiphany that happens next, that ubiquitous telemetry is needed from all elements of our defenses,[1] is a vital advantage in the race between our defenders and our adversaries and informs the definition of this level. So does the need for a robust security apparatus to support crime scene investigations.[2] In order to provide Level-6 assurance, a product needs to be delivering appropriate logs to a location that is being monitored with established service levels. Protip: what constitutes an appropriate log is not always obvious and should be an active negotiation between the SOC[3] team and the product owner(s).
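As a sketch, assuming invented source names and a 15-minute service level, a log-freshness check like the one below can catch a product that has quietly stopped delivering telemetry:

```python
# Hypothetical sketch: verifying that a product's logs are actually landing
# in the monitored location within the negotiated service level. Source
# names and the 15-minute SLA are assumptions.

from datetime import datetime, timedelta, timezone

LOG_SLA = timedelta(minutes=15)

last_event = {
    "waf-prod": datetime.now(timezone.utc) - timedelta(minutes=3),
    "dlp-prod": datetime.now(timezone.utc) - timedelta(hours=6),  # silent: investigate
}

for source, seen in last_event.items():
    age = datetime.now(timezone.utc) - seen
    if age > LOG_SLA:
        print(f"Level-6 gap: {source} has not delivered logs in {age}")
```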


  1. not just security tools, natch ↩︎

  2. or crash sites, if you prefer the airplane black-box metaphor to the CSI/DVR metaphor. ↩︎

  3. Security Operations Center ↩︎

⚙️Level 7: your trim tab is engaged!

[Image: trim tab; source: digital.library.unt.edu]
A trim tab is the small, hinged surface on an aircraft's control surface that lets the pilot make fine, continuous adjustments without fighting the whole control. When operating a sophisticated piece of machinery like a web application firewall or a data loss prevention tool, there are trim tabs to manage as well. While this fits towards the self-actualization end of the needs pyramid, conducting regular tuning exercises to adjust the product's performance ensures that the tool is well-adjusted to current attack patterns and not simply historical traffic. It's good to call out the difference between this activity and the biannual configuration review in Level 5. While the latter is a coarse adjustment, ensuring value is extracted from available features, the outcome of maintaining Level-7 product health is a finely-tuned machine: rules or signatures that are no longer needed are pruned to free up CPU cycles for those that are, buffers and state table sizes are adjusted to appropriate safety levels for current traffic patterns, low-value noise is reduced from logs, etc. In order to provide Level-7 assurance, a product's rules, reports, and features must be reviewed and tuned regularly by the technical team running the product and the information risk team governing the program.
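One fine-tuning pass might look like the following sketch, with hypothetical rule names and a 90-day window: rules that no longer match current traffic become candidates for pruning at the next session.

```python
# Hypothetical sketch: pruning rules that have stopped matching current
# traffic so their cycles go to the rules that still earn them. Rule names
# and the 90-day window are illustrative.

rules = [
    {"name": "block-sqli-generic", "hits_last_90d": 1_204},
    {"name": "legacy-worm-sig",    "hits_last_90d": 0},   # candidate for pruning
    {"name": "b2b-partner-allow",  "hits_last_90d": 37},
]

prune = [r["name"] for r in rules if r["hits_last_90d"] == 0]
print("review for removal at the next tuning session:", prune)
```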

🦾Level 8: it works in anger!

There was a time when I thought having seven levels was adequate. Surely, the sum total of the outcomes of each level would result in a fit-for-purpose product implementation! Right?

No plan survives first contact with the enemy.
    –Helmuth von Moltke

Then came the proactive health check on one of our controls. We were reporting "green" on all seven levels. The attackers scored a decisive victory nevertheless. Our defense was not reacting, even though we were configured correctly, had enough coverage in the places where it mattered, and healthy logs from all the right sources were spooling into our SOC. What happened? Our defenders expected the logs to cluster one way, so their alerts were tuned for a high threshold of hits from a single source; the hits came a few at a time from a lot of different sources. As a result, the "attack" continued for hours without generating the expected ticket. We went back to the drawing board, adjusted our alert thresholds, and added Level 8. In order to provide Level-8 assurance, a product's capabilities need to be targeted by the Attack Simulation team at least annually, with all significant gaps addressed.
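A sketch of that lesson in code, with illustrative thresholds and event shapes: the original per-source rule never fires on a low-and-slow spread, while an added fleet-wide aggregate does.

```python
# Hypothetical sketch: alert on the total hit count across all sources, not
# just a per-source threshold. Thresholds and event shapes are illustrative.

from collections import Counter

# 120 blocked hits spread thinly across 40 sources: 3 hits per source.
events = [("10.0.0.%d" % (i % 40), "blocked-exploit") for i in range(120)]

per_source = Counter(src for src, _ in events)
SINGLE_SOURCE_THRESHOLD = 50   # the original, too-narrow rule: never fires here
FLEET_WIDE_THRESHOLD    = 100  # the added rule: catches the distributed pattern

if max(per_source.values()) >= SINGLE_SOURCE_THRESHOLD:
    print("alert: noisy single source")
if sum(per_source.values()) >= FLEET_WIDE_THRESHOLD:
    print("alert: distributed pattern across", len(per_source), "sources")
```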

The Coda

Any structured model will inevitably have flaws, especially in edge or corner cases. Some are major design flaws, such as my initial omission of Level 8 above when I first proposed levels-of-assurance to my own teams. Others can be more subtle and stem from author bias, intentional or subconscious. As I continually assess the overall maturity of my program, I've used multiple sources to help me form opinions. The Department of Energy has a model for this; as does Forrester; so does the Center for Internet Security. It's not a surprise that each of these has its own bias, whether it's indexing heavily on governance in one case, or very lightly on secure software development in another. The levels-of-assurance model above likely suffers from both.[1] As a tool for creating clarity and alignment on the definition of our target state, however, this model has become indispensable to me. It has provided the foundation, the structure, and the lingua franca – about what "good" looks like, about precisely what we measure, about why certain controls exist, about expectations of stimulus/response in various attack scenarios – for the conversations that needed to regularly happen so that we could continually provide a formidable cyber defense.


  1. Though, in reviewing the incidents of my administration, I am unconscious of intentional error, I am nevertheless too sensible of my defects not to think it probable that I may have committed many errors. ↩︎