A recruiter recently asked me, "who are the best CISOs you know?" I'm convinced it was a trick question, and that I failed the test, but the interesting thing about CISOs is that it is challenging at best to evaluate our performance (there, I said it). My annual objectives tend to focus on measurable risk reduction, in no small part because if CISOs are to take credit for the absence of breaches on our watch, we must be culpable for any that happen. Yet the general consensus in our industry is that everyone is susceptible to a breach, given a sufficiently determined adversary.
When I think about people who have influenced my thinking about technology and how to manage its risks, there has been a series of world-class security minds with whom I've had the fortune to associate professionally, and who have opened my eyes to what computers can do in capable hands (if you've wrestled FreeBSD with me, defeated IVR password reset programs using investor call recordings, or calculated the risk reduction from deploying a certain operating system vendor's ridiculously effective free security tool, you know who you are). When I think about CISOs - leaders who have influenced my organizational awareness and helped shape my approach to risk management - the people whose ideas I keep circling back to don't have "CISO" anywhere in their resumes.
The first is Dan Ariely (full disclosure: Mr. Ariely sits on an advisory board for my employer). Ask ten CISOs about their biggest challenge, and all ten will riff on a common theme: getting their users to reliably do the right thing. Ariely's brilliant TED Talk illuminates a fundamental component of unlocking that equation: given a choice, people will do the most convenient thing, which is usually nothing. That means the most successful strategy for getting a large population to do the right thing is to make the right behavior the default - in the TED Talk, it's organ donation; at my previous employer, it was 401(k) plan participation. It never ceases to amaze me how frequently we in Information Security fail to follow this simple recipe. Nor does it surprise me that I've observed measurable reductions in risk by putting Ariely's model of defaults into action. Time and again, providing users with just-in-time awareness at the moment they're about to make a risky go/no-go decision, and defaulting them into the low-risk choice (don't send that email, don't copy that file to a portable memory stick, don't follow that email link to a website so new that our security vendor hasn't even categorized it yet) while keeping the alternative easy to execute if genuinely needed, has led to measurable reductions in high-risk behavior without alienating users to the point of revolt. I talk at much greater length about this topic in another post.
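The default-choice pattern described above can be sketched as a small policy check. This is a minimal, hypothetical illustration - the names (`RiskyAction`, `handle`) and decision labels are my own assumptions, not any particular vendor's control:

```python
from dataclasses import dataclass

@dataclass
class RiskyAction:
    description: str      # e.g. "copy file to removable media"
    risk_flagged: bool    # did a control flag this action as high-risk?

def handle(action: RiskyAction, user_override: bool = False) -> str:
    """Make the low-risk choice the default; allow a deliberate opt-out.

    Unflagged actions pass through. Flagged actions are blocked unless the
    user explicitly overrides - doing nothing yields the safe outcome,
    while proceeding requires an active, auditable decision.
    """
    if not action.risk_flagged:
        return "allow"
    if user_override:
        return "allow-with-audit"
    return "block"
```

The key design choice mirrors the organ-donation example: the safe outcome costs the user zero effort, and the risky path stays available but requires an explicit step.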
My second great influence has been Peter Sandman. Ultimately, every CISO's success or failure will be evaluated on their performance in a crisis - did their presence and leadership help turn a breach into a near-miss, or manage a mega-breach down to a non-terminal event? This is where Dr. Sandman has consistently provided the clarity of thought I've learned to appreciate through the years. It is his "Risk = Hazard + Outrage" equation that has replaced "Risk = Impact x Likelihood" (and other ALE-based calculations) in my approach to strategic planning, and not just because ALE is all but impossible to derive in InfoSec, or because you never get a second chance to extrapolate Ponemon numbers to your CFO. His First Rule of Crisis Communication further crystallizes the approach modern CISOs need to adopt:
Outrage, not hazard, drives reputation. Even significant hazards are usually tolerated when outrage is low, and even insignificant hazards are usually rejected when outrage is high.
This insight is critical because, in the case of a breach, the first two questions company leaders will need answered are, "who did this to us?" and "why did our controls not anticipate this?" What they're really asking is, "is this event outrageous?" The CISO's job ultimately comes down to running a program where the outrageous is extremely unlikely to happen.
Sandman's simple distillation also makes clear why security programs driven primarily by compliance tend to struggle over time: their threat model can be distilled down to a single actor, the auditor. A note about the threat model illustration to the right: it is adapted from many conversations I've had with the brilliant people at Threat Forward, and it distills a complex set of risk management factors into two core characteristics. First, is the threat someone with whom we can manage interactions? In other words, can we schedule our time with the threat actor, and will they meet us on mutually acceptable terms? Second, is the threat using tactics, techniques, and procedures (TTPs) with which we're familiar? Are they following a playbook we've seen, or are they developing custom tools or exploits that may exercise an edge case in our defenses and, potentially, defeat them? By those measures, the regulatory auditor falls squarely into the upper-right quadrant: managed interactions, known TTPs.
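The two axes above can be expressed as a toy classifier. This is a sketch under my own assumptions - the quadrant labels and example actors are illustrative, not taken from the Threat Forward model itself:

```python
def quadrant(managed_interactions: bool, known_ttps: bool) -> str:
    """Place a threat actor on the two axes described in the text:
    can we manage our interactions with them, and do we know their TTPs?"""
    if managed_interactions and known_ttps:
        # The auditor's quadrant: scheduled engagements, familiar playbook.
        return "managed / known"
    if managed_interactions:
        return "managed / novel"
    if known_ttps:
        # e.g. commodity malware: unscheduled, but well-understood tooling.
        return "unmanaged / known"
    # e.g. a targeted, well-resourced adversary with custom tooling.
    return "unmanaged / novel"
```

A compliance-only program optimizes for exactly one cell of this grid; the other three are where outrage lives.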
With that in mind, what does a risk management program that incorporates Sandman's First Rule look like? For one, it anticipates a much broader population of potential threat actors. The defenses you need to manage the outrage factor when an insider is attacking differ from those you need when you're being DoSed by a politically motivated group, or when you're targeted by organized crime or a nation-state. Different organizations will be subject to different levels of outrage if they are bested by each of these adversaries. As adversaries' capabilities evolve, the CISO needs to set priorities and ensure that the defenses against the adversaries whose success would be most outrageous mature at the same rate or faster.
There are certainly many truly great CISOs out there - my peers who are managing risk, optimizing spend, making distributed computing safe for millions of people. To the recruiter who asked about the best CISOs I know, I suppose the ultimate response is: they'll call you when they need you. For now, I'm keeping Ariely and Sandman as my co-pilots. They've gotten me this far.