The Budget Trap


... or "Erick Hates Sports Analogies"

In my Security Culture Manifesto, I hypothesized that there is, at best, a weak correlation between an organization's security spend and its desired security outcome, i.e. breach resistance. It seems that, for every headline-grabbing breach about a potentially overmatched victim company, there's a counter-example of a company spending tens or even hundreds of millions of dollars annually... and still being breached. In my manifesto article, I argued that culture plays a role that's equal in importance to budget in delivering a robust information protection program. Today, I want to explore the flip side: the elusive, optimal information risk management budget.

CISOs appear to be obsessed with their budgets. On several peer-level mailing lists to which I belong, the most common recurring topics seem to be "who do you use for penetration testing" and "what percent of IT spend is security's budget?" To some extent, there is a logical reason to want to know if your spending is in line with that of your counterparts: the outrage factor for being too low is difficult to ignore. Conversely, I expect few CISOs would voluntarily begin shrinking their annual spend if it turned out they were operating on a more generous base than their peers since, as they say, perfect opsec is hard, and we CISOs all feel the pressure to close the gap with an ever-improving set of adversaries. Effective controls rarely pay for, operate, or continuously tune themselves; improvement requires smart investment.

Ultimately, there is no correct answer, no repeatable, reliable way to "solve for x" and determine the optimal budget for an information risk program. There are simply too many unknowns in the threat landscape and, unlike natural threats like earthquakes or lightning strikes — for which quality data is readily available — the "knowns" in Information Security are unsteady and difficult to use as predictors of the future. In some cases, the data — like "who is attacking, and how do they work" — is far from unassailable, and risky to use as the basis for strategic decisions. In others, the data veers from the ambiguous to the flat-out counter-intuitive, such as WhiteHat Security's 2013 website security statistics report. In WhiteHat's own words,

... other bits seem completely counterintuitive. For instance, organizations that do perform Static Code Analysis or have a Web Application Firewall appear to have notably worse performance metrics than those who did neither.

Budget is undeniably important, and since it's often just as hard to give credit to the CISO for not being breached as it is to blame the CISO if a breach occurs, some other lingua franca must be used when CISOs are evaluated. How should a C*O judge their CISO's performance? When all else fails, some people (not me) like to use sports analogies (I don't particularly like sports... really).

*Imaginary Ken Burns cross-fade effect, change of narrator*

Perhaps more than any other sport, baseball is one whose history is defined by its hallowed numbers. The triple crown. DiMaggio's consecutive-game hitting streak. The magical home run record (Ruth's 60... Maris's 61... the Asterisk Who Shall Not Be Named). Led by Bill James' sabermetrics movement of the 1980s, baseball has seen a renaissance in data-driven decision making that shook off decades of qualitative analysis of home runs and ERA as predictors of success and replaced it with Markov chains. The movement ultimately went mainstream in Michael Lewis' Moneyball, and in the past decade it was taken to spectacular new heights by Pitch f/x technology, which allows automated detection of the tiniest minutiae of every single thrown ball in a game. Since wins are the statistic that matters most, the sabermetric community has spent considerable energy on models to predict the factors that contribute more, or less, to winning baseball games. At an individual player level, a fascinating metric has been developed: Wins Above Replacement, or WAR:

WAR offers an estimate to answer the question, “If this player got injured and their team had to replace them with a freely available minor leaguer or a AAAA player from their bench, how much value would the team be losing?” This value is expressed in a wins format, so we could say that Player X is worth +6.3 wins to their team while Player Y is only worth +3.5 wins, which means it is highly likely that Player X has been more valuable than Player Y.

*OK, I'm back*

What does this have to do with information security budgets? I suggest that a decent way to evaluate a baseball executive (CISO), or a front-office program (information risk management), is to see whether a) spending more yields considerably better results than spending less, and b) certain programs optimize their spend better than others. To follow this conceit, one can take the 30 major league teams and plot their pitching budgets*, the WAR of their pitching staffs, and their success rate at winning games on a single scatter plot. Combined, this analysis can help visualize correlations and trends between spending and success. (Note: a tip of the hat to JK at McKinsey, who first introduced me to the notion of plotting these two axes together.)

In the graph below, the size of the bubbles is proportional to the number of regular-season wins the team had in 2014, while the X and Y axes correspond to the salaries of its pitching staff, and the collective WAR of those pitchers.
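A chart like the one described above can be sketched in a few lines of matplotlib. The team figures below are invented placeholders for illustration, not the actual 2014 payroll or WAR data:

```python
# Sketch of the budget-vs-WAR bubble chart: X = pitching payroll,
# Y = staff WAR, bubble size proportional to regular-season wins.
# All numbers are illustrative placeholders, NOT real 2014 data.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

teams = {
    # team: (pitching payroll $M, staff WAR, regular-season wins)
    "SEA": (45, 18.0, 87),
    "KC":  (50, 16.5, 89),
    "WSH": (55, 17.5, 96),
    "PHI": (95,  8.0, 73),
    "BOS": (80, 10.0, 71),
}

salaries = [v[0] for v in teams.values()]
wars = [v[1] for v in teams.values()]
wins = [v[2] for v in teams.values()]

fig, ax = plt.subplots()
# Scale wins into visible marker areas
ax.scatter(salaries, wars, s=[w * 5 for w in wins], alpha=0.5)
for name, (sal, war, _) in teams.items():
    ax.annotate(name, (sal, war))
ax.set_xlabel("Pitching payroll ($M)")
ax.set_ylabel("Pitching staff WAR")
ax.set_title("Spend vs. WAR (bubble size = wins)")
fig.savefig("pitching_war.png")
```

With real payroll and WAR data substituted in, the "safe zone," the under-spenders, and the over-spenders discussed below become visible as clusters on this chart.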

What is the data telling us here? There appears to be a "safe zone" clustered around the median pitching spend where teams have a fair chance of success without overspending, which helps manage the "outrage" factor with fans and owners. Spending below this zone correlates strongly with a below-.500 season, unless you're Cleveland, paying Cy Young winner Corey Kluber practically the league minimum... or the Chicago National League Ballclub, whose fans bizarrely come back no matter the team's record. Conversely, spending considerably above this zone (unless you're Philadelphia) probably means you were in the playoff hunt in September... but the ROI was sub-optimal, and your GM could have paid far less for similar results. Based on this data alone, the GMs who should feel best about their programs from a talent-management point of view are Seattle, Kansas City, and Washington: they're getting considerably higher returns for their pitching dollar than their counterparts.
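That "returns per pitching dollar" comparison reduces to a one-line efficiency metric. A minimal sketch, again with invented placeholder numbers rather than real 2014 figures:

```python
# Rank staffs by WAR produced per million dollars of pitching payroll.
# Figures are illustrative placeholders, not real 2014 data.
staffs = {
    "SEA": {"salary_m": 45, "war": 18.0},
    "KC":  {"salary_m": 50, "war": 16.5},
    "PHI": {"salary_m": 95, "war": 8.0},
}

def war_per_million(staff):
    """WAR generated per $1M of payroll: higher means better ROI."""
    return staff["war"] / staff["salary_m"]

ranked = sorted(staffs, key=lambda t: war_per_million(staffs[t]), reverse=True)
print(ranked)  # best return on the pitching dollar first
```

The same shape of calculation (results delivered per dollar spent, ranked against peers) is what the security-budget discussion below argues for.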

Back to information protection... why does comparing budgets with peers tell us relatively little about how we should run our own programs? One big reason is that Information Security's version of "wins" or "Wins Above Replacement" is not quite as simple as baseball's. Maturity of a program matters, as does coverage. A program that has built a robust Incident Response capability, and that operates data loss prevention monitoring looking not only for algorithmically-easy-to-detect data like credit card numbers but also for company secrets and other intellectual property, will naturally require a higher spend than a program closer to its infancy in this respect... but it will also yield more security "wins". A program that is spending millions on zero-day attack detection tools will have a higher run-rate than one running an effective, free (to license and run, not free to manage... of course) tool from Microsoft. A program in a highly regulated industry (banking, healthcare, etc.) will have a built-in baseline "reserve" for compliance that would be absent from the budget waterfall graph of a tech startup or an R&D-heavy field (petroleum). Some industries are just dipping their toes into outsourced threat intelligence in 2015, while others are years into having in-house teams fed by multiple commercial, third-party sources (I've spoken to one CISO who currently consumes over 20).

My suggestion to CISOs who ask about budget benchmarks is to go ahead and get that data... and then dig deeper. Being in the "safe zone" matters — we must all start by hitting our free throws, being good at blocking and tackling, etc. (I am fairly sure I just piled on one or more different sports) — but it's a wide zone: in the baseball analogy, slightly less than half of the teams were within the green circle. Our C*Os' budget pressures are too great to give security special accommodations. Rather than gauging outrage by whether we spent enough (our "effort"), we should measure ourselves by whether we're delivering a program that's effective (our "results"), relative to our peers in similar industries. We're not the only ones coming to our C*Os with investment plans and, like our counterparts across our enterprises, we have to be able to demonstrate that investing in our programs is wise: because we have a good track record of delivery, and because we are optimizing our spend to fortify the areas of our programs that are most critical to defending our systems against our likely threats.

July 2015 update: FiveThirtyEight has posted 30 years' worth of winning-percentage-vs-team-salary data with this choice analysis:

spending usually helps, but incompetent spending gets a team nowhere

*It's worth noting that the data, while directionally accurate enough for this essay, is not quite perfect; for example, Boston's budget is skewed by the midseason departure of a highly-paid star, while the Cardinals' rotation — and thus its budget total — includes not only that star's partial season, but also Daniel Descalso's 1-out relief masterpiece against the Cubs on May 12th. Adjusting for that differential puts STL into the "safe zone" above, while BOS moves closer to the median line.