legal defensibility, jeremiah, ROSI, threat model

Taking the bait

In a recent blog post, Jeremiah Grossman asks,

> I also often wonder what it will take to influence a shift in information security spending habits from one of tradition to efficacy.

Challenge accepted. I will now proceed to use this space, and Jeremiah's post, to explain how we, as a profession, will do just that. It begins with a parable on precision.

![80% of C\*O's in my imaginary survey prefer this level of precision or greater in their debriefs. Image courtesy of xkcd \#688.](http://erick.rudiak.com/img/xkcd-self-description-panel-1.png)

Several years ago, I oversaw a data loss prevention (DLP) tool rollout. I challenged the team to come up with salient metrics we could track to measure the success of the project. The team lead didn't disappoint. A couple of months into the rollout, he came up with this gem: applying the then-latest cost-of-a-breach data to the number of records the tool was preventing from being sent to the Internet unencrypted (inarguably a Bad Thing), we were saving the company hundreds of millions of dollars a month. Extended to its logical conclusion, the argument meant the DLP tool was generating savings at a truly staggering rate, equivalent to our entire market capitalization every 12 months (ish).

Suffice it to say, our executive briefs relied on something a bit more mundane: the number of times our DLP capability helped us win client business, and the overall downward trend in incidents over time. We attributed that trend either to the effectiveness of the product's user coaching module, or to our driving the bad guys underground, which, admittedly, was a less desirable outcome but one we felt was still positive on balance.
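The arithmetic behind that gem is worth spelling out. The figures below are hypothetical stand-ins (the real numbers aren't published here), but they show how a per-record breach cost multiplied by blocked-record volume balloons into market-cap-sized "savings":

```python
# Hypothetical inputs -- illustrative only, not the actual project figures.
COST_PER_RECORD = 200                  # then-current cost-of-a-breach estimate, $/record
RECORDS_BLOCKED_PER_MONTH = 1_000_000  # records the DLP tool kept off the wire, unencrypted

# The "savings" metric: every blocked record is treated as a breach avoided.
monthly_savings = COST_PER_RECORD * RECORDS_BLOCKED_PER_MONTH
annual_savings = monthly_savings * 12

print(f"Claimed savings: ${monthly_savings:,}/month, ${annual_savings:,}/year")
```

At these (modest) rates the tool "saves" $2.4B a year, which for many enterprises rivals the entire market capitalization — the reductio that sank the metric.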

Ultimately, our business sponsors funded our project because we were able to clearly connect the dots between three key points:

  1. We articulated a credible threat model;
  2. we evaluated our current approach to the threat and found it inadequate, creating risk of failing the "legal defensibility" litmus test, and stoking outrage if a future, adverse event occurred on our watch;
  3. we demonstrated an adequate, appropriate response to the threat and proposed how we would measure our results.

While different enterprises will vary in their threat models and outrage triggers (frequent readers will notice that I borrow from Peter Sandman a lot), adhering to these three principles has allowed many information security practitioners to shift from tradition to effectiveness. I have observed several enterprises graduating from striving to encrypt as much data at rest as they could get their hands on (an expensive control that is effective primarily against intentional theft from a highly protected enclave, i.e. enterprise data centers), to a more balanced approach of encrypting that which regularly leaves their premises (laptops, smartphones, optical media, etc.) and methodically destroying the rest prior to its release.

Likewise, application security initiatives, like the one Mr. Grossman is pitching, can be funded given the right framing for the conversation. Mr. Grossman's blog post suggests that companies are over-spending on antivirus at the expense of application security. He builds an argument based on a series of assertions:

  1. The global AV market is $8B, according to numbers from a Wired article;
  2. viruses propagate via two main vectors: email and drive-by exploitation, i.e. silent installs triggered by visiting a site serving up infected content, either directly or via malicious links;
  3. drive-by downloads are, in turn, often/sometimes/usually the result of SQL injection;
  4. his company's stats say that 1 in 9 websites has a SQL injection vuln. "We should also assume that if there is 1 SQL Injection in a given website, then there is [sic] really 10." (Yes, this really is a direct quote, emphasis added by yours truly);
  5. each vulnerability will require 40 developer hours to fix at a rate of $100/hour, and issues that are fixed in code remain fixed.

Adding these up leads to the culmination of the argument: the top 500,000 websites will need to spend $2.2B to combat their virus issues, while the entire application security market (of which Mr. Grossman's company is a part) is currently seeing a paltry $0.5B in annual revenues. Shouldn't we be funneling some of those AV funds to AppSec, which would create a virtuous cycle of fewer viruses, less spending on AV, and ultimately more spending on AppSec? Based on this argument, I'm afraid, there isn't a Fortune 500 C\*O who would answer in the affirmative. Here's why: we haven't connected the dots between the threat model, the indefensible current position, and the measurable results from an appropriate investment. Here are the questions I'd expect one of my business sponsors to toss at Mr. Grossman, in no particular order:

  1. Doesn't my AV spend buy more than just signature-based detection nowadays? I thought my AV suite had a firewall and some lightweight DLP baked into its cost?
  2. I remember the last virus outbreak we had; it cost us $XXX,000 in lost productivity based on our time entry system's records. I'm protecting myself when I buy AV, but you're asking me to protect my brand and reputation when we talk about the website — can you help me put a price on that? Or are you asking me to protect the Internet at large, which is already spending a sunk cost of $8B (plus the countless folks using free AV), from the dangers posed by my website?
  3. That $8B market for AV you mention... that's consumer plus corporate, the latter is only $3.4B, according to the same article you quoted. Does reducing the denominator by more than half matter?
  4. My regulators require me to have antivirus in order to stay in business, I won't be able to offset my AV spend by making this AppSec investment, will I?
  5. Between email, drive-by exploits in general, and drive-bys specifically attributable to SQL injection, can you break those down for me in terms of frequency?
  6. Wasn't I just reading that a recent drive-by download threat propagating via WordPress sites was the result of leaked passwords (i.e., not SQL injection), and that a prominent security evangelist was alerted to the compromise by his readers' antivirus software?
  7. If your numbers indicate that 11% of websites have at least one SQL injection, help me make the leap to each having 10 on average.
  8. Why does it take my developers the same amount of time to fix the 10th SQL injection issue as it did to fix the first? Are there no reusable fixes for these sorts of problems?

And so on. The connection between the threat and the remediation is poorly established, and the specious use of extrapolation only compounds the magnitude of the rounding errors made in the assertions about the number of issues and the cost to fix each one. The extrapolation issue is a particularly difficult one in information security: even the more robust attempts at ROSI (Return on Security Investment) calculations out there ultimately succumb to wild swings based on estimations of ALE (Annualized Loss Expectancy). Unlike earthquakes and lightning strikes, for which unimpeachable data exists, information security trends suffer from underreporting, environmental factors, and subjectivity that call into question any sort of estimate being done on a case-by-case basis.
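For the curious, the textbook formulation is ALE = SLE × ARO (single loss expectancy times annual rate of occurrence), and ROSI = (ALE reduction − cost of control) / cost of control. A minimal sketch, with deliberately made-up inputs, shows how sensitive the result is to the ARO guess — which is exactly the wild-swing problem:

```python
def ale(sle: float, aro: float) -> float:
    """Annualized Loss Expectancy: single loss expectancy * annual rate of occurrence."""
    return sle * aro

def rosi(ale_before: float, ale_after: float, cost: float) -> float:
    """Return on Security Investment, as a fraction of the control's cost."""
    return (ale_before - ale_after - cost) / cost

# Illustrative guesses only -- that's the point: nobody has solid numbers.
CONTROL_COST = 100_000.0
SLE = 500_000.0  # assumed loss per incident, $

for aro_before in (0.1, 0.5, 1.0):      # guessed incidents/year without the control
    aro_after = aro_before * 0.5        # assume the control halves incident frequency
    r = rosi(ale(SLE, aro_before), ale(SLE, aro_after), CONTROL_COST)
    print(f"ARO guess {aro_before:.1f}/yr -> ROSI {r:+.0%}")
```

Nudge the ARO guess from 0.1 to 1.0 incidents per year and the same control flips from a 75% loss to a 150% return. The formula is fine; the inputs are unknowable.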

And yet, there is a considerably more effective way to fund application security assessments. Here's how that conversation should unfold:

CIO to CISO: Are you confident that Anonymous won't compromise our new site after it launches?

CISO: Yep, we spent $Y to simulate their most common attack methods and found that our application successfully defended itself in all but three modules; those fixes are in development and will be tested two weeks prior to launch.

By connecting the threat model, the current defenses, and a measurable investment in improving defenses, we can achieve what Mr. Grossman espouses: information security spending that is optimized for effectiveness rather than tradition.

You're welcome!