Time to stop blaming the victim, part deux

It started a couple of weeks ago with the almighty Bruce Schneier expressing a rather controversial opinion:

> ...[that] training users in security is generally a waste of time, and that the money can be spent better elsewhere

While I actually agree with Schneier's conclusion based on the current state of many/most security awareness programs, I cannot subscribe either to his view of the future (where users get trained annually and acquire their primary security mores from their coworkers) or to his drawing of an analogy between failed security awareness and ineffective behavior modification campaigns in the medical field. Since I am thoroughly enjoying my current stint in healthcare, this is a good time to remind everyone that the opinions expressed here are mine and mine alone, and have nothing to do with my gracious employer.

So, what is to be done about security awareness? We can start with the option of doing nothing at all, and why it won't work: it's legally indefensible. A company can no more expect to defend itself against an OSHA suit after failing to train its staff to operate its equipment than against allegations of culpability for identity theft after failing to train its staff to use its computing systems safely. The extra challenge in information security is that the Internet turns a warehouse forklift operator's job into something that looks more like a Nintendo game, with flaming fireballs launching from all angles. Because of that added factor, much of the response to Schneier was that we need either more awareness or better campaigns (e.g. more engaging content) to reach our goals. On the face of it, each of these suggestions has merit, as each should yield an incremental improvement... but an improvement in what?

What is the goal of security awareness? Ultimately, if it's aligned with the goal of an enterprise's information risk management program, it's to minimize the likelihood of losses of customer data and company IP (and the associated damage to brand and reputation). An awareness program is important because the history of our industry has shown us that one of the most common causes of these losses is employee action (e.g. opening a malicious attachment or visiting a website that is spreading malware) leading to system compromise. This, in turn, has led to regulatory requirements to ensure that no company is so negligent as to fail to inform its users of the dangers lurking online. What so many of us in the Information Security industry continue to miss about awareness programs is that, in order to succeed -- in order to effectively avoid the loss scenarios above -- our users have to make the correct decision 99.99% of the time, against staggering odds:

- They are faced with thousands of opportunities daily to click links, visit websites, and read emails
- Their incentives are skewed heavily toward productivity (click through the warning, install that codec, open that attachment) to earn their reward or complete their task
- The bad guys are getting increasingly good at disguising their campaigns, while users are expected to check an increasingly complex matrix of indicators of potential malice
- The good guys are surprisingly inept at making good and evil behavior look obviously different to the end user

Add it all up, and it becomes increasingly clear that, if enterprises are depending on thousands of users to make the right decision 99.99% of the time, we cannot keep attacking only the numerator of the "right decisions / all decisions" ratio. Unfortunately, that is exactly what most security awareness programs try to do: increase the numerator through more, or better, awareness. What's clear to me is that we should be doing more to attack the denominator.
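
To see why, here is a back-of-the-envelope sketch. The user counts, decision counts, and accuracy figures below are illustrative assumptions on my part, not measurements from anywhere, but they show how quickly the arithmetic turns against the numerator:

```python
# Back-of-the-envelope arithmetic for the numerator/denominator argument.
# User counts, decision counts, and accuracy rates are illustrative
# assumptions, not figures from any study.

def expected_bad_decisions(users: int, decisions_per_day: int, accuracy: float) -> float:
    """Expected number of wrong security decisions across the enterprise per day."""
    return users * decisions_per_day * (1 - accuracy)

USERS = 1_000

# Attacking the numerator: training nudges per-decision accuracy up.
for acc in (0.99, 0.999, 0.9999):
    bad = expected_bad_decisions(USERS, 2_000, acc)
    print(f"2,000 decisions/day at {acc:.2%} accuracy -> {bad:,.0f} bad decisions/day")

# Attacking the denominator: filters and safe defaults shrink how many
# risky decisions each user ever faces in the first place.
for decisions in (2_000, 50, 5):
    bad = expected_bad_decisions(USERS, decisions, 0.99)
    print(f"{decisions:>5} decisions/day at 99.00% accuracy -> {bad:,.1f} bad decisions/day")
```

Even a tenfold accuracy improvement, which no training program can promise, still leaves thousands of bad decisions a day on the table; shrinking the number of risky decisions each user faces attacks the problem where the leverage actually is.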

And this is where I believe we can actually take a cue from healthcare. Schneier argues the following points:

> One basic reason is psychological: we just aren't very good at trading off immediate gratification for long-term benefit. [...] Another reason health training works poorly is that it's hard to link behaviors with benefits. [...] If we can't get people to follow [simple food safety] rules, what hope do we have for computer security training?

Here's the thing: healthcare has figured some of this out. Opting users into the right choice by default is extremely effective; Dan Ariely has a great example in the area of organ donation, and Hewitt Associates showed the same for 401(k) contributions. How can we do this for information security? Most of us already do it today by blocking or quarantining suspected spam at our borders, and by blocking access to categories of sites that are clearly dangerous, typically using some combination of Bayesian analysis and crowd-sourced intelligence about the contents of the Internet. Many perimeter security vendors offer the ability to assist users with decisions based on real-time information about a link's reputation; Google has taken this model even further by crowdsourcing the analysis of executable binaries.
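
Since that Bayesian filtering idea carries a lot of the weight here, a minimal sketch of how it works may help. The token probabilities and the quarantine threshold below are made-up values for illustration, not numbers from any real filter:

```python
# A minimal naive Bayes spam-scoring sketch. Token probabilities and the
# 0.9 quarantine threshold are illustrative assumptions, not values from
# any particular product.

import math

# P(spam | token), as learned from previously labeled mail (made-up numbers).
TOKEN_SPAM_PROB = {
    "viagra": 0.99,
    "codec": 0.90,
    "invoice": 0.70,
    "meeting": 0.10,
    "lunch": 0.05,
}

def spam_score(message: str, prior: float = 0.5) -> float:
    """Combine per-token probabilities in log-odds form, return P(spam)."""
    log_odds = math.log(prior / (1 - prior))
    for token in message.lower().split():
        p = TOKEN_SPAM_PROB.get(token)
        if p is not None:
            log_odds += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-log_odds))  # back to a probability

# Quarantine at the border, so the user never faces the decision at all.
for msg in ("install this codec for free viagra", "lunch meeting tomorrow"):
    score = spam_score(msg)
    action = "QUARANTINE" if score > 0.9 else "deliver"
    print(f"{action:10s} p(spam)={score:.3f}  {msg!r}")
```

The design point is the one from the arithmetic above: every message quarantined at the border is one decision removed from the denominator, because the user never gets the chance to make the wrong choice.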

What else can we do when our front-line filters eventually let something bad through and our users have to face a difficult security decision? Continue to attack the denominator! Google's Safe Browsing API warns users before they make a potentially disastrous decision, and many enterprises are now beginning to apply a similar philosophy by intercepting sites that exhibit fast-flux behavior and asking users to opt out of the good default, i.e. not visiting a site with a known-bad or unknown reputation. Awareness at the time of need is a great way of attacking that denominator: instead of asking our users to make thousands of good choices every day, presenting them with a small number of decisions (with a secure default) just as they're about to do something very risky, like following a link to a website that was created in the last 24 hours and has no consensus reputation score on the Internet, can truly make a difference; there's a sketch of this after the quote below. In healthcare, this is analogous to laser-etching bottle caps to remind patients exactly when they need to refill their prescriptions to remain adherent. It's also reminiscent of an idea presented by David Laibson on a recent Freakonomics podcast about fighting the obesity epidemic:

> Imagine you had a bracelet that a kid wore, or an adult wears, if I had this it would affect my behavior. And as I go into a meal it starts registering 200, 700, 900, 1,400 calories. And as it tells me I'm eating more and more in this feeding episode it starts to change its color from green, to yellow, to orange, to red, and then it starts pulsing red, red, red, red once you get way out of range.
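
Translating that back to security, here is a minimal sketch of what such a just-in-time, secure-by-default interstitial might look like. The domain, the lookup tables, and the 24-hour threshold are all hypothetical stand-ins for the WHOIS and reputation feeds a real web proxy would consult:

```python
# A minimal sketch of a just-in-time, secure-by-default interstitial.
# The domain, lookup tables, and 24-hour threshold are hypothetical
# stand-ins for the WHOIS and reputation feeds a real proxy would query.

from datetime import datetime, timedelta

# Stand-in data; a real implementation would query live feeds.
DOMAIN_CREATED = {"payroll-update.example": datetime.now() - timedelta(hours=6)}
REPUTATION_SCORE = {"payroll-update.example": None}  # None = no consensus score

def should_warn(domain: str) -> bool:
    """Warn on newborn domains or domains with no consensus reputation."""
    created = DOMAIN_CREATED.get(domain)
    too_new = created is not None and datetime.now() - created < timedelta(days=1)
    no_reputation = REPUTATION_SCORE.get(domain) is None
    return too_new or no_reputation

def visit(domain: str) -> None:
    if should_warn(domain):
        # The secure choice is the default: doing nothing blocks the visit.
        answer = input(f"'{domain}' was registered less than 24 hours ago and "
                       f"has no reputation score. Type PROCEED to continue: ")
        if answer.strip() != "PROCEED":
            print("Blocked by default -- no harm done.")
            return
    print(f"Connecting to {domain}...")

visit("payroll-update.example")
```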

I've become a big fan of this approach as a supplement to a healthy, "traditional" awareness program, for a couple of reasons.

![Pretty darned effective awareness campaign.](/img/wipersheadlights.jpg)

It greatly improves our odds of success, in no small part by reducing the time delta between stimulus and response (in the same Freakonomics episode, Laibson suggests the cutoff for efficacy is below "10 minutes"). While there hasn't been much academic research into the effectiveness of security training, what's [out there](http://iacis.org/iis/2012/49_iis_2012_215-224.pdf) validates our suspicions: 20% of the basics are forgotten within 60 days of awareness education delivery. Awareness that is mandatory and disconnected from the risky behavior it is intended to address is unlikely to deliver the 99.99% success rates that we need. It serves a strong purpose in the compliance arena but, ultimately, it also has the unintended effect of blaming users for failing to follow a complicated protocol when they become the victims of an attack.

What I like about just-in-time awareness for the riskiest behaviors is what I like about the signage (photo above) on the interstates in St. Louis when it rains: it reminds me of the rules I need to follow just when an infraction is most likely to occur, not when I'm least likely to internalize them. The latter is what happens in late December of every year, when we are lucky to spot one of those irrelevant-looking newspaper articles reminding us of the [new rules](http://tv.msnbc.com/2013/01/02/the-wackiest-laws-enacted-in-2013/) going into effect on New Year's Day.

So, to all my friends in IL, don't click any links in emails you didn't solicit, and remember to avoid popping wheelies or ordering sharkfin soup. I sure hope you get this message in time.