And now, for something completely different...

The role of the CISO is ultimately to own the security culture of the organizations we serve. Adopting the vocabulary and winning business methods of our companies and industries can create a profound linkage and demonstrate alignment that rises above the CISO's traditional role as defender.


For nearly a decade, I was fortunate enough to serve as CISO for a company whose forte was making healthcare safer and more affordable for millions of people. I was surrounded by brilliant people who were intensely focused and mission-driven. I learned quickly that interrupting that focus was a suboptimal strategy. Still, meeting agendas can be packed with diverse topics, and when the CISO's turn comes up, we often ask our audience to pause their train of thought – something to do with "the business"[1] in all likelihood – and shift their attention to something completely different: information security! No leader wants to think of their next board presentation or senior staff pitch as an episode of Monty Python.[2] Knowing how expensive context switches can be[3], I've never been able to shake the nagging suspicion that I could better connect my work to our business.

How do I employ healthcare concepts to improve the way I present new risk management topics to key stakeholders without forcing a context switch away from the business at hand?

Naturally, there were obvious parallels between healthcare and information risk management that I could draw upon. Yes, we all run some sort of anti-virus software and, yes, patching and oral hygiene resemble each other as routine maintenance tasks. Over time, I discovered additional, equally profound similarities between our challenges in technology and those in keeping our patients healthy: figuring out when to proactively intervene with at-risk patients, choosing where and when to seek care, and the complex puzzle of maintaining a formulary of appropriate treatments. Each led me to effective, novel approaches to bettering my own craft and improving my outcomes. On top of that, invoking these parallels served an important role in reducing the context switching necessary to introduce new policies, controls, and defenses to my key stakeholders.

On stimulus and response

One of the first times I was truly struck by the difference between how our business operated and how traditional security programs operated was in the way we intervened to help people navigate high-risk interactions. In the case of our patients, that occurred when they were approaching an important decision: whether or not to renew a maintenance medication when their supply was low[4].


  1. One of the most harmful tropes I've experienced as a technology leader is the shorthand we technologists sometimes use when we refer to "The Business", a.k.a. that group of non-technology people who make decisions, control budgets, sell products, usually with minimal consideration given to the awesome things the Technology organization was up to (probably cloud or blockchain or AI/ML or upgrading the HAL9000). Not only did this euphemism barely hide a victim mindset – as effective leaders, we should earn our seat at that table by consistently adding value and being part of that cabal – but it represented a missed opportunity to learn from some brilliant people at the top of their game. ↩︎

  2. Now that PowerPoint can render animated GIFs, it's tempting to open decks with this, n'est-ce pas? Flying Circus opening roll ↩︎

  3. see here or here or here for a small sample of the chorus of research suggesting that humans are not good at multi-tasking ↩︎

  4. more on this at Forbes ↩︎

The conventional infosec way: Provide the patient with annual training reminding them of the importance of healthy computing practices.
The healthcare way: Reach out to the patient via multiple channels just-in-time to lower the odds that their supply reaches zero.

It turned out that our company routinely consulted with leading behavioral scientists[1] to determine optimal strategies for designing interventions that actually worked when guiding large populations' risk-taking behavior. One of them, Harvard's David Laibson, offered this insightful view to Freakonomics[2]:

... organisms are very good at learning when the stimulus and response are tightly linked in time. And they’re terrible at learning when stimulus and response are separated by, like, 10 minutes, let alone five years

This constraint turned out to be generally applicable to information security training as well. A 2012 Walsh University study of different methods of delivering security awareness training showed that:

After three months, all trainee participants reverted back to almost identical pretest scores.

I looked for ways to apply the effective methods from our pharmacy practice into my security awareness program. We found that, by adopting the same intervention strategy for our users that we did for our patients – giving them lightweight nudges in the right direction exactly when we knew they needed it, based on their schedule, not ours – we were able to significantly improve results for a number of high-risk interactions: phishing emails[3], data loss[4], document labeling[5], etc.
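
To make that contrast concrete, here's a minimal sketch of what a just-in-time nudge can look like in code. The mail-gateway hook, domain, and wording are all hypothetical – invented for illustration, not drawn from our actual deployment:

```python
from datetime import datetime, timezone

# A just-in-time nudge: fire the reminder at the moment of the risky
# interaction (the user's schedule), not on an annual training calendar.
# The internal domain and nudge text are hypothetical.
EXTERNAL_NUDGE = (
    "Heads up: this message is about to leave the company. "
    "Double-check recipients and attachments before sending."
)

def is_external(recipient: str, internal_domain: str = "example.com") -> bool:
    """True when a recipient address is outside our domain."""
    return not recipient.lower().endswith("@" + internal_domain)

def log_event(user: str, kind: str, when: datetime) -> None:
    """Record the intervention so its effect can be measured later."""
    print(f"{when.isoformat()} {kind} shown to {user}")

def nudge_if_risky(sender: str, recipients: list[str]) -> str | None:
    """Return a nudge at send time if any recipient is external.

    In a real deployment this would hang off a mail-gateway or
    mail-client event; here it is a pure function for illustration.
    """
    if any(is_external(r) for r in recipients):
        log_event(sender, "external-send-nudge", datetime.now(timezone.utc))
        return EXTERNAL_NUDGE
    return None

print(nudge_if_risky("alice@example.com", ["bob@partner.org"]))
```

The design point is the trigger, not the message: the stimulus (risky send) and the response (reminder) land within the same second, which is exactly the tight coupling Laibson describes.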

On urgency

[Figure: histogram of CVE counts by CVSS score, via cvedetails.com, 2019-02-27]

What, exactly, does it take to have a vulnerability that scores between 8.0 and 8.9, anyway???[6]

Another dimension of information security where healthcare turned out to have a novel perspective was in vulnerability management. For years, security practitioners have been trying to improve our methods for quantifying vulnerability severity[7], in the pursuit of giving more actionable insights to our product owners.

In theory, this would allow our engineers to assign urgency levels – which translated to committed turnaround time expectations – more accurately while helping our resource-constrained peers to work their queues in a more sensible order. To test this hypothesis, I surveyed 20 tenured information security practitioners with optometrist-style, A-or-B choices such as: if a product team only had cycles available to patch either an aging-but-within-SLA[8] high-severity defect or a medium-severity one, but not both, which would you prioritize for remediation?[9]

I was surprised to learn that more nuanced scoring was not only relatively low on our product-owning teams' wish lists[10]... but also didn't move the needle as much as I anticipated with seasoned professionals. Our Attack Simulation team routinely warned that, given enough time, they could turn a "Medium" into a "Critical."[11] So I was surprised when the survey data showed the security professionals – even the Attack Sim-ers – were quantitatively indifferent to the timing element in their A/B choices.

Rather, whether a remediation item was well within SLA, at risk of missing commitment, or already past the threshold where the scales tipped from tenable to outrageous, the most accurate predictor of which option the experts I polled selected was simply the initial severity rating. A "High" with 50% of its SLA runway remaining was still overwhelmingly selected for remediation even when pitted against a "Medium" that was 25% beyond its prescribed target date. This unexpected outcome made me consider how healthcare tackles the same problem. Was there a better way?
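
To make the A/B framing concrete, here's a small sketch of the two decision rules side by side. The SLA windows and severity ranks are invented for illustration; they are not the values from the actual survey:

```python
from dataclasses import dataclass

# Illustrative SLA windows (days) by severity; real programs will have
# their own numbers. These are invented for the sketch.
SLA_DAYS = {"Critical": 7, "High": 30, "Medium": 90, "Low": 180}
SEVERITY_RANK = {"Critical": 4, "High": 3, "Medium": 2, "Low": 1}

@dataclass
class Finding:
    severity: str
    age_days: int  # days since the finding entered the remediation queue

    @property
    def runway(self) -> float:
        """Fraction of the SLA window remaining; negative once past due."""
        sla = SLA_DAYS[self.severity]
        return (sla - self.age_days) / sla

# The scenario from the text: a High with 50% of its runway left
# vs. a Medium already ~25% past its target date.
high = Finding("High", age_days=15)       # runway = +0.50
medium = Finding("Medium", age_days=112)  # runway ≈ -0.24

def pick_by_severity(a: Finding, b: Finding) -> Finding:
    """What the surveyed practitioners overwhelmingly did."""
    return a if SEVERITY_RANK[a.severity] >= SEVERITY_RANK[b.severity] else b

def pick_by_runway(a: Finding, b: Finding) -> Finding:
    """A timing-aware alternative: least runway is most urgent."""
    return a if a.runway <= b.runway else b

print(pick_by_severity(high, medium).severity)  # High
print(pick_by_runway(high, medium).severity)    # Medium
```

The survey result, in these terms: practitioners behaved like `pick_by_severity` even when the runway numbers argued for `pick_by_runway`.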


  1. see pg. iii ↩︎

  2. in a 2013 interview on fighting obesity ↩︎

  3. we found, in particular, that giving users a button to click worked way better than telling them how/where to report ↩︎

  4. giving users an immediate, easy-to-action reminder as a response to a risky email drove significant, voluntary reductions in judgement errors on what was appropriate to send outside the company ↩︎

  5. we deployed a one-click solution that only required user action when they were creating a new file ↩︎

  6. source: cvedetails.com ↩︎

  7. an ordinal scale typically consisting of Critical, High, Medium, Low, etc. ↩︎

  8. service-level agreement ↩︎

  9. if you must know, it looked something like this: [Image: CFADsurvey2018] ↩︎

  10. they were actually quite happy to patch all the things all the time; what they really wanted was predictability of schedule and help synchronizing downtime windows ↩︎

  11. they totally can! and did! a lot! ↩︎

The conventional infosec way: Diagnostician (security team) and patient (product team) are "on the clock" together; the SLA reflects when remediation is complete.
The healthcare way: Urgency of diagnosis and commitment to a remediation plan are distinct events, each with its own timetable.

The key insight I gleaned from healthcare was that what we had been treating as a remediation SLA[1] was actually a concatenation of two processes, each with its own "clock": a diagnosis[2] and a treatment plan[3]. The first decision we make when a loved one appears sick is whether they need to:

  • get to the ER right away[4],
  • go to the Urgent Care as soon as it opens[5], or
  • make an appointment with their family doctor or specialist when the next walk-in date is available[6].

In technology, this initial determination of the venue and urgency for diagnosis finds the security team traditionally front-and-center.[7] The next decision is, based on the diagnosis, what course of treatment the patient commits to – this can be everything from immediate surgery[8] to indefinite, cautious monitoring with no other intervention; it's ultimately up to the patient to commit to and execute a plan based on the provider's guidance, and a good provider normally follows up periodically to ensure adherence.
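
A minimal sketch of that two-clock model follows; the day counts per urgency tier are hypothetical (the split between clocks is the point, not the specific numbers):

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical day counts per urgency tier. Diagnosis is the security
# team's own clock; the treatment plan is the product team's, and it
# does not start ticking until the diagnosis is in hand.
DIAGNOSIS_DAYS = {"critical": 1, "high": 3, "medium": 14}   # review with stakeholders
TREATMENT_DAYS = {"critical": 7, "high": 30, "medium": 90}  # fix no later than

@dataclass
class Issue:
    urgency: str    # the ER / urgent care / walk-in determination
    reported: date

    @property
    def diagnose_by(self) -> date:
        """Security's own SLA: when the diagnosis must be reviewed."""
        return self.reported + timedelta(days=DIAGNOSIS_DAYS[self.urgency])

    def treat_by(self, diagnosed: date) -> date:
        """Product team's clock starts at diagnosis, not at report time."""
        return diagnosed + timedelta(days=TREATMENT_DAYS[self.urgency])

issue = Issue("high", reported=date(2019, 3, 1))
diagnosed = date(2019, 3, 3)
print(issue.diagnose_by)          # 2019-03-04: security's commitment
print(issue.treat_by(diagnosed))  # 2019-04-02: remediation commitment
```

Splitting the clocks this way is what lets the security team be held accountable for `diagnose_by` independently of how long the treatment plan runs.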

Holding my team separately accountable for their performance against a self-imposed SLA involved a fundamental shift in the way we worked; in RACI terms, we moved ourselves from the "C"[9] column into the "A"[10] column. An unexpected benefit was that we cracked the code on a debate that had raged for years within my leadership team: how to distinguish between urgency and severity in our communication of risk to key stakeholders[11]. Following the healthcare approach gave us the clarity of thought to be better governors. It also forced us into a provider mindset, which meant adjusting our bedside manner and helping our partners understand our diagnostic approach in a digestible manner (see Fig. 1) while applying a highly data-driven model behind the scenes.

[Figure: the "Outrage-o-Matic" risk-communication chart]

Fig. 1[12]: Strictly illustrative, not actually anyone's idea of a robust risk taxonomy.

On complexity

One of the most interesting assignments I had in the past few years was to help craft a strategy for reducing the complexity of our technology environment. Complexity is a challenging dimension to quantify. I found myself identifying with Justice Potter Stewart's (in)famous "I know it when I see it"[13] definition more than once during that project. Ultimately, there were many ways that complexity presented itself: cyclomatic complexity in code, the number of programming languages used in production, the number of operating systems in use by major/minor version number, years remaining on a vendor's support roadmap, etc. The most difficult element of that project for me was figuring out how to effect a meaningful change in the technical debt situation we were managing; after all, none of us was acting with the intent to create or increase complexity[14].
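
Since "I know it when I see it" doesn't roll up to a dashboard, one crude way to make those dimensions comparable is to normalize each and take a weighted sum. A sketch, with weights, caps, and sample inputs all invented for illustration:

```python
# Crude composite complexity score: normalize each dimension to [0, 1]
# and combine. The dimensions come from the text; the weights, caps,
# and the sample system below are made up for the sketch.
def normalize(value: float, cap: float) -> float:
    """Clamp value/cap into [0, 1] so dimensions are comparable."""
    return min(max(value / cap, 0.0), 1.0)

def complexity_score(system: dict) -> float:
    dims = {
        "cyclomatic":  normalize(system["avg_cyclomatic"], cap=20),
        "languages":   normalize(system["languages_in_prod"], cap=10),
        "os_versions": normalize(system["os_major_minor_versions"], cap=8),
        # fewer support years left means more end-of-life risk
        "eol_risk":    1.0 - normalize(system["vendor_support_years_left"], cap=5),
    }
    weights = {"cyclomatic": 0.25, "languages": 0.25,
               "os_versions": 0.25, "eol_risk": 0.25}
    return sum(weights[k] * v for k, v in dims.items())

legacy_stack = {
    "avg_cyclomatic": 14,
    "languages_in_prod": 7,
    "os_major_minor_versions": 6,
    "vendor_support_years_left": 1,
}
print(f"{complexity_score(legacy_stack):.2f}")  # 0.74 on this made-up scale
```

A score like this is only useful for trending and comparison, never as ground truth; the caps and weights encode judgment calls that every organization will make differently.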

Once again, I found myself drawn to the way our healthcare business attempted to solve a similar problem. The prescription drug landscape suffers from similar challenges: new treatments are coming to market with increasing velocity, while the patent process creates incentives[16] for complexity through the introduction of slight changes in delivery methods or symptom indication, resulting in a dizzying framework of options for patients and physicians alike.

[Figure: FDA Center for Drug Evaluation and Research (CDER) novel drug approvals, 2009–2018[15]]

This felt oddly similar to the situation technologists face, as product features continue to overlap through both innovation and acquisition[17], while increasingly frequent release cycles result in an equally dizzying matrix of compatibility and support options[18].

When I looked deeper into the mechanics of how benefit managers craft their drug formularies[19], I was struck by the number of similarities to our corresponding technology processes.


  1. this issue needs to be fixed within X days ↩︎

  2. this issue needs to be reviewed with key stakeholders within Y days ↩︎

  3. this issue needs to be fixed no later than Z days ↩︎

  4. urgency: critical ↩︎

  5. urgency: high ↩︎

  6. urgency: medium ↩︎

  7. in fairness, a highly optimized version of this process will find the product teams out in front of items like this and the security team acting as checks-and-balances ↩︎

  8. [Image: "patch ALL the things" meme] Credit to Allie Brosh for her amazing writing; read her blog and get the book! ↩︎

  9. consulted ↩︎

  10. accountable ↩︎

  11. for example, we all concurred that findings coming from disaster recovery (D/R) tests carried a high level of severity, requiring immediate management engagement, but a different expectation of turnaround than, say, the latest iteration of #drupalgeddon ↩︎

  12. a Knit-o-Matic would have been far cooler! [Image: Knit-o-Matic] ↩︎

  13. see Jacobellis v. Ohio ↩︎

  14. although apparently this happens enough that Martin Fowler had space in his technical debt quadrant for people who do ↩︎

  15. source: fda.gov ↩︎

  16. see this NIH article for a great review of the incentives to file patents for slight modifications resulting in years of additional exclusivity for blockbuster drugs before generics can be allowed to compete ↩︎

  17. for example, unlike the endpoint security landscape six years ago, most endpoint anti-virus products today offer integrated data loss prevention as well – that used to be part of the "how many 'shields' in the Windows tray" complexity metric for security ↩︎

  18. imagine supporting an N-tiered architecture where a system, to be supported, had to find the intersection of the support lifecycles of all its constituent parts! [Figures: support lifecycle charts from Red Hat, Microsoft, and Oracle] ↩︎

  19. source: PCMA ↩︎

The conventional infosec way: Infrastructure, engineering, and development teams select products – through proofs-of-concept, lab work, white paper reviews, etc. – which go through security phase gates (e.g. design review, engineering review, penetration testing, etc.) before being approved for production use.
The healthcare way: Proposed new treatments go through clinical trials which, if successful, result in reviews by expert panels "to determine the most clinically appropriate drugs for a given drug class and indication" before being selected for inclusion based on best outcomes – in terms of both health and economics – for patients.

Where healthcare had clinical trials, technology had proofs of concept. Where healthcare had patient and therapy committees, technology had architecture governance boards. Where healthcare had indication-based pricing, technology employed positive and negative use cases in its testing. Both groups had fiscal discipline and a patients-first orientation to selecting products.

Formulary Types:

Open formulary: The plan sponsor pays a portion of the cost for all drugs, regardless of formulary status. However, a plan sponsor may choose to exclude certain products, such as ‘lifestyle’ drugs, from coverage.

Closed formulary: The plan sponsor will only cover drugs listed on the formulary. Non-formulary drugs are not covered unless approved through a formulary override process.

Tiered formulary: Plan sponsors offer different copays or other financial incentives to encourage participants to use preferred formulary drugs, but will still pay a portion of the cost of the non-preferred drug. For example, when a plan sponsor offers a three-tier benefit design, it may cover non-preferred, non-formulary products on its third tier with a higher copay.

Why, then, were we[1] experiencing an accumulation of technical debt in our technology formulary that wasn't similarly mirrored in our healthcare formulary? Why were we seeing a growing number of products in our technical stack – along with the associated maintenance costs – while our patients were seeing the benefits of tightly curated, best-of-breed choices for their particular conditions?

Ultimately, the difference was this: whereas our partners were running a tiered formulary[2] on behalf of our patients, the freedom and flexibility of operating technology with a low-friction path from innovation to clinical trial to production created strong incentives for an open formulary. Identifying this difference was the breakthrough that not only allowed me and several other leaders to agree on changes to reverse the trend on technical debt, but also gave us the vocabulary to explain the change to our teams and partners in a way that already rang familiar.
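
Translated into technology terms, a tiered formulary behaves something like the sketch below. The catalog entries and tier policy are hypothetical, not our actual governance process:

```python
from enum import Enum

class Tier(Enum):
    PREFERRED = 1      # best-of-breed, fully supported, lowest "copay"
    COVERED = 2        # approved but non-preferred; extra review "copay"
    NON_FORMULARY = 3  # not covered without an explicit override

# Hypothetical catalog mapping products to tiers; real entries would
# come from an architecture governance board, not a hard-coded dict.
CATALOG = {
    "postgres": Tier.PREFERRED,
    "mysql": Tier.COVERED,
    "obscure-nosql-db": Tier.NON_FORMULARY,
}

def approve(product: str, override_granted: bool = False) -> tuple[bool, str]:
    """Tiered-formulary decision: preferred products flow freely, covered
    ones cost extra friction, and non-formulary ones require the override
    process (healthcare's prior authorization)."""
    tier = CATALOG.get(product, Tier.NON_FORMULARY)
    if tier is Tier.PREFERRED:
        return True, "fast path: preferred product"
    if tier is Tier.COVERED:
        return True, "approved with extra review (higher 'copay')"
    if override_granted:
        return True, "approved via formulary override"
    return False, "not on formulary; file an override request"

print(approve("postgres"))
print(approve("obscure-nosql-db"))
```

The "copay" here is review friction rather than money, but the incentive structure is the same: the default path steers teams toward the curated choice while still leaving a sanctioned escape hatch.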

Coda

The role of the CISO is ultimately to define, synthesize, and syndicate the security culture of the organizations we serve.[3] In that regard, our role is also to minimize culture shock when we work with our partners, something that we can all too easily introduce when we ask them to context-switch from their worlds into information security. Adopting their lingua franca when describing information security is an easy win. Adopting their demonstrably successful business practices can create a profound linkage and demonstrate alignment that rises above the CISO's traditional role as defender. I was fortunate to do this in healthcare. How about you and your industry?[4]


  1. and every other company I spoke with, ever ↩︎

  2. see "Formulary Types" sidebar ↩︎

  3. see here for more on this topic [Image: culture Venn diagram] ↩︎

  4. seriously, though, I wanna know! Hit me up on LinkedIn or send me a note at my @gmail.com address, it's obvious ↩︎