A paean to the red team


...or "Erick rethinks his position on sports analogies"

 
In "Defenders think in lists, attackers think in graphs, as long as this is true, attackers win", Microsoft's John Lambert summed up, in one well-written headline[1], the complex challenge faced by many information protection organizations. Our constituents and partners, facing many common IT pressures of their own, yearn for a recipe to seeing green on their information security scorecards — "tell us what to do and we'll do it" is a common, collaborative tone to hear... but is not quite the graph-oriented thinking that truly good defenders need to exhibit. Meanwhile, our senior executives want to know the answers to some tough questions: would that attack have worked against us? Against whom are we secure today? What makes you so sure? Finally, our key IT leaders, on whom we're relying to set strong tone-at-the-top, are inundated with cautionary audit summaries and pitches for the next must-have product — hardly the inspiration for a genuine passion to transform information protection in the face of ever-improving attackers.

In this post, I will explore why, over the years, I've learned to genuinely appreciate the unique role that the red team (and, by extension, the blue team) plays not only in answering key could-it-happen-to-us questions, but in truly changing the fundamental culture of a broad information protection organization[2].

Red team? Did you mean penetration test?

On the spectrum of levels of assurance, red teaming falls to the right of vulnerability assessment (definitely list-based) and even of penetration testing — which has many more elements of graph-oriented thinking, but ultimately becomes complementary to a red-team program because of its fundamental need to adopt a worldview shaped by a natural, healthy, asset-value-to-the-enterprise bias. While any good penetration test will look and feel similar to a red-team engagement, pentesting programs generally follow a scheduled prioritization of targets based on the defenders' perceptions of risk and value. This, in turn, leads to procedural differences (e.g. walkthroughs, scoping meetings, credential acquisition, etc.) that make the engagement feel more like a one-on-one drill[3] than a full-team scrimmage[4]. It also leads to intentional choices about coverage that favor core applications — a choice we make for all the right reasons (no CISO wants to explain to their stakeholders why it's been years since their flagship applications were subjected to a test), but not one that necessarily yields a faithful representation of the attacker mindset, in which seemingly irrelevant coordinates on the attack surface (the charitable race site at JPMC, the HVAC vendor at Target, etc.) become the first nodes on the graph.

Red teams, on the other hand, generally adopt a bias for modeling a particular attacker (high-skill/low-skill, high-noise/low-noise, internal/external, etc.) and follow a less structured process to identify targets. While red teams are still bound by certain guard rails (denial-of-service is almost always off limits, as is reckless endangerment of employee or client data), they do enjoy more freedom than pentesters: red-team engagements, while not entirely unannounced or unscheduled, still skew towards the "unmanaged interactions" end of our threat model[5]. There is also greater freedom to pivot — to begin attacking different branches of the graph — when necessary, whether because the red team encountered resistance (more on this later) or because a more interesting target revealed itself in the course of testing.

Ultimately, a robust information protection program will treat pentesting and red-teaming as yin and yang, complementary pieces of a full-spectrum approach to threat modeling and assessment.

The art of the scrimmage.[6]

Another element of a red-team engagement that feels materially different from a penetration test is the participation of the blue team, the defenders. Whether it's a managed SOC or an in-house team, the idea of routinely scrimmaging against an opponent to evaluate the effectiveness of an organization's paper plan is not novel — but it is still too often overlooked in information security. Ironically, most mature organizations have already come to this same conclusion in their Business Continuity and Disaster Recovery (BC/DR) programs: tabletops are valuable, but simulations (or even truly shutting off production and running on D/R systems "in anger") provide a level of assurance that is invaluable.

If you've participated in organized sports, you've experienced the spectrum of flavors that scrimmages can take on. There are certain plays that you want to run over and over again, to build the muscle memory of always executing correctly out of a certain formation. When I played college volleyball, we ran serve-receive drills for each of the six possible rotations[7] until everyone's footwork was perfect. In application development, we build and maintain test cases — call them QA or smoke tests or unit tests — to ensure that certain stimuli always yield the correct response. The red-team/blue-team analogue for this is what we've dubbed "microRT" or "µRT"[8]: routine simulations of attacker activity (reconnaissance scanning, exercising certain data loss paths, accessing honeytokens[9], etc.) that can be automated (with adequate variability to discourage defenders from cheating[10]) and whose response time can be measured and scorecarded.
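To make the µRT idea concrete, here is a minimal sketch of what one automated drill might look like: a honeytoken access whose detection latency gets recorded. Everything in it — the decoy path, the drill ID, the alert-polling stub — is a hypothetical illustration of the pattern, not a description of any real tooling.

```python
import random
import time
from datetime import datetime, timezone

# A minimal µRT drill (hypothetical): access a honeytoken, then measure how
# long the detection pipeline takes to notice. All paths, IDs, and the alert
# stub below are illustrative assumptions, not real tooling.

HONEYTOKEN_PATH = "/shares/finance/quarterly_forecast_FINAL.xlsx"  # decoy file

def trigger_honeytoken(path: str) -> datetime:
    """Simulate attacker activity by reading a few bytes of the decoy file."""
    stamp = datetime.now(timezone.utc)
    with open(path, "rb") as f:
        f.read(64)  # a small read is enough to fire file-access auditing
    return stamp

def alert_fired(drill_id: str) -> bool:
    """Stub: replace with a query against your SIEM/SOAR API."""
    return False

def wait_for_alert(drill_id: str, timeout_s: int = 900) -> float | None:
    """Poll for the alert; return detection latency in seconds, or None on a miss."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        if alert_fired(drill_id):
            return time.monotonic() - start
        time.sleep(5)
    return None  # a detection gap -- scorecard it as a miss

if __name__ == "__main__":
    time.sleep(random.uniform(0, 3600))  # jitter so defenders can't anticipate the drill
    fired_at = trigger_honeytoken(HONEYTOKEN_PATH)
    latency = wait_for_alert(drill_id="uRT-honeytoken-001")
    print(f"drill_start={fired_at.isoformat()} detection_latency_s={latency}")
```

Run on a jittered schedule, each drill emits a single latency number (or a miss) — exactly the kind of signal that can be trended and scorecarded over time.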

The main attraction of the red-team/blue-team dynamic is, of course, the major scrimmage[11]. While for many the excitement is in the real-time unfolding of the campaign, I've learned to equally appreciate the "instant replay"[12] that comes after the scrimmage is finished. Because our goal is, ultimately, to continually improve our defenses, I've imposed several requirements on my red and blue teams for any campaign. One is that we review, telestrator-style[13], the exact sequence of the campaign, focusing on what worked (and what didn't) for each team. Junctions in the attack graph where the red team had to pivot because resistance was encountered, i.e. because a countermeasure worked, are valuable to point out — they validate investments in defense whose ROI is otherwise difficult to pinpoint. The act of identifying which countermeasures are more effective on paper than in practice, and why (was it a misconfiguration? a coverage gap? a capability gap?), is also immensely valuable. So is the realization that, unlike the neat, linear progression of theoretical attacks espoused in common industry parlance[14], the majority of campaigns I've observed involved traversing a graph far more jagged in its nodes and edges.

[Image: an actual telestrator image[15].]
[Image: potential countermeasures to the campaign.]
[Image: an imaginary campaign.]
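For readers who prefer code to telestrator ink, here is one hypothetical way a campaign replay could be captured as a graph — nodes as footholds, edges as techniques, with the junctions where a countermeasure forced a pivot explicitly marked. The structure and the example names are illustrative assumptions, not artifacts from any real engagement.

```python
from dataclasses import dataclass, field

# Hypothetical replay record: nodes are footholds, edges are techniques.
# An edge marked blocked=True is a junction where a countermeasure worked
# and the red team had to pivot -- the ROI evidence the readout highlights.

@dataclass
class Edge:
    target: str
    technique: str          # e.g. "password spray", "SMB lateral movement"
    blocked: bool = False   # did a countermeasure stop this step?
    countermeasure: str = ""

@dataclass
class CampaignGraph:
    edges: dict[str, list[Edge]] = field(default_factory=dict)

    def add(self, source: str, edge: Edge) -> None:
        self.edges.setdefault(source, []).append(edge)

    def validated_countermeasures(self) -> list[str]:
        """Countermeasures that actually stopped a step during the campaign."""
        return [e.countermeasure
                for hops in self.edges.values()
                for e in hops if e.blocked]

g = CampaignGraph()
g.add("phished-workstation", Edge("file-server", "SMB lateral movement",
                                  blocked=True, countermeasure="internal segmentation"))
g.add("phished-workstation", Edge("vendor-vpn", "stolen vendor credentials"))
g.add("vendor-vpn", Edge("payment-network", "flat-network traversal"))
print(g.validated_countermeasures())   # ['internal segmentation']
```

A readout built from a record like this makes the "jagged graph" shape visible and, just as usefully, lets you enumerate the countermeasures whose ROI the campaign just validated.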

Another requirement is that the red and blue teams present the campaign readout together. This nuance has been important in establishing a fundamental guiding principle: while the red and blue labels are helpful in quickly illustrating to outsiders what roles certain individuals are playing, the true team we're all playing for is The Company. There can be no individual or team successes for red or blue — only joint victories or joint failures on the mission to improve our defenses through the feedback loop of constant scrimmaging.

Red team as culture change agent.

In my Security Culture Manifesto, I wrote of the importance of having strong tone-at-the-top from not just the CISO but a wide array of leaders. Too often, information security leaders skip this step: we lead with the what (we need a tool that does X) or the how (we need a team to deploy, tune, monitor, and act on output coming from tool X), forgetting that the why (the threat model, the attacker we're trying to disrupt) is intuitive to us but not necessarily to our peers and partners. Simon Sinek's fascinating-for-its-simplicity TED talk espouses the importance of successful leaders establishing "the why" first. In my experience, there are few techniques more effective at viscerally demonstrating the "why" of building a strong defense than inviting key IT leaders behind the curtain of the red-team/blue-team scrimmage.

The easiest, lowest-effort means of accomplishing this goal is simply to take the telestrator/instant-replay artifacts on the road: other IT leaders (especially those whose assets were targeted) receive the same brief as the CISO[16]. Hearing and seeing, firsthand, the successes and failures of our defenses during real attack scenarios has consistently resulted in higher levels of personal investment and accountability from those leaders. Tone at the top improved, and with it so did accountability for ownership and success of the countermeasures required in those leaders' operating environments. Viewed through the lens of an attack "instant replay", a concept like internal segmentation went from academic and nebulous to crystal-clear in the minds of several key leaders whose support I needed to enlist for a strategic change to our network design.

Taking it up a notch, live demonstrations for leaders (or mostly live, using videos for long-duration elements of a campaign such as account brute-forcing or network reconnaissance) can be an exciting, white-knuckle ride. Offering "ridealongs" for key leaders (essentially putting them in the passenger seat, or letting them take the wheel, for a campaign) is another creative approach I've seen succeed in spreading positive information protection culture. By all accounts, this has been as rewarding for the red team (in a large IT shop, it can be uncommon for engineers ever to spend an hour engaging a VP on their own turf[17], let alone on a regular basis) as it has been for the leaders who have climbed onboard.

Epilogue

In any of these scenarios, there is one rule I've imposed without room for negotiation: if a specific user ever becomes a campaign's victim (i.e. someone whose account or computer is targeted for compromise by design, someone whose password is shown on-screen, etc.), that victim can only be the CISO, i.e. me. Yes, I've observed the little blue light on my laptop camera suddenly blink on. I've also listened to MP3 playbacks of social engineering scams involving what wasn't actually my high school mascot, and watched as a mysterious, hidden hand entered seemingly reckless search terms into my browser[18] during a town hall. In every case, the joke was funny because it was on me. And in every case, we improved our information protection culture because we made it clear that the CISO's team was united with the rest of IT against the right adversaries: those who seek to harm our company and all the people depending on it.

P.S. While I instantly loved the story one of my former red-team leads told me of how, in Russia, the red team actually represents the defenders, I've been unable to find any documented evidence to corroborate this otherwise poetic anecdote... and I've tried.


  1. ... one that is subsequently undermined by an ironically tone-deaf denouement of, yes, a list of 5 tasks defenders can follow to manage their graphs. ↩︎

  2. For a fascinating look into red-team/blue-team dynamics outside of information security, reading up on Gen. Paul Van Riper's 2002 Millennium Challenge campaign is a good start [see here and here for competing perspectives]. ↩︎

  3. does this asset defend itself? ↩︎

  4. do this organization's collective defenses operate effectively? ↩︎

  5. Threat model redux: [diagram not reproduced here]. ↩︎

  6. See here for more of my general antipathy to sports analogies. ↩︎

  7. Yes, we ran a 5-1. ↩︎

  8. Tips of the hat to WRC and JMS who originally articulated these ideas to me, and MC who subsequently picked up the mantle and ran with it. ↩︎

  9. While often credited to Lance Spitzner, honeytokens have a far longer lineage, dating back not just to information security work done by Cliff Stoll (The Cuckoo's Egg is a must-read for appreciating the genesis of our industry) but also to the paper towns used by mapmakers for years to catch copyright thieves. ↩︎

  10. Hmmm... just like in sports. I may need to rethink my antipathy to this analogy after all. ↩︎

  11. Seriously, the sports analogies just plain work. ↩︎

  12. That's it, I officially no longer hate sports analogies. ↩︎

  13. Is it unreasonable to dream that, some day, I could have John Madden guest-host a red-team campaign readout? ↩︎

  14. The Lockheed Martin kill chain is the most common example of this oversimplification. ↩︎

  15. Courtesy Wikimedia. ↩︎

  16. Pro tip: once your red and blue team leads get fairly adept at this presentation, have them switch roles, i.e. the blue team lead describes what actions red took to further his/her campaign, while the red team lead counters with how the defenders utilized their telemetry to identify and contain the attack in progress. There is great power in this readout being delivered in the third, as opposed to the first, person. ↩︎

  17. Private to JJ: you have been a fantastic sport and a shining example on your ridealongs. Thank you! ↩︎

  18. Private to MC: did you know, as you were typing, what a search for "embarrassing photos of Erick Rudiak" would bring up? ↩︎