Erick Rudiak. Songwriter. Singer. Human. - Erick thought about it a lot, and still posted all this here.

September 12

My romance with DADFAD (oh, and a show!)

Hey, Saint Louis people: I'm (finally) doing a show in town! Wednesday, October 1st, I will be playing the inaugural show of a new songwriters-in-the-round series hosted by the too-good-to-be-true Steve Perron at The Lone Wolf Coffee Company. Stick that on your calendar with this handy link!

Among the many things I'll never be able to repay my mom for are the (literally) thousands of hours she spent teaching me piano. I learned my scales and my keys, how to pick notes and chords out by ear, and to enjoy performing far more than practicing. Fast-forward to my post-college malaise, and I used all that musical training to teach myself just enough guitar to put that ear training to work and learn some covers. Since I played most everything by ear, I had to learn all the chord shapes; since I was lazy, I only wanted to learn them once... so alternate tunings were not part of my repertoire.

It wasn't until years later that I learned to appreciate the amazingly rich palette of colors that someone like Peter Mulvey paints with, and I figured it was time to expand my musical vocabulary and mess with my strings. Still, I hazily remember being drawn to DADFAD largely because I wanted to minimize my level of effort in this endeavor, and the open chords and harmonics sure sounded great even when my fingers weren't on the fretboard. That didn't help much when my songwriter compass pointed me to a minor chord to start the chorus of Good In Bed, of course, so I learned the shapes for minor chords, dominant sevenths, and a handful of things I still can't reach in standard tuning. I'm still married to EADGBE, but it's fun to get out of town and see DADFAD on the side, like I did last week at the always-stellar Opening Bell in Dallas, in the video below.

Posted by erickru | Leave comment (0) »

March 12

Shine on, Lone Wolf, shine on.

Moving to a new city three years ago meant trying out all the open mics, hoping to find a "listening room" where the open mic was the attraction, rather than a distraction (ha!) from what most of the clientele wanted to do (eat, drink, be merry, etc.). It took a while, but it finally became crystal clear that, for St. Louis, The Lone Wolf on Tuesday nights was the only choice. The thing about open mics is that you can have the perfect night, the perfect venue, the perfect location, and the perfect crowd... and it will still be a train wreck without the right host. In fact, The Lone Wolf had a Tuesday night open mic a couple years back, which I attended and enjoyed; a couple times, there were even some other folks there besides me and then-host Matt M. Finding the right leader has made all the difference for the current incarnation of The Lone Wolf's open mic brilliance. The fact that this night has literally become a destination (I'm always impressed by how many itinerant musicians stop by) is a tribute to the marvelous job Steve P. is doing with this affair. What does an awesome open mic host do when they're being awesome?

  • Make it about the crowd - it's fine to open up with a song or two and get folks in the mood, but the host should not perform considerably more than the average visitor.
  • Manage the room - Steve has laminated flyers on every table that gently remind everyone about the culture of the open mic, which means he (almost) never has to remind the room to listen.
  • Maintain equity - as much as possible, everyone should get the same amount of time. Whatever the rules are (X songs, Y minutes, etc.), they must be enforced. Cliques and anarchy can be poisonous to an open mic.
  • Tweak the sound - not all instruments are created equal, and not everyone maintains the same mouth-to-mic distance. The awesome host really pays attention to levels for each performer and helps them sound their best.
  • Celebrate the performers - everyone has their own backstory and it's not necessarily obvious when they get onstage. You never know what obstacles someone had to overcome to walk up to the center of that room.

Congratulations to Steve and the whole Lone Wolf crew on your first anniversary. You have a genuine hit on your hands.


Elspeth Veronica live at the Lone Wolf's First Anniversary Extravaganza Bash.

More from my many happy nights out there: Erick's Lone Wolf hit parade.

Posted by erickru | Leave comment (0) »

December 24

My favorite CISOs aren't CISOs

A recruiter recently asked me, "who are the best CISOs you know?" I'm convinced it was a trick question, and that I failed the test, but the interesting thing about CISOs is that it is challenging at best to evaluate our performance (there, I said it). My annual objectives tend to focus on measurable risk reduction, in no small part because if CISOs are to take credit for the absence of breaches on our watch, we must also be culpable for any that happen. Yet the general consensus in our industry is that everyone is susceptible to a breach, given a sufficiently determined adversary.

When I think about people who have influenced my thinking about technology and how to manage its risks, there has been a series of world-class security minds with whom I've had the fortune to associate professionally, and who have opened my eyes to what computers can do in capable hands (if you've wrestled FreeBSD with me, defeated IVR password reset programs using investor call recordings, or calculated the risk reduction from deploying a certain operating system vendor's ridiculously effective free security tool, you know who you are). When I think about CISOs - leaders who have influenced my organizational awareness and helped shape my approach to risk management - the people whose ideas I keep circling back to don't have "CISO" anywhere in their resumes.

The first is Dan Ariely (full disclosure: Dr. Ariely sits on an advisory board for my employer). Ask ten CISOs about their biggest challenge, and all ten will riff on a common theme: getting their users to reliably do the right thing. Ariely's brilliant TED Talk illuminates a fundamental component of that equation: given a choice, people will do the most convenient thing, which is usually nothing. That means the most successful strategy for getting a large population to do the right thing is to make the right thing the default - in the TED Talk, it's organ donation; at my previous employer, it was 401(k) plan participation. It never ceases to amaze me how frequently we in Information Security fail to follow this simple recipe, and it never surprises me when I observe measurable reductions in risk from putting Ariely's model into action. Time and again, providing users with just-in-time awareness at the moment they're about to make a risky go/no-go decision, and opting them into the low-risk choice (don't send that email, don't copy that file to a portable memory stick, don't follow that email link to a website so new that our security vendor hasn't even categorized it yet) while keeping the alternative easy to execute when genuinely needed, has led to measurable reductions in high-risk behavior without alienating users to the point of revolt. I talk at much greater length about this topic in another post.

My second great influence has been Peter Sandman. Ultimately, every CISO's success or failure will be evaluated on their performance in a crisis - did their presence and leadership help turn a breach into a near-miss, or manage a mega-breach down to a non-terminal event? This, ultimately, is where Dr. Sandman has consistently provided clarity of thought that I've learned to appreciate through the years. It is his "Risk = Hazard + Outrage" equation that has replaced the "Risk = Impact x Likelihood" (or other ALE-based calculations) in my approach to strategic planning, and not just because ALE is impossible to derive in InfoSec, or because you never get a second chance to extrapolate Ponemon numbers to your CFO. His First Rule of Crisis Communication further crystallizes the approach modern CISOs need to adopt:

Outrage, not hazard, drives reputation. Even significant hazards are usually tolerated when outrage is low, and even insignificant hazards are usually rejected when outrage is high.

This insight is critical because, in the case of a breach, the first two questions company leaders will need answered are, "who did this to us?" and "why did our controls not anticipate this?" What they're really asking is, "is this event outrageous?" The CISO's job ultimately comes down to running a program where the outrageous is extremely unlikely to happen.

Compliance
Sandman's simple distillation also makes it quite clear why security programs driven primarily by compliance tend to struggle over time. Their threat model can be distilled down to a single actor: the auditor. A note about the threat model illustration to the right: it is adapted from many conversations I've had with the brilliant people at Threat Forward and distills a complex set of risk management factors into two core characteristics. First, is the threat someone with whom we can manage interactions? In other words, can we schedule our time with the threat actor, and will they meet us on mutually acceptable terms? Second, is the threat using tactics, techniques, and procedures (TTPs) with which we're familiar? Are they following a playbook we've seen, or are they developing custom tools or exploits that may exercise an edge case in our defenses and, potentially, defeat them? In the case of regulatory auditors, the threat falls squarely into the upper-right quadrant: managed interactions, known TTPs.
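To make those two axes concrete, here's a minimal sketch in Python of how that quadrant model might look as code - the names and example actors are mine, not Threat Forward's:

```python
from dataclasses import dataclass

@dataclass
class ThreatActor:
    name: str
    managed_interactions: bool  # can we schedule time with them on mutual terms?
    known_ttps: bool            # are they following a playbook we've seen?

    def quadrant(self) -> str:
        if self.managed_interactions and self.known_ttps:
            return "managed interactions, known TTPs (the auditor)"
        if self.managed_interactions:
            return "managed interactions, novel TTPs"
        if self.known_ttps:
            return "unmanaged interactions, known TTPs"
        return "unmanaged interactions, novel TTPs (the hard quadrant)"

for actor in (
    ThreatActor("regulatory auditor", managed_interactions=True, known_ttps=True),
    ThreatActor("commodity malware crew", managed_interactions=False, known_ttps=True),
    ThreatActor("nation-state", managed_interactions=False, known_ttps=False),
):
    print(f"{actor.name}: {actor.quadrant()}")
```

A compliance-driven program optimizes for exactly one of those four cells.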

A more robust threat model
With that in mind, what does a risk management program that incorporates Sandman's First Rule look like? For one, it anticipates a much broader population of potential threat actors. The defenses you need to manage the outrage factor when an insider is attacking are different from those you might need when you're being DoSed by a politically motivated group, or when you're being targeted by organized crime or a nation-state. Different organizations will be subject to different levels of outrage if they are bested by one of these adversaries. As adversaries' capabilities evolve, the CISO needs to set priorities and ensure that the defenses needed to fend off the adversaries whose success would be most outrageous mature at the same or a higher rate.
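A back-of-the-napkin sketch of that prioritization - with completely invented capability-growth and outrage numbers, not anything from Sandman's work - might look like this:

```python
# Rank adversaries so the defenses facing the most "outrageous" potential
# breaches get the fastest maturity targets. All scores are invented.

adversaries = {
    # name: (capability growth per year, outrage if they succeed, 1-10)
    "malicious insider":    (1.1, 9),
    "hacktivist DDoS crew": (1.2, 4),
    "organized crime":      (1.3, 7),
    "nation-state":         (1.4, 6),
}

# Defense maturity should grow at least as fast as adversary capability;
# outrage acts as the priority multiplier for scarce budget.
ranked = sorted(adversaries.items(),
                key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

for name, (growth, outrage) in ranked:
    print(f"{name}: defense maturity should grow >= {growth:.1f}x/yr "
          f"(priority score {growth * outrage:.2f})")
```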

There are certainly many truly great CISOs out there - my peers who are managing risk, optimizing spend, making distributed computing safe for millions of people. To the recruiter who asked about the best CISOs I know, I suppose the ultimate response is: they'll call you when they need you. For now, I'm keeping Ariely and Sandman as my co-pilots. They've gotten me this far.

Posted by erickru | Leave comment (0) »

August 13

Jeune fille in a white sundress

I've been pretty lucky - more than most - to have had a series of mentors adopt me over the years and give me the tools and the confidence to do more with whatever raw materials I've had at my disposal than I would have squeezed out on my own. For a couple years in the late '90s, Ralph Covert was one of those people.

Surely, Ralph was an acquired taste for some, and I've run into more than one of his former songwriting students who ached for unvarnished criticism but got unbridled accentuation of the positive instead. I soaked it all up. Ralph had a way of making you feel like you were in on the joke, and that the biggest difference between your musical career and his was his head start. One night at Fitzgerald's, Ralph noticed that I was on a date and had me come up to play a song for 500 people before a set break (I chose Baltimore Sun). Another night in Old Town, Ralph silently mouthed "the one true secret of songwriting" to a group of songwriting students hanging out after class just as the Brown Line El noisily rolled into its stop at Argyle.

It's no wonder that so many of those moments get relived in my own songs nowadays - the "terrible examples" in I Will Rule Your World, the "as he spoke, a train went rattling by" line in I Don't Watch The News, the "jeune fille in a white sundress" reference in Cupid (below) - all those little song-children of mine are lucky to have a pretty neat musical godfather. Thanks, Ralph!

Posted by erickru | Leave comment (0) »

May 23

Dallas!

Good times at the Opening Bell open mic this week. Kudos to Steve for running a tight ship - every city should have an originals-strongly-encouraged event like this. Good memories right here:



Practice Divorce
Creator Song

Posted by erickru | Leave comment (0) »

May 05

Time to stop blaming the victim, part deux.

It started a couple weeks ago with the almighty Bruce Schneier expressing a rather controversial opinion:
...[that] training users in security is generally a waste of time, and that the money can be spent better elsewhere

While I actually agree with Schneier's conclusion based on the current state of many/most security awareness programs, I cannot subscribe either to his view of the future (where users get trained annually and acquire their primary security mores from their coworkers) or to his drawing of an analogy between failed security awareness and ineffective behavior modification campaigns in the medical field. Since I am thoroughly enjoying my current stint in healthcare, this is a good time to remind everyone that the opinions expressed here are mine and mine alone, and have nothing to do with my gracious employer.

So, what is to be done about security awareness? We can start with why doing nothing at all won't work: it's legally indefensible. A company that never trained its staff to operate its equipment could hardly defend itself against an OSHA suit; a company that never trained its staff to use its computing systems safely is no better positioned against allegations of culpability for identity theft. In that analogy, the extra challenge in information security is that the Internet turns a warehouse forklift operator's job into something that looks more like a Nintendo game, with flaming fireballs launching from all angles. Because of that added factor, a lot of folks responded to Schneier by saying we either need more awareness or better campaigns (e.g. more engaging content) to reach our goals. On the face of it, each of these suggestions has merit, as each one should yield an incremental improvement... but in what?

What is the goal of security awareness? Ultimately, if it's aligned with the goal of an enterprise's information risk management program, it's to minimize the likelihood of losses of customer data and company IP (and associated losses of brand and reputation). An awareness program is important because the history of our industry has shown us that one of the most common causes of these losses is when employees' actions (e.g. opening malicious attachments, visiting websites which are spreading malware, etc.) lead to system compromise. This, in turn, has led to regulatory requirements to ensure that no company is so negligent as to fail to inform its users of the dangers lurking online. The thing about awareness programs that so many of us in the Information Security industry continue to miss is that, in order to succeed -- in order to effectively avoid the loss scenarios above -- our users have to make the correct decision 99.99% of the time, despite staggering odds:
  • They are faced with thousands of opportunities daily to click links, visit websites, and read emails
  • Their incentives are skewed highly in the direction of productivity (click through warnings, install that codec, open that attachment) to achieve their reward or complete their task
  • The bad guys are getting increasingly good at disguising their campaigns, while the users are expected to check an increasingly complex matrix of indicators of potential malice
  • The good guys are surprisingly inept when it comes to making good and evil behavior look obviously different to the end-user
Add it all up, and it becomes more and more clear that if enterprises are depending on thousands of users to make the right decision 99.99% of the time, we cannot succeed by attacking only the numerator of the "right decisions/all decisions" equation. That, unfortunately, is what most security awareness programs try to do: increase the numerator through more or better awareness. What's clear to me is that we should be doing more to attack the denominator.
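To put toy numbers on the argument, here's a minimal sketch (every figure invented for illustration) of why shrinking the denominator beats nudging the numerator:

```python
# Back-of-the-envelope sketch of the numerator/denominator argument.
# All numbers are illustrative assumptions, not measurements.

users = 10_000
error_rate = 0.0001  # users get it "right" 99.99% of the time

def expected_bad_clicks_per_day(risky_decisions_per_user: float) -> float:
    return users * risky_decisions_per_user * error_rate

# Awareness alone: every user still faces ~50 risky decisions a day.
print(expected_bad_clicks_per_day(50))   # ~50 incidents/day

# Attack the denominator: filters remove 99% of those decisions.
print(expected_bad_clicks_per_day(0.5))  # ~0.5 incidents/day
```

Even with heroic 99.99% user accuracy, the incident count is driven by how many risky decisions we let reach the user in the first place.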

And this is where I believe we can actually take a cue from healthcare. Schneier argues the following points:
One basic reason is psychological: we just aren't very good at trading off immediate gratification for long-term benefit. [...] Another reason health training works poorly is that it's hard to link behaviors with benefits. [...] If we can't get people to follow [simple food safety] rules, what hope do we have for computer security training?
Here's the thing: healthcare has figured some of these things out. Opting users into the right choice is extremely effective; Dan Ariely has a great example in the area of organ donation, and Hewitt Associates showed the same for 401k contributions. How can we do this for information security? Most of us do this already today by blocking or quarantining suspected spam at our borders, and blocking various categories of sites that are clearly dangerous from being accessed by our users, typically using some combination of Bayesian analysis and crowd-sourced intelligence about the contents of the Internet. Many perimeter security vendors offer the ability to assist users with decisions based on realtime information about a link's reputation; Google has taken this model even further by crowdsourcing executable binary analysis.
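For the curious, here's a toy sketch of the Bayesian piece of that border filtering: a handful of invented per-token probabilities combined into a spam score, with quarantine as the secure default. Real gateways train these probabilities on enormous corpora and fold in crowd-sourced intelligence; nothing below comes from any actual product.

```python
import math

# Toy naive-Bayes spam scorer. All token probabilities are invented.
p_word_given_spam = {"wire": 0.30, "invoice": 0.20, "lunch": 0.01}
p_word_given_ham  = {"wire": 0.01, "invoice": 0.05, "lunch": 0.20}
p_spam = 0.5  # assumed prior

def spam_score(tokens):
    # Combine per-token evidence in log-odds space, then map back
    # to a probability with the logistic function.
    log_odds = math.log(p_spam / (1 - p_spam))
    for t in tokens:
        if t in p_word_given_spam:
            log_odds += math.log(p_word_given_spam[t] / p_word_given_ham[t])
    return 1 / (1 + math.exp(-log_odds))

score = spam_score(["wire", "invoice"])
# Secure default: quarantine over the threshold; the user can release it.
print("quarantine" if score > 0.9 else "deliver", f"(score={score:.3f})")
```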

What else can we do when our front-line filters eventually let something bad through and our users might have to face a difficult security decision? Continue to attack the denominator! Google's Safe Browsing API warns users before they make a potentially disastrous decision, and many enterprises are now beginning to use a similar philosophy by intercepting sites exhibiting fast flux behavior and asking users to opt out of the good default, i.e. not visiting a site with a known bad or an unknown reputation. Awareness at the time of need is a great way of attacking that denominator: instead of asking our users to make thousands of good choices every day, presenting them with a small number of decisions (with a secure default) just as they're about to do something very risky, like following a link to a website that was just created in the last 24 hours and has no consensus reputation score on the Internet, can truly make a difference. In healthcare, this is analogous to laser-etching on bottle caps to remind the patient exactly when they need to refill their prescription to remain adherent. It's also reminiscent of an idea presented by David Laibson on a recent Freakonomics podcast focused on fighting the obesity epidemic:
Imagine you had a bracelet that a kid wore, or an adult wears, if I had this it would affect my behavior. And as I go into a meal it starts registering 200, 700, 900, 1,400 calories. And as it tells me I'm eating more and more in this feeding episode it starts to change its color from green, to yellow, to orange, to red, and then it starts pulsing red, red, red, red once you get way out of range.
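Back on the security side of the analogy, the just-in-time, secure-default pattern might look something like this sketch. The domain-age and reputation inputs are hypothetical stand-ins for what a perimeter vendor's feed would supply:

```python
from datetime import timedelta
from typing import Optional

def gateway_decision(domain_age: timedelta, reputation: Optional[str]) -> str:
    """Decide what the user sees when they click a link."""
    if reputation == "known-bad":
        return "block"  # no user decision needed at all
    if domain_age < timedelta(hours=24) or reputation is None:
        # Secure default: don't visit. The user gets a short, specific
        # warning at the moment of risk and may opt out of the default.
        return "interstitial: default=do-not-visit, override=allowed"
    return "allow"

print(gateway_decision(timedelta(hours=3), None))           # brand-new domain
print(gateway_decision(timedelta(days=900), "categorized")) # established site
```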

I've become a big fan of this approach as a supplement to a healthy, "traditional" awareness program, for a couple of reasons.

Pretty darned effective awareness campaign.

It greatly improves our odds of success, in no small part by reducing the time delta between stimulus and response (in the same Freakonomics episode, Laibson suggests the cutoff for efficacy is below "10 minutes"). While there hasn't been a lot of academic research done into the effectiveness of security training, what's out there validates our suspicions: 20% of the basics are forgotten within 60 days of awareness education delivery. Awareness that is mandatory and disconnected from the risky behavior it is intended to address is unlikely to produce the 99.99%+ success rates that we need. It serves a strong purpose in the compliance arena but, ultimately, it also has the unintended effect of blaming the user for failing to follow a complicated protocol when they become the victim of an attack. What I like about just-in-time awareness for the riskiest behaviors is what I like about the signage (photo above) on the interstates in St. Louis when it rains: it reminds me of the rules I need to follow just when an infraction is most likely to occur, and not when I'm least likely to internalize them - which is what happens in late December of every year, when we are lucky to spot one of those irrelevant-looking newspaper articles reminding us of the new rules going into effect on New Year's Day.

So, to all my friends in IL, don't click any links in emails you didn't solicit, and remember to avoid popping wheelies or ordering sharkfin soup. I sure hope you get this message in time.

Posted by erickru | Leave comment (0) »

April 23

Unplugged vs. actually unplugged

Holding the attention of a medium-sized room can be tricky, especially in an open mic setting. The venue has a PA, of course - few people would come to an open mic without... a mic. For a lot of good reasons (PA volume, spotlight avoidance, etc.), the audience tends to form a U-shape with the stage at the top of the "U" to maximize distance from the PA and, by extension, the performer. A couple days back at The Wolf, this was exactly what transpired. I tried to neutralize the effect by completely unplugging, and for the most part, I thought it worked... except on the stoic gentleman with the silver hair who gave exactly zero <insert-plural-expletive(s)-here>. Off-camera, though, I was thoroughly entertaining that crowd with my wacky I'll-show-you-unplugged-this-ain't-MTV antics. Really. Here it is, for the record.

The Girl With The Shoulderblade Tattoo, unplugged:


I Don't Watch The News, "unplugged":


Posted by erickru | Leave comment (0) »

November 18

Déjà vu

Every six months or so, I find myself having the same discussion with someone in the InfoSec community:

Someone: _____ is crazy, they want to run a sensitive security app on a virtual machine. We have to have bare metal for our security apps!
Me: Why?
Someone: Duh! It's theoretically possible to jump between guests. If someone has a guest adjacent to ours, they might get into our security app.
Me: so... I need to go to the CIO and start getting all of our critical client data onto bare metal, right?
Someone: well, no, I wouldn't go that far. We just shouldn't put our critical security apps on VMs, that's all.
Me: ....

This scene plays out all too regularly, and it always ends up in a discussion of the number of non-theoretical attacks against virtualization, the size of the rest of the attack surface (OS, middleware, apps, etc.), and the type of adversary who would use a 0-day VM exploit on our organization - inevitably leading to a more rational conversation about server sizing, support options, and the other non-religious factors that typically go into an operational deployment discussion in a large enterprise.

All of this made recent research by Dr. Ari Juels and a team of co-conspirators from academia so interesting to me. Finally, the headlines screamed, scientists are proving all those hypotheticals, and we will have a real opportunity to discuss the risks of virtualization on a level playing field, based on research from RSA's Chief Scientist. Alas, the details have turned out to be less exciting. From Dr. Juels' post:
the attack results in complete compromise of one form of encryption in GnuPG. As demonstrated, the attack is fairly narrow: It targets one vulnerable application in a particular class of virtualized environment. (GnuPG relies on a cryptographic package called libgcrypt that lacks well-established side-channel countermeasures.) It's also fairly involved, requiring heavyweight use of machine learning, among other things.

Reading the original paper is illuminating as well. In it, the authors give credit to the crypto community for anticipating these attacks:
Recent versions of some cryptographic libraries attempt to prevent the most egregious side-channels; e.g., one can use the Montgomery ladder algorithm for exponentiation or even a branchless algorithm. But these algorithms are slower than leakier ones, legacy code is still in wide use (as exhibited by the case of libgcrypt), and proving that implementations are side-channel free remains beyond the scope of modern techniques. [...] Future Xen releases already have plans to modify the way interrupts are handled, allowing a VCPU to preempt another VCPU only when the latter has been running for a certain amount of time (default being 1ms). This will reduce our side-channel's measurement granularity, but not eliminate the side-channel.
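For the curious, the Montgomery ladder the authors mention is easy to sketch. The point is that it performs the same two modular multiplications for every bit of the exponent, so the amount of work doesn't vary with the secret's bits (a production-grade constant-time implementation would also replace the visible branch with a constant-time swap - treat this as an illustration only):

```python
def montgomery_ladder_pow(base: int, exponent: int, modulus: int) -> int:
    """Modular exponentiation via the Montgomery ladder."""
    r0, r1 = 1, base % modulus  # invariant: r1 == r0 * base (mod modulus)
    for i in reversed(range(exponent.bit_length())):
        if (exponent >> i) & 1:
            r0, r1 = (r0 * r1) % modulus, (r1 * r1) % modulus
        else:
            r1, r0 = (r0 * r1) % modulus, (r0 * r0) % modulus
    return r0

# Sanity check against Python's built-in three-argument pow():
assert montgomery_ladder_pow(7, 560, 561) == pow(7, 560, 561)
```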


It's worth noting that there is an important nuance at play in these discussions. Most often, we debate hypothetical attacks on systems when it comes time to apply a patch: vendor X releases a patch with some corresponding information about not having seen attacks in the wild. This is a very different brand of hypothetical than the discussion above. For years now, the information security community has seen that attackers are reverse-engineering exploits from vendor patches themselves - and that's relevant even if you believe that the vendor hasn't seen evidence of the attack in the wild.

In the case of any security vulnerability, the questions to ask are: who's able to exploit this (the threat), and what remediation options do I have (the countermeasure)? In most cases, if reconfiguration or other low-impact hardening (disabling unnecessary services, enabling built-in security features of available software, etc.) is available, it is preferable to patching, which is high-risk and high-impact. My immediate reaction after reading through the details of Dr. Juels & Co.'s research: breathe a sigh of relief that I'm not running an old version of libgcrypt, update my lockdown playbook for Xen to include checking the configurable values for interrupt handling, and go back to worrying about the thing I was worried about before reading these articles.

In this case as in the many others in which the attacks-on-hypervisors argument is evaluated, we come back to our basic threat model: are we going to spend considerable capital protecting against an attacker with an exploit that is still entirely theoretical? Or can we invest those security dollars in locking down our systems (i.e. selecting available hypervisors and algorithms that have been designed and proven attack-resistant) and more effectively defending against or detecting attacks from threats with a greater opportunity to exploit them: authenticated users, network neighbors, privileged administrators?

P.S. Private to BW: thank you for the good back-and-forth on this topic last week.

Posted by erickru | Leave comment (2) »

August 31

Five

Ever since I stumbled upon five.sentenc.es, it has become a bit of an obsession for me, and I really do count before I hit "send" on most of my work emails. I recently realized that I've mentally adjusted the method ever so slightly in my day-to-day: five is no longer a hard-and-fast, pass/fail rule. The goal is: 1 sentence for every level down the company org chart, beginning with the CEO, whose target is... one sentence. That means my CFO and EVP of HR should rarely get more than two sentences from me, three for my SVP of Sales, etc. Confession: yes, I do cheat; using semicolons; but never more than once in any sentence.
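For fellow nerds, the whole heuristic fits in a few lines - the titles and levels below are invented for illustration:

```python
# One sentence per level down the org chart, starting from the CEO.
ORG_LEVELS = {"CEO": 0, "CFO": 1, "EVP of HR": 1, "SVP of Sales": 2}

def sentence_budget(title: str) -> int:
    return ORG_LEVELS[title] + 1

for title in ORG_LEVELS:
    print(f"{title}: at most {sentence_budget(title)} sentence(s); "
          "semicolons optional, but never more than one per sentence")
```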

Posted by erickru | Leave comment (0) »

June 17

Nostalgia, revisited

Crazy to think it's been more than a year since I played my Farewell-to-Chicago show. I finally had the opportunity to go back and splice apart some video for y'all. YouTube, take it away!

Twins
Box of Kittens
Immigrant Blues
Sister Mary Catherine, Rock-and-Roll Nun
The Girl with the Shoulderblade Tattoo
Elspeth Veronica

P.S. I'll be updating this post with more videos from that delightful night as post-production progresses.

Posted by erickru | Leave comment (0) »