We CISOs all do this as part of our ubiquitous "security awareness" programs. We vigilantly remind our users about the dangers of being online, how we need their help in defeating the hackers and corporate spies that are out to get us, and how information security is their responsibility. Unfortunately, we're not doing this because it's effective, or even because we think it might be effective. We do it because if we didn't, and an adverse event then occurred, we would be held in contempt, both literally and legally. In other words, it would be our fault. Unfortunately, the corollary to that assertion is that, by putting unrealistic expectations in front of our users, we are essentially shifting the blame: if it's not our fault, then it must be theirs. And this is wrong.
The reasons this guidance misses the mark are manifold. Have you ever tried to hover over a link on your iPad or Galaxy S? More importantly, URL shorteners like bit.ly, t.co, and goo.gl have created a layer between our users and the sites they visit that is both trusted (they encounter these links hundreds of times daily on Twitter and Facebook) and intentionally opaque. Colbert is being funny here, but he nails the underlying information security issue: that shortened link is liable to take you just about anywhere. Meanwhile, the attackers are taking advantage of our users' familiarity with shortened links and the fact that, 99.9% of the time, nothing (visibly) bad has happened to them when they followed one*. In the threat model of us versus the attackers, we've set up a game with two outcomes:
- Tens of thousands of our corporate users diligently check every link they're thinking of following before they click (I'm using LinkPeelr in the screenshot above)... and make the right choice every time or...
- The bad guys get what they want.
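"Peeling" a shortened link is mechanically simple for software: issue the request, follow the 301/302 redirect chain without rendering the destination page, and show the user where the link actually leads. Here is a minimal sketch of the idea in Python, assuming the third-party requests library; the bit.ly URL is only a placeholder.

```python
# A minimal sketch of what a link "peeler" does: follow the redirect chain
# (the 301s/302s) without rendering the destination page, then report where
# the link really leads. Assumes the third-party requests library; the
# bit.ly URL below is only a placeholder.
import requests

def expand(short_url, timeout=5):
    """Return every hop a shortened link resolves through, final URL last."""
    resp = requests.head(short_url, allow_redirects=True, timeout=timeout)
    return [hop.url for hop in resp.history] + [resp.url]

if __name__ == "__main__":
    for url in expand("https://bit.ly/example"):  # placeholder link
        print(url)
```

This is roughly the check LinkPeelr and its peers perform on your behalf: trivial for software, tedious for a human doing it hundreds of times a day.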
Late last year, Wired and Ars reported on a credit card skimming case in California. They got a choice quote from the CEO of a cybersecurity company that utilizes "sophisticated psychological analysis [... to] help ensure that our customers derive maximum business and security benefits without imposing any extra burden on users." Their advice:
Everyone should always check any device in which they insert/swipe a credit/debit/ATM card, or to which they touch their card, to see if it looks like it may have been modified/covered.
We can see that the behavioral incentives here are stacked against the user making the right choice: their #1 goal is to check out quickly and be on their way, but we're admonishing them to inspect the point-of-sale terminal and make an expert decision on whether it has been tampered with before handing over their card. There are two reasons users can constantly fail to heed this advice without destroying the fabric of our financial system. First, the advice is hard for most people to act upon correctly. Second, the credit card companies have indemnified the users: the risk to the user of making the wrong choice at a POS terminal is usually nominal, $50 at most.
While it feels instinctively naive to expect our users to consistently follow this rule of thumb for their daily credit card transactions, we persist in asking them to do the same for their thousands of daily interactions with the Internet. In the spirit of never offering problems without solutions, here are some ideas:
- Browser vendors: let's build functionality like LinkPeelr's directly into the browser and safely follow those 302s to their logical end. As of this evening, a whopping 2,618 users (including me) have installed LinkPeelr in their browser. Adding up all the other extensions in this space, we're still, in the parlance of a good friend, waaaaaaay to the right of the decimal point on the percentage of people who have access to this space-age, link-expansion technology.
- Network administrators: if protecting those assets is important enough, let's agree to take the occasional support call from a user who can't get to a website and enforce IP reputation checks when people are accessing the Internet (a rough sketch of one such check follows this list). It's not prohibitively expensive, it addresses the root of the problem (the malicious website) rather than the symptom (a user clicking an unverified link), and it's way better than doing nothing.
- Sysadmins: let's also agree to segment our networks so that a single compromised device doesn't immediately lead to checkmate for our wily attacker. Nobody hesitates to require multi-factor authentication for critical in-house systems after they've been breached: it's silly to wait.
- Bad guys: please don't roll your own URL shorteners. This is why we cannot have nice things.
- CISOs: let's stop blaming the victims. Relying on every last user to make the right choice every single time is truly absurd. We will, of course, continue to educate our users; it would be negligent not to. But we also need to be aggressively transparent and educate our stakeholders, connecting the dots for them between the inconvenient controls we're proposing and the protections they will deliver. Let's help our business sponsors make informed decisions and arrive at the correct balance between user experience and safety for our organizations. Their ability to own the risk is far greater than that of the thousands of folks on their lunch breaks who just want to figure out what the heck Colbert is talking about, and whom we're holding accountable today.
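To make the network administrators' item above concrete, here is a rough sketch of one common reputation technique: a DNS blocklist (DNSBL) lookup. This is illustrative only; zen.spamhaus.org is used as a well-known example zone, and a real web gateway would consult its vendor's reputation feed inline rather than running a standalone script like this.

```python
# Illustrative only: check an IP against a DNS blocklist (DNSBL).
# zen.spamhaus.org is used as a well-known example zone; a production
# gateway or proxy would query its vendor's reputation feed inline instead.
import socket

def is_listed(ip, zone="zen.spamhaus.org"):
    """Return True if the IPv4 address appears on the given DNS blocklist."""
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)   # any A record back means "listed"
        return True
    except socket.gaierror:           # NXDOMAIN / no answer means "not listed"
        return False

if __name__ == "__main__":
    # 127.0.0.2 is the standard DNSBL test entry and should come back listed.
    print(is_listed("127.0.0.2"))
```

Pair a lookup like this with link expansion and the judgment call about a shortened link moves from the user's gut to the infrastructure, which is where it belongs.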
* At least not until someone gets busted for unwittingly participating in a DDoS. The Anonymous/RIAA kerfuffle made this all too real in January:
The trick snagged those who happened to click on a shortened link on social-media services, expecting information on the ongoing #opmegaupload retaliation for the U.S. Justice Department’s takedown of popular file sharing site Megaupload. Instead they were greeted by a Javascript version of LOIC — already firing packets at targeted websites by the time their page was loaded.