
Paranoid Penguin

Mental Laziness and Bad Dogma to Avoid

Mick Bauer

Issue #179, March 2009

Peer pressure is no substitute for common sense.

Gentle readers, I try not to rant at you, really I do. You turn to my column for practical, reliable tips on getting complex security-related tools to work, and judging from the e-mail messages you send me, most of the time I deliver that.

But, I'm human, and now and then, I get really tired of dealing with mental laziness and dogma. It's not because I'm some sort of purist—quite the contrary. Rather, it's because it's impractical. Each of us security geeks has a limited amount of energy and political capital, and we can't afford to squander it on positions we can't back up with compelling, plausible risk and threat drivers.

Similarly, although I've got tremendous sympathy for nongeeks who strictly use computers as tools, and who find it (rightly) unreasonable to have to know as much as a system administrator just to be able to print their spreadsheets, Internet use has its price. If you're going to commingle your data with that of practically the entire rest of the world, you need to think about risks now and then, and you need to take the time to learn some simple precautions.

So this month, I need to vent just a little bit about some nagging bits of information security dogma to which security practitioners sometimes cling, and some examples of mental laziness in which end users (especially “power users”) sometimes indulge. Your opinions may differ (widely) from mine, and if you take strong exception to any of this, I encourage you to post comments to the Web version of this article or e-mail me directly.

In Defense of Dogma

Before I begin the rant proper, let me acknowledge that to a point, dogma can be useful, in the same way that a parent may now and then find it useful to tell a cantankerous child “the answer is no, because I said so”.

Life is short, information security is complicated, and we don't always have the luxury of explaining every rule to every user's satisfaction. Sometimes, it seems to me, it's perfectly appropriate to say, “You can't do that because it violates corporate security policy.” The real question is, “Is that a defensible policy?”

So, perhaps my point is not that there is no place in the world for information security dogma, but rather that dogma existing only for its own sake is useless. If we can't back up a policy, practice or other security requirement with compelling, risk-based justification, we will fail.

This month's column, therefore, is about some wrong ideas that have somehow ended up being treated as immutable truth among some of my peers, but whose rationales are questionable and tend to cause more harm than good. And, because I don't want anyone to think I'm unduly biased against my colleagues, I'll give equal time to the aforementioned examples of end-user mental laziness as well.

Bad Dogma 1: Changing All Your Passwords Monthly Is Really Important

Consider hapless Hapgood, a typical corporate computer user. At work, Hapgood has to keep track of six different user accounts, each with slightly different password-complexity rules: system A requires a minimum of eight characters containing uppercase and lowercase, punctuation and numbers; system B allows only seven-character passwords, doesn't allow punctuation and so forth.

Due to corporate security policy, within any given 60-day period, Hapgood must change all six passwords—a couple of them twice. If Hapgood starts choosing passwords that are easy for him to remember but not very hard to guess (for example, his own name with a capital H and zeroes instead of Os), can you really blame him?

I wouldn't. But, which do you suppose is more dangerous: choosing a bad password, or leaving a good password alone for, say, 90 days instead of 30?

Naturally, that depends on what you're worried about. If you're worried about brute-force password attacks in which an attacker cycles through all possible passwords for a given user account, then the more randomized the password, the less likely it will turn up in the password “dictionaries” many attackers employ. In that scenario, short password lifetimes will lower the chance that any given password will be cracked before it expires. But, the password shouldn't be very easily cracked if it's sufficiently complex to begin with. So as it happens, enforcing good password complexity rules is a better protection against brute-force password attacks.
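
If you want numbers behind that claim, the arithmetic is simple. Here's a quick back-of-the-envelope comparison in Python; the guessing rate is purely an assumption for illustration (a generous figure for an online attack against a login service):

    # Back-of-the-envelope keyspace arithmetic. The guessing rate is an
    # assumed figure for illustration, not a benchmark.
    GUESSES_PER_SECOND = 1000  # generous for an online attack

    policies = {
        "7 lowercase letters": 26**7,
        "8 mixed-case letters": 52**8,
        "8 mixed-case + digits + punctuation": 94**8,
    }

    for name, keyspace in policies.items():
        days = keyspace / GUESSES_PER_SECOND / 86400
        print(f"{name}: {keyspace:.2e} passwords, ~{days:,.0f} days to exhaust")

At that rate, the seven-lowercase-letter keyspace is exhausted in roughly 93 days, comfortably within even a 90-day password lifetime, whereas the fully complex eight-character keyspace holds out for about 193,000 years. The complexity rule, not the expiration date, is doing the real work.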

What if you're worried about Hapgood being fired, but connecting back into the network via a VPN connection and logging back in to his old accounts, in order to exact revenge? Won't a 60-day password lifetime minimize the amount of havoc Hapgood can wreak?

This question is best answered with two other questions. First, why should Hapgood still have access for even one day after being fired? Second, if Hapgood's accounts haven't all been de-activated within 60 days, what's to stop him from simply changing his passwords once they expire?

Obviously, in this scenario, password aging is the wrong control on which to fixate. The terminated-employee conundrum can be addressed only by good processes—specifically, the prompt and universal disabling of every terminated employee's account.
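
What does “prompt and universal” look like in practice? Automation helps. Here's a minimal Python sketch of the idea; the host inventory is a placeholder, and it assumes SSH key-based root access to each Linux host and the standard shadow-suite tools (usermod and chage):

    #!/usr/bin/env python3
    """Lock a terminated user's account on a list of Linux hosts."""
    import subprocess
    import sys

    HOSTS = ["server-a", "server-b"]  # placeholder inventory

    def lock_account(host, user):
        # usermod -L disables the password; chage -E 0 expires the account
        # outright, which also blocks SSH-key and other non-password logins.
        for cmd in (["usermod", "-L", user], ["chage", "-E", "0", user]):
            subprocess.run(["ssh", f"root@{host}"] + cmd, check=True)

    if __name__ == "__main__":
        user = sys.argv[1]
        for host in HOSTS:
            lock_account(host, user)
            print(f"locked {user} on {host}")

The point isn't this particular script; it's that walkout procedures belong in code or in your identity-management system, not in somebody's memory.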

There's a third risk people hope will be mitigated by password lifetimes—that a password may be eavesdropped over the network, read off the sticky note attached to someone's monitor or keyboard or otherwise intercepted. This risk is probably more credible than brute-force attacks and user attrition combined.

But even here, if attackers can abuse someone else's access privileges for 29 days without fear of detection, there's probably something seriously wrong with how you're doing things. Furthermore, it may be possible for such attackers to install a keylogger, rootkit or other malware that allows them to intercept the new password, once the intercepted one expires and its rightful owner changes it.

Passwords should, of course, have finite lifetimes. User name/password authentication is a relatively weak form of authentication to begin with, and requiring people to refresh their passwords from time to time certainly makes the attacker's job a little harder. But, compared to password complexity rules and good walkout procedures, password aging achieves less and affects end-user experience more negatively.

Bad Dogma 2: All Digital Certificates Should Expire after One Year

On a related note, consider the digital certificate, which consists of a couple of key pairs (one for signing/verifying, another for encrypting/decrypting), identity information (such as your name and organization) and various Certificate Authority signatures. Conventional wisdom says that every digital certificate must have an expiration date, the shorter the better, in case the certificate's owner unexpectedly leaves your organization or the private key is somehow compromised. The consequences of either event could include bogus signatures, illicit logins or worse.

This worst-case scenario assumes two things: first, that if the certificate's owner leaves your organization, it may take a while for the certificate to be revoked (and for news of that revocation to propagate to the systems that use certificates); and second, that the certificate's passphrase can be guessed or brute-force cracked easily.

But, both of these are solvable problems. If you're deploying a Public Key Infrastructure in the first place, you need to configure all systems that use certificates either to download and use Certificate Revocation Lists (CRLs) from your Certificate Authority automatically or, better still, to use the Online Certificate Status Protocol (OCSP). Many events besides the arrival of some arbitrary expiration date can create the need to revoke a certificate, and managing your certificates diligently and using CRLs or OCSP are the only reliable means of reacting to those events.
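
For the curious, here's roughly what an OCSP status check looks like programmatically. This is a sketch using the third-party Python cryptography package; the file names and responder URL are placeholders for your own PKI's values:

    import urllib.request

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    # Load the certificate to check and its issuing CA's certificate.
    with open("server.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    with open("ca.pem", "rb") as f:
        issuer = x509.load_pem_x509_certificate(f.read())

    # Build a DER-encoded OCSP request for this certificate's serial number.
    builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
    request = builder.build().public_bytes(serialization.Encoding.DER)

    http_req = urllib.request.Request(
        "http://ocsp.example.com",  # placeholder: your CA's responder URL
        data=request,
        headers={"Content-Type": "application/ocsp-request"},
    )
    with urllib.request.urlopen(http_req) as response:
        result = ocsp.load_der_ocsp_response(response.read())

    print(result.certificate_status)  # GOOD, REVOKED or UNKNOWN

The moral: a revocation check is a live question to your CA, which can say “no” the day an employee leaves, rather than a date baked into the certificate a year in advance.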

Regarding certificate passphrases, setting passphrase complexity requirements is generally no harder for digital certificates than for system passwords. The situation in which it can be most challenging to protect certificate passphrases is when machines use certificates (for example, Web server SSL/TLS certificates), which usually requires either a passphrase-less certificate or a certificate whose passphrase is stored in clear text in some file to which the certificate-using process has read-access privileges.

The bad news is, in that scenario, renewing the server's certificate every year doesn't solve this problem. If it's possible for people to copy a server's certificate once, it's probably possible for people to do so every year, every six months or as often as they need or like. The solution to this problem, rather, is to protect the certificate at the filesystem/OS level, especially its passphrase file, if applicable.
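
A simple way to keep yourself honest on that front is to audit the key and passphrase files' ownership and permissions regularly. Here's a minimal Python sketch; the paths are examples, not any particular server's defaults:

    import os
    import stat

    # Placeholder paths: your server key and, if applicable, its passphrase file.
    KEY_FILES = ["/etc/ssl/private/server.key", "/etc/ssl/private/passphrase.txt"]

    for path in KEY_FILES:
        st = os.stat(path)
        mode = stat.S_IMODE(st.st_mode)
        # Flag anything that is group/world accessible or not owned by root.
        if mode & 0o077 or st.st_uid != 0:
            print(f"WARNING: {path} is mode {oct(mode)}, owner uid {st.st_uid}")
        else:
            print(f"OK: {path}")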

Does that mean certificates shouldn't have expiration dates? Of course not! I'm simply saying that, as with password aging, if expiration dates are your only protection against user attrition or certificate compromise, you're in big trouble anyhow. Why not employ a variety of protections, ones you ought to be employing anyhow, that allow you to relax a little on expiration dates?

Bad Dogma 3: E-Mail Encryption Is Too Complicated for Ordinary People to Use

For as long as I've worked on information security in large corporations, I've been told that e-mail encryption is only for geeks, and that business users lack the technical skills necessary to cope with it. I've always found this sort of amusing, given that it's usually us geeks who accuse business people of having too-short attention spans.

But, is using PGP or S/MIME really that much more complicated than using, say, LinkedIn? I know which I would rather spend time on! (I am, however, an admitted geek.)

How much of the inconvenience in e-mail encryption really falls on end users? Nowadays, I would argue, very little, especially if your organization can support a PGP key server or can incorporate S/MIME certificates into an MS-Exchange Global Address List.

In practice, key management tends to be the biggest headache with e-mail encryption—specifically, getting a valid/current digital certificate or PGP key for each person with whom you need to communicate. But, this need not be a big deal if you set things up carefully enough on the back end and give your end users local settings that allow their mail client software to search for, download and update their local copies of other people's keys transparently.
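
Under the hood, that transparent lookup is often nothing fancier than the HTTP Keyserver Protocol (HKP). Here's a sketch of fetching a correspondent's PGP public key in Python; the keyserver name and e-mail address are placeholders:

    import urllib.parse
    import urllib.request

    KEYSERVER = "http://keys.example.com:11371"  # placeholder HKP server

    def fetch_key(email):
        # HKP is plain HTTP: /pks/lookup with op=get returns an
        # ASCII-armored public key block for the matching user ID.
        query = urllib.parse.urlencode({"op": "get", "search": email})
        with urllib.request.urlopen(f"{KEYSERVER}/pks/lookup?{query}") as resp:
            return resp.read().decode()

    print(fetch_key("hapgood@example.com"))

If mail clients can do that for users automatically, the “key management is too hard” objection mostly evaporates.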

One can go too far, of course, in coddling end users. I've seen organizations issue keys without passphrases, which makes those keys trivially easy to copy and abuse. I've seen other organizations issue passphrase-protected keys, but then send people their new key's initial passphrase via unencrypted e-mail! Obviously, doing things like that can defeat the whole purpose of e-mail encryption.

My point, really, is that modern e-mail encryption tools, which typically support GUI plugins for popular e-mail readers, such as MS Outlook and SquirrelMail, are exponentially simpler to use than the command-line-driven tools of old. Given a modicum of written documentation—a two-page instruction sheet is frequently enough—or even a brief computer-based-training module, nontechnical users can be expected to use e-mail encryption.

This is too valuable a security tool for so much of the world to have given up on!

There, I'm starting to feel better already! But, I'm not done yet. On to some mental laziness that never fails to annoy and frustrate.

Mental Laziness 1: Firewalls Protect You from Your Own Sloppiness

Your DSL router at home has a built-in firewall you've enabled, and your corporate LAN at work has industrial-strength dedicated firewalls. That means you can visit any Web site or download any program without fear of weirdness, right?

Wrong.

In the age of evil-twin (forged) Web sites, cross-site scripting, spyware and active content, you take a risk every time you visit an untrusted Web site. Your home firewall doesn't know or care what your browser pulls, so long as it pulls it via RFC-compliant HTTP or HTTPS. Even Web proxies generally pass the data payloads of HTTP/HTTPS packets verbatim from one session to the other.

This means the site you're visiting may transparently push hostile code at your browser, such as invisible iframes, ActiveX controls or JavaScript (depending on how your browser is configured), or your data may be redirected via cross-site scripting and request forgery.

Firewalls are great at restricting traffic by application-protocol type and source and destination IP address, but they aren't great at detecting evil within allowed traffic flows. And nowadays, RFC-compliant HTTP/HTTPS data flows carry everything from the hypertext “brochureware” for which the Web was originally designed to remote desktop control sessions, full-motion videoconferencing and pretty much anything else you'd care to do over a network.
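
To see why, consider what a packet filter actually gets to look at. This toy Python rule-matcher is purely illustrative, but it makes the point: the match criteria are addresses and ports, and the payload never enters into it:

    import ipaddress

    RULES = [
        # (source network, destination network, destination port, verdict)
        ("10.0.0.0/8", "0.0.0.0/0", 80, "ALLOW"),
        ("10.0.0.0/8", "0.0.0.0/0", 443, "ALLOW"),
    ]

    def verdict(src, dst, dport, payload):
        for src_net, dst_net, port, action in RULES:
            if (ipaddress.ip_address(src) in ipaddress.ip_network(src_net)
                    and ipaddress.ip_address(dst) in ipaddress.ip_network(dst_net)
                    and dport == port):
                return action  # note: payload is never examined
        return "DENY"

    # A perfectly RFC-compliant request for hostile JavaScript sails through:
    print(verdict("10.1.2.3", "203.0.113.7", 80, b"GET /evil.js HTTP/1.1"))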

With or without a firewall, you need to be careful which sites you frequent, which software you install on your system and which information you transmit over the Internet. Just because your nightclub has a bouncer checking IDs at the door doesn't mean you can trust everybody who gets in.

Mental Laziness 2: Firewalls Need to Block Only Inbound Traffic

In olden times, firewalls enforced a very simple trust model: “inside” equals “trusted”, and “outside” equals “untrusted”. We configured firewalls to block most “inbound” traffic (that is to say, transactions initiated from the untrusted outside) and to allow most “outbound” traffic (transactions initiated from the trusted inside).

Aside from the reality of insider threats, however, this trust model can no longer really be applied to computer systems themselves. Regardless of whether we trust internal users, we must acknowledge the likelihood of spyware and malware infections.

Such infections are often difficult to detect (see Mental Laziness 3) and frequently result in infected systems trying to infect other systems, trying to “report for duty” back to an external botnet controller or both.

Suppose users download a new stock-ticker applet for their desktops. But, unbeknownst to them, it serves double duty as a keystroke logger that silently captures any user names, passwords, credit-card numbers or Social Security numbers it detects being typed on the users' systems and transmits them back out to an Internet Relay Chat server halfway around the world.

Making this scenario work in the attacker's favor depends on several things happening. First, users have to be gullible enough to install the software in the first place, which should be against company policy (controlling who installs desktop software, and why, is an important security practice). Second, the users' company's firewall or outbound Web proxy must fail to scan downloads for malicious content (not that it's difficult for an attacker to customize this sort of thing in a way that evades detection).

Finally, the corporate firewall must be configured to allow internal systems to initiate outbound IRC connections. And, this is the easiest of these three assumptions for a company's system administrators and network architects to control.

If you enforce the use of an outbound proxy for all outbound Web traffic, most of the other outbound Internet data flows your users really need probably will be on the back end—SMTP e-mail relaying, DNS and so forth—and, therefore, will amount to a manageably small set of things you need to allow explicitly in your firewall's outbound rule set.
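
As a sketch of how small that rule set can be, here's a Python snippet that prints (rather than applies) a default-deny outbound iptables policy from an explicit allow list; the addresses are placeholders:

    # Placeholder allow list: desktops may reach only the Web proxy;
    # only the designated relay and forwarder may send SMTP and DNS out.
    ALLOW = [
        # (protocol, source, destination, port, comment)
        ("tcp", "10.0.0.0/8", "10.1.1.10/32", 3128, "desktops to Web proxy"),
        ("tcp", "10.1.1.25/32", "0.0.0.0/0", 25, "SMTP relay"),
        ("udp", "10.1.1.53/32", "0.0.0.0/0", 53, "DNS forwarder"),
    ]

    for proto, src, dst, port, comment in ALLOW:
        print(f"iptables -A FORWARD -p {proto} -s {src} -d {dst} "
              f"--dport {port} -j ACCEPT  # {comment}")
    print("iptables -A FORWARD -j DROP  # default deny: everything else")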

The good news is, even if that isn't the case, you may be able to achieve nearly the same thing by deploying personal firewalls on user desktops that allow only outbound Internet access by a finite set of local applications. Anything that end users install without approval (or anything that infects their systems) won't be on the “allowed” list and, therefore, won't be able to connect back out.

Mental Laziness 3: If Your Machine Gets Infected with Malware, You'll Know

Some of us rely on antivirus software less than others. There are good reasons and bad reasons for being more relaxed about this. If you don't use Windows (for which the vast majority of malware is written), if you read all your e-mail in plain text (not HTML or even RTF), if you keep your system meticulously patched, if you disconnect it from the network when you're not using it, if you never double-click e-mail links or attachments, if you minimize the number of new/unfamiliar/untrusted Web sites you visit, and if you install software that comes only from trusted sources, all of these factors together may nearly obviate the need for antivirus software.

But, if none of that applies, and you assume that in the case of infection you'll notice right away and can simply re-install your OS and get on with your life, you're probably asking for trouble.

There was a time when computer crimes were frequently, maybe mostly, motivated by mischief and posturing. Espionage certainly existed, but it was unusual. And, the activities of troublemakers and braggarts tend, by definition, to be very obvious and visible. Viruses, worms and trojans, therefore, tended to be fairly noisy. What fun would there be in infecting people if they didn't know about it?

But, if your goal is not to have misanthropic fun but rather to steal people's money or identity or to distribute spam, stealth is of the essence. Accordingly, the malware on which those two activities depend tends to be as low-profile as possible. A spambot agent will generate network traffic, of course—its job is to relay spam. But, if in doing so it cripples your computer's or your LAN's performance, you'll detect it and remove it all the more quickly, which defeats the purpose.

So, most of us should, in fact, run and maintain antivirus software from a reputable vendor. Antivirus software probably won't detect the activity of malware whose infection it failed to prevent (there will always be zero-day malware for which there is no patch or antivirus signature), but it will be infinitely more likely to prevent infection than wishful thinking is.

Conclusion

Thus ends my rant. Now that I've got it out of my system, next month, it's back to more technical stuff. Until then, be safe!

Mick Bauer (darth.elmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the “Network Engineering Polka”.
