Tuesday, May 13, 2008

The problem with our defense technology, part 1

At best, our defensive technical controls do nothing but scrape off the chunky foam of crud floating on the surface of the Internet. At worst, they represent exercises in futility we do primarily so we don't look stupid for not doing them. Consider the tsk-tsking that goes on if an organization gets hacked and it's revealed they don't have adequate encryption or haven't patched some workstations. That's what I mean by stupid. Of course, if anyone gets hacked, there will be tsk-tsking anyway. Anyway, what have we got?

Basic technical controls
I am going to start with basic security technology, which represents the universal, low-water mark for security controls. Basic security tools are what everyone implements to achieve "acceptable security" because that's what Management and the auditors expect. Usually when you want a tool that isn't on this list, you have to fight for resources because it's an unusual control that wasn't budgeted for or, worse, doesn't directly satisfy an audit requirement. Many of these tools have a low entry cost, but often entail a burdensome maintenance cost. In some organizations, these maintenance burdens outweigh the defensive value of the control.

If there's any universal, ubiquitous security control, it's the use of passwords. In fact, passwords are a decent, cheap way to provide basic access control. Manufacturers build passwords into nearly everything, so it's a safe bet you'll have them available to protect your systems. Where passwords veer off into something stupid we have to do is in the area of frequent password changing. The reasoning behind mandatory password changes is out of date, resting on an old fallacy about the time it takes to crack a password. Gene Spafford explains it better than I can: "any reasonable analysis shows that a monthly password change has little or no end impact on improving security!" Passwords can give some utility in exchange for relatively little overhead, provided you aren't mired in an audit-checklist organization.
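To put a rough number on that fallacy, here's a toy model of online password guessing. All the figures (keyspace, guess rate, windows) are invented for illustration, and the model is deliberately simplistic: the attacker guesses without repeats, and a rotation hands them a fresh password to start over on.

```python
# Toy model: does rotating a password every 30 days meaningfully reduce
# the odds of an online-guessing compromise over 90 days? All numbers
# below are illustrative assumptions, not measurements.

KEYSPACE = 26 ** 8          # 8 lowercase letters, a weak but common policy
GUESSES_PER_DAY = 10_000    # a throttled online guesser

def p_no_rotation(days):
    # Non-repeating guessing against one fixed password.
    return (GUESSES_PER_DAY * days) / KEYSPACE

def p_with_rotation(days, window):
    # Each rotation resets the attacker's progress; windows are independent.
    p_window = (GUESSES_PER_DAY * window) / KEYSPACE
    return 1 - (1 - p_window) ** (days // window)

print(f"90 days, no rotation : {p_no_rotation(90):.3e}")
print(f"90 days, 30-day reset: {p_with_rotation(90, 30):.3e}")
```

Run it and the two probabilities come out essentially identical, which is Spafford's point: when the keyspace dwarfs the guessing budget, resetting the attacker's progress every month changes nothing that matters.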

Network firewalls
In the past, the interchange most commonly heard regarding security went along these lines: "Are you secure?" "Yes, we have a firewall." "Great to hear." Luckily, we've progressed a little beyond this, but not far. Most firewalls I examined as an auditor were configured to allow all protocols outbound to all destinations. Add to that the numerous B2B connections, VPNs, and distributed applications. Then there are the gaping holes allowing unfiltered port 80 inbound to the web servers.
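A minimal first-match packet filter makes the problem concrete. The rule format and field names here are invented for the sketch, but the rulebase mirrors what I'm describing: everything allowed outbound, port 80 wide open inbound, deny at the bottom.

```python
# Sketch of a first-match packet filter with the rulebase I keep seeing.
# Rule format and field names are made up for illustration.

RULES = [
    {"direction": "out", "port": "any", "action": "allow"},  # all outbound
    {"direction": "in",  "port": 80,    "action": "allow"},  # the web "hole"
    {"direction": "any", "port": "any", "action": "deny"},   # default deny
]

def decide(direction, port):
    """Return the action of the first rule matching this packet."""
    for rule in RULES:
        if rule["direction"] in (direction, "any") and rule["port"] in (port, "any"):
            return rule["action"]
    return "deny"

print(decide("out", 6667))  # a bot phoning home over IRC: allow
print(decide("in", 80))     # any payload at all, as long as it's port 80: allow
print(decide("in", 3389))   # deny -- the part that looks like security
```

Notice that the filter never looks at the payload: the malicious port-80 request and the legitimate one are indistinguishable to it, which is why the handkerchief analogy below fits.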

When I was a kid, my family lived in Western Samoa. At the time, the local water system was pretty third world. My mom would tie a handkerchief around the kitchen water spigot. Once a day or so, she'd dump out a big lump of mud and silt, and then put on a clean hanky. After being filtered, she boiled the water so it would be safe for us to drink. That handkerchief? That's how I feel about firewalls. And people rarely boil what passes through their firewalls.

So I'll have to agree with Marcus Ranum, and the folks at the Jericho Forum, that firewalls are commonly over-valued as defensive tools.

Blacklisting Filters
Anti-virus, intrusion prevention, anti-spyware, web content filters... I lump all of these into the category of blacklisting filters. These types of controls are useful for fighting yesterday's battle, as they're tuned to block what we already know is evil. In the end, we know it's a losing battle. In his "Six Dumbest Ideas in Computer Security", Marcus Ranum calls this "enumerating badness." Now, I think there is some utility there for blacklisting filters. But at what cost? All of these controls require constant upkeep to be useful, usually in the form of licensed subscriptions to signature lists. These subscriptions are such moneymakers that many security vendors practically give away their hardware just so they can sell you the subscriptions. Annual fees aside, there's the additional burden of dealing with false positives and the general computing overhead these controls demand.
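The "enumerating badness" failure mode is easy to show in a few lines. The signatures below are made-up substrings standing in for a real signature database, but the structural flaw is the same: the filter can only block what someone has already catalogued.

```python
# Sketch of "enumerating badness": a signature blacklist blocks only
# what is already on the list. Signatures are invented for illustration.

SIGNATURES = {"EICAR-TEST", "known_worm_2007"}  # yesterday's battle, catalogued

def scan(payload: str) -> str:
    """Block if any known-bad signature appears in the payload."""
    return "blocked" if any(sig in payload for sig in SIGNATURES) else "allowed"

print(scan("fetch known_worm_2007.exe"))    # blocked: we knew about this one
print(scan("fetch novel_malware_2008.exe")) # allowed: the zero-day sails through
```

Every new piece of badness requires a new list entry, hence the perpetual subscription, while anything genuinely novel passes clean.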

Hey, raise your hand if you've ever had your AV software crash a computer. Uh huh. Now keep it up if it was a server. A vital server. Yes, my hand is raised too. But of course, you wouldn't dare run any system, much less a Windows system, without anti-virus. You'd just end up looking stupid, regardless of how effective it was.

Patch Management
Best Practices force most of us to pay lip service to performing patch management. Why do I say lip service? Because organizations rarely patch every box they should be patching. Mostly, by patch management we mean we're patching workstations; smaller organizations just turn on auto-update and leave it at that. But servers? Well, probably, if the server is vanilla enough. But no one is patching that Win2K box that's running the accounting system. And what about those point-of-sale systems running on some unknown flavor of Linux? Heck, what if you've got kludged-together apps tied together with some integration gateway software from a company that went out of business five years ago? What about all those invisible little apps that have been installed all over the enterprise by users and departments that you don't even know about? Are they getting patched within 30 days of the release of a critical vulnerability? Bet that firewall and IPS are looking real durn good right now.
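A 30-day patch SLA is simple enough to check mechanically, which makes the gap between policy and practice easy to see. The hostnames and dates below are invented; the deeper problem, of course, is that the boxes most likely to be overdue are the ones that never made it into the inventory at all.

```python
# Toy inventory check against a 30-day critical-patch SLA.
# Hostnames and dates are invented for illustration.

from datetime import date

PATCH_RELEASED = date(2008, 4, 8)
TODAY = date(2008, 5, 13)
SLA_DAYS = 30

# patch date, or None if the box has never been patched
inventory = {
    "workstation-042":  date(2008, 4, 10),  # auto-update did its job
    "web-server-1":     date(2008, 4, 20),  # vanilla enough to get patched
    "win2k-accounting": None,               # "don't touch it, it works"
}

def is_overdue(patched_on):
    """Overdue if unpatched past the deadline, or patched too late."""
    if patched_on is None:
        return (TODAY - PATCH_RELEASED).days > SLA_DAYS
    return (patched_on - PATCH_RELEASED).days > SLA_DAYS

overdue = sorted(h for h, p in inventory.items() if is_overdue(p))
print(overdue)
```

Running this flags only the Win2K box, which is precisely the machine nobody is willing to touch.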

My favorite part of Best Practices is to watch the patch management zealots duke it out with the change management zealots. "We need this service pack applied to all workstations by Friday!" "No, we need to wait for the change window, and only after we've regression tested the patch." (To tell the truth, I'm on the change management side, but more on that later.)

Transmission encryption
Everyone knows that if you see the lock on a website, it must be safe. We've been drilling that into lay people's heads for years. Yes, we need to encrypt anytime we send something over the big bad Internet. But what is the threat there, really? We're encrypting something in transit for a few microseconds, a very unlikely exposure, since the bad guy has to be waiting somewhere on the line to sniff the packets and read our secrets. Consider how much trouble the American government has to go through just to snoop on our email. Unless the bad guy is sitting at the ISP (which I'm not saying is unreasonable), interception is difficult.

Now consider this bizarre situation: you put up a website with a form where the customer enters a credit card number and hits submit. Wait, there's no lock on the site; I'd be sending the card number in the open! Oh dear. No, actually, the website has put the SSL encryption on the submission itself, so the card number gets encrypted in transit. Of course, your browser can't show you a lock for this. Now consider the opposite: an SSL website, showing the lock and everything, where the submit button activates an unencrypted HTTP post. So now you have something that looks safe but isn't. And yes, as a web app tester, I've seen this before.
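The underlying confusion is that the lock reflects the page's scheme, while what actually protects the card number is the scheme of the form's submit action. A quick sketch with the standard library's URL helpers shows the two cases; the URLs are invented examples.

```python
# The lock icon tracks the page URL, but confidentiality of a form POST
# depends on the form action's URL. Example URLs are invented.

from urllib.parse import urljoin, urlsplit

def post_is_encrypted(page_url: str, form_action: str) -> bool:
    """Resolve the form action against the page, then check its scheme."""
    return urlsplit(urljoin(page_url, form_action)).scheme == "https"

# Looks unsafe (no lock), but the POST is actually encrypted:
print(post_is_encrypted("http://shop.example/checkout",
                        "https://shop.example/submit"))   # True

# Looks safe (lock shown), but the POST goes out in the clear:
print(post_is_encrypted("https://shop.example/checkout",
                        "http://shop.example/submit"))    # False
```

A check along these lines is exactly the kind of thing a web app tester runs against every form on a site, because the browser chrome won't do it for you.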

My last word on transmission encryption: I'd rather encrypt on my own network than on the Internet. Why? Because if someone's breached me (what was the title of this blog again?), it'd be very easy for them to be in a position to sniff all my confidential traffic, especially the big batches of it, as things move around between database servers and document shares. So yes, if I were able to ignore the fear of looking stupid, I'd encrypt locally first before dealing with Internet encryption.

Next up: The problem with our defense technology Part 2, “Advanced” technical controls
