Wednesday, December 3, 2008


How many security people have had near-total visibility into the critical regions of their networks?

For a decent-sized enterprise trying to show a profit, this is a difficult challenge.

Well, with a very large deployment of Snare agents, syslog streams off firewalls and authentication servers, some scripting magic, and a ton of backend AWK processing... I'm getting near total visibility.
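To give a flavor of the backend crunching, here's a minimal Python sketch of the kind of summarization the AWK stage does; the log lines, hostnames, and addresses below are all invented for illustration:

```python
from collections import Counter
import re

# Hypothetical syslog lines; real Snare/firewall formats will differ.
LOG_LINES = [
    "Dec  3 02:14:01 fw1 sshd[411]: Failed password for root from 10.9.8.7 port 52211",
    "Dec  3 02:14:03 fw1 sshd[411]: Failed password for root from 10.9.8.7 port 52213",
    "Dec  3 02:15:40 auth1 sshd[902]: Failed password for admin from 172.16.4.2 port 40122",
    "Dec  3 02:16:02 auth1 sshd[903]: Accepted password for jdoe from 192.168.1.5 port 40500",
]

FAILED = re.compile(r"Failed password for (\S+) from (\S+)")

def failures_by_source(lines):
    """Tally failed logins per (source IP, account) pair."""
    tally = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            account, src = m.groups()
            tally[(src, account)] += 1
    return tally

for (src, account), count in failures_by_source(LOG_LINES).most_common():
    print(f"{src} -> {account}: {count} failed logins")
```

Multiply that by every authentication server and firewall in the enterprise, and you start to see where the visibility (and the fright) comes from.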

And lemme tell ya... it is a frightening thing.

Tuesday, November 4, 2008

What is good pen-testing?

Over my last couple of decades in InfoSec, I've been both a consumer and a provider of pen-tests. As a consumer, I've seen quite a few tests that I've simply thrown back in the face of the consultant requesting a do-over, mostly because the data was so inaccurate that it was unusable. I'm writing this to raise the bar on what is being offered. And rather than curse the lameness, I'm lighting a candle.

Wait, what do I mean by pen-test? Well, I'm lumping a wide spectrum into this post. So when I say pen-testing, assume I could also mean vulnerability testing/scanning, perimeter scanning, or web app vulnerability analysis. For me, it all falls into the bucket of technical risk analysis. Some tests are done for certification purposes (PCI, Cybertrust), but I'm talking about actual value. And that's providing a technical risk analysis.

Let me decompose this technical risk analysis business. A risk analysis requires several pieces, namely: 1) asset identification, 2) threat analysis, 3) vulnerability analysis, 4) impact analysis, and 5) control effectiveness analysis. Most pen-testers deliver a vulnerability analysis. Good testers do the threat analysis and ask for the asset identification before they start. The best spend a lot of time upfront to help figure out potential impacts and do a decent job on control effectiveness analysis. And yes, this means you should spend a bunch of time talking and analyzing before doing. Trust me, it pays off.

Of course, it's a worthy assumption that an external pen-tester will get some of these things wrong. Either their guesses about impacts to an organization ("hey, we don't rely on that service so who cares if it's DOSed") or, more likely, the vulnerability assessment itself. Specifically, many pen-testers run scanning tools, jam the results into a report, and call it done. Well, a lotta scanning tools do basic banner grabs on listening services and assume from there. A good pen-test report always shows its work. When I was a consultant, what I often wrote in my reports was something along the lines of "We initiated a TCP connect on port 5678 to IP address XYZ and sent along a packet stream MNOP and got back result ABC. This matches vulnerability #123. We verified by doing PQR and then reviewed JKL, which leads us to believe this is a high-risk root compromise vulnerability." And of course, by showing your work, the client can reproduce your results and test later to see if the hole is closed. Bonus points for including source code for scripts in the report to facilitate easy test duplication. If you can't be accurate, at least be transparent. And if you're extremely inaccurate, the act of writing up the assumptions should be a red flag for you.
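In that spirit of showing your work, here's a hedged sketch of the kind of probe script worth attaching to a report so the client can reproduce a finding. The target address, port, and payload below are placeholders, not a real vulnerability check:

```python
import socket
from datetime import datetime, timezone

def probe(host, port, payload=b"", timeout=5.0):
    """Open a TCP connection, optionally send a probe payload, and
    record exactly what was sent and received, timestamped for the report."""
    stamp = datetime.now(timezone.utc).isoformat()
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            if payload:
                s.sendall(payload)
            banner = s.recv(1024)  # capture whatever the service answers with
        return {"time": stamp, "target": f"{host}:{port}",
                "sent": payload, "received": banner, "error": None}
    except OSError as e:
        # Connection refused, timed out, etc. -- still worth recording.
        return {"time": stamp, "target": f"{host}:{port}",
                "sent": payload, "received": b"", "error": str(e)}

# Hypothetical usage, mirroring the report language above:
# evidence = probe("192.0.2.10", 5678, b"MNOP\r\n")
```

Two dozen lines like this in an appendix let the client rerun the exact test after remediation, which is the whole point.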

All good pen-test reports will be shared everywhere by the client. Sometimes they will go to the client's clients, and almost always they will go up the management chain. Keep this in mind when you are writing the analysis. Don't be glib, don't hide your assumptions, provide concrete proof, and don't ever assign blame. Contrary to what your Eighth Grade English teacher told you, passive voice is your friend. "This server was found to contain a database dump full of social security numbers. I have no idea how it got there and who fcked up, but this is what we found on this day at this time using this tool. Oh, and here's a screenshot to prove that I'm not lying."

Also, you need to include an executive overview. Not only is this the shortest part of the report, usually 3-4 paragraphs, but it is also the hardest to write. I usually follow the BLUF method when writing these: Bottom Line Up Front. Open with a "we did a scan on this date/time against these assets. We found one serious vulnerability, which we alerted staff about during the test and it was immediately corrected." - Hint: make the client look good whenever the opportunity presents itself. These people sign your checks. Continuing - "We also found several other low risks..." Then the next paragraph summarizes those risks in as plain a language as you possibly can - be sure to speak of likelihood of exploit and potential business impacts. Executives will only read the summary, yet will be making a decision to spend money based on it (possibly spending money to hire you to fix them or scan more later). Be concise and remember your audience. I usually spend 6-8 hours writing the summary, and spend the entire engagement thinking about what will go into it.

If there is a compliance requirement (and bingo, there almost always is), there should be a separate section highlighting the gaps found there as well. Usually a client considers this a high priority (we need to pass PCI!) but I've found the compliance junk provides the least value. Heck, the reason pen-testing is in most compliance requirement lists is so that a good technical risk analysis is done regularly. But anyway, it is usually a requirement, so there you go. And if it's a key requirement, then the results here should also bubble up to the top of the executive overview - "Blomo corp. appears to be 95% compliant with RHINO rules with the exception of the weak password on the mail server."

Okay, I've ranted enough. I just wanted to pop off my experiences and wants regarding pen-testing. Who knows, maybe in a few years I'll go back into consulting and do these again. Until then, I'll just keep rewarding the competent professionals with more business and keep shunning the lamers.

Wednesday, September 10, 2008

Thought experiment

Economists say, incentives matter.

Here's a thought experiment -

What if we did away with all the security regulations and rules. No more GLBA security rules, no more HIPAA privacy, etc. And for contracts and b2b relationships, no more SAS-70's, no more PCI, no more ISO certifications.

Just one new rule - each person whose confidential information is breached gets a cash settlement. For example, your credit card ended up with some hackers. Here's $250. And if we didn't warn you, or tried to cover it up and you later found out about it the hard way... well, now we gotta pay you $2500.

That's it. Let each organization figure out how to secure themselves and what the trade-offs are. Next step in my hypothetical world - organizations would need to post bonds or have insurance to make sure they can pay people off when breached. And then the insurance companies will come up with criteria for good controls. And with all the payoffs, they'll be able to build actuarial tables to see what works, what doesn't.
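Here's the kind of back-of-the-envelope math an insurer might run under this rule. Every figure below is a made-up assumption, just to show the shape of the calculation:

```python
# Back-of-the-envelope math for the payout rule. All numbers are
# illustrative assumptions, not real actuarial data.
RECORDS = 1_000_000          # customer records held
PAYOUT_DISCLOSED = 250       # per person, breach disclosed promptly
PAYOUT_COVERED_UP = 2_500    # per person, breach covered up
ANNUAL_BREACH_ODDS = 0.05    # insurer's estimated chance of a breach per year

def expected_annual_loss(records, payout, breach_odds):
    """Expected yearly payout: exposure x per-person cost x probability."""
    return records * payout * breach_odds

honest = expected_annual_loss(RECORDS, PAYOUT_DISCLOSED, ANNUAL_BREACH_ODDS)
print(f"Expected annual loss if breaches are disclosed: ${honest:,.0f}")

# Any control program costing less than that expected loss (and actually
# reducing the breach odds) pays for itself -- which is exactly the
# calculation that would drive the insurers' control criteria.
```

Note the 10x cover-up multiplier: under these assumptions, hiding a breach is never the cheaper bet, which is the incentive the rule is designed to create.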


Wednesday, August 27, 2008

Compromised integrity

Two words we often use in InfoSec... compromise and integrity. But their origins outside of technology are of interest to me today.


Compromise - To expose or make liable to danger, suspicion, or disrepute.


Integrity - Steadfast adherence to a strict moral or ethical code.

What do I mean by this? I mean asking yourself what these things mean to you as a member of a community (substitute nation, organization, company, family).

In the tech world, when a machine gets compromised, there is often a battle between the security team and... well, everyone else. Security says, once it's compromised it can't be trusted again - reformat and rebuild from scratch. The techs often say "that'll take hours" or worse, "that'll take days" and of course, it's a critical system. Management agrees, after all, it's going to cost us money to take those services down. And no, there are no hot spares or quick reloads available. In the end, the risk is made clear and the machines are replaced.

But what happens when the human soul's integrity is compromised? Many security folks have been a part of internal investigations. Often the evidence isn't entirely clear on how far an insider may be compromised. Did they make a one-time mistake? Or are they actually malicious? Again, the security folks often recommend termination and replacement. Again, there is often some (but not nearly as much) push back - especially if that person is critical to the organization.

Those who read Geek Girl Detective know how devastating it can be to a company when someone in an important position becomes compromised. It could be the end of the organization itself. But again, is it better for an organization to implode, minimizing the damage of the compromise, or explode with a devastating shockwave of litigation?

I've been told recently that no person should be irreplaceable. The same is true when designing systems. However, in either case, is this always feasible? No. But it's on the table as something that should be thought about and planned for - people and computers alike.

Thursday, August 7, 2008

What is the bare minimum we can do and still operate as a business?

In her column, The Agency Insider, Linda McGlasson writes in a post GLBA and Security Avoidance Questions - Why Are We Not Surprised? about GLBA compliance.

The post is about her dismay when hearing "What is the bare minimum we can do and still operate as a business?" from many large banks. She goes as far as saying that hearing that is "the number one sign that there is something wrong with the approach many financial services companies are taking on GLBA."

Okay, granted, on the surface this statement does look like sloppiness, cheapness, and/or general dereliction of duty.

But wait, let's unpack this:

Is she saying that banks should spend MORE than necessary on GLBA compliance?

Does spending more on GLBA compliance entail better security?

Audit checklists and industry regs do not always entail improved risk management. But hey, for argument's sake, let's just assume GLBA Compliance = adequate security.

Now, let's restate and clarify:

"What is the bare minimum amount of risk management we can do and still operate as a business?"

But what's wrong with adequate (or the minimum amount)? It's that tipping point where it becomes more costly to protect an asset than to lose it. That's risk management and just plain dollars and sense.

Okay, but what is that bare minimum? How do you know what that is?

Wouldn't knowing what the minimum amount of risk management needed imply a thorough examination of risk and the value of the protected services and assets?

So to truly make that statement, banks will have to be doing some pretty darned good risk assessment.

And choosing the bare minimum means they are making an informed decision about the tradeoff between business value and risk mitigation.

The only problem I see with all of this is banks should not be asking their auditors "what is the bare minimum I need to do?"

They should be asking their security people.

And they should be answering in a manner that makes sense to someone whose job it is to choose how money is spent for the overall good of the organization.

Thursday, July 31, 2008

Can you buy security? How can you make things better?

So I'd twittered a bit lately about the question of organizations "buying" security. What I really wanted to know was: if an organization is taking unnecessary InfoSec risks ("insecure" in layman's parlance), can it simply bypass the whole cultural change thing and just write big checks to improve its risk management ("become secure")?

I've been asking this question myself a lot lately. I've spent nearly a decade as a security consultant, trying to fix organizational security programs (both for-profits and non-profits). I've also spent about that much time working inside companies as either a network guy with security responsibilities or a pure-play security guy. In my current job, I am *the* security guy. The buck stops with me. Now, with that in mind:

When I was a security consultant, I noticed there were basically two types of customers:

  • The organizations that were already practicing pretty good risk management but wanted to improve
  • The organizations who were being forced to be "more secure"
Naturally, the first type of organization made the best kind of client. It was fun to come up with innovative solutions to push them from a B to an A. We just ate up those technically complex security challenges. And yes, we were very successful. Usually these were complex, highly-regulated organizations like hospitals, law firms, banks, and public utilities.

Now the second type of organization, the ones forced to be more secure. These are the folks who've failed an audit, experienced a breach, or have an important customer dissatisfied with their security. Many of these organizations produced online products or services, and they "simply didn't have time to worry about security."

Not surprisingly, these were often the least successful clients of my consulting career. Advice was ignored, reports were shelved, warnings were rationalized away, blame was shoveled about. And things rarely improved much beyond some cosmetic fixes. They wanted to "buy" a fix to their security problem. Sadly, a lot of money was often spent on either new hardware or expensive audit reports.

As Jerry Weinberg said "Things are the way they are, because they got that way." And that's very true in these organizations. Their risk management processes are seriously messed up. And hiring a bunch of consultants and buying a bunch of tools doesn't seem to make a dent in that.

I quickly sussed out some warning signs:
  • Not invented here syndrome. When given a suggestion for a new process or tool, you are often told "That won't work, we're special." Hint: the only things that should be unique are your cash cows. That was usually not true for IT operations or infosec.

  • Ill-fitting and ignored policies. "The security policy says we can't do that. But we need to do that. So we just ignore the policy." And thus begins the precedent of ignoring everything in the policy. Usually a soup-to-nuts rewrite of the policy, with copious re-education to regain the users' trust, is needed here. Difficult to do, even more difficult to do as a consultant.

  • Lack of defined process and/or roles for critical things. These are the folks who are surviving because they have a lot of smart people thinking on their feet. No time for docs, no time for process. We can figure it out as it comes at us. Which works, until what's coming at you is security and nobody knows anything about it.

  • A culture of reactive fixes. Managing risk just becomes another reactive fix. After we're done patching that hole, we can get back to The Real Work. Right?
Of course, there's the obvious: evidence of breaches, failed audits, high turnover, etc.

Compare this to the things I see in the organizations who are doing a decent job of managing risk:

  • Change management, especially for critical services and components (hat tip: Gene Kim)
  • A clear understanding of their environment, the risks present in that environment, and an awareness of how well they're dealing with them (hat tip: Alex Hutton)
  • Actually looking at their logs and understanding them (hat tip: Anton Chuvakin)
  • Someone whose primary focus is security, and who is internally credible and willing to learn more
  • Reliable infrastructure. It may not be the best, the fastest, the latest, or the most flexible, but they know how it behaves and they know where everything is
Conversely, many of the aforementioned "obvious" signs can mislead here: places with strong security can still experience breaches, failed audits, and high turnover.

But what about those broken organizations? Now we circle back to the original question: can they buy their way out of the hole? Not in my experience. External consultants fail... "Like trying to throw sod down on cement and hoping it'll grow," as a colleague used to say. Heck, even hiring in good security folks and having them try to turn the ship is mighty tough. Of course, everyone says you gotta have management buy-in if you want to effect cultural change. And that's usually the cognitive disconnect you have in these kinds of organizations.

I could go into another long list of the types of executive paralysis that I've seen. It usually starts with "We really care about security, do what you gotta do." And it ends with an endless trickle of dying projects that go nowhere.

Not to be all doom and gloom, but I've had some success starting to fix these kinds of organizations. But not from the outside - meaning the answer to my original question is "no", you can't "buy" security. But I've made change from the inside with slow and steady grinding away at the old culture. Gaining trust, garnering political will and slowly building up the layers of paint.

The two things that have been helpful in selling security culturally have been improving system reliability and increasing organizational agility. Reliability is easy... that's the A of the CIA triad of security. The second is more interesting. At some point, security's just another characteristic of the overall health of operations. Organizations that are doing a poor job of managing risk are not very agile. They can't respond to market changes without increasing their risk exposure (usually exponentially). They can't scale very well and they can't hire/replace people. Look back at the warning signs and the positive attributes. A lot of them tie directly to agility. And I find it ironic selling security as an agility improver when security is often cited as something that "slows us down". It's also ironic that the organizations that need agility the most (the software product and service producers) have it the least.

That's the end of my ramble. I'm going to think on this more. Feel free to comment.

Wednesday, July 9, 2008

Still alive

Just been super busy with work, family, life, Heidi, etc.

I've got a whole bunch of stuff lined up to post... just gimmie some time.

Thursday, June 19, 2008

Security speeches I'm working on

From Cradle to Autopsy - the lifecycle of exploited data.
A collaborative speech with an FBI friend... we've actually fleshed out an outline for this talk. It'd be about an hour and cover pretty much everything in security, but in an interesting narrative fashion. I'm excited about this one.

Third Party Due Diligence
Sounds boring, but it's a really critical process to master. A large number of breaches are coming from third parties. Throw in all the new regulations and requirements (like the recent FDIC FIL-44-2008), and this really needs to be done right. And as far as I've seen, most third-party audits aren't being done right. Hint: It's not a checklist of controls. And it's not blindly asking for a SAS-70 Type 2. I've got about an hour-long speech mapped out in my head on how to do this, how to interpret SAS-70, CyberTrust, and ISO 27001 reports... and how to roll your own process.

Recovering from a breach - what to do, what not to do
Title says it all. I think this is an overlooked topic. Cognitively, a lot of folks don't think about breach beyond writing an incident response plan. Remember the title of this blog. And how you recover from a breach can mean a world of difference to your organization. Short version - do it right and your company's market position will actually increase (I can show proof), do it wrong and you're toast. Might see if I can pawn this idea off on a mentor-buddy for him to present.

Aligning InfoSec to Business
A common topic, but people are still doing it wrong. I wanna get down to brass tacks and explain how to speak risk management in terms that the suits will understand. And note the title - you align infosec to business, not the other way around. Yet that is how most IT security people (and IT people in general) view their job - make the business adapt to the technology. Doesn't work so well, does it? We can do better.

So you've decided to use ISO 27000, now what?
ISO 27000 is not just a list of controls that you can throw onto a checklist. The heart of the ISMS is risk analysis & treatment and executive involvement in that process. Risk management is a radically different approach than the compliance work that many people are calling ISMS. Time to learn how to do it right.

Defining a process for quantitative analysis of data breach information
See previous post. Not my talk, but the fine researchers at UW. This one will happen. And soon.

Assuming the breach
What this blog is all about. Doing security in the mindset (dare I say paradigm) that the barbarians are already past the gate and in the courtyard. Tons of stuff to write up here. Still need to get to it!

Tuesday, June 17, 2008

The Breach Data Report

Today I want to talk about the breach data report. No, not the Verizon breach report, but the other one. The one you haven't seen yet.

Over the past semester, University of Washington researchers in the iSchool Information Assurance program spent hundreds of hours analyzing breach data. This was a semester-long final project for a pretty senior group of graduate, undergraduate, and returning professional students.

Initially, the goal was to dig for nuggets of useful information in the breach data, much like the results of the Verizon study. However, the analysis quickly uncovered that most of the breach data out there is incomplete, inaccurate, or just plain incomprehensible.

How did Verizon get such accurate results? Well, according to them, they used data from incidents they were involved in. Specifically, they say the data comes "directly from the casebooks of our Investigative Response team." So we know that this data is at least biased towards Verizon customers, which is interesting. I'll mention that I am a Verizon Business Security customer but I've never been involved in a breach investigation with their team. If I did have an incident, I don't know if they'd be the ones I'd call. The data they examined is probably complete for the cases involved. It's just that those cases do not represent the entire range of possibilities.

Now our project, Defining a process for quantitative analysis of data breach information, cast a much wider net. And the results were startling. The students could only verify 30% of the reported breaches with high confidence. And many data sources had to be thrown out because they were so incomplete as to be useless.

The whole report is 56 pages long and covers processes for vetting, parsing, and querying breach information sources. The report isn't available yet, but soon will be. If you're in the Pacific Northwest, we will be having a special InfraGard meeting with the researchers to go over the results in detail.

Monday, June 9, 2008

Back from vacation and ready to rant!

How many cases of breached privacy do you need?! How many people have to lose their identity to make it cost efficient for you people to do something about it? A million? A billion? Give us a number so we won't annoy you again until the amount of money you begin spending on lawsuits makes it more profitable for you to protect information than to leak it!

Channeling Alan Alda from And The Band Played On

Thursday, May 29, 2008

A word about assumptions

This morning I was reading a chapter in Jerry Weinberg's Becoming a Technical Leader. In the chapter on innovation, he poses the following puzzle:

A man hires a worker to do seven days of work on the condition
that the worker will be paid at the end of each day. The man has
a seven-inch bar of gold, and the worker must be paid exactly one
inch of the gold bar each day. In paying the worker, the man
makes only two straight cuts in the bar. How did he do it?

Stop reading now if you want to try to solve this.

The story goes on to explain the solution. The man cuts his gold into three pieces of the following lengths: 1 inch, 2 inches, and 4 inches. Very clever, because the idea is that the worker can now “make change” when getting paid.

I thought for a minute about how clever this was but then dug deeper. Why didn’t I get it? This whole puzzle hinges on an assumption. An assumption that we can foist off an unusual set of requirements upon the “user” (the worker). The assumption is that the worker will retain his wages every day and have them available to make change. Therefore, the burden of making the employer pay with exact change is removed. Nice.

Now, why didn't I think of this? Because it's counter-intuitive of me to introduce unexpected (and possibly contract-breaking) conditions into a solution. But in a wave of innovation, Mr. Weinberg did. But we can't blame him. He's a programmer, and this is the kind of stunt that programmers are wont to do.

But back to assumptions. The lesson learned is: every problem drags along a set of assumptions. Sometimes the assumptions are as simple as “the default conditions” that we take for granted. And every solution also brings along a set of assumptions. It’s always a prudent idea to keep an eye on the assumptions. You never know what they’re going to tell you about the problem and the problem solver.

Thursday, May 22, 2008

The problem with our defense technology Part 2, “Advanced” technical controls

The next level up from basic controls are what I'm calling the more advanced technical controls. These are the things usually used by the organizations who'd be sued if their security was breached. Again, this is the low-water mark list. And like before, most of these security controls are overrated, overly relied upon, or implemented narrowly.

Strong authentication
Strong authentication, by which we mean two-factor, by which we usually mean carrying a token thing. These are a great replacement for passwords, but that's about it. It all gets very interesting when you use a token to authenticate to a box that has significant vulnerabilities (see patch management). And for most strong authentication systems in place, I've found several work-arounds implemented by the system administrators, just in case we get locked out. Thus begins the whack-a-mole game between the auditors and operations staff. And don't think strong authentication will be helpful against man-in-the-middle attacks or phishes. I'm not saying throw the baby out with the bathwater; just remember that strong authentication is only an upgrade for a password.

Storage encryption
If your organization hasn't encrypted all its laptops and backup tapes, someone in IT is probably working her butt off trying to get it done. If you're really advanced, you're encrypting all your database servers and anything else that's Internet-reachable. Here's a wonderful case of doing something so we don't look stupid. Is there a problem with cold-boot RAM attacks against laptop encryption keys? Sure, but the law says if someone steals a laptop and it's encrypted, I don't have to disclose. And yes ma'am, the database is encrypted - but the password is in a script on an even more exposed web server in the DMZ. Whatever; the auditors demand the database be encrypted, so shall it be done. In any case, it's safest to assume the breach - if an adversary has physical access, they are going to get in eventually.

Vulnerability scanners
Take patch management and now repeat with vulnerability scanning. It goes like this: scan your machines, analyze the results, find a hole (and you always will), request that IT patch the hole, request that IT patch the hole, request that IT patch the hole, insist that IT patch the hole, raise a major fuss about IT not patching the hole, IT patches the hole. And then repeat. And this doesn't count the zillions of false positives, because your vulnerability-scanning tool is banner grabbing instead of actually testing. No, vulnerability scanning isn't worthless. Heck, anything that gives you some visibility into your enterprise is a good thing. But will it truly give us battle-hardened servers ready to take on the deadly sploits of the Intarnetz? No, not really. And depending on who you ask, it's more trouble than it's worth.

Centralized logging
The vendor's cha-ching. This is the security information management (SIM), security event management (SEM), etc. It's the big box o' log data. Essentially, it's syslog on the front, a database on the back, with some basic rules in between. If you've spent a decent amount of money and/or time on those rules, then you're only trying to drink from a lawn sprinkler instead of the fire hydrant. In any case, getting useful real-time information out of your logging system is a part-time job in and of itself. Now, there are intelligent log analyzers out there, but usually they cost around 80K a year plus benefits. Can automation do it? Get serious. There is simply too much data to make a decision in a timely manner. And remember, you are facing intelligent adversaries. The most useful automated intelligence you're going to get out of a logging system is a measure of the background radiation of the worms and bots. Now, again, visibility is a good thing. I use my logging system for forensic detail after suspicious events. I also use it for trending and for showing management just how dirty the Internet is. But as an actual alarm system? Only if I'm lucky. And producing actionable intelligence? Not so much.

Like I was saying the other day...

Tapping a trend, or is it now just painfully obvious enough that it's safe for anybody to say?

Antivirus is 'completely wasted money': Cisco CSO

In any case, I really didn't want to turn this into a ranty blog about all the problems with infosec. There's enough of that to go around already.

I promise to wrap up these "problem with" posts and get onto the meat of how to defend ourselves.

Tuesday, May 20, 2008

No Sith, Sherlock

U.S. corporations massively read employee e-mail:

41% of the largest companies surveyed (those with 20,000 or more employees) reported that they employ staff to read or otherwise analyze the contents of outbound e-mail.

Yeah, yeah... this has been going on for years. Heck, when I wrote Heidi Book 1, five years ago, this was old hat.

It's funny tho, people still seem shocked by this. Not security people, of course. Usually it's the business folk and sales-critters. Y'know, the ones with the iPhones and bluetooth headsets... just basically screaming "Please snoop away!"

These are also the same people who don't care so much about protecting corporate secrets and claim not to care much about their own. Of course, they would squeal a different tune if I were to do a Powerpoint preso on the personal ickiness I've seen fly across the corporate firewalls. Talk about Hawt mail.

So yeah, the trick is to show these people the link between protecting their sticky lurid personal data traces and PII. Some of the stuff I've seen is far more damaging to some people's careers than mere identity theft.

UPDATE: Great Minds Think Alike or a different spin on the same topic.

Tuesday, May 13, 2008

The problem with our defense technology, part 1

At best, our defensive technical controls do nothing but scrape off the chunky foam of crud floating on the surface of the Internet. At worst, they represent exercises in futility we do primarily so we don’t look stupid for not doing them. Consider the tsk-tsking that goes on if an organization gets hacked and it's revealed they don't have adequate encryption or haven't patched some workstations. That's what I mean by stupid. Of course, if anyone gets hacked, there will be tsk-tsking anyway. Anyway, what have we got?

Basic technical controls
I am going to start with basic security technology, which represents the universal, low-water mark for security controls. Basic security tools are what everyone implements to achieve “acceptable security” because that’s what Management and the auditors expect. Usually when you want a tool that isn’t on this list, you have to fight for resources because it’s an unusual control that wasn’t budgeted for or worse, doesn’t directly satisfy an audit requirement. Many of these tools have a low entry cost, but often entail a burdensome maintenance cost. In some organizations, these maintenance burdens outweigh the defensive value of the control.

If there's any universal, ubiquitous security control, it's the use of passwords. In fact, passwords are a decent, cheap way to provide basic access control. Manufacturers build passwords into nearly everything, so it's a safe bet you'll have them available to protect your systems. Where passwords veer off into something stupid we have to do is in the area of frequent password changing. The reasoning behind forced password changes is out of date, resting on an old fallacy about the time it takes to crack a password. Gene Spafford explains it better than I can: "any reasonable analysis shows that a monthly password change has little or no end impact on improving security!" Passwords can give some utility in exchange for relatively little overhead, provided you aren't mired in an audit-checklist organization.
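To put some back-of-the-envelope numbers on Spafford's point, here's a quick sketch. The guess rate and password policies below are illustrative assumptions, not measured figures, but the shape of the result holds either way:

```python
# Back-of-the-envelope: worst-case time to brute-force a password offline.
# The 1e9 guesses/sec rate is an assumed figure for illustration only.

def seconds_to_exhaust(charset_size: int, length: int, guesses_per_sec: float) -> float:
    """Worst-case seconds to try every password of the given length."""
    return (charset_size ** length) / guesses_per_sec

MONTH = 30 * 24 * 3600

# 8 lowercase letters: falls in minutes, so a monthly change is irrelevant.
weak = seconds_to_exhaust(26, 8, 1e9)
print(f"8 lowercase chars: {weak / 60:.1f} minutes")

# 12 mixed-case + digits: outlives any rotation policy by millennia.
strong = seconds_to_exhaust(62, 12, 1e9)
print(f"12 mixed chars: {strong / (365 * 24 * 3600):,.0f} years")
```

Either way, the monthly rotation clock isn't the governing variable: the weak password falls well inside the change window, and the strong one never needed rotating on cracking-time grounds in the first place.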

Network firewalls
In the past, the interchange most commonly heard regarding security went along the lines of: "Are you secure?" "Yes, we have a firewall." "Great to hear." Luckily, we've progressed a little beyond this, but not far. Most firewalls I examined as an auditor were configured to allow all protocols outbound to all destinations. Add to that the numerous B2B connections, VPNs and distributed applications. Then there are the gaping holes allowing unfiltered port 80 inbound to the web servers.

When I was a kid, my family lived in Western Samoa. At the time, the local water system was pretty third world. My mom would tie a handkerchief around the kitchen water spigot. Once a day or so, she'd dump out a big lump of mud and silt, and then put on a clean hanky. After being filtered, she boiled the water so it would be safe for us to drink. That handkerchief? That's how I feel about firewalls. And people rarely boil what passes through their firewalls.

So I'll have to agree with Marcus Ranum and the folks at the Jericho Forum that firewalls are commonly over-valued as defensive tools.

Blacklisting Filters
Anti-virus, intrusion prevention, anti-spyware, web content filters... I lump all of these into the category of blacklisting filters. These types of controls are useful for fighting yesterday's battle, as they're tuned to block what we already know is evil. In the end, we know it's a losing battle. In his "Six Dumbest Ideas in Computer Security", Marcus Ranum calls this "enumerating badness." Now, I think there is some utility in blacklisting filters. But at what cost? All of these controls require constant upkeep to be useful, usually in the form of licensed subscriptions to signature lists. These subscriptions are such moneymakers that many security vendors practically give away their hardware just so they can sell you the subscriptions. Annual fees aside, there's the additional burden of dealing with false positives and the general computing overhead these controls demand.
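Ranum's "enumerating badness" point reduces to a one-screen toy. The hashes below are made-up stand-ins, but they show why a signature list is structurally behind the attacker while a default-deny list isn't:

```python
# Toy contrast between "enumerating badness" (blacklist) and
# default-deny (whitelist). All hashes here are invented examples.

KNOWN_BAD = {"a1b2c3"}    # yesterday's malware signatures
KNOWN_GOOD = {"d4e5f6"}   # the approved software inventory

def blacklist_allows(file_hash: str) -> bool:
    # Blocks only what we've already catalogued as evil.
    return file_hash not in KNOWN_BAD

def whitelist_allows(file_hash: str) -> bool:
    # Blocks everything not explicitly approved.
    return file_hash in KNOWN_GOOD

new_malware = "9z8y7x"    # brand new -- no signature exists yet
print(blacklist_allows(new_malware))   # True  -- sails right through
print(whitelist_allows(new_malware))   # False -- blocked by default
```

The trade, of course, is that the whitelist pushes the maintenance burden from the vendor's signature subscription onto your own software inventory, which is its own kind of expensive.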

Hey, raise your hand if you've ever had your AV software crash a computer. Uh huh. Now keep it up if it was a server. A vital server. Yes, my hand is raised too. But of course, you wouldn't dare run any system, much less a Windows system, without anti-virus. You'd just end up looking stupid, regardless of how effective it was.

Patch Management
Best Practices force most of us to pay lip service to performing patch management. Why do I say lip service? Because organizations rarely patch every box they should be patching. Mostly, by patch management we mean we're patching workstations - smaller organizations just turn on auto-update and leave it at that. But servers? Well, probably, if the server is vanilla enough. But no one is patching that Win2K box that's running the accounting system. And what about those point-of-sale systems running on some unknown flavor of Linux? Heck, what if you've got kludged-together apps tied to some integration gateway software from a company that went out of business five years ago? What about all those invisible little apps that have been installed all over the enterprise by users and departments that you don't even know about? Are they getting patched within 30 days of release of a critical vulnerability? Bet that firewall and IPS are looking real durn good right now.

My favorite part of Best Practices is to watch the patch management zealots duke it out with the change management zealots. "We need this service pack applied to all workstations by Friday!" "No, we need to wait for the change window and only after we've regression tested the patch." (To tell the truth, I'm on the change management side, but more on that later)

Transmission encryption
Everyone knows if you see the lock on a website, it must be safe. We've been drilling that into lay people's heads for years. Yes, we need to encrypt anytime we send something over the big bad Internet. But what is the threat there really? We're encrypting something in transit for a few microseconds, a very unlikely exposure since the bad guy has to be waiting somewhere on the line to sniff the packets and read our secrets. Consider how much trouble the American government has to go thru just to snoop on our email. If the bad guy isn't at the ISP (which I'm not saying is unreasonable), then it's difficult to intercept.

Now consider this bizarre situation: you pull up a web site with a form to enter your credit card number and hit submit. Wait, there's no lock on the site - I'd be sending the card number in the open! Oh dear. No, actually, the website has put the SSL encryption on the form submission itself, so the card number does get encrypted. Of course, your browser can't show you a lock for this. Now consider the opposite: an SSL website, showing the lock and everything, where the submission button activates an unencrypted HTTP post. So now you have exactly the opposite, something that looks safe that isn't. And yes, as a web app tester, I've seen this before.

My last word on transmission encryption: I'd rather encrypt on my own network than on the Internet. Why? Because if someone's breached me (what was the title of this blog again?), it'd be very easy for them to be in a position to sniff all my confidential traffic. Especially the big batches of it, as things move around between database servers and document shares. So yes, if I were able to ignore the fear of looking stupid, I'd encrypt locally first before dealing with Internet encryption.

Next up: The problem with our defense technology Part 2, “Advanced” technical controls


Over a long series of posts, I plan to explore thoughts around the next generation of information security. The title of the blog comes from discussions with many of my InfoSec mentors, who have implored security professionals to "assume the breach" when managing their enterprise security. Eventually, all defenses are breached. What do we do then?
I'm going to start with a quick overview of the problems. Nothing original here, just a breakdown of what's going wrong. I'm usually the first one to tire of all the curmudgeons tossing bricks at our glass houses of best practices. My response is along the lines of "yes, I know. But tell me how to fix it." Well, I do intend to propose some solutions.

Wednesday, May 7, 2008

Why I don't go to most security conferences

First, let me define security conference. By this, I mean the conference that either has a hax0ry name or is simply an acronym. Okay, I gotta pay for a ticket, expend travel resources, and then lodging. Even if I can convince my employer to pay, I still have to burn political capital and then finagle time away from the office. TANSTAAFL. So, when I see that announcement for Plopc0n 5 fly across my e-mail, I do my cost-benefit analysis and usually decide to skip it.

Why? Let's set aside the vendor hype-fests. They're too easy to bash. Besides, I can get all the vendor love I want by simply answering my constantly ringing phone.

What is at a typical security conference? Well, there's usually some forensics stuff. Cool, but that's really not my bag. And honestly, most of what speakers present as "forensics" wouldn't stand up under a halfway-technical defense attorney's cross-examination. Pass.

All right, there's a mixed bag of privacy and legal talks, which are mildly interesting, but are highly dependent on the speaker. Most of the time, the speaker's book or blog gives me the same basic information.

But what else are conferences full of? It seems that a good third of the content is "Hacking XYZ" or "New way to exploit" or some attack against physical security. BFD. I already know there are holes in my network. Most of these "new" attacks are just new variants of old attacks - attacks you can figure out are there just from looking at the basic design. I've read enough Ross Anderson to grok the basic idea of how things can be exploited and how they should be engineered. At best, the hacks they demonstrate are proofs of concept for something I'd already assumed I had to deal with. Thanks for that, but I don't need to attend just to see a proof of concept. I'll just grab the press release, usually issued within hours of the conference demo.

I guess the biggest reason why I might be inclined to go is to network. But the last few conferences I've been to, I felt I was the only "adult" in the room. Yeah, except for a few Internet blogger friends, I'm really not compelled to spend the time away from work and family. I do hit a couple of local quarterly security conferences for the networking.

What am I interested in seeing? Radically new defensive technologies, "game changing" strategies, and thoughtful analysis of cyber-criminal operations. If I'm lucky, I'll see one or two of these kinds of pearls in several days worth of chaff. Nice, but I'm staying home for now.

BTW, if you haven't read With Microscope and Tweezers: An Analysis of the Internet Virus of November 1988, then I suggest checking it out. I bet you get a lot more out of it than the average hacking demo.