Tuesday, December 29, 2009

Everyone else is doing a predictions blog post…

I'm going to focus on the growth of semi-automated social engineering. Why? Well, first because we humans need to communicate and technology has recently exploded to facilitate this. Second, the important thing to focus on in security is how things fail. Right now, things are failing (as usual) with the user. We can't expect the user to be rational and security-minded; that's what our job is for. So they are the weakest link and will continue to be exploited. Third, I approach infosec with a warfare mindset, not an engineering one. And defense engineering always follows advances in warfare. We will always be playing catch-up.

Prediction One - Technology-mediated scamming of users soars past our capability to deal with it
By this I mean phishing, spearing, fake security alerts, and social engineering malware. It will quickly reach the point where it overwhelms not only our defenses but even the context we use to describe it. There are so many attack surfaces and so few useful defenses in the hands of the average user that we're in for a rough ride. Why will this get more prevalent? Well, because of...

Prediction Two - Better use of unclassified and "harmless" data to leverage higher access
Military wonks have been warning us about this for decades. Now we're going to see it go farther into the mainstream, especially with all the info stored in Facebook, LinkedIn, Flickr, blogs, and Twitter streams. Some of the worst stuff is being generated by our friends and family without our consent. Just ask Sir John Sawers. This will lead to...

Prediction Three - Attackers will become adept at exploiting unknown critical dependencies
There are dozens of these kinds of undocumented and unexpected linkages between our organizational security systems and the consumer-grade applications we all swim in on a daily basis. Password resets that bounce out via email to our iPhone or Gmail accounts. Twitter links with embedded passwords that happen to match our main password. Web mail sites that can be used to spread custom malware internally. They're considered low value and therefore have correspondingly weak security. And what about those consumer-grade systems? Well, expect...

Prediction Four - Larger attacks against "soft" targets because of items 1,2,3
Why hack Twitter, Facebook, Gmail, etc? Because that's where the money is, duh. Most of these services were designed to protect low-value assets against casual attackers. But that value is now out of proportion because of the aforementioned dependencies, the value of this secondary data in escalating attacks, and the scam-value of the friend-trust relationships embedded in these systems. Which all leads to...

Prediction Five - The move of the traditional perimeter from the Untrusted Internet User to the Trusted User
Most of the standard threat models say the normal user is somewhat trustworthy. Many say that's a bad idea. As items 1-4 become widespread, the popularly accepted models will need to evolve to simply not trusting the average user or customer in the slightest. For many high-risk applications, like web banking or large e-commerce sites, we're pretty much there. Now everything will move to this level, even the common low-value / low-hanging-fruit applications and services. Those of us who already live in that mindset will be helping the rest of the world deal with the new paradigm. The standard of reasonable care will change to this new baseline and more resources will need to be expended. When will it reach that point? Probably soon. So what can we do about it?

In the near future, I see us faced with two choices: Radically alter the user experience to the point where any high-level application change (like transferring or altering valuables, changing your password, or installing local software) looks like something out of a COBIT change control process (approval & authorization, separation of duties, mandatory change windows). Think "sudo" not only in our operating system, but also within our applications. We stuck a toe in the water with Vista and the users hated it. Another solution to pursue is to push more security downwards into the operational core (behavior monitoring, red flagging, white listing and application flow restrictions). Perhaps by combining these two, we can come up with something useful. I hope someone's already working on a more intelligent warning tool that fires off meaningful alerts like "It appears that you are about to submit your credit card number to a server in Latveria whose domain was registered only two weeks ago. I think this is a phish and you should verify things before continuing."
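
To make the idea concrete, here is a toy sketch of such a warning engine. Everything in it (the `SubmissionContext` fields, the 30-day threshold, the country check) is a made-up assumption for illustration, not a description of any real tool:

```python
# A toy sketch of the warning heuristic described above (not a real
# product). The fields of SubmissionContext are hypothetical; a real
# tool would pull WHOIS registration dates and geolocation itself.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SubmissionContext:
    contains_card_number: bool   # did the outgoing form data look like a card number?
    domain_age_days: int         # days since the destination domain was registered
    country_code: str            # geolocated country of the destination server
    domain_on_allowlist: bool    # a site the user has already vetted

def phish_warning(ctx: SubmissionContext) -> Optional[str]:
    """Return a human-readable warning, or None if nothing looks suspicious."""
    if ctx.domain_on_allowlist:
        return None
    reasons = []
    if ctx.contains_card_number and ctx.domain_age_days < 30:
        reasons.append("you are submitting a credit card number to a domain "
                       f"registered only {ctx.domain_age_days} days ago")
    if ctx.contains_card_number and ctx.country_code not in ("US", "CA"):
        reasons.append(f"the server appears to be located in {ctx.country_code}")
    if not reasons:
        return None
    return "Warning: " + "; ".join(reasons) + ". Verify before continuing."
```

The hard part, of course, isn't the rules; it's getting trustworthy inputs and wording the alert so users don't just click through it.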

Those are my rough thoughts this chilly December day. I'll be thinking and working on solutions to these problems in the coming year. Let me know if you have any ideas that might help.

Tuesday, December 22, 2009

Ten ways to build/improve your infosec career

I talk to a lot of students and folks just launching their security career. This article is for you. Veterans, feel free to chime in and tell me what I missed or did wrong. On with the list.

1. Communicate in a business positive manner.

Learn to communicate on their terms, not yours. The worst problems that occur in infosec (and in technology) are communication problems. This is because techies don't speak to their customers (the users) in the language that their customers understand. It's also important to phrase things positively and not negatively. Instead of saying "You can't use 56-bit crypto because the traffic is sniffable and not PCI compliant" in a project meeting, say "We should use newer encryption systems because customers will expect us to do a quality job securing their data and it will reduce our legal exposure, yet it won't cost us anything to do."

2. Discover your assets
You can't accomplish goals if you don't know what they are. And you can't protect your assets if you don't know where or what they are. After number 1, this is the second most common mistake I see infosec people make. To get an accurate read on this, you need to do the grunt work. That means scanning with tools, interviewing people, reviewing documentation and examining configurations - then cross-referencing your results.
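
The cross-referencing step is the part people skip, and it's the part you can automate. A minimal sketch, with made-up host names standing in for real scan and inventory data:

```python
# A minimal sketch of cross-referencing discovery results against your
# documented asset list. The inputs here are illustrative; in practice
# they'd come from scanner output, interviews, and config reviews.
def reconcile_assets(documented, discovered):
    """Return (undocumented, missing) host sets.

    undocumented: found on the wire but absent from the asset list
    missing:      in the asset list but never seen by a scan
    """
    documented, discovered = set(documented), set(discovered)
    return discovered - documented, documented - discovered

undoc, missing = reconcile_assets(
    documented={"web01", "db01", "mail01"},
    discovered={"web01", "db01", "printer-hr", "test-box-joe"})
# undoc holds your surprises; missing holds assets nobody can find anymore
```

Both output sets deserve follow-up interviews: the surprises are potential rogue systems, and the "missing" entries mean your documentation is stale.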

3. Do the risk analysis
Take your asset list, map the risks to them and rate them. This is your priority list. Everything you should do should circle back to this. If you've never done a risk analysis before, there are lots of different ways to skin that cat. Here's one. Here's another. And get creative. The bad guys will get creative so remember that when you're doing your analysis.
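
At its simplest, the mapping-and-rating step can be sketched as a likelihood-times-impact score. The assets, ratings, and 1-5 scale below are illustrative, not a formal methodology:

```python
# One simple way to skin the cat: score each risk as likelihood x impact
# over the asset list from step 2, then sort into a priority list.
def rank_risks(risks):
    """risks: list of (name, likelihood 1-5, impact 1-5) tuples.
    Returns the list sorted highest score first."""
    return sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

priorities = rank_risks([
    ("laptop theft",           4, 3),   # likely, moderate impact
    ("SQL injection on web01", 3, 5),   # plausible, severe impact
    ("datacenter flood",       1, 5),   # rare, severe impact
])
# the first item is your top priority; everything circles back to this list
```

A simple multiplication won't capture creative attackers, which is exactly why the scores should be revisited, not carved in stone.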

4. Never assume
If you don't see it for yourself, you shouldn't assume it was done correctly and completely. This is what audits should be about. Assumptions have a way of coming back at you in the worst way - like the confidential data you didn't know existed that is stored on the systems you didn't know were connected to the Internet. It's safe to assume one thing - you'll never know everything.

5. Don't compromise yourself
This is about more than just ethics (which is important); it's also about segregation of duties and whose orders you obey. The security team should never report to the IT director. IT's mission is to use technology to fulfill the business objectives. Security's mission is to use technology to fulfill the business objectives safely. Sometimes these things overlap, sometimes not. When push comes to shove, IT will let security slide to make a deadline. There are times when security can be sublimated to the greater mission, which brings us to...

6. Remember who signs your paycheck
This is a corollary flowing from items 5 and 1. Just because the organization wants to do something risky, doesn't mean you need to be a roadblock. Your job is to provide information to the decision makers about risk. If the organization is willing to take on the risk, then your job is to make sure it can be done as safely as possible. Remember, business is about risk. And you can never be 100% secure.

7. You can outsource tactical tasks, but never your strategic thinking
I've seen a lot of organizations outsource their firewalls, their log reviews and major project implementations. Sure, if you've got a very tight set of expectations locked into the contract that you can verify on an ongoing basis (see #4). I've even seen organizations hire in consultants to do things like write their entire security policy or DR plan. You can bring in consultants to help with these things, but make sure you're feeding them the strategy to build upon. You need to make sure that these outsiders are creating solutions that are as flexible and intimately fitting as a pair of good jeans. I've seen organizations throw down tens of thousands of dollars for cookie-cutter security documentation which might get them through an audit but doesn't provide more value than that.

8. Stock your tool chest appropriately
Hat tip to shrdlu for pointing out the Alton Brown method of choosing tools. Whenever you can, choose a multitasker over a unitasking tool. You've got a limited budget and you never know what the business guys are going to throw at you (see item 6). The best deals are things you can use to protect yourself in lots of different ways. For me, DLP is useful as a discovery tool (see 2), an access control, and even a general awareness tool (see 3). If you can't afford a dedicated virtual server sitting around waiting to guest-host the latest and greatest VMware security tools, then at least have some burned ISOs ready to go.

9. You can break the rules when you've mastered them
Until then, implement the best practices and the PCI compliance standards. They're there for a reason. And most people are getting hacked because they're forgetting to do the simple well-known stuff. This also applies to enforcing your own rules. If you truly understand the security policy, then you'll know when you can bend it (see item 6) and when you must enforce it (item 5).

10. Network
In person, online, at conferences, locally and around the world. Meet other security people and swap war stories. You'll want the advice and you'll need the commiseration. I try to attend at least one national conference a year and four local ones. Plus my blog, the Security Twits and my Brazen Careerist network. Find a mentor and be a mentor. It's important to give and to take. Even if you don't think you have something to contribute, you do (even if it's only to share your fails). And for many of us, the problem is the opposite. Stop bragging and just shut-up-and-listen. Nobody likes a know-it-all.

Christmas Bonus Item

11. The question of specialization
If you're not yet far along in your career, you will discover that the consummate security professional is expected to know everything about security. To be worth anything, you should at least be competent with the basics, like the ISC2's common body of knowledge. But at some point, you'll be tempted (either by yourself or your organization) to start specializing. If you do end up specializing, my advice is to pick a couple of specialties. Not only does it make you more layoff-proof, but it's also a lot more intellectually interesting. Some of us end up specializing in being generalists (hah), which really means we end up specializing in management because we spend more time overseeing things than actually doing things. That's fine; just get very good at all these items. Heck, if you're a Heidi fan, you'll notice that our beloved geek girl detective specializes in forensics, penetration testing (social engineering, physical security, info reconnaissance) and malware analysis.

Tuesday, October 27, 2009

Why do pen-tests suck?

I was just listening to the Exotic Liability Podcast and once again, Chris and gang were lamenting the sorry state of pen-testing. While I've ranted before on the poor quality of the risk reporting in pen-tests, EL was lamenting the watered-down nature of most testing.

Specifically, they asked "Why are pentests so limited?" And that's true. In most external security testing (which includes both pen-testing and vulnerability scanning), there is often no intelligence gathering, no social engineering testing, and no physical security testing. Of course, no "cheating", like hitting DNS or business partners, either. Very often the scope of the attack is limited both in targets (only touch these assets and these IP addresses) and in time (you can only attack us during this timeframe and spend only 40 hours on the testing). Implied by these restrictions are still more restrictions - no time for extensive manual testing, deep analysis, or reverse engineering.

A watered-down test of your defenses means a myopic analysis of the strength of your perimeter. And remember, even in the best of times, security testing only tells you two things: where some of the holes are and a measure of the skill of the attacker. Passing a security test never means you are secure. The more "real world" your testing, the closer you approach some kind of reasonable measure of useful information about possible holes. But why water them down?

Well, the obvious reason for these limitations is not wanting to spend a lot of money on consultants. Of course, I think this is a distraction. Having been a tester and now one who hires testers, I can tell you a bigger reason is not wanting the liability. Consider: most testing going on right now is because of compliance. PCI requires vulnerability scanning. Most organizations acting as custodians for other organizations' data are beholden to demonstrate "best practices" - and that includes pen-testing. And here's the real rub - many auditors and customers want to see the results of those security tests.

As a tester, I've also been told by very large e-tailers that they were limiting the scope of our engagement not because they knew we wouldn't find anything, but because they knew we would. They knew we would find too many security issues for them to feasibly fix without going out of business. And if they had a report of all those holes, well, now they're liable for fixing them.

So what's a poor organization to do? They need to hire someone to do security testing who has a strong reputation but, at the same time, won't do too good a job. Credibility but not competence. Or barring a lack of competence, someone who will sell them a testing service so cookie-cutter that the scope will automatically be limited to basic scan-and-patch findings. Enter the big organizations, like Veri zon Cyb ertrust, I BM, Hac kerSafe, etc. Yes, there is some collusion there. But hey, it's all about staying in business and meeting unreal expectations. After all, most people don't actually want to pay to have their data protected. At least not what it would really cost.

BTW, you can lather, rinse, repeat this post for the entire financial audit industry. See Enron, WorldCom, Lehman Brothers, WaMu, etc.

Monday, October 26, 2009

The art and science of infosec

"The art of war and the science of war are not coequal. The art of war is clearly the most important. It's science in support of the art. Any time that science leads in your ability to think about and make war, I believe you're headed down a dangerous path. "
Lieutenant General Paul K. Van Riper

I think it's no different in infosec, especially in the senior decision-maker roles. Sure, there is cool technology to learn, awesome risk analysis models to study, and complex financial calculations to crunch, but in the end, these are but tools for the practitioner, not ends in themselves. Just because some report said some risk should be rated high doesn't mean it should be taken at face value. Nor should any defense be considered adequate for any length of time.

Too many security folk, especially consultants and auditors, seem to fall into the trap of having the science drive their work more than the art. I think there is a tendency to do this since many of us infosec folks started off in engineering. And yeah, in theory, engineering should be tamed by mathematics and science. But security, especially defense, has a huge human element. And this is where the art is necessary.

Optimizing specific defenses with statistical analysis is useful, but remember that attacks evolve. By the time you perfect a defensive technique, it'll be obsolete. For an example, read up on the history of the invincible Fort Pulaski.

But, it's still better than the cargo cult science of best practices in security.

What skills are useful in the art? Obviously experience and people skills. But to be more specific... well, off the top of my head: good threat modeling (with a healthy dose of game theory), logistics, behavioral economics, theory of mind, what my boss calls "BS detection", projecting integrity (not tripping other people's BS detectors), conviction, and courage.

Friday, September 4, 2009

NCA Security & Technology Conference '09

I'll be on a panel at the NCA Security & Technology Conference '09

The subject is DLP, Risk and Compliance.

Been plenty busy lately, but hopefully I'll have one or two intelligent things to say.

Saturday, July 4, 2009

Toorcamp Top Ten Things

I was very proud to both attend and be given the privilege of speaking at the inaugural hacker camp for the USA. I'm sure in years to come, Toorcamp will only grow bigger and bigger. I know there were a lot of logistical problems, but I think the staff battled with them brilliantly.

Here are my top ten moments, in no particular order:
  1. The raising of the pirate mast at HBL.
  2. Meeting lots of cool people and their cool vehicles.
  3. Finally meeting Leigh F2F. She's even more interesting and intelligent in person. My only disappointment was her hair was a normal hue (job hunting, she said). No matter her hair, I know she'll soon land in a great job.
  4. Touring the missile silo!
  5. Mudsplatter's drunken talk on messing with people's heads. Worthy of the best stand up comedy, and despite my best efforts, I learned something.
  6. Willow's ignite talk on parkour. It met my criteria of learning something unexpectedly new and interesting. I also found elements of parkour similar to what I'd learned when I studied Aikido.
  7. Giving my talk and having it pretty well received.
  8. Levitate.com and their silly publicity antics, including that emo concert which I'm sorry I missed (not).
  9. The friendliness, intelligence, and creativity of all the folks who were gracious enough to share their booze and time with me.
  10. Finally getting home and washing off all the cursed ash.

Friday, June 26, 2009

What went wrong?

Another day, another breach notice in the mail. This one to my wife yesterday.

What I want to know is:
  • What merchant breached the data?
  • How many other cards were breached?
  • How long after the breach was this detected?
  • How was this detected?
  • How long before the lapse that allowed this breach is fixed?

What are the odds that calling the 1-800 number will give us these answers?

Tuesday, June 16, 2009

IT Infrastructure Threat Modeling Guide.

Russ McRee (now at Microsoft) has just released the official 1.0 version of the IT Infrastructure Threat Modeling Guide.

I contributed a teeny tiny little bit of reviewage to this when it was in beta, and I have to say, it looked real good. A nice first jab at the problem of looking at the whole of your infrastructure risk-wise. At the time, I was already using a similar model at work, but I'm definitely going to be adding this model to the mix.

It's worth a read.

PS: Russ is a great guy and totally open to feedback. If you've got something intelligent and useful to say about the model, please do speak up.

Thursday, May 28, 2009


I'll be presenting at ToorCamp this July. I've chosen to speak on something I've never publicly talked about before, though I've been talking about it a lot behind closed doors for a while. It's not a new idea, but I think it's an idea that's worth looking at. I call it "The IED defense," but it's really about using deception and counter-intel to trip up intruders.

The coolest part is I'll be speaking here:

Tuesday, May 12, 2009

Losing your infosec innocence

A lot of people talk about how cool my job must be and really want to get into the security field. Well, not that I blame them, but there are parts of this job that are really tough. And it's usually the thorny, emotionally painful stuff that's the toughest.

A good part of the job is keeping secrets, because as the security officer, you're privy to a lot of behind the scenes info. Often painful info, like who's under investigation, who's about to get fired, or what huge horrible screw up is being whitewashed over. And no, we can never ever talk about that kind of thing, so it sits inside of you and stews.

Then there's the especially nasty stuff, like doing forensics and analysis on what people might have thought was private. Then you uncover a lot of icky personal private details - things you warned them not to put on corporate systems (assuming you have a solid acceptable usage policy). I'm not just talking about reading emails between husband and wife at home (cuz that's happened too), but graphic sexual messages between two co-workers having an affair. The kind of stuff that makes you feel like taking a shower afterwards. And because it's not directly part of your investigation, you may delete it and move on - hopefully pretending you never saw it to begin with. At least on two occasions in my life, I've had to do digital forensics on computers owned by recently deceased friends. A lot of this kind of baggage, I pour back into the Heidi stories.

Now, no time is worse than your first time. How did I lose my infosec innocence? Although I've been in security off and on for about 20 years, and have had it directly in my job title for the past eleven, I really lost my security innocence about ten years ago. I won't go into details (because you never can), but the upshot was I developed a specialized tool (now it's a standard product) that detected installs of inappropriate software on workstations. Inappropriate doesn't mean games or pr0n; I mean hacking tools and such. My tool fingered a co-worker. We weren't close friends, but he was someone I liked and part of the gang who went drinking after work. Someone I found interesting and pleasant to work with. But also someone who really shouldn't have been loading that kind of software, especially in the type of secure environment we ran.

Now, I'd been involved in firings before - it's hard to be in IT any length of time and not be directly in the loop as someone is marched out the door. But in this case, I had to be the policeman and the prosecutor for the case. I had to present my evidence to his boss, interview his co-workers (whom I also knew) and then discuss the matter with internal audit and outside counsel. Then it was left to me to damn him and advise my superiors that he be terminated immediately. They took it a step further and called a company meeting to discuss what had happened and why this sort of thing would not be tolerated. It was totally the correct thing to do from a security perspective and the best thing for policy and morale. But I still felt like a rat. And I still feel like a rat.

This is a hard job and a lot of what's tough about it, they don't teach you in a classroom.

Wednesday, April 29, 2009

Pay attention

What the recent Verizon Breach Report hammers home once again is that people are still not taking the basic, known steps to secure their systems.


I'm not sure what the cognitive breakdown is. Perhaps it's the human mind's tendency to be attracted to the new and different while ignoring the routine. My own experience in security work mirrors this. Whenever a new security initiative drops down from on high, for the first month or two I see staff scurry about implementing the controls and following policy. Then after the shine wears off, an interesting phenomenon happens. It's not that they forget about security. In fact, they are still fixated on it. I hear things like "Well, we can't do Project XYZ. How would that affect our security?" or "Oh, if you're going to build a new server, then we need to make sure it's in line with security plans." Sarcastic or not, at least they're thinking about security. But I suspect it's not all sarcastic. I often see very long, detailed project plans about how to secure some new esoteric service - often with meticulous lockdown steps enumerated for even the most unlikely of attacks.

But of course, a quick check of basic processes finds that the same people who are bringing up security for every new initiative or system change are also getting sloppy with the daily routine things they're supposed to be doing. They're making extraneous firewall changes; they're using weak passwords; they're not patching; they're turning off logging to fix something and leaving it off. Oh, and they don't notice that because they're not reviewing logs either. They're busy and they'll get to all these things when they can. And then they forget.

The solution is often to install a massive administrative and technical compliance infrastructure to double-check everything that everyone is supposed to be doing. Assume the breach, even for the internal processes. Costly in time and money, but sometimes unavoidable.
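
At its core, the double-checking is just diffing what you expect against what you find. A tiny sketch, with hypothetical control names and values standing in for a real baseline:

```python
# A sketch of the "double-check everything" idea: compare the settings
# you expect against the settings you actually observe. The keys and
# values below are hypothetical examples of daily-routine controls.
def drift_report(expected, actual):
    """Return {setting: (expected, observed)} for every control that has
    drifted from the baseline (including controls that vanished)."""
    return {k: (v, actual.get(k))
            for k, v in expected.items() if actual.get(k) != v}

baseline = {"logging": "on", "min_password_len": 12, "patch_level": "2009-04"}
observed = {"logging": "off", "min_password_len": 12, "patch_level": "2009-01"}
# drift_report(baseline, observed) flags logging and patch_level
```

The expensive part isn't the diff; it's collecting the "observed" side honestly and making someone accountable for reading the report every day.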

Thursday, April 2, 2009

Write clear risk assessments

The conclusion of our analysis shows that the data does not contain anything we can not share with this particular third-party.


Remember Orwell's advice about double negation:

"One can cure oneself of the not un-formation by memorizing this sentence: A not unblack dog was chasing a not unsmall rabbit across a not ungreen field."

Friday, March 20, 2009

Mapping the Unknown Unknowns

There comes a time in an InfoSec professional's career when they're forced to do a risk assessment. I know, they're a big pain in the butt and no one ever reads them, but some people seem to think they're kind of important[1]. I say if you're going to do it, you might as well get some use out of the thing.

First of all, I'm not going to explain some formal risk assessment methodology. There are far too many other sources out there for that. What I am going to talk about is the general stance you bring to an analysis. As the poet Rumsfeld said, how do we deal with the unknown unknowns? This is where your prejudices can color an analysis and you could miss something important. Hopefully, by better defining the known unknowns, we can shrink the size of the unknown unknowns. Here's where I start:

Who is qualified to be working on this?
1. You? Do you really understand what is going on here? Were you paying careful attention to what was presented? One way to check yourself is to paraphrase things back. Seriously, I can't tell you how many times I've started solving the wrong problem simply because I misunderstood what I was being told.

2. Are the people giving you data qualified to give you what they're giving you? Nothing seems complicated to the person who doesn't know what they're talking about.

How are people involved?
1. Generally, the more people are involved, the greater the chance of error. And hastily implemented automation can magnify that.

2. Will people have the opportunity to take reckless actions? Recklessness boils down to knowing what a reasonable person should have done and knowing the possible outcomes, but going ahead and doing the dangerous thing anyway. I'm willing to say this is somewhat uncommon in infosec because people rarely understand what a reasonable person should be doing, or the real probability of a bad outcome.

3. Speaking of reckless, how can someone's personal misjudgment compromise the entire operation? For example, one guy surfing porn could bring down a costly lawsuit. You need to be aware if those kinds of situations exist in whatever you're examining.

4. Can you truly say you understand all of the user’s intentions, all of the time? Unless you’re Professor Charles Xavier, this is another unknown that should be considered.

How is technology involved?
1. Software will always be buggy; hardware will always eventually fail; and operational and project emergencies will always occur. What happens when they do?

2. If you’ve got a track record of the technology involved, it’s helpful to look not just at the failures but the “near misses”. How many close calls were there with that tech and what could have happened if it had gone pear-shaped? Just because it worked up to now, doesn’t mean it will keep working.

3. How polluted is the technology? Is it well-maintained and well-understood? What are the dependent linkages? How many moving parts, software or hardware? How resilient is the system to delays or failures? How many outside parties have their fingers in the system? Are you sure you're aware of all the outside parties and linkages?

Some specific known unknowns about technology
1. The systems you don’t know about
2. The data that you didn’t know existed
3. The systems storing data that shouldn’t be on that system
4. The connections you don’t know about
5. The services on those systems that you don’t know about
6. The accounts, privileges or firewall rules that you don’t know about

These are all things that you will need to account for when you’re doing a risk analysis and filling out those worksheets or forms. And hopefully the solution deals with these things in one way or another – if nothing else at least accepting the risk that these things exist and crossing your fingers.

All of this stuff can be a lot to keep in your head, but I've extracted a few insights from this process to keep me on track:

  • It will not always be obvious which technologies or processes are relevant to the security of a system. Follow the money (or data, or control).

  • It is difficult to maintain a secure, operational system in a changing environment. Assume things will get broken and be prepared to deal.

  • Listen to complaints. Make sure there is a way for complaints to get to you, from both the people and the systems. Even if the complaints are wrong, they're complaining for a reason. Figure out the reason.

  • There will always be people in positions of trust who can hurt you occasionally.

  • Security policies should allow for workarounds.

  • Workarounds will create vulnerabilities.

  • There will always be residual risks.

  • Assume everything is insecure until proved otherwise (see name of blog).

[1] Okay, I'm kidding and you know it. You can probably get through your entire career without doing risk assessments. Just keep buying firewalls and hope for the best.

Wednesday, March 18, 2009

Build vs Buy - the auditor's perspective

Sat through a comprehensive demo of IBM's Tivoli Compliance Insight Manager. Overall, the product is another SIEM, which means it aggregates logs from a wide variety of servers and lets you write queries against the data. In short, if your servers are configured to see something and log it, then you can alert and report on it. That's all well and good.

Here's my problem - my requirements include pretty tight change control oversight. I need to be able to confidently tell auditors that I am aware of all unauthorized changes to my systems. Now here's where the rubber meets the road: our team developed a customized change control monitoring system that's part log scraper, part file watcher (ala Tripwire), with some dashes of config dumping-&-diffing. It's laser-focused on our environment, our apps, and the types of work (and mistakes) our Operations team does. It produces a daily, mostly readable report that gives me a very accurate answer to the question "what changed yesterday?" Even when the system has problems, the data is still captured and flags about the errors are usually thrown.
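
For the curious, the file-watcher piece of a system like this can be sketched in a few lines. This is the core idea behind Tripwire, enormously simplified; the functions here are illustrative, not our actual code:

```python
# Tripwire-style change detection, greatly simplified: hash the files
# you care about, store a baseline, and diff against it daily.
import hashlib
from pathlib import Path

def snapshot(paths):
    """Map each file path to the SHA-256 digest of its contents."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def what_changed(baseline, current):
    """Return paths whose hash differs, appeared, or disappeared since baseline."""
    return sorted(k for k in set(baseline) | set(current)
                  if baseline.get(k) != current.get(k))
```

The real work in our system is everything around this kernel: deciding which files matter, correlating hash changes with approved change tickets, and keeping the baseline itself tamper-resistant.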

But, and this is a big BUT: when auditors see the report and see that we developed this system in-house, they suddenly become very inquisitive. "Oh, it's home-grown. Well, we need to test it." It's not trustworthy. Every piece of the system is in question. Okay, that's understandable, and we do our best to deal.

However, if I were to buy this IBM system (or any professional system), would the auditors feel the same way? One would hope they would have some doubts about how the system was implemented and how accurately it monitors. So far in my overview of the vendor landscape for these types of products, I've found no particular product has the monitoring coverage we need. So if I were to buy a single system (and I really could only afford a single system of this magnitude), I know for a fact that I'd be missing about 20% of the changes being made on my network.

What I wonder is this: what is the real value of one of these professional change management tools? I suspect it's the trustworthiness of the brand name. I know I've been through this argument before with open-source homemade firewalls versus professional products, but at least the products go through some kind of testing (Common Criteria, ICSA, etc.). Moreover, that still doesn't address the concept of "best fit." We all know that in-house works better (but can be more costly to maintain) than COTS products.

For the matter of change control, I felt that best fit was more important, since I needed (according to the auditors) to be able to confidently assert that I was aware of all changes. If I bought something off the shelf, I wouldn't be able to assert that (they're only catching 80%). I could buy something and then implement some homegrown stuff for the remaining 20%, but frankly, the effort on our part is about the same as just writing the whole thing ourselves. Plus we have the added bonus of being able to adapt to infrastructure changes better than a canned product.

I wonder how many auditors out there will see the product with its fancy dashboards and professional reports, go check the box "monitoring - compliant," and never question how well the system fits the environment? I bet a whole lot more than those who will needle me relentlessly on the effectiveness of our internally-developed system. So the real question becomes: is the cost of a canned product worth the cost of making the dimmer auditors leave me alone?

Tuesday, March 3, 2009

Snappy answers to vendor bullwash.

I hate dealing with slippery vendors, especially the ones who will be handling our confidential data. Here are some snappy answers to their weasely questions.

Q1) "No one has ever asked these questions before."

A1) "Either you've not been as clear with me as you've been with others, or no one else has been as thorough in their investigations as we are. Now can you please answer the question?"

Q2) "Look, BIG-COMPANY-NAME does business with us and they don't have any problems, so why do you?"

A2) See A1

Q3) "Why are you asking for that? Legally, we're only obligated to do half of that."

A3) "Because my requirements exceed those of the general compliance requirements and fall under tighter regimes such as HIPAA, PCI, etc."

Q4) "Sure, we do that all the time. But look, we can't modify our agreements to show that. It's too much legal overhead, especially since we use the same contract for everyone. But I promise, we'll actually do that."

A4) "How about we don't sign any agreement at all. But don't worry, we promise to pay you on time."

Q5) "Here is our SAS-70 management report. And we get quarterly pen-tests too. Aren't we great?"

A5) "I'm very impressed by all your certifications and audits. Can I see the actual reports instead of just the executive tear-off? Can I share the reports with my external auditors?"

Q6) "Oh, we don't have any third-party risk management practices simply because we don't use any third-parties. Why would we trust a third-party ever?"

A6) "Who cleans your offices? Do you run your own Internet and phone cables? Do you manufacture all your own software and hardware?"

Q7) "Oh, that item in the agreement? That's just in there because legal made us put it in. We've never had to invoke that."

A7) "If it's not going to be invoked, then remove it. Otherwise my legal will insist that we treat that requirement as if it will be invoked. So we need to clarify what is going on here a lot more."

That's all I could come up with off the top of my head. I'm sure I'm missing some classics. Feel free to leave your own snappy answers in the comments.

Friday, January 2, 2009

Give me something useful

Sadly, I agree a lot with what Alex Payne blogged:

Much of the tech world is obsessed with engaging in macho pissing contests, but no part more so than computer security. In the case of yesterday’s announcement, the researchers in question were more concerned with their ability to present their findings at a popular hacker conference than with guaranteeing the safety of the Internet.

While presenting data on new threats and vulnerabilities is useful in the security world, it's just not very useful to me. For the majority of us security folks, we're heads down in our cubicles every day, desperately trying to swim upstream of the new vulnerabilities, the new projects that break the organization's security model, the treadmill of compliance obligations, and educating the unwilling or unmotivated. The last thing I need is to hear more FUD. And yes, most of these big announcements were based on things I always assumed were weak to begin with (reread the title of this blog). Yeah, I blogged about this quite a while ago, but it bears repeating.

What do I want to hear about? Well, since security != operations, we often have to come up with security band-aids to slap over the operational heaps-o-junk (75% of my job is doing this), so how about some ideas for tools or techniques that fix this. Specifically:

How about a comprehensive method of determining technical vulnerabilities across all my infrastructure? And the method needs to accommodate an aging, wide-spread Katamari ball of stuff comprised of a variety of Windows (2k, 2k3, XP, Vista), Linux (RH3-5, Ubuntu, CentOS), a handful of Macs, and a variety of network devices (Cisco, Netgear, F5).

And maybe patch/version management in that fluid, heterogeneous environment.
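Even the reporting half of that wish is mostly bookkeeping. A hedged sketch of what version-drift tracking might look like once per-host package inventories are already collected (the collection step itself, via ssh plus `rpm -qa`, `dpkg -l`, or the like, is environment-specific; all the names and data structures below are invented for illustration):

```python
def find_version_drift(baseline, host_inventories):
    """Flag hosts whose installed packages lag the approved baseline.

    baseline: {package: approved_version}
    host_inventories: {hostname: {package: installed_version}}
    Returns only the hosts with a problem, so a clean run is an empty dict.
    """
    drift = {}
    for host, packages in host_inventories.items():
        # packages present but at the wrong version
        stale = {
            pkg: (ver, baseline[pkg])
            for pkg, ver in packages.items()
            if pkg in baseline and ver != baseline[pkg]
        }
        # baseline packages missing from the host entirely
        missing = [pkg for pkg in baseline if pkg not in packages]
        if stale or missing:
            drift[host] = {"stale": stale, "missing": missing}
    return drift
```

The hard part, of course, is keeping the inventory collection working across that whole Katamari ball of operating systems, which is exactly what I'd pay for.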

Or maybe just a repeatable method for detecting and tracking critical information within the enterprise.

It'd be really cool to be able to enable users to have the data they are authorized to access on any host, any time, from anywhere.

Oh, and if you're going to sell me a tool, I'm not going to pay more than $25 per user per annum per problem solved, and 1 hour of work per week per 100 users. I've had lots of solutions pitched to me that solve just one problem, like change management, yet cost on the order of $1k per user. Get serious. Open source tools: you can convert the money to time spent installing and customizing, cuz my time is money, ya know.