Monday, December 12, 2011

Where the security rubber hits the operational road


Where have I been? Absent from this blog, that's for sure. Mostly silent on Twitter as well. What up with that? As you mighta expected, life's been busy around here. But I thought I'd give a little more detail on what that meant.

Let's start with some of the problems that had come to a head this summer.

Gunnar had a great point when he spoke of the Top 5 security influencers. For the past year or so, I'd already been working extensively with the Dev and QA teams - now to the point where QA is taking bugs directly from vulnerability reports from WhiteHatSec and IOActive and developing QA testing procedures from them, which is spotting more holes, faster, and getting them resolved more thoroughly.
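
For the curious, here's a minimal sketch of what that looks like in practice - a finding from a report turned into a repeatable regression test. The endpoint, payload, and URL are hypothetical, not our actual stack.

```python
# Minimal sketch (hypothetical target and payload): turning a vulnerability
# report finding into a repeatable QA regression test with pytest + requests.
import requests

BASE_URL = "https://staging.example.com"  # hypothetical test environment


def test_search_rejects_sql_injection():
    # Payload style lifted from the kind of finding a scanner reports (illustrative only)
    resp = requests.get(f"{BASE_URL}/search", params={"q": "' OR '1'='1' --"}, timeout=10)

    # After the fix, we expect a normal or rejected response - never a raw
    # database error or an unexpected dump of rows.
    assert resp.status_code in (200, 400)
    assert "sql syntax" not in resp.text.lower()
```

Every finding gets a test like this, so the hole stays closed after the next release.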

I already work alongside the DBAs and the Ops team, but I was really running up against a wall regarding just how much I could get done.  Basically, a lot of security projects were getting sidelined in favor of large infrastructure projects and I was starting to lose visibility into the whole process.  Worse, overlapping concerns between ops and infosec, like uptime and integrity, were losing ground to poorly planned horizontal expansion and vertical infrastructure issues.

The other big bugaboo in my life was audit. Enough that I'd ranted a bunch about it at Source Seattle.



Well, I knew in the coming year that we'd be shifting some of our operational and access control models to a more expansive system to accommodate some new business directives. This was going to be a problem, as our existing set of control objectives needed to be redone from the ground up. And this change needed to come from infosec as well as the operations team, but so far, they had not come to the table with any big ideas.  As mentioned before, they were too swamped fighting fires and keeping the lights on. This is an even bigger problem because auditing impacts operations as much as security, especially when you're doing SSAE-16 Type 2.  Without the needed changes and strong leadership, Ops and Infosec were going to sink together when the auditors came back next year.

Note on SSAE-16/SAS-70: They're not all worthless. It's naive to assume they're a perfect measure of security and operational efficiency. But it's also naive to assume they're worthless. As with anything, you need to do the work and actually read the report - does the scope match what you need to test? Was the audit conducted by observation and testing? Or just documentation review and attestation? A telling factor is how many individual control failures are noted. If the number is zero, then likely the report is scoped too tightly to be useful, or the auditors didn't do anything beyond interview people and leave.

Lastly, my Corporate Masters were having problems getting a handle on operational projects.  As I mentioned in the lead, important work was being left on the table while other lesser, more tactical projects were getting funded.  Operations needed help articulating and justifying critical infrastructure upgrades and aligning them to key business processes.  From my infosec seat, the solution was clear - risk to business objectives was not being defined with respect to operational problems.  A simplified example: conversations like "We need a better storage system because until we do, we can't take on any new customers" were not happening in the right places. Again, the team was too busy treading water to document, assess, analyze, strategize, and communicate.

Enter me.

Our Chief of Operations wanted to inject a big dose of risk management and clear process into the technical group.  Since infosec had already been doing that in spades for the past four years, he asked me to step in and manage a chunk of the operations team. So about four months ago, I got promoted. In addition to security, I am now in charge of the Infrastructure team.


Believe it or not, I passed on some more lucrative opportunities to take on essentially a doubling of my workload. I'm silly that way.

Actually, it's more than doubled, since it's been over a decade since I've done anything deep with infrastructure. For the past few months, I've been playing catch-up learning about large-scale virtualization systems, storage area networks, operational resiliency, and IT automation.

It's also more of a chance for me to bake security directly into the existing operational processes. Get my hands dirty and see what is going wrong.  For example, instead of auditing and dictating firewall policy, the team that directly manages network security reports to me.  I can directly see the effects of new security processes and technologies, both good and ill.  I'm on the front lines not just for security incidents (which also land in ops before they are identified and escalated to security anyway) but any other interruption events (which helps me design towards better integrity and availability).  So yes, it's good.

But not all good. Because, as I stated in the beginning, I've been treading water myself. In addition to getting to know the team and the technologies, I've also had to immerse myself in learning a bunch of new things.  I'll blog more in the future as I learn interesting new things.

So, where have I been? A: Where the security rubber hits the operational road.

Thursday, October 20, 2011

Compliance vs Security

Almost as exciting as a few other epic throwdowns, I am lecturing tonight for the University of Washington's infosec certificate program. A few quick highlights from my lecture notes, which are based on the Source talk I gave this summer.



Compliance-driven security forces you to make certain bets on the big enterprise roulette table - but I only have so many chips to play, so I prefer not to be constrained in my choices.

As a consultant I saw primarily two kinds of organizations: those practicing good risk management who wanted to get better, and those forced to be more secure because of compliance or a breach.

Why are there such restrictive compliance regimens? Without repeatable, evidence-based, agreed-upon risk methodologies, you cannot rely on third parties to make security decisions with your data that are aligned with your interests instead of theirs.

Compliance is a multi-dimensional object... and a lot more than three dimensions.  You've got width: the general rules of the standard, plus a few specific new ones based on how the organization interprets it.  This is the easiest dimension.  Then depth: most compliance acceptance is based on auditor opinion, which is driven by the individual's experience.  Plus, if the standard is somewhat worthwhile, it includes the appropriateness (relevance) of the risk model to your problem.  Then there are several dimensions of scope: time (past events, present controls, possible future events) and the usual physical, virtual, software, and network domains - what constitutes a barrier in each of them.  And of course, all of this is moving.

Security is also multi-dimensional but it has slightly different dimensions and moves differently than compliance.

Best practices?  In other words, "This worked in our organization once upon a time, so it should work for you too."

Where I live is the intersection of:
1. What the auditors demand we do,
2. What we need to do to keep from getting breached, and 
3. What we can afford to do.
And I'm not going to get all of all three.

Stupid compliance failures:
- Why is the absence of a particular control a risk? A high risk?
- How can I be 100% compliant with an open standard? With a product lifecycle of 12-18 months?
- Hey, that's a feature not a high-risk vulnerability - it all depends on your context
- Impact does not equal risk. You forgot probability. Dumbass. (See the back-of-the-napkin sketch below.)
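
Here's that sketch - the figures are made up; the point is that a huge impact with a tiny probability can be a smaller annualized risk than a modest impact that happens all the time.

```python
# Back-of-the-napkin annualized risk: impact alone isn't risk until you
# multiply in how often the event is actually expected to occur.
# All numbers below are made up for illustration.
def annualized_risk(single_loss_expectancy, annual_rate_of_occurrence):
    return single_loss_expectancy * annual_rate_of_occurrence

scary_but_rare = annualized_risk(500_000, 0.02)   # $500k impact, once in 50 years -> $10k/yr
mundane_but_constant = annualized_risk(5_000, 6)  # $5k impact, six times a year   -> $30k/yr

print(scary_but_rare, mundane_but_constant)
```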

Tuesday, June 28, 2011

I do it for the Lulz

I've always done it for the lulz.   Security, that is.  When I lecture to students up at UW, I try to warn them: don't do this job for the money. 

Anyone going into infosec for the money, the prestige, or for the job security is seriously misguided.   I tell them the only reason I do it is for the lulz.  I warn them to prepare to face the humiliation of having a server hacked, the terror of knowing the bad guys will outspend and outlast you, the tedium when nothing happens, and the crunch when you need to justify everything you've done to an auditor and the budget axe.  It's tough, it sucks, it's relentless, and I still love it.

What are my lulz?  The thrill of the chase. Heck, forget the chase, how about actually taking down some bad guys?

Besides the sexy stuff, I also get lulz making an organization safer… or even if it's just a friend of the family who needs some malware scraped off their machine.  Sure, it's tough work but it feels good to make the world a little safer, a little saner when you're done.  And knowing that you've denied some creep one more victim.   Lulz.

I get my lulz designing new systems, making them strong, making them resilient, making them better than they were before.  And digging deep and figuring out where the holes are, where the best place to fix things is, and then working on presenting that to the people that care.  Even more fun than all the puzzles and video games in the world.


Technology, and especially information security has always been more than a job to me.  More than even a career.  It's a calling.  Don't tell my boss, but I'd do this even if they didn't pay me.  It's what I do.  I can't help it. 

And to those who say we're losing the war: whatever.  I've been hearing that for years.  The world hasn't ended.   I know that more systems than ever are online now and somehow we failures are still protecting a majority of them.   I know we'll always be outnumbered, always outgunned.

That's what makes it a challenge.

Friday, June 17, 2011

Decompiling the week.

What an amazing week...

Fantastic time at Source Seattle.  If you didn't make it, you should really check out what you missed.

Great keynotes by Kris Herrin and Eric Cowperthwaite.  Nice getting the executive "big picture" on breaches and managing security.

Thoroughly enjoyed giving my talk, and the lively audience as well.

Fascinating lunchtime discussion with Marcia Hofmann about privacy and the nature of social media.  Enough to make me re-up my membership in the EFF.  You should consider it too.

Not only a great demo by Ron Gula, but he spent time after the session doing a one-on-one with me giving an insider's tour of their software.  It was great to see a master at work.  How often do you get that kind of access to that caliber of talent?

If Source wasn't enough, I had to get me some Agora where Kirk B. pointed me at this fascinating paper on assuming a state of compromise.  Since that's what this blog is all about, you should check it out.

Now I need to sleep...

Monday, April 11, 2011

The Kobayashi Maru

Trek nerds will remember the Kobayashi Maru as a requisite test for command.  It was a simulation of a no-win scenario that tested how a candidate would deal with utter failure.  As Spock said, "The purpose is to experience fear, fear in the face of certain death, to accept that fear, and maintain control of oneself and one's crew. This is the quality expected in every Starfleet captain."

I'll also say that this is a quality I expect in every security leader.  Except our fear isn't of death, but of breach.   Like the title of this blog, I think it's a useful exercise to assume you've been breached and plan accordingly.   For some, this is as radical an idea as contemplating one's own mortality.  Specifically, I've encountered more than a few executives and tech leads who are fully willing to go their entire career expecting that they will never experience a data breach.  For me, I saw it as an educational opportunity.   Teach them that organizations can survive a breach; it's a matter of doing the best job you can and being able to prove it.  It's also a matter of knowing where your weak spots are and what can happen.  And it's a matter of preparing for response.

If you take nothing else out of this post, take this:  perform a Kobayashi Maru test on yourself.  Test your incident response plan.   There are some great guides out there. Write a plan and test it.  Figure out some likely scenarios, run the steps, and see how you do.  For scenarios, you could even replay the last few major breaches and think about how you'd do if it happened in your org.  Not how you'd defend against it (cuz I assume y'all thought of that the second you read about it) but imagine it already happened.  Now think about how far it's spread internally, what data would be leaked out, what services would be offline, what forensic data you would have.  This will likely cause you to rethink some controls - are you logging enough?  Do you really have defense-in-depth?  Do you have an accurate data inventory?  Do you have all the critical personnel on speed-dial?  Do you have an organized method of contacting customers?  Figure this all out and share the data with your boss.  Tell her it's a good idea to plan for a disaster so it doesn't destroy the company.  How an organization responds to a breach is a crucial factor in a security program.
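
If it helps to make the exercise concrete, here's a toy scorecard I might use to run a tabletop - the questions are the ones from the paragraph above; the structure and scoring are just my own invention, not any formal methodology.

```python
# Toy tabletop scorecard: replay a breach scenario and answer the readiness
# questions honestly. The questions mirror the post above; scoring is arbitrary.
READINESS_QUESTIONS = [
    "Are we logging enough to reconstruct how far it spread?",
    "Do we really have defense-in-depth between the entry point and the data?",
    "Do we have an accurate data inventory?",
    "Are all critical personnel on speed-dial?",
    "Do we have an organized method of contacting customers?",
]


def run_tabletop(scenario, answers):
    """answers maps each question to True/False based on the exercise."""
    gaps = [q for q in READINESS_QUESTIONS if not answers.get(q, False)]
    print(f"Scenario: {scenario}")
    print(f"Ready on {len(READINESS_QUESTIONS) - len(gaps)} of {len(READINESS_QUESTIONS)} questions")
    for q in gaps:
        print(f"  GAP: {q}")


run_tabletop(
    "Replay of last quarter's headline breach, assumed already inside",
    {READINESS_QUESTIONS[0]: True, READINESS_QUESTIONS[3]: True},
)
```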

And when you want to point at other major breaches and chuckle with schadenfreude, you should think one thing - that could have been you.   You think you've got all your bases covered, you're locked down and unbreakable?  Think again.   And you know what, check again on those companies in 12 months and see how they're doing. Some are done and gone.  Others have survived, maybe even stronger.   And to those security folks there, I think they might have done a good job preparing for failure.   And I take that as a challenge. Again, Trek said it better than me.  This time it was Captain Pike, talking about the destruction of the USS Kelvin. "Your father was Captain of a Starship for 12 minutes. He saved 800 lives. Including your mother's and yours. I dare you to do better. "

Wednesday, March 30, 2011

What I've learned about Rugged in the past 24 hours

Well, I learned yesterday's post touched something in some people.  Based on the comments I got, both online and offline, I can guess a few of us are confused about what Rugged is about.   Especially those of us who've only read about it, instead of having it explained to us.   And sadly, some of us "saw it and dismissed it" as another security fad. Maybe this post can help fix that.

A lot of folks, including @joshcorman himself, stepped up to help me understand Rugged. Very nice, Lazyweb!


First, let's start with the problem (as I see it)


Software security programs have a poor raison d'être. This is likely because it's hard to define what "secure software" is.  (Heck, define "secure".)  Is secure software:
  • Resistant to cross-site scripting and SQL injection attacks (insert attack du jour)?
  • Bug-free?
  • 100% OWASP compliant (yes, I have been asked this)?
  • Free of high vulnerabilities?
  • Made with high quality?

Waste of time, right?  We all know secure is a sliding scale based on value and risk.  You can't arbitrarily define security, which makes it less than useful for talking to executives and business program managers.  So how do we frame the conversation in a useful manner?  Enter Rugged.

Rugged leap-frogs over all these definitions and points to the qualia we security grognards are jumping up and down about.  It brings it down to earth with a clear and sharp image that conveys the essential intrinsic properties of "secure software"

To answer my own questions:

1) How is Rugged different than any other Best Practices?
Well, it's NOT really a best practice… more of a framing technique… or (ulp) a paradigm.  I was expecting too much of Rugged to even put it in this category, it's just not that kinda thing.  It's just a way to simplify the dynamic and intangible.  Of course, we could apply some evidence-based analysis over time to see how effective it is in helping the non-security folks understand us.

2) Convincing the developers to write more secure/stable software isn't my problem. My problem is convincing customers and managers so that they'll let/encourage the programmers to write more secure code.
Ah, this would be Rugged's sweet spot.  Here is a meeting ground for the security team, developers and money spenders to agree on something that is useful and clear.   A way to communicate what needs to be asked for, what needs to be done and what the final product looks like.

3) Software security problems are deep and complex.
Actually, digging deep enough into Rugged, this issue is acknowledged.  And Rugged doesn't aim to solve these problems directly, but again, it gives us all something we can get our hands around when wrestling with them.

4) Rugged appears mysterious and embryonic.
Hopefully we can change that.  The more we spread the word (and ask questions), the less confusion we'll see.  So I'll light a candle now:

Here's how I would summarize it as guidance from management to the developers.

"If our software is Rugged, it is built to withstand adversity, tolerate anomalies, and always do what we intend it to do. Our customers depend upon this level of unyielding reliability; in fact, they expect nothing less. It is our responsibility to meet these expectations."

How's that sound?

Tuesday, March 29, 2011

Would someone please explain this Rugged thing to me?

I'm steeped in a huge SSDL project here at work - looking to move security in our development processes to the next level.  Lots of heavy lifting doing evaluation, analysis and reorganizing.  I'll throw in a shameless plug for WhiteHat Security who's helping us a ton.

Now, one of the things that came up in my search to see how to improve things was the Rugged Software movement.   Early on in the process, I foolishly mentioned it to our CTO as something to look at.   Why do I say this was foolish? Well, because at the time, I had only a cursory understanding of Rugged.   He went off and dutifully checked into Rugged only to find the bare documents on the website.  Indeed, it was a movement, but apparently not much else... at least at that stage of the game.  He came back to me confused and wondering why I had brought it up to him.  What was he supposed to do with this Rugged thing?  Oops, I had just wasted some credibility and an important ally's time.   A mistake I wasn't going to repeat.

Well, here we are months later, and I'm afraid I still have only a cursory understanding of Rugged.  Apologies to Josh and the other creators of Rugged, but I just don't see anything there worth passing on yet.  Maybe it isn't aimed at our developers? I don't know.  It wasn't clear.  I'll be the first to admit I've not attended any conference talks on Rugged (I admit here, I don't make it to many conferences) and I don't attend many webinars or online thingies (they're often hard to follow).  I have googliated a bit and haven't found much beyond a few news articles.  On the other hand, I have found tons of advice and guidance on practical secure development frameworks like BSIMM.

Overall, my big questions / confusions are:

1) How is Rugged different than any other "Best Practice"? 
Is there any evidence yet to show that it improves security?  Can I see it?   Can I share it with management?


2) Convincing the Developers to write more secure/stable software isn't my problem.
Talk to them, as I have, and most of them wouldn't mind writing more secure code.  Some of them even want to write more secure code.  And a certain chunk of them don't know how to write secure code.  I don't see how Rugged solves any of these problems very well.    The root of the problem comes from the fact that secure code still isn't spelled out in the requirements.  Developers can only do what the project manager demands, which is based on what the customer demands.   So if Rugged is aimed at convincing customers to ask for more rugged software, specifically and pointedly asking, then I'll admit it should be preached (but not to me, to my customers).


3) Software security problems are deep and complex.
A lot of security bugs are buried deep in old crufty code or libraries.  Even when all our developers are firing on all cylinders writing secure code, we're still excavating for fundamental faults and design flaws.  And when you land in those pits, you're dealing with Expensive Questions - redesign ($$$) or patch-and-move-on.   I need a movement that helps me make those decisions.


4) Rugged appears mysterious and embryonic.
I'm sure it will grow up to be influential and useful, but to be practical to me right now, I need something that's actionable that I can use with my Development team and management.    The story I mentioned in the opening about confusing my CTO cannot be repeated.   And beyond that, my executive team will ask for proof and metrics for any new development movements I propose.   I don't blame them.


So please, help me out here.  I am confused, what am I missing or misunderstanding?

UPDATE - People have stepped up to 'splain it to me (ha, my evil plan worked).  Read what I've learned here.

Tuesday, March 1, 2011

Good pen test reporting resource

I knew there were folks out there who could do a good job at this.   Instead of writing sloppy security reports, here's a positive example of how to do a better job at it by Steve Shead.  He's a security guy and a graphic designer, so no wonder I like his layout for pen test reporting.

Wednesday, February 23, 2011

Source Seattle

I will be speaking at Source Seattle

Here's the lineup

Friday, January 28, 2011

How much should you do to prevent malicious sys-admins?

In the age of Wikileaks, obviously trusted insider access must be controlled.   However, how much is enough?  Consider the following typical conversation between auditor and subject:

Auditor: What controls are in place to prevent employees from emailing confidential data?

IT Manager: All confidential data is secured in a separate environment where access is limited only to a dozen administrators… all thoroughly background checked.  Access to the environment requires strong authentication and access is logged in a tamper-proof data vault, so we know who did what.  Also, the rest of the environment is swept periodically with a DLP to ensure that no confidential data resides outside that controlled environment.

Auditor:  But what prevents an admin from emailing confidential data out of that secure environment?

IT: An admin would have to use his crypto key to open up a protected store in the separate environment and copy the data out to the main environment to use email.

Auditor:  So there are no email filters in place?  Alright, that's a finding.

IT: Wait, what are you saying?  Do you want us to protect against accidental exposure or do you want us to protect against a determined privileged insider?  If it's the latter, how do I prevent admins from viewing confidential data and copying it down on paper?  I mean, we log all access, but at some point admins will need root-level access.

Auditor: Uh huh.  I think I see another finding here.


As the Verizon Data Breach Report shows, insider misuse accounts for nearly half of the breaches.  Note that this particular report has US Secret Service data in it as well, so there is some good stuff on insiders.  On page 18, we see that 90% of internal agents attributable to breaches were deliberately circumventing controls for malicious reasons.  On page 34 we see 48% of breaches and 3% of records were because of "Misuse".  Of these, 49% were of the type "Embezzlement" - a trusted insider determinedly circumventing the controls for malicious purposes.   So yes, there are data to back up the need for controls on insiders.

Fortunately, there are many strong and somewhat easy (but not often politically easy) methods for lowering this threat.  First off, reduce the number of people who have access to the data, as the IT manager described above.  Second, add strong accountability and monitoring, which she also does.  And of course, background checks are pretty easy and common as well.
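
As a rough illustration of the first two methods, here's the kind of periodic access review I have in mind - the account data, field names, and thresholds are all invented for the example.

```python
# Hypothetical periodic access review for the confidential environment:
# flag privileged accounts with no documented justification or no recent use.
# Account data, field names, and the 90-day threshold are invented examples.
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)

admins = [
    {"user": "alice", "justification": "DBA on-call", "last_login": date(2011, 1, 20)},
    {"user": "bob", "justification": None, "last_login": date(2010, 9, 2)},
]


def review(accounts, today):
    for account in accounts:
        reasons = []
        if not account["justification"]:
            reasons.append("no documented business justification")
        if today - account["last_login"] > STALE_AFTER:
            reasons.append("no login in 90+ days")
        if reasons:
            print(f"REVIEW {account['user']}: {', '.join(reasons)}")


review(admins, date(2011, 1, 28))
```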

But it seems that is not enough for the auditor.  Fair enough, in some environments, maybe even stronger controls can be applied.  You would expect this to be the case in military and governmental intelligence systems, which is why the Private Manning case is so disheartening.

However, it is not surprising.  Technical controls for privileged usage can run rather expensive.  Last I tried to implement them, I was looking at at least $3,000 per admin ($5k when I factored in soft costs) for a system that would actually mediate (read: prevent, not just detect) privileged access.  And then the admins screamed about availability and manageability.   In short, it just wasn't feasible.  It didn't help that the systems you most want to protect (the ones holding the credit cards) are also the mission-critical money-making applications that are heavily SLAed.   So usually we stop with separation of duties, least privilege, non-repudiated access, audit trails, and background screening.

So far, I don't think I've said anything new that most security folks don't encounter every day.  But what I also hear all the time is the push for even more controls on insiders.  So where do we go from here?  How much is enough? Because to me, there is a clear point of diminishing returns on insider controls, and we're pretty much there.

Sunday, January 23, 2011

Peter Sandman and risk communication

Many of us in the infosec profession struggle with communicating risk.  Not only do we need to communicate it upstream to the decision makers, but we also must spread it wide and downstream to the everyday folks so they can do their jobs.

In my work in disaster preparedness, I stumbled across the work of Peter Sandman.  I've read most of his articles on risk communication, and I found a lot of useful wisdom in his advice on how to talk about scary potential future events.  Although his specialty is disasters such as pandemics and major industrial accidents, his breakdown of the psychology behind risk communication is sound.  And in many cases, an infosec practitioner must also deal with business continuity, so it can be directly useful.

One component of his advice I find most interesting is his breakdown of Risk = Hazard + Outrage.   He says,

In the mid-1980s I coined the formula “Risk = Hazard + Outrage” to reflect a growing body of research indicating that people assess risks according to metrics other than their technical seriousness: that factors such as trust, control, voluntariness, dread, and familiarity (now widely called “the outrage factors”) are as important as mortality or morbidity in what we mean by risk.

With this, he describes outrage management, which for us, is about how we handle incidents.  Not the technical pieces of incident response, but how we communicate the incident to all the stakeholders (executives, customers, auditors), with the ultimate goal of minimizing the reputation damage.   I see similar factors at play in communicating a massive oil spill and handling a public disclosure of a severe vulnerability in your product.

There are many interesting lessons on his site, and it's worth spending some time seeing what might prove useful for you.

Sunday, January 2, 2011

PDCA for IT InfoSec, much assembly required

"But ignorance, while it checks the enthusiasm of the sensible, in no way restrains the fools." -Stanislaw Lem, His Master's Voice

A lot of the tech industry worldwide has turned to the ISO 27k standard as a guide for getting their hands around IT security.  I say "getting their hands around" because I don't think, as a whole, we're up to the challenge of actually measuring and managing IT risk (but that's a post for another day).

The heart of ISO 27k is the Plan-Do-Check-Act (PDCA) cycle, the famous Deming Wheel.  Some even call it the Hamster Wheel of Pain because the process can be endless and ineffective if implemented sloppily.  Alex Hutton has recently pointed out that the ISO 27k standard doesn't say very much about whether your processes improve your security or not.  I'm inclined to agree, as the standard is primarily about the bureaucratic process of managing risk as opposed to defining the "real" work that needs to be done.   It can be wielded as bluntly and ineffectively as a SAS-70.  (Hint: like a SAS-70, you actually need to read an ISO 27k certification report and keep a close eye on the scope and how decisions were made.)

As a former IRCA Certified Lead Auditor for ISO 27k (my cert expired this past November), I was fortunate enough to get both deep and wide training in the standard from some very experienced and gifted practitioners. It led me to a deeper understanding of the standard, far beyond the page, and what it was trying to accomplish.

It also revealed to me how right Alex is in saying the standard is too rough to be applied without significant training and additional material. In fact, many apply the standard as the same old laundry list of "shoulds" and "musts" of controls (aka the 27002 list). But the toughest and most important piece of the standard is based on Deming's core concept.  Again, PDCA.   I have seen many organizations skim through Plan and race right to "Do".  Without a strong and detailed Plan, every other step is futile.

Do what? Why? And how much?  Check against what?  Act to correct back to what plan?  The essence of planning as I see it is something that is hard to define as a hard-coded procedure, which is perhaps why it is so watered-down in the standard.

A common fallacy in management is assuming that what works for one organization will work for others.   Cargo-cult mimicking of management processes is not only ineffective but dangerously misleading when certifications start getting thrown around.

Planning involves coordinating with the business side of the organization to discover the information flows, data repositories, business rules, and critical objectives.  Then working with upper management to define priorities and trade-offs.   After that is done, a thorough risk analysis of the dangers to those objectives has to be performed.  The standard does offer a risk analysis method, but it is simplistic and shallow compared to more in-depth methods like FAIR or FMEA.
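
For contrast, here's a bare-bones sketch of the kind of estimate a FAIR-flavored analysis produces - ranges and a Monte Carlo run instead of a single high/medium/low label. The distributions and parameters below are placeholders, not calibrated estimates.

```python
# Bare-bones, FAIR-flavored Monte Carlo: annual loss estimated from ranges
# rather than a single severity label. All parameters are placeholders.
import random


def simulate_annual_loss(trials=10_000):
    losses = []
    for _ in range(trials):
        # Loss event frequency: how many times per year the threat succeeds
        frequency = random.triangular(0.1, 4.0, 0.5)
        # Loss magnitude per event, in dollars
        magnitude = random.triangular(10_000, 750_000, 60_000)
        losses.append(frequency * magnitude)
    losses.sort()
    return {
        "median": round(losses[len(losses) // 2]),
        "95th_percentile": round(losses[int(len(losses) * 0.95)]),
    }


print(simulate_annual_loss())
```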

The final piece of planning is deciding how to treat those risks.   In the standard, this is documented in the Statement of Applicability, or SOA.   The SOA is a mapping of objectives to risks with the selection of a treatment method.  The list of controls in 27002 is suggested but not mandatory.  You can drop controls from your list if your analysis supports it.  The standard actually says: "Note: Annex A contains a comprehensive list of control objectives and controls that have been found to be commonly relevant in organizations.  Users of this International Standard are directed to Annex A (ISO 27002) as a starting point for control selection to ensure that no important controls are overlooked."   Let me repeat that: you do not, and probably should not, take the list of 133 controls in 27002 at face value, implement them all, and think you're done.   Here you have the flexibility to choose what works to deal with the risk to your organization's objectives.  That's the "applicability" part of the standard.
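
Here's roughly how I think of an SOA entry, expressed as data rather than paperwork - the risks, objectives, and justifications below are invented examples, not language from the standard.

```python
# The SOA as data: each entry maps a risk to a treatment decision, and any
# selected controls are justified (or excluded) by that decision.
# Entries are invented examples, not text from ISO 27001/27002.
statement_of_applicability = [
    {
        "risk": "Customer data exfiltrated by a privileged insider",
        "objective": "Protect confidentiality of customer records",
        "treatment": "mitigate",
        "controls": ["audit logging", "privilege management", "segregation of duties"],
        "justification": "High impact to a key business process; log it and least-privilege it",
    },
    {
        "risk": "Stolen laptop exposes marketing collateral",
        "objective": "These assets hold public material only",
        "treatment": "accept",
        "controls": [],
        "justification": "Data is already public; the control cost isn't worth it here",
    },
]

for entry in statement_of_applicability:
    print(entry["treatment"].upper(), "-", entry["risk"])
```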

I am really excited that Verizon is now giving us a more accurate picture of risk and controls in the real world.  I, for one, welcome our new Evidence-based Overlords - especially as a more in-depth list of control deployment tactics than ISO 27002.   As they say in medicine, half of what we know is wrong, but we don't know which half.  This is a step toward knowing, and the key is learning from others' mistakes.

You can see that a solid foundation is how the PDCA begins.   And as you move through the Deming Wheel, you "Do" and "Check" to see how well your controls are doing.   Not only whether they are being implemented correctly (which is where most people and auditors stop checking), but how appropriate and useful they are against the risks to the objectives.  You also should be "Checking" how accurate your original analyses of the business and risks were.  Then you "Act" to revise them appropriately.

But almost none of this is very explicit in the standard, especially to those who are used to the world of checklists and to-dos and have a tough time with deep business analysis and strategic planning.  But that is where the real value lies.  My problem is that if you know how to plan your infosec well, what do you need the standard for?  The ISO implementation guides do help a little (at an extra cost), but the hard stuff is to be found elsewhere.  The rest of ISO 27k just defines the paperwork format that is certifiable to the standard.

TLDR; If you understand IT strategy and analysis, you probably don't need the standard except for certification. If you don't, the standard isn't enough to help you.