In the age of Wikileaks, it's obvious that trusted insider access must be controlled. But how much is enough? Consider the following typical conversation between auditor and subject:
Auditor: What controls are in place to prevent employees from emailing confidential data?
IT Manager: All confidential data is secured in a separate environment where access is limited only to a dozen administrators… all thoroughly background checked. Access to the environment requires strong authentication and access is logged in a tamper-proof data vault, so we know who did what. Also, the rest of the environment is swept periodically with a DLP to ensure that no confidential data resides outside that controlled environment.
Auditor: But what prevents an admin from emailing confidential data out of that secure environment?
IT: An admin would have to use his crypto key to open up a protected store in the separate environment and copy the data out to the main environment to use email.
Auditor: So there are no email filters in place? Alright, that's a finding.
IT: Wait, what are you saying? Do you want us to protect against accidental exposure, or do you want us to protect against a determined privileged insider? If it's the latter, how do I prevent admins from viewing confidential data and copying it down on paper? I mean, we log all access, but at some point admins will need access to the root kernel.
Auditor: Uh huh. I think I see another finding here.
As the Verizon Data Breach Investigations Report shows, insider misuse accounts for nearly half of breaches. Note that this particular report includes US Secret Service data as well, so there is some good material on insiders. On page 18, we see that 90% of the internal agents attributable to breaches were deliberately circumventing controls for malicious reasons. On page 34, we see 48% of breaches and 3% of records attributed to "Misuse". Of these, 49% were of the type "Embezzlement", that is, a trusted insider determinedly circumventing the controls for malicious purposes. So yes, there are data to back up the need for controls on insiders.
Fortunately, there are many strong and relatively easy (though not always politically easy) methods for lowering this threat. First, reduce the number of people who have access to the data, as the IT manager described above. Second, add strong accountability and monitoring, which she also does. And of course, background checks are pretty easy and common as well.
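To make the first two concrete, here is a minimal access-review sketch in Python. The file name, field name, and account names are my own placeholders, not anyone's real setup: it compares who actually has access to the confidential environment against the approved admin list, flags the extras, and appends its findings to an audit log.

    # access_review.py -- illustrative only; file format and names are assumptions
    import csv
    import datetime

    APPROVED_ADMINS = {"asmith", "bjones", "ckumar"}   # the vetted "dozen" admins

    def load_actual_access(path):
        # Read the accounts currently granted access from a hypothetical export.
        with open(path, newline="") as f:
            return {row["username"] for row in csv.DictReader(f)}

    def review(actual_accounts):
        # Flag any account with access that is not on the approved list,
        # and any approved account that no longer appears in the export.
        unapproved = actual_accounts - APPROVED_ADMINS
        stale = APPROVED_ADMINS - actual_accounts
        return unapproved, stale

    if __name__ == "__main__":
        actual = load_actual_access("secure_env_accounts.csv")
        unapproved, stale = review(actual)
        stamp = datetime.datetime.now().isoformat()
        with open("access_review_audit.log", "a") as log:
            log.write(f"{stamp} reviewed={len(actual)} "
                      f"unapproved={sorted(unapproved)} stale={sorted(stale)}\n")
        for account in sorted(unapproved):
            print(f"FINDING: {account} has access but is not on the approved list")

Even something this crude, run on a schedule, keeps the "dozen administrators" claim honest.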
But it seems that is not enough for the auditor. Fair enough, in some environments, maybe even stronger controls can be applied. You would expect this to be the case in military and governmental intelligence systems, which is why the Private Manning case is so disheartening.
However, it is not surprising. The cost of technical controls for privileged usage can run rather high. The last time I tried to implement them, I was looking at at least $3,000 per admin ($5k when I factored in soft costs) for a system that would actually mediate (read: prevent, not just detect) privileged access. And then the admins screamed about availability and manageability. In short, it just wasn't feasible. It didn't help that the systems you most want to protect (the ones holding the credit cards) are also the mission-critical, money-making applications that are heavily SLAed. So usually we stop with separation of duties, least privilege, non-repudiated access, audit trails, and background screening.
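For a sense of what "mediate, not just detect" means, here is a toy privileged-command broker in Python. It is purely illustrative: the allow-list, log path, and policy are assumptions of mine, not any vendor's product, and a real gateway would mediate the whole session rather than wrap a single command.

    # priv_broker.py -- toy sketch of mediated privileged access; not a real product
    import datetime
    import getpass
    import shlex
    import subprocess
    import sys

    # Hypothetical policy: only these commands may be run through the broker.
    ALLOWED_COMMANDS = {"systemctl", "journalctl", "df", "uptime"}

    def log_decision(user, command, allowed):
        # Every decision, allow or deny, goes to an append-only log.
        with open("priv_broker.log", "a") as log:
            stamp = datetime.datetime.now().isoformat()
            verdict = "ALLOW" if allowed else "DENY"
            log.write(f"{stamp} user={user} verdict={verdict} cmd={command}\n")

    def main():
        user = getpass.getuser()
        command = " ".join(sys.argv[1:])
        if not command:
            sys.exit("usage: priv_broker.py <command> [args...]")
        program = shlex.split(command)[0]
        allowed = program in ALLOWED_COMMANDS
        log_decision(user, command, allowed)
        if not allowed:
            sys.exit(f"DENIED: '{program}' is not on the allow-list")
        # Mediation point: the command only executes after the policy check.
        subprocess.run(shlex.split(command))

    if __name__ == "__main__":
        main()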
So far, I don't think I've said anything new that most security folks don't encounter every day. But what I also hear all the time is the push for even more controls on insiders. So where do we go from here? How much is enough? Because to me, there is a clear point of diminishing return on insider controls and we're pretty much there.
“We’ve just traced the attack... it’s coming from inside the house!” How do you secure your network when the bad guys already have control of your servers? It’s so hard to keep up with the attacks; maybe it’s safer to architect with the assumption that you’ve already been breached. What does this entail?
Friday, January 28, 2011
Sunday, January 23, 2011
Peter Sandman and risk communication
Many of us in the infosec profession struggle with communicating risk. Not only do we need to communicate it upstream to the decision makers, but we also must spread it wide and downstream to the everyday folks so they can do their jobs.
In my work in disaster preparedness, I stumbled across the work of Peter Sandman. I've read most of his articles on risk communication and found a lot of useful wisdom in his advice on how to talk about scary potential future events. Although his specialty is disasters such as pandemics and major industrial accidents, his breakdown of the psychology behind risk communication is sound. And in many cases, an infosec practitioner must also deal with business continuity, so it can be directly useful.
One component of his advice I find most interesting is his breakdown of Risk = Hazard + Outrage. He says,
In the mid-1980s I coined the formula “Risk = Hazard + Outrage” to reflect a growing body of research indicating that people assess risks according to metrics other than their technical seriousness: that factors such as trust, control, voluntariness, dread, and familiarity (now widely called “the outrage factors”) are as important as mortality or morbidity in what we mean by risk.
With this, he describes outrage management, which for us is about how we handle incidents: not the technical pieces of incident response, but how we communicate the incident to all the stakeholders (executives, customers, auditors), with the ultimate goal of minimizing the reputation damage. I see similar factors at play in communicating a massive oil spill and in handling a public disclosure of a severe vulnerability in your product.
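Sandman is clear that this is a conceptual formula, not something you calculate, but a toy sketch can make the point visible. Every number, weight, and scale below is my own invention, not his:

    # perceived_risk.py -- toy illustration of "Risk = Hazard + Outrage"; all numbers invented
    OUTRAGE_FACTORS = ["trust", "control", "voluntariness", "dread", "familiarity"]

    def perceived_risk(hazard_score, outrage_scores):
        # hazard_score and each outrage factor are scored 0-10 (a made-up scale)
        outrage = sum(outrage_scores.get(f, 0) for f in OUTRAGE_FACTORS) / len(OUTRAGE_FACTORS)
        return hazard_score + outrage  # additive, following the formula as quoted

    # A technically minor incident can still be perceived as high risk when outrage is high.
    print(perceived_risk(2, {"trust": 9, "control": 8, "voluntariness": 7,
                             "dread": 9, "familiarity": 6}))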
There are many interesting lessons on his site, and it's worth spending some time seeing what might prove useful for you.
Sunday, January 2, 2011
PDCA for IT InfoSec, much assembly required
"But ignorance, while it checks the enthusiasm of the sensible, in no way restrains the fools." -Stanislaw Lem, His Master's Voice
A lot of the tech industry worldwide has turned to the ISO 27k standard as a guide for getting their hands around IT security. I say "getting their hands around" because I don't think that, as a whole, we're up to the challenge of actually measuring and managing IT risk (but that's a post for another day).
The heart of ISO 27k is Plan-Do-Check-Act (PDCA), the famous Deming Wheel. Some even call it the Hamster Wheel of Pain, because the process can be endless and ineffective if implemented sloppily. Alex Hutton has recently pointed out that the ISO 27k standard doesn't say very much about whether your processes improve your security or not. I'm inclined to agree, as the standard is primarily about the bureaucratic process of managing risk as opposed to defining the "real" work that needs to be done. It can be wielded as bluntly and ineffectively as a SAS 70 (hint: like a SAS 70, you actually need to read an ISO 27k certification report and keep a close eye on the scope and how decisions were made).
As a former IRCA Certified Lead Auditor for ISO 27k (my cert expired this past November), I was fortunate enough to get both deep and wide training in the standard from some very experienced and gifted practitioners. It led me to a deeper understanding of the standard, far beyond the page, and of what it was trying to accomplish.
It also revealed to me how right Alex is in saying the standard is too rough to be applied without significant training and additional material.
In fact, many apply the standard as the same old laundry list of "shoulds" and "musts" of controls (aka the 27002 list). Yet the toughest but most important piece of the standard is based on Deming's core concept: PDCA. I have seen many organizations skim through Plan and race right to "Do". Without a strong and detailed Plan, every other step is futile.
Do what? Why? And how much? Check against what? Act to correct back to what plan? The essence of planning as I see it is something that is hard to define as a hard-coded procedure, which is perhaps why it is so watered-down in the standard.
A fallacy in management is assuming that what works for one organization will also work for another. Cargo-cult mimicking of management processes is not only ineffective but dangerously misleading when certifications start getting thrown around.
Planning involves coordinating with the business side of the organization to discover the information flows, data repositories, business rules, and critical objectives, then working with upper management to define priorities and trade-offs. After that is done, a thorough risk analysis of the dangers to those objectives has to be performed. The standard does offer a risk analysis method, but it is simplistic and shallow compared to more in-depth methods like FAIR or FMEA.
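For a rough sense of what a more in-depth method looks like, here is a minimal FAIR-style Monte Carlo sketch in Python. The frequency and magnitude ranges are invented; the point is only that risk comes out as a distribution of annualized loss (loss event frequency times loss magnitude) rather than a high/medium/low label.

    # fair_sketch.py -- minimal FAIR-style Monte Carlo; all inputs are invented examples
    import random

    def simulate_annual_loss(trials=10000):
        losses = []
        for _ in range(trials):
            # Loss Event Frequency: number of loss events per year (assumed range 0-4).
            lef = random.randint(0, 4)
            # Loss Magnitude per event: assumed to fall between $10k and $250k.
            annual = sum(random.uniform(10_000, 250_000) for _ in range(lef))
            losses.append(annual)
        losses.sort()
        return {
            "mean": sum(losses) / trials,
            "median": losses[trials // 2],
            "95th_percentile": losses[int(trials * 0.95)],
        }

    if __name__ == "__main__":
        for label, value in simulate_annual_loss().items():
            print(f"{label}: ${value:,.0f}")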
The final piece of planning is to decide how to treat those risks. In the standard, this is documented in the Statement of Applicability, or SOA. The SOA is a mapping of objectives to risks with the selection of a treatment method. The list of controls in 27002 is suggested but not mandatory; you can drop controls from your list if your analysis supports it. The standard actually says, "Note: Annex A contains a comprehensive list of control objectives and controls that have been found to be commonly relevant in organizations. Users of this International Standard are directed to Annex A (ISO 27002) as a starting point for control selection to ensure that no important controls are overlooked." Let me repeat that: you do not, and probably should not, take the list of 133 controls in 27002 at face value, implement them all, and think you're done. Here you have the flexibility to choose what works to deal with the risks to your organization's objectives. That's the "applicability" part of the standard.
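To make the SOA idea concrete, here is a minimal sketch of what such a mapping might look like as a data structure. The risks, control references, and decisions are hypothetical examples, not a template:

    # soa_sketch.py -- illustrative Statement of Applicability entries; content is invented
    SOA = [
        {
            "risk": "Cardholder data emailed out by a privileged insider",
            "objective": "Protect confidentiality of cardholder data",
            "treatment": "mitigate",
            "controls": ["A.10.8.1 Information exchange policies", "A.11.2.2 Privilege management"],
            "justification": "Residual risk accepted after mediation and logging of admin access",
        },
        {
            "risk": "Loss of synthetic development test data",
            "objective": "Maintain development schedule",
            "treatment": "accept",
            "controls": [],  # control excluded; the analysis, not the Annex A list, drives the choice
            "justification": "Data is synthetic; impact assessed as negligible",
        },
    ]

    for entry in SOA:
        print(f"{entry['treatment'].upper():8s} {entry['risk']} -> "
              f"{entry['controls'] or 'no control applied'}")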
I am really excited that Verizon is now giving us a more accurate picture of risk and controls in the real world. I, for one, welcome our new Evidence-based Overlords, especially as a more in-depth list of control deployment tactics than ISO 27002 offers. As they say in medicine, half of what we know is wrong, but we don't know which half. This is a step toward knowing, and the key is learning from others' mistakes.
You can see that a solid foundation is how the PDCA cycle begins. As you move through the Deming Wheel, you "Do" and "Check" to see how well your controls are doing: not only whether they are being implemented correctly (which is where most people and auditors stop checking), but how appropriate and useful they are to the risks to the objectives. You also should be "Checking" how accurate your original analyses of the business and risks were, and then "Act" to revise them appropriately.
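A rough way to picture the cycle in code (the functions, the control, and the success criterion are purely my own illustration, not anything the standard prescribes):

    # pdca_sketch.py -- toy skeleton of a PDCA cycle for one control; structure is illustrative only
    def plan():
        # Define the objective, the chosen control, and what "working" means.
        return {
            "objective": "Confidential data stays inside the controlled environment",
            "control": "Quarterly DLP sweep of the general environment",
            "success_criterion": "Zero confidential records found outside the enclave",
        }

    def do(plan_item):
        # Implement the control; here we just pretend the sweep found two records.
        return {"records_found_outside": 2}

    def check(plan_item, results):
        # Check effectiveness against the plan, not just whether the control was run.
        return results["records_found_outside"] == 0

    def act(plan_item, effective):
        # Revise the plan or the control when the check shows a gap.
        if not effective:
            plan_item["control"] = "Monthly DLP sweep plus egress filtering review"
        return plan_item

    current_plan = plan()
    for cycle in range(2):
        results = do(current_plan)
        effective = check(current_plan, results)
        current_plan = act(current_plan, effective)
        print(f"cycle {cycle}: effective={effective}, control now: {current_plan['control']}")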
But almost none of this is very explicit in the standard, especially to those who are used to the world of checklists and to-dos and have a tough time with deep business analysis and strategic planning. Yet that is where the real value lies. My problem is that if you know how to plan your infosec well, what do you need the standard for? The ISO implementation guides do help a little (at an extra cost), but the hard stuff is to be found elsewhere. The rest of ISO 27k just defines the paperwork format that is certifiable to the standard.
TL;DR: If you understand IT strategy and analysis, you probably don't need the standard except for certification. If you don't, the standard isn't enough to help you.