In the age of WikiLeaks, trusted insider access obviously must be controlled. However, how much is enough? Consider the following typical conversation between auditor and subject:
Auditor: What controls are in place to prevent employees from emailing confidential data?
IT Manager: All confidential data is secured in a separate environment where access is limited to only a dozen administrators, all thoroughly background-checked. Access to the environment requires strong authentication and is logged in a tamper-proof data vault, so we know who did what. Also, the rest of the environment is swept periodically with a DLP tool to ensure that no confidential data resides outside that controlled environment.
Auditor: But what prevents an admin from emailing confidential data out of that secure environment?
IT: An admin would have to use his crypto key to open up a protected store in the separate environment and copy the data out to the main environment to use email.
Auditor: So there are no email filters in place? Alright, that's a finding.
IT: Wait, what are you saying? Do you want us to protect against accidental exposure, or do you want us to protect against a determined privileged insider? If it's the latter, how do I prevent admins from viewing confidential data and copying it down on paper? I mean, we log all access, but at some point admins will need root-level access.
Auditor: Uh huh. I think I see another finding here.
As the Verizon Data Breach Report shows, insider misuse accounts for nearly half of breaches. Note that this particular report includes US Secret Service data as well, so there is some good material on insiders. On page 18, we see that 90% of the internal agents tied to breaches were deliberately circumventing controls for malicious reasons. On page 34, we see that "Misuse" accounted for 48% of breaches and 3% of records. Of those, 49% were of the type "Embezzlement", that is, trusted insiders determinedly circumventing the controls for malicious purposes. So yes, there are data to back up the need for controls on insiders.
Fortunately, there are many strong and fairly easy (though not often politically easy) ways to lower this threat. First, reduce the number of people who have access to the data, as the IT manager described above. Second, add strong accountability and monitoring, which she also does. And of course, background checks are pretty easy and common as well.
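To make the accountability piece concrete: the "tamper-proof data vault" the IT manager mentions usually comes down to an append-only, integrity-protected log. Below is a minimal, illustrative Python sketch of a hash-chained audit trail; the class and field names are my own invention for this post, not any particular product's API.

```python
import hashlib
import json
import time


class AuditTrail:
    """Append-only, hash-chained audit log: each entry embeds the hash of the
    previous entry, so silently altering or deleting a record breaks the chain."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, admin, action, target):
        entry = {
            "ts": time.time(),
            "admin": admin,
            "action": action,
            "target": target,
            "prev": self._last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self._last_hash = digest
        return digest

    def verify(self):
        """Recompute the chain; return the index of the first bad entry, or -1."""
        prev = "0" * 64
        for i, entry in enumerate(self.entries):
            if entry["prev"] != prev:
                return i
            body = {k: v for k, v in entry.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != entry["hash"]:
                return i
            prev = digest
        return -1


# Example: log privileged access, then detect an attempted cover-up.
trail = AuditTrail()
trail.record("admin_jane", "OPEN_PROTECTED_STORE", "cardholder_db")
trail.record("admin_raj", "EXPORT", "cardholder_db/pan_table")
trail.entries[0]["target"] = "something_innocuous"  # a would-be cover-up
assert trail.verify() == 0  # the chain breaks at the altered entry
```

The point is not the code itself but the property: an admin can still do damage, but cannot quietly rewrite the record of having done it.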
But it seems that is not enough for the auditor. Fair enough; in some environments, maybe even stronger controls can be applied. You would expect that to be the case in military and government intelligence systems, which is why the Private Manning case is so disheartening.
However, it is not surprising. Technical controls for privileged use can run rather expensive. The last time I tried to implement them, I was looking at at least $3,000 per admin ($5k once I factored in soft costs) for a system that would actually mediate (read: prevent, not just detect) privileged access. And then the admins screamed about availability and manageability. In short, it just wasn't feasible. It didn't help that the systems you most want to protect (the ones holding the credit cards) are also the mission-critical, money-making applications that are heavily SLAed. So usually we stop with separation of duties, least privilege, non-repudiated access, audit trails, and background screening.
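For the record, "mediate, not just detect" does not have to mean an expensive commercial suite; conceptually it can be as simple as an access broker that refuses to hand out a credential for the protected store without a second person's approval. Here is a hypothetical, heavily simplified sketch; the broker, its methods, and the workflow are assumptions made for illustration, not a real tool.

```python
from dataclasses import dataclass, field


@dataclass
class AccessRequest:
    requester: str
    resource: str
    reason: str
    approvals: set = field(default_factory=set)


class AccessBroker:
    """Toy two-person-rule broker: access to a sensitive resource is granted
    only after someone other than the requester approves the request."""

    def __init__(self, sensitive_resources):
        self.sensitive = set(sensitive_resources)
        self.pending = {}

    def request(self, requester, resource, reason):
        req = AccessRequest(requester, resource, reason)
        self.pending[(requester, resource)] = req
        return req

    def approve(self, approver, requester, resource):
        req = self.pending[(requester, resource)]
        if approver == requester:
            raise PermissionError("requester cannot approve their own request")
        req.approvals.add(approver)

    def checkout(self, requester, resource):
        if resource not in self.sensitive:
            return f"credential-for-{resource}"  # not sensitive: no ceremony
        req = self.pending.get((requester, resource))
        if not req or not req.approvals:
            raise PermissionError("two-person rule: approval required")
        return f"credential-for-{resource}"


# Example: the admin cannot self-approve a checkout of the protected store.
broker = AccessBroker(sensitive_resources={"protected_store"})
broker.request("admin_jane", "protected_store", "restore from backup")
try:
    broker.checkout("admin_jane", "protected_store")
except PermissionError as err:
    print("blocked:", err)
broker.approve("admin_raj", "admin_jane", "protected_store")
print("granted:", broker.checkout("admin_jane", "protected_store"))
```

Of course, this is exactly the kind of control that generated the availability and manageability screaming: every break-glass restore now waits on a second human.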
So far, I don't think I've said anything that most security folks don't encounter every day. But what I also hear all the time is the push for even more controls on insiders. So where do we go from here? How much is enough? Because to me, there is a clear point of diminishing returns on insider controls, and we're pretty much there.