“We’ve just traced the attack... it’s coming from inside the house!” How do you secure your network when the bad guys already have control of your servers? It’s so hard to keep up with the attacks that maybe it’s safer to architect with the assumption that you’ve already been breached. What does that entail?
Monday, February 24, 2014
Speaking at "The Cloud and Big Data 2014"
I'll be on the panel "Payment Card Data in the Cloud" at:
Law Seminars International 2-day conference on The Cloud and Big Data 2014 - The Law and Business of the Cloud and Big Data Today
Thursday, February 20, 2014
Internal Vulnerability scanning
The hardest thing about vulnerability scanning is not the scanning itself. There are literally dozens of pretty decent scanning tools and vendors out there at very reasonable prices. The hard part is prioritizing the mountain of vulnerability data you get back. This is especially true if you’re scanning your inside network, which I highly recommend you do as frequently as possible. Our team runs scans nearly every other day, though the scans are different (more on that below), with the entire suite of scans completing once a week. I’m a big believer in getting an “attacker's eye view” of my network and using it as a component of my risk and architecture decision making.
However, every scan seems to generate dozens and dozens of vulnerabilities per host. Multiply that by a good-sized network and things quickly become unmanageable. If your organization is seeing only a few hits (or none) per host, then congratulations, you’re very lucky (either that, or your scanner is malfunctioning). I live in the hyper-fast world where innovation, customer service, and agility (you can’t spell agility without Agile) are key profit drivers while InfoSec is not. So my team has a lot of stuff to wade through. Here’s how we deal with it.
Multiple scans
I do most of my scanning after hours so as not to disrupt the business and clog up the pipes. Yes, I have blown up boxes and switches doing vuln scans, despite a decade and a half of experience using these things. It happens. So, I do it at night. But that gives me limited time. Also, for risk management purposes, I want to get different perspectives on scans; some scanners can do that with a single deep scan, but with others it’s harder. There are some tools that let you aggregate your scans in a database and slice and dice them there. I haven’t found one that I thought was worth the money… mostly because I like munging the data directly with my own risk models and none let me do that (there’s a rough sketch of what I mean after the list below). If I have some spare time (ha!) I might write my own vuln database analysis tool. But for now, it works out easiest for me to run different scans on different days, and then look at the aggregate. Here are the types of scans I run:
1) Full-bore with credentials. The scanner has full administrative login creds for everything on the network. All the signatures are active and even some automated web app hacking is enabled. These can run so long that I have to break them up over several days to cover my entire enterprise (or buy even more scanners if my budget can handle it). It gives me the fullest, grandest possible picture of what is messed up and where. The problem is that it also generates a ton of data.
2) Pivot scan with limited credentials. Now the scanner has the login creds of an ordinary user. This scan is much faster than the one above. The report tells me what my network looks like if a user’s workstation gets popped and an attacker is pivoting laterally, looking for soft targets. A very interesting perspective.
3) External scan with no credentials. Fast and quick, find everything that’s low-hanging fruit. I do these frequently.
4) Patch and default settings scan. Another fast and quick scan, look for missing patches and default creds and other dumb stuff. I do these frequently as well.
5) Discovery scan. Quick and fast network mapping to make sure nothing new has been added to the network. Also done frequently.
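Since I haven’t found an aggregation tool worth the money, here is roughly the kind of data munging I mean. This is a minimal sketch, not my actual tooling: it assumes each scanner run can export a CSV with host, finding ID, CVSS, and title columns (those column names are made up; adjust them to whatever your scanner actually emits), and it simply merges the runs into one de-duplicated pile keyed by host and finding.

```python
# aggregate_scans.py -- illustrative only; the CSV column names are assumptions.
import csv
import glob
from collections import defaultdict

def load_scan(path):
    """Yield (host, finding_id, cvss, title) rows from one exported scan CSV."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield (
                row["host"],                     # assumed column name
                row["finding_id"],               # assumed column name
                float(row.get("cvss") or 0.0),   # assumed column name
                row.get("title", ""),
            )

def aggregate(paths):
    """Merge several scan exports, de-duplicated by (host, finding)."""
    findings = {}
    per_host = defaultdict(int)
    for path in paths:
        for host, fid, cvss, title in load_scan(path):
            key = (host, fid)
            # Keep the worst (highest) CVSS seen for this host/finding pair.
            if key not in findings or cvss > findings[key][0]:
                findings[key] = (cvss, title)
    for (host, _fid), _ in findings.items():
        per_host[host] += 1
    return findings, per_host

if __name__ == "__main__":
    findings, per_host = aggregate(glob.glob("exports/*.csv"))
    for host, count in sorted(per_host.items(), key=lambda kv: -kv[1]):
        print(f"{host}: {count} open findings")
```

Nothing fancy, but once everything is in one pile you can feed it into whatever risk model you actually believe in.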
Break it down
Whether you’ve done one big scan and aggregated it, or stitched together your multiple scans, you can’t possibly have IT patch every single hole. Especially in a dynamic corporate environment such as ours. I long for the restricted deployment world of no local admins, certified install images, and mandatory configuration compliance… but then that world isn’t known for innovation or profit. So I have this pile of vulns to deal with. How do I break them down?
1) Take High-Med-Low/Red-Orange-Yellow/CVSS with a grain of salt. Yeah, a Purple Critical 11.5-scored vuln is probably bad. But there seems to be a lot of vulnerability score inflation out there. I need something I can work with. One approach is a points system: start with a CVSS score (or whatever you like) and add or subtract priority points based on the rest of these rules (there’s a rough sketch of this after the list).
2) Vulnerabilities that have known exploits are high priority. If there’s a hole and a script kiddie can poke it, we need to fix it. We’re below the Mendoza Line for Security.
3) Protocol attacks, especially on the inside, are lower priority. Yeah, man-in-the-middle or crypto-break attacks happen. But they’re less common than the dumb stuff (see previous).
4) Extra attention to the key servers. Duh. But yes, your AD domain controllers, SharePoint servers, databases, terminal servers, and file shares need to be clean. Not only do they hold important goodies hackers want (like data or password databases), but if they go down, lots of folks can’t work. Bad news for the SLA and IT’s reputation.
5) Easy wins are high priority. This includes basic OS patches, fixing default passwords, turning off dumb services.
6) User workstation “Internet contact points” are scored higher as well. This means unpatched browsers, Java, Adobe Reader, mail clients, etc. This is where malware comes into the organization. Lock them down.
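To make rule 1’s points system concrete, here’s a rough sketch of how rules 2 through 6 might adjust a base CVSS score. The point values and flag names are made up for illustration; the idea is just that the final priority reflects exploitability, asset importance, and exposure rather than raw vendor severity alone.

```python
def priority_score(vuln):
    """Illustrative scoring: start from CVSS, then nudge the number up or
    down per the rules above. 'vuln' is a dict with made-up flag names."""
    score = vuln.get("cvss", 0.0)                  # rule 1: start with CVSS, skeptically
    if vuln.get("known_exploit"):                  # rule 2: exploit a script kiddie can use
        score += 4
    if vuln.get("protocol_attack") and vuln.get("internal"):
        score -= 2                                 # rule 3: MITM/crypto breaks on the inside
    if vuln.get("key_server"):                     # rule 4: AD, databases, file shares...
        score += 3
    if vuln.get("easy_fix"):                       # rule 5: patch, default password, dumb service
        score += 2
    if vuln.get("internet_contact_point"):         # rule 6: browsers, Java, readers, mail clients
        score += 2
    return score

# Example: a moderately-scored hole with a public exploit on a domain controller
example = {"cvss": 6.5, "known_exploit": True, "key_server": True, "easy_fix": True}
print(priority_score(example))  # 15.5 -- bubbles straight to the top of the pile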
Hand-check the important stuff
I don’t trust machines. It’s why I’m in this business. So on the really important systems, we hand-check critical things at least once a month. This means logging into the box and making sure anti-virus is running and updated, patches have been applied, local security policies are in place, and no suspicious local users have been added. We also do hand checks of key ACLs on routers, switches, and firewalls. I wish I could say these checks are superfluous, but unfortunately they’ve proven fruitful enough that we keep doing them. Scanners miss things for complicated reasons. We don’t check a lot of things this way, just the 10 to 15% of really critical servers and hosts.
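For what it’s worth, even a manual check benefits from a fixed list so nothing gets skipped between months. A trivial, hypothetical sketch of the kind of per-host record one could keep (the items mirror the paragraph above; nothing here is automated, the human doing the check fills in the results):

```python
from datetime import date

HAND_CHECK_ITEMS = [
    "anti-virus running and signatures current",
    "OS and application patches applied",
    "local security policies in place",
    "no suspicious local users or new admins",
    "key ACLs on adjacent routers/switches/firewalls unchanged",
]

def record_check(host, results):
    """results: dict mapping item -> True/False, filled in by the reviewer."""
    missing = [item for item in HAND_CHECK_ITEMS if not results.get(item)]
    status = "CLEAN" if not missing else "FOLLOW UP: " + "; ".join(missing)
    return f"{date.today()} {host}: {status}"

print(record_check("dc01", {item: True for item in HAND_CHECK_ITEMS}))
```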
Find out where the money is
If you can afford it, I suggest looking into Data Leak Prevention (DLP) tools. Pairing a scan for confidential data lying around on servers and workstations against a vuln scan is really helpful. Your idea of “important servers” and “work flows” changes when you see where things end up. There are a lot of DLP tools out there. I haven’t found one I liked, so we wrote our own. But that is a story for another day.
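Pairing the two data sets is mostly a join on hostname. Here’s a minimal sketch of what that cross-reference could look like, assuming you already have a DLP hit list and the aggregated vuln counts from the earlier sketch (the file format and column names are assumptions, not any particular product’s output):

```python
import csv
from collections import defaultdict

def load_dlp_hits(path):
    """Map host -> number of confidential files found there (assumed CSV column: 'host')."""
    hits = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            hits[row["host"]] += 1
    return hits

def hot_spots(dlp_hits, vuln_counts, min_hits=1, min_vulns=1):
    """Hosts that both hold sensitive data and carry open findings -- fix these first."""
    return sorted(
        (
            (host, dlp_hits[host], vuln_counts.get(host, 0))
            for host in dlp_hits
            if dlp_hits[host] >= min_hits and vuln_counts.get(host, 0) >= min_vulns
        ),
        key=lambda t: t[1] * t[2],
        reverse=True,
    )

# vuln_counts would come from the aggregation sketch earlier in this post:
# for host, files, vulns in hot_spots(load_dlp_hits("dlp_hits.csv"), vuln_counts):
#     print(f"{host}: {files} sensitive files, {vulns} open findings")
```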
Happy scanning
Wednesday, February 12, 2014
Top 5 ways organizations fail at managing third-party risk
Those blasted third-parties! Turns out they’re to blame for Target's mishap.
Well, guess what? We all know you can’t outsource the blame and Target is taking the hit for not managing their third-party risk very well. Having spent the past 6 years as one of those blasted third-parties (and before that about the same amount of time as someone who audited third-parties for banks), I can tell you there are right ways to do this and wrong ways.
BTW, if you’re a PHB and prefer the “business friendly version” of this post, just read this article I wrote last summer for the financial industry.
So in my years of auditing and being audited, I have seen many, many irrational and ineffective choices made by both auditor and auditee. One of the worst cases (there are so many to choose from) as an auditor was when I assessed a third-party servicing the banking industry in downtown Manhattan who refused to answer any of my questions. They failed and did not get the contract… despite their belief that they would regardless of what the audit report said. Hmm… then there was the third-party we convinced to get out of the financial services industry because their security was so bad. Sigh, sweet memories.
Oh, where was I… yeah, on with my rant list:
1. Wrong-fit assessment for the organization
If your third-party has direct physical access to your internal network, then a five-page spreadsheet questionnaire is not going to tell you enough. If the company is producing software that is essential or is counting the money, then yeah, the audit should include some secure development practices. If the company is a cloud provider or a hosting company, you probably need to include audits of disaster recovery and physical security. These all seem obvious, but I’ve endured thirty-page questionnaires and hours of grilling about things that were mostly “not applicable” for our organization, while other more important issues were left wholly unexamined.
2. Over-reliance on the wrong certification
I’ve written about this a little bit before, but this is really a variant of #1. The easiest miss I’ve seen is asking for PCI certification from companies that don’t process credit cards. If you followed the letter of the rule for PCI and you don’t have credit cards, it’s a pretty low bar to jump over. If you can’t tell the difference between an SSAE 16 SOC 1, SOC 2, or SOC 3 report, then don’t use them to rate your third-parties.
3. Sloppy scoping
The scope is where you begin, not an afterthought. You need to understand what data and dependencies the third-party is responsible for and where the heck they are. Two times out of three, the third-party does not even fully understand this itself. You can’t do a risk analysis if you don’t know what and where the assets are. You surely can’t do a useful assessment. And once the scope is established and verified, then you can start looking at how hard the boundaries are between the in-scope and out-of-scope areas.
4. Fire and forget
Most organizations can’t afford to review their third-parties more than once a year. Some only do it once every three years. That means that for one or two days out of 365, someone is actually looking at the third-party. How effective is that? This is why I push for Type II audits, which cover at least six months of assessment and are often “rolling,” so the review is constant. I also like weekly or even daily vulnerability scans for IT posture assessment. Threats change, infrastructure changes, compliance needs change. Review should be as ubiquitous as it can be.
5. Lack of assessor skill
If the person doing the assessment doesn’t understand everything we’ve mentioned up until now, they’re not skilled enough to do the assessment. A lot of folks doing third-party audits on behalf of large organizations are just business dev people with checklists that they submit back to infosec for review. Fail. A good auditor also knows when a control is appropriate and a risk acceptable, which is why I always prefer working with knowledgeable, experienced people rather than clueless newbs who ask all the wrong questions.
That’s it for today. Maybe later I’ll list how I think you should do this right.