Wednesday, March 26, 2014

7 areas outside of infosec to study

There are a lot of areas that most of us infosec people like to dabble in that are outside our required skill set. For example, it seems like every third security person has a set of lock picks and loves to play with them. Unless you're a red teamer, admit that it's just a puzzle you like to fiddle with and stop trying to impress us. Here are some areas just outside of infosec that I like to hone:

1. SEO - Because hackers use it to sneak malware into your organization. Information warfare is older school than “cyber warfare”, and information warfare is all about managing perception. Where to start? I recommend my neighbor, Moz.

2. Effective communication. That means learning to write well in email, in long form, and to educate. It means being able to speak effectively one-on-one, in a meeting, and when giving a speech. It means being clear, concise, and consistent. It means respecting your audience and establishing rapport. Where to start? I recommend Manager Tools.

3. Project Management. Everything we do is a project. We can always be better at doing them. I’ve been managing projects for decades and I’m still not satisfied with how well things are run. I recommend Herding Cats.

4. Programming. I started in programming but rarely do it anymore. We work in technology. We give advice to developers. We work with sysadmins on scripting. We should at least have a good fundamental grasp of programming in a few major flavors: basic automation scripting, web apps, and short executables. I’d say you should at least be able to create something useful (beyond Hello World) in Perl, Bash, or PowerShell… plus something in Ruby/Python/Java.

5. Databases. Most of everything is built on a database. You should at least be able to write queries and understand how tables and indices work. It’s helpful to know a little more than how to do a SQL injection “drop tables” or “Select *”. You don’t need to become a DBA, but tinker with SQLite or MySQL. As I level up on item 4, I find myself doing more and more of number 5. They kinda go together.

6. Psychology. Since we can't solve all our security problems with money (cuz we don't have enough), we have to use influence to get things done. And we have to anticipate how controls will live or die in the real world. A good basic understanding of people beyond treating users as passive objects (or even worse, as rational actors) is required. A good starting place is Dan Ariely's Predictably Irrational: The Hidden Forces That Shape Our Decisions.

7. Behavioral economics (more psychology). If you ever wondered why I have a CISSP, do SSAE-16 audits, and keep an office shelf of security awards, it’s because I get visited by a lot of nervous customers and auditors entrusting me with their data. That’s signaling theory at work.

Note how almost half the things on my list are human-centric areas… because the people are always the hardest part of the job.

Wednesday, March 19, 2014

An interesting tidbit in the EU data protection regs:

The European Parliament has finally passed their big redesign of data protection regulation. Nothing too shocking in there, in light of the Snowden fallout. One little item caught my eye tho:

 Data Protection Officers: the controller and the processor shall designate a data protection officer inter alia, where the processing is carried out by a legal person and relates to more than 5000 data subjects in any consecutive 12-month period.

Data protection officers shall be bound by secrecy concerning the identity of data subjects and concerning circumstances enabling data subjects to be identified, unless they are released from that obligation by the data subject. The committee changed the criterion from the number of employees a company has (the Commission suggested at least 250), to the number of data subjects. DPOs should be appointed for at least four years in the case of employees and two in that of external contractors.

The Commission proposed two years in both cases.

Data protection officers should be in a position to perform their duties and tasks independently and enjoy special protection against dismissal. Final responsibility should stay with the management of an organisation.

The data protection officer should be consulted prior to the design, procurement, development and setting-up of systems for the automated processing of personal data, in order to ensure the principles of privacy by design and privacy by default.


Not anything new here, but reviewing it made me think about an interesting metric buried in there: "the controller and the processor shall designate a data protection officer inter alia, where the processing is carried out by a legal person and relates to more than 5000 data subjects in any consecutive 12-month period ... The committee changed the criterion from the number of employees a company has (the Commission suggested at least 250), to the number of data subjects."

First, I liked the old metric of 250 employees per data protection officer. It tracked with my experience of about the right size at which to start having a dedicated security officer. But changing it to the size of the pile of confidential data you're protecting is even more relevant.

When I was hired on at my current job, we were a smallish company but we were custodians of megatons of PII. And 5000 sounds about right, if nothing else, for breach numbers: if the average cost is around $136 per person's record breached, then 5000 x $136 = $680,000.

Okay, now we have our impact. The question is: what is the probability of breach, and how much does a dedicated DPO reduce that probability? Well, that probably varies from organization to organization, tho it'd be good to know some hard numbers. Something to munch on.
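If you want to play with that arithmetic, here's a back-of-the-envelope sketch. The cost-per-record figure is the one above; the breach probabilities and the DPO's risk reduction are made-up placeholders, not data.

```python
# Back-of-the-envelope expected-loss math for the 5000-data-subject threshold.
# COST_PER_RECORD comes from the post; both probabilities are placeholders.
COST_PER_RECORD = 136        # USD per breached record
DATA_SUBJECTS = 5000         # the regulation's threshold

impact = COST_PER_RECORD * DATA_SUBJECTS              # $680,000

p_breach_no_dpo = 0.10       # assumed annual breach probability, no dedicated DPO
p_breach_with_dpo = 0.06     # assumed annual breach probability with a DPO

ale_no_dpo = impact * p_breach_no_dpo
ale_with_dpo = impact * p_breach_with_dpo

print(f"Impact if breached:            ${impact:,.0f}")
print(f"Expected annual loss, no DPO:  ${ale_no_dpo:,.0f}")
print(f"Expected annual loss, w/ DPO:  ${ale_with_dpo:,.0f}")
print(f"Implied annual value of DPO:   ${ale_no_dpo - ale_with_dpo:,.0f}")
```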

The other thing I liked in the regs is "Data protection officers should be in a position to perform their duties and tasks independently," which continues to support my position that infosec should not report into the IT hierarchy.

Monday, March 17, 2014

Make your own security tools: DLP

After spending tens of thousands of dollars on commercial security solutions that did not meet our needs, our security team opted for a DIY approach. One of the first tools we wanted was a decent DLP.  We were very disappointed in the DLP solutions available, especially when it came to tracking confidential data elements across both Linux and Windows file systems. Many were hard to use, difficult to configure, and/or dragged along an infrastructure of servers, agents and reporting systems. We wanted something small, flexible, and dead simple.  At this point, we were either looking at going back to the well for more resources to get the job done or coming up with something crafty.  None of us were coders beyond some basic sysadmin scripting, but we decided to give it a shot.

The problem was that we potentially had confidential data lying around on several large file repositories.  Nasty stuff like name + SSN, birthdate, credit card, etc.  We tried several commercial and open source DLP scanners and they missed huge swaths of stuff.  What was particularly vexing was that our in-house apps were generating some of this stuff, but in our own format.  It was pure ASCII text, but the actual formatting of the data was making it invisible to the DLP tools.  It was structured, but not in a way that any other tool could deal with.  Most of the tools didn't offer much flexibility in terms of configuration.  Those that did were limited to single-pass regex.

Our second problem was that we also wanted a way to cleanly scrub the data we found.  Not delete it, not encrypt it, but excise it like a tumor, with the precision of a surgeon.  We were tearing through log files and test data load files used by developers.  Some of these files came directly from customers who did not know enough to scrub out their own PII.  We had the blessing of management to clip the Personal out of PII and anonymize it in place.  No tool on the market did that.

Luckily we knew what we were looking for, how it was structured, and what we wanted to do with it.  That allowed us to do contextual analysis... when you see these indicators, look here for these kinds of files.  Using Python and some hints based on OpenDLP (one of the tools we looked at), plus a little Luhn test, we did a first pass.
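For illustration only, here's a minimal sketch of that kind of first pass: a loose regex finds candidate card numbers and a Luhn check throws out digit strings that can't be real cards. This is generic code, not the tool described here.

```python
import re

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Candidate card numbers: 13 to 16 digits, optionally split by spaces or dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def find_card_numbers(text):
    """First pass: regex for candidates, Luhn check to drop the obvious junk."""
    hits = []
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_ok(digits):
            hits.append((match.start(), digits))
    return hits

if __name__ == "__main__":
    sample = "order 4111 1111 1111 1111 shipped; ref 1234567890123 ignored"
    print(find_card_numbers(sample))   # only the first number passes the Luhn check
```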

We got a ton of stuff back.  Almost none of it good.  This was not unexpected, as this was our experience with a lot of the DLP tools.

So we then started a second pass of contextual and content analyses.  We dove in, looked at these false positives, and found what made them false.  This second-pass scan would weed out those cases with pattern matching and algorithms.  We lathered, rinsed, and repeated with bigger and bigger data sets until we were hitting exactly what we wanted with no false positives.
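As a toy illustration of what a second pass can look like (again, not the actual production logic): for an SSN-shaped hit, reject values the SSA would never issue and demand a context keyword nearby. The keyword list and window size are arbitrary placeholders.

```python
import re

SSN_RE = re.compile(r"\b(\d{3})-?(\d{2})-?(\d{4})\b")
CONTEXT_WORDS = ("ssn", "social", "tax id", "taxpayer")   # crude context hints

def plausible_ssn(area: str, group: str, serial: str) -> bool:
    """Reject values the SSA never issues (a structural, not authoritative, check)."""
    if area in ("000", "666") or area >= "900":
        return False
    if group == "00" or serial == "0000":
        return False
    return True

def second_pass(text: str, window: int = 40):
    """Keep only SSN-shaped hits that are both plausible and near a context keyword."""
    confirmed = []
    lowered = text.lower()
    for m in SSN_RE.finditer(text):
        if not plausible_ssn(*m.groups()):
            continue
        nearby = lowered[max(0, m.start() - window): m.end() + window]
        if any(word in nearby for word in CONTEXT_WORDS):
            confirmed.append(m.group())
    return confirmed

if __name__ == "__main__":
    hit = "employee SSN: 078-05-1120 on file"
    miss = "shipment tracking code 219-09-9999, no identifier context here"
    print(second_pass(hit))    # ['078-05-1120']
    print(second_pass(miss))   # []
```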

Next we added a scrub routine that replaced the exact piece of PII in a file with a unique nonsense data element.  For example, some of these files were being used as test loads by developers.  If we just turned all credit card numbers into 9's, their code would fail.  They also needed unique numbers for data analysis. If you turn a table of SSNs into every single entry being 99999, the test will fail.  So we selectively changed digits but maintained uniqueness.  I can't get into too much detail without giving away proprietary code, but you can read all about it here.
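Since the real scrubbing code is proprietary, here's just a generic sketch of the general idea: deterministically map each distinct real value to a unique, same-shaped fake value, and keep the mapping so the swap can be undone (more on that log below).

```python
import json
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

class Scrubber:
    """Replace each distinct SSN with a unique, same-shaped dummy value.

    The same real value always gets the same replacement (so joins and test
    analytics still work), and the mapping is written to a log so a bad
    replacement can be undone. Protect that log -- it now holds the real values.
    """

    def __init__(self):
        self.mapping = {}        # real value -> fake value
        self.counter = 0

    def _replacement(self, real: str) -> str:
        if real not in self.mapping:
            self.counter += 1
            n = f"{self.counter:09d}"            # 000000001, 000000002, ...
            # conveniently, the 000 area is never issued as a real SSN
            self.mapping[real] = f"{n[:3]}-{n[3:5]}-{n[5:]}"
        return self.mapping[real]

    def scrub_text(self, text: str) -> str:
        return SSN_RE.sub(lambda m: self._replacement(m.group()), text)

    def write_log(self, path: str) -> None:
        with open(path, "w") as fh:
            json.dump(self.mapping, fh, indent=2)

if __name__ == "__main__":
    s = Scrubber()
    print(s.scrub_text("rows: 078-05-1120, 219-09-9999, 078-05-1120"))
    # -> rows: 000-00-0001, 000-00-0002, 000-00-0001
    s.write_log("scrub_mapping.json")   # lock this file down
```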

We also kept a detailed log of what was changed to what, so that we could un-ring that bell if it ever misfired.  And of course, we protected those log files since they now have confidential data elements in them.

What we ended up with was a single script that given a file path, would just go to town on the files it found.  No agents, no back-end databases, no configuration. Just point and shoot. 

The beauty was that we knew what we were willing to trade off: speed for precision.  Our goal was the reduction of manual labor and better assurance.  Our code was clunky, ran in a slow interpreted language, and took hours to complete.  But it was also easy to modify, easy to pass around to team members, and the logic was very clear.  Adopting the release-early-and-often approach, we had something usable within weeks that proved more functional than the products on the market.

The tool proved to be laser-precise in hunting down the unique PII data records in our environment, preventing costly and embarrassing data leaks.  After showing it around, we were given precious developer resources to clean up our code, add functionality, and fix a few little bugs.  It's been so successful as an in-house tool that our management will soon be releasing it as a software utility to go along with our product.


Thursday, February 20, 2014

Internal vulnerability scanning

The hardest thing about vulnerability scanning is not the scanning itself. There are literally dozens of pretty decent scanning tools and vendors out there at way reasonable prices. The hard part is prioritizing the mountain of vulnerability data you get back. This is especially true if you’re scanning your inside network, which I highly recommend you do as frequently as possible. Our team runs scans nearly every other day, tho the scans are different (which I’ll get into), with the entire suite of scans completing once a week. I’m a big believer in getting an “attacker's eye view” of my network and using that as a component of my risk and architecture decision making.

However, every scan seems to generate dozens and dozens of vulnerabilities per host. Multiply that by a good-sized network and you’ll find things quickly become unmanageable. If your organization is lucky enough that you’re seeing only a few hits (or none) per host, then congratulations, you’re very lucky (either that or your scanner is malfunctioning). I live in the hyper-fast world where innovation, customer service, and agility (you can’t spell agility without Agile) are key profit drivers while InfoSec is not. So my team has a lot of stuff to wade through. Here’s how we deal with it.

Multiple scans
I do most of my scanning after hours so as not to disrupt the business and clog up the pipes. Yes, I have blown up boxes and switches doing vuln scans, despite a decade and a half of experience using these things. It happens. So, I do it at night. But that gives me limited time. Also, for risk management purposes, I want to get different perspectives from my scans, which some scanners can do with a single deep scan but which is harder with others. There are some tools that let you aggregate your scans in a database and slice and dice there. I haven’t found one that I thought was worth the money… mostly because I like munging the data directly with my own risk models and none let me do that. If I have some spare time (ha!) I might write my own vuln database analysis tool (a bare-bones sketch of that idea follows the list of scan types below). But for now, it works out easiest for me to run different scans on different days, and then look at the aggregate. Here are the types of scans I run:

1) Full-bore with credentials. The scanner has full administrative login creds for everything on the network. All the signatures are active and even some automated web app hacking is enabled. These can run so long that I have to break them up over several days to cover my whole enterprise (or buy even more scanners if my budget can handle it). It gives me the fullest, grandest possible picture of what is messed up and where. Problem is that it also generates a ton of data.

2) Pivot scan with limited credentials. Now the scanner has the login creds of an ordinary user. This scan is much faster than the one above. The report tells me what my network looks like if a user’s workstation gets popped and an attacker is pivoting laterally and looking for soft targets. A very interesting perspective.

3) External scan with no credentials. Fast and quick, find everything that’s low-hanging fruit. I do these frequently.

4) Patch and default settings scan. Another fast and quick scan, look for missing patches and default creds and other dumb stuff. I do these frequently as well.

5) Discovery scan. Quick and fast network mapping to make sure nothing new has been added to the network. Also done frequently.  
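Here's the bare-bones sketch of the home-grown aggregation idea mentioned above: dump each scan's findings into SQLite and slice from there. The CSV file names and column names (host, plugin, severity, cvss) are hypothetical; map them to whatever your scanner actually exports.

```python
import csv
import sqlite3

def load_scan(db, csv_path, scan_type, scan_date):
    """Load one scanner CSV export into the findings table."""
    with open(csv_path, newline="") as fh:
        rows = [(scan_type, scan_date, r["host"], r["plugin"],
                 r["severity"], float(r["cvss"] or 0))
                for r in csv.DictReader(fh)]
    db.executemany("INSERT INTO findings VALUES (?, ?, ?, ?, ?, ?)", rows)
    db.commit()

def main():
    db = sqlite3.connect("vulns.db")
    db.execute("""CREATE TABLE IF NOT EXISTS findings
                  (scan_type TEXT, scan_date TEXT, host TEXT,
                   plugin TEXT, severity TEXT, cvss REAL)""")

    # Placeholder file names -- one export per scan type per run.
    load_scan(db, "full_credentialed.csv", "full_credentialed", "2014-02-17")
    load_scan(db, "pivot_limited_creds.csv", "pivot_limited_creds", "2014-02-19")

    # Example slice: which hosts look worst from the limited-credential (pivot) view?
    query = """SELECT host, MAX(cvss) AS worst, COUNT(*) AS hits
               FROM findings
               WHERE scan_type = 'pivot_limited_creds'
               GROUP BY host ORDER BY worst DESC LIMIT 10"""
    for host, worst, hits in db.execute(query):
        print(host, worst, hits)

if __name__ == "__main__":
    main()
```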

Break it down
Whether you’ve done one big scan & aggregated it, or stitched together your multiple scans, you can’t possibly have IT patch every single hole. Especially in a dynamic corporate environment such as ours. I long for the restricted deployment world of no-local-admins, certified install images and mandatory configuration compliance… but then that world isn't known for innovation or profit. So I have this pile of vulns to deal with. How do I break them down?

1) Take High-Med-Low/Red-Orange-Yellow/CVSS with a grain of salt. Yeah, a Purple Critical 11.5-scored vuln is probably bad. But there seems to be a lot of vulnerability score inflation out there. I need something I can work with. One approach is a points system. Maybe start with a CVSS score (or whatever you like) and add/subtract priority points based on the rest of these rules (a toy version of this is sketched after the list).

2) Vulnerabilities that have known exploits are high priority. If there’s a hole and a script kiddie can poke it, we need to fix it. Otherwise we’re below the Mendoza Line for security.

3) Protocol attacks, especially on the inside, are lower priority. Yeah, man-in-the-middle or crypto-break attacks happen. But they’re less common than the dumb stuff (see previous).

4) Extra attention to the key servers. Duh. But yes, your AD controllers, SharePoints, databases, terminal servers and file shares need to be clean. Not only do they hold important goodies hackers want (like data or password databases) but if they go down, lots of folks can’t work. Bad news for the SLA and IT’s reputation.

5) Easy wins are high priority. This includes basic OS patches, fixing default passwords, turning off dumb services.

6) User workstation “Internet contact points” are scored higher as well. This means unpatched browsers, Java, Adobe readers, mail clients, etc. This is where malware comes into the organization. Lock them down.
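Here's the toy version of the points system promised above. Every weight is an arbitrary placeholder; tune them against your own environment and your own scanner's output.

```python
# Toy prioritization: start from the scanner's CVSS score and nudge it
# with the rules above. All bonuses/penalties are arbitrary placeholders.

EXPLOIT_BONUS = 3.0           # a public exploit exists
KEY_SERVER_BONUS = 2.0        # AD controllers, databases, file shares, ...
EASY_WIN_BONUS = 1.5          # missing OS patch, default password, dumb service
INTERNET_CONTACT_BONUS = 1.5  # browsers, Java, PDF readers, mail clients
PROTOCOL_ONLY_PENALTY = -2.0  # MITM / crypto-break style findings on the inside

def priority(finding: dict) -> float:
    score = finding.get("cvss", 0.0)
    if finding.get("exploit_available"):
        score += EXPLOIT_BONUS
    if finding.get("key_server"):
        score += KEY_SERVER_BONUS
    if finding.get("easy_win"):
        score += EASY_WIN_BONUS
    if finding.get("internet_contact_point"):
        score += INTERNET_CONTACT_BONUS
    if finding.get("protocol_attack"):
        score += PROTOCOL_ONLY_PENALTY
    return score

findings = [
    {"host": "dc01",  "cvss": 6.5, "exploit_available": True, "key_server": True},
    {"host": "ws042", "cvss": 9.1, "protocol_attack": True},
    {"host": "ws107", "cvss": 5.0, "easy_win": True, "internet_contact_point": True},
]

# Note how the ordering differs from raw CVSS once the rules are applied.
for f in sorted(findings, key=priority, reverse=True):
    print(f"{f['host']:8} priority {priority(f):.1f}")
```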

Hand-check the important stuff
I don’t trust machines. It’s why I’m in this business. So for the really important systems, we hand-check critical things at least once a month. This means logging into the box, making sure anti-virus is running and updated, patches have been applied, local security policies are in place, and no suspicious local users have been added. We also do hand checks of key ACLs on routers, switches and firewalls. I wish I could say that these checks are superfluous, but unfortunately they’ve proven fruitful enough that we keep doing them. Scanners miss things for complicated reasons. We don’t check a lot of things this way, just the 10 to 15% of really critical servers and hosts.

Find out where the money is
If you can afford it, I suggest looking into Data Loss Prevention (DLP) tools. Pairing up a scan for confidential data lying around on servers and workstations against a vuln scan is really helpful. Your idea of “important servers” and “work flows” changes when you see where things end up. There are a lot of DLP tools out there. I haven’t found one I liked. So we wrote our own. But that is a story for another day.

Happy scanning

Wednesday, February 12, 2014

Top 5 ways organizations fail at managing third-party risk



Those blasted third parties!  Turns out they’re to blame for Target's mishap.

Well, guess what?  We all know you can’t outsource the blame, and Target is taking the hit for not managing their third-party risk very well.  Having spent the past 6 years as one of those blasted third parties (and before that, about the same amount of time as someone who audited third parties for banks), I can tell you there are right ways to do this and wrong ways.

BTW, if you’re a PHB and prefer the “business friendly version” of this post, just read this article I wrote last summer for the financial industry.

So in my years of auditing and being audited, I have seen many, many, many irrational and ineffective choices made by auditors and auditees.  One of the worst cases (there are so many to choose from) as an auditor was when I assessed a third party servicing the banking industry in downtown Manhattan who refused to answer any of my questions.  They failed and did not get the contract… despite their belief that they would no matter what the audit report read.  Hmm… then there was that third party we convinced to get out of the financial services industry because their security was so bad.  Sigh, sweet memories.

Oh, where was I… yeah, on with my rant list:

1. Wrong-fit assessment for the organization
If your third party has direct physical access to your internal network, then a five-page spreadsheet questionnaire is not going to tell you enough.  If the company is producing software that is essential or is counting the money, then yeah, the audit should include some secure development practices.  If the company is a cloud provider or a hosting company, you probably need to include audits of disaster recovery and physical security.  These all seem obvious, but I’ve endured thirty-page questionnaires and hours of grilling about things that were mostly “not applicable” for our organization, while other more important issues were left wholly unexamined.


2. Over-reliance on the wrong certification
I’ve written about this a little bit before, but this is really a variant of #1.  The easiest miss I’ve seen is asking for PCI certification from companies that don’t process credit cards.  If you followed the letter of the rule for PCI and you don’t have credit cards, it’s a pretty low bar to jump over.  If you can’t tell the difference between SSAE-16 SOC 1, SOC 2, or SOC 3 reports, then don’t use them to rate your third parties.

3. Sloppy scoping
The scope is where you begin, not an afterthought.  You need to understand what data and dependencies the third party is responsible for and where the heck they are.  Two times out of three, the third party does not even fully understand this.  You can’t do a risk analysis if you don’t know what and where the assets are.  You surely can’t do a useful assessment.  And once the scope is established and verified, then you can start looking at how hard the boundaries are between the in-scope and out-of-scope areas.

4. Fire and forget
Most organizations can’t afford to review their third parties more than once a year.  Some only do it once every three years.  That means that for one or two days out of 365, someone is looking at the third party.  How effective is that?  This is why I push for Type II audits, which cover at least six months of assessment and are often “rolling” so the review is constant.  I also like weekly or even daily vulnerability scans for IT posture assessment.  Threats change, infrastructure changes, compliance needs change.  Review should be as ubiquitous as it can be.

5. Lack of assessor skill
If the person doing the assessment doesn’t understand everything we’ve mentioned up until now, they’re not skilled enough to do the assessment.  A lot of folks doing third-party audits on behalf of large organizations are just business dev people with checklists that they submit back to infosec for review.  Fail.  A good auditor also knows when a control is appropriate and a risk acceptable, which is why I always prefer working with knowledgeable, experienced people over clueless newbs who ask all the wrong questions.

That’s it for today.  Maybe later I’ll list how I think you should do this right.

Friday, November 8, 2013

What is cyber security?

Once again I'm ranting about things that I thought had been settled long ago.  And in some ways, they have been… but on the ground, I see nothing but confusion. So today's question is: what is this thing that we do?

I purposely chose the blandest and most overused term, "cyber security", because I see it thrown around by the folks who seem the most clueless about it.  This simple question of what is expected of a cyber security professional gets particularly problematic when an organization goes to staff up and build its first cyber security program.  You're hiring your first security professionals: what should they know?  You're reorging your IT group: where does the security department fit in?  Who does your head of security report to?  What should the dedicated cyber security team be responsible for and, more importantly, not be allowed to do?  I've seen a wide variety of expectations across industries and organizations.  In some cases, the role is defined by regulation, but when it's not, it tends to be squishy and inefficient.

As previously alluded to, old-school cyber security equated IT security entirely with info security.  A tactical place to start, but definitely not a mature or effective model to thrive in a DevOps-meets-APT world.  And in the most ineffective and inefficient cases, you'd see normal IT and development stuff going on with the security team coming in afterwards to "make things secure".

First off, let me say that I don't know exactly what cyber security should definitively encompass.  In this post, I'm only going to discuss my own thoughts and experience.  A bit of background: I'm an IT ops guy (first 10 years of my career) who grew into IT security (next 5 to 7 years) and now does information assurance (last 10 to 12 years).  I think the best way to break this down in a first cut is to roll thru the ISC2's CBK.  It ain't perfect and it's kinda old, but its general shape matches what we're supposed to be doing in cyber security.


Access Control
I see this as split between the security group and the IT operations group.  The security group should definitely consult, design and assess against the big-picture stuff, like models, techniques, and threats around access control.  The IT ops group should deal with the specific mechanisms.  You will not see me tinkering with AD permission groups or managing two-factor tokens.  But I will provide guidance on how these things should be done to match the risk and business requirements of the organization.

Telecommunications and Network Security
Same as access control.  I grew up as a firewall guy but I haven't configured a firewall for years.  The IT network engineering team is best placed to do this.  Especially since they understand all the nuances and implications of opening this port over that port, splitting a particular VLAN, or how the VPN links should be laid out.  I absolutely consult and oversee the design and configurations to make sure they meet our requirements.  I still keep up on the latest DDoS techniques and research new perimeter controls, and make sure the infrastructure team gets the highlights regularly as well.

InfoSec Governance and Risk Management
The security department pretty much entirely drives this.  That doesn't mean everyone else is off the hook for understanding and following the policies to the best of their abilities.  Hopefully, the message that security is everyone's job applies here.  I often spend a bulk of my time translating and interpreting the organization's policies for individuals and departments wanting to accomplish some business thing in accordance with policy (well, actually they want to make sure they don't flag the attention of the auditors, but that's mostly because I've trained them that way).

Software Development Security

Realistically, this is its own thing, looking like network security but for programmers.  A group of security consultants and assessors doing the big-picture stuff, embedded with the developers doing the actual implementation of the more secure code.  I can think of no better concrete example of how to organize this than Microsoft Trustworthy Computing.  I'd even slice web security off as its own specialty here as well, but still following the same model.

Cryptography
Beyond the specialized discipline itself, the security team should know the concepts, the algorithms and the pitfalls well enough to advise the various IT and developer teams on the implementation of specific controls.

Security Architecture and Design
This is something that consumes a lot of my time… and something I enjoy doing.  This is definitely where I would expect a "cyber security professional" to know things deep and wide.   Lots of advising and assessing both up and down the organization chart on how and why some designs are better than others.  And yes, lots of research and upkeep to keep abreast of new threats and technologies.

Operations Security
Ideally, I think this should be something designed with direction from the security team but run at the edges or even outside of the security team's domain.  Realistically, it's often part of the day-to-day duties of the security group.  Vulnerability management should really be part of the IT group's daily grind.  Incident response is led by security but should pull from all groups, tho forensics is its own thing and usually owned by security or legal (see below).

Legal and Compliance
Security should definitely understand a lot here, especially because in some organizations the legal department can often lack cyberlaw knowledge.  The minimum expected of any security professional should be the ability to translate compliance requirements into technical standards.  If the security team is lucky, they've got a technically savvy legal team that collaborates on cyber security matters. I haven't been lucky.  Cyber forensics and investigations should also live in the legal department, with some assist from security.

Business Continuity
Again, I think this should be its own group, but often security owns most if not all of this function.  It's a deep and wide discipline that affects everyone in the organization, which demands a lot of resources.  Because of this, when it's owned solely by security, it's often not done very effectively.  The security team is often so busy they rarely have the time (or interest or skills) to do a great job here.

Physical and Environmental Security
In large organizations, this is a separate group standing parallel to cyber security.  In most, the security team owns it as well.  Typically, the security group does the design and sets the standards, then works with either facilities and/or the IT team to make sure things are enforced.  Things definitely get strained when a physical security event occurs and three different groups come to the table and try to figure out what the heck happened and where the fingers should be pointed.  Since I'm in a medium/small organization with high assurance requirements, I own most of this and have to know a bit about alarms, door locks, and building materials.


Okay, that's it. Rant off.  What's your experience?  What do you think we should be doing?

Extra credit: does infosec require an IT background or can you be a pure infosec professional with minimal engineering training?