Thursday, August 7, 2014

Things used interchangeably that are not

I keep seeing security "professionals" using terms interchangeably that are not interchangeable.  I can understand this confusion from a user or a PHB, but not from a security professional.  I think conflating these terms should result in automatic revocation of whatever security certification that person is currently holding.  Considering how tricky security and assurance work already is, it'd be really nice if we all used the same terms for some of the most basic things we do.

The terms I most often see conflated or misused for each other are:


Privacy and Confidentiality
Privacy relates to a person; confidentiality relates to information about a person.  It gets awkward when folks ask for a privacy policy when they really mean a confidentiality policy.  A privacy policy would talk about how I handle (collect, use, retain and disclose) someone's data.  A confidentiality policy talks about how I protect it.


Vulnerability scan and Penetration test
You can often get a vuln scan as part of a pen-test, but they really aren't the same thing.  The tip-off should be the word "penetration" which means someone is actually breaking in instead of just looking at you.   One usually costs a lot more than the other as well.  Bonus: a port scan is part of a vulnerability scan, but not the whole thing.




Vulnerability/Threat/Impact and Risk
I'm a proud member of SIRA, where a bunch of nerds sit around and argue about different risk models and which one fits/works best in which situation.  But you know what?  I'd be happy if the entire industry just started using the most basic, simplistic formula for risk: Risk = Threat × Probability × Impact.  Sadly, what I see folks doing is:
  1. "We need to stop doing this because APTs are dangerous" -> Risk = Threat
  2. "We need to shut down email because half our messages have malware in them" -> Risk = Probability
  3. "We need to do something about DDOS because our site could go down." -> Risk = Impact
No.  You aren't thinking this through.  And you're confusing the users.  Stop it.
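
To make it concrete, here's a toy sketch (all the scores are made up and the 1-10 scale is arbitrary).  The point is that leaving out any one of the three factors gives you a different, and wrong, answer:

```python
# Toy illustration of Risk = Threat x Probability x Impact.
# The 1-10 scores below are invented for the example.

def risk(threat, probability, impact):
    """Multiply all three factors; leaving any one out skews the answer."""
    return threat * probability * impact

# "APTs are dangerous" only speaks to threat...
apt   = risk(threat=9, probability=1, impact=8)
# "half our messages have malware" only speaks to probability...
email = risk(threat=4, probability=8, impact=2)
# "our site could go down" only speaks to impact...
ddos  = risk(threat=5, probability=3, impact=9)

for name, score in [("APT", apt), ("email malware", email), ("DDoS", ddos)]:
    print(f"{name:>14}: {score}")
```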

Disaster Recovery and Business Continuity
Again, the tip-off is in the words themselves.  Disaster recovery is about recovering the IT systems after a disaster.  Just the IT systems.  Business continuity involves recovering the entire business process.  BC can include DR but not the other way around.

2 factor and additional authentication
You know when you log in to your web banking from a new computer and it suddenly asks you what high school you went to?  That's not 2-factor authentication... because that's just more of "something you know." It's layered or risk-based or adaptive authentication.  But it's not a different factor, so it's not as strong.  So stop thinking that it is.

What do you see security professionals mixing up all the time?




Wednesday, April 30, 2014

Great blog

So I stumbled across this blog post the other day and really liked it. If I wasn't so lazy, I'd rewrite it, replacing all the references to the development projects with security/risk mitigation projects.

But I am lazy, so read it yourself and make the replacement in your head.

It's a must-read for anyone in the security biz communicating risk to or managing risk for the business folks (in other words, almost everyone in security)

No Deadlines For You! Software Dev Without Estimates, Specs or Other Lies

Seriously, go read it.  It's great.

Wednesday, March 26, 2014

7 areas outside of infosec to study

There are a lot of areas that most of us infosec people like to dabble in that are outside of our required skill set. For example, it seems like every third security person has a set of lock picks and loves to use them. Unless you're a red teamer, admit that it's just a puzzle you like to play with and stop trying to impress us. Here are some areas just outside of infosec that I like to hone:

1. SEO - Becuz hackers use it to sneak malware into your organization. Information warfare is older school than “cyber warfare”, and information warfare is all about managing perception. Where to start? I recommend my neighbor, Moz

2. Effective communication. That means learning to write well, whether in email, in long form, or to educate. It means being able to speak effectively one-on-one, in a meeting, and when giving a speech. It means being clear, concise and consistent. It means respecting your audience and establishing rapport. Where to start? I recommend Manager Tools

3. Project Management. Everything we do is a project. We can always be better at doing them. I’ve been managing projects for decades and I’m still not satisfied with how well things are run. I recommend Herding Cats.

4. Programming. I started in programming but rarely do it anymore. We work in technology. We give advice to developers. We work with sysadmins on scripting. We should at least have a good fundamental grasp of programming in a few major flavors: basic automation scripting, web apps, and short executables. I’d say you should at least be able to create something useful (beyond Hello World) in Perl, Bash, or PowerShell… plus something in Ruby/Python/Java.

5. Databases. Most everything is built on a database. You should at least be able to write queries and understand how tables and indices work. It’s helpful to know a little more than how to do a SQL injection “drop tables” or “SELECT *”. You don’t need to become a DBA, but tinker with SQLite or MySQL (there's a quick sketch after this list). As I level up on item 4, I find myself doing more and more of number 5. They kinda go together.

6. Psychology. Since we can't solve all our security problems with money (cuz we don't have enough) we have to use influence to get things done. And we have to anticipate how controls will live or die in the real world. A good basic understanding of people beyond treating users as passive objects (or even worse, as rational actors) is required. A good starting place is Dan Ariely's Predictably Irrational: The Hidden Forces That Shape Our Decisions.

7. Behavioral economics (More psychology) If you ever wondered why I have a CISSP, do SSAE-16 audits, and have an office shelf of security awards, it’s because I get visited by a lot of nervous customers and auditors entrusting me with their data. And signaling theory.
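
Since item 5 is the most hands-on of the bunch, here's the kind of five-minute SQLite tinkering I mean, using Python's built-in sqlite3 module. The table, index and data are made up for the example:

```python
import sqlite3

# Throwaway in-memory database: create a table, add an index, run a query.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
cur.execute("CREATE INDEX idx_users_dept ON users(dept)")  # indices speed up lookups on this column

cur.executemany(
    "INSERT INTO users (name, dept) VALUES (?, ?)",
    [("alice", "finance"), ("bob", "engineering"), ("carol", "finance")],
)
conn.commit()

# Parameterized queries are also the fix for the SQL injection you like to demo.
for (name,) in cur.execute("SELECT name FROM users WHERE dept = ?", ("finance",)):
    print(name)

conn.close()
```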

Note how almost half the things on my list are human-centric areas… because the people are always the hardest part of the job.

Wednesday, March 19, 2014

An interesting tidbit in the EU data protection regs:

The European Parliament has finally passed their big redesign of data protection regulation. Nothing too shocking in there, in light of the Snowden fallout. One little item caught my eye tho:

 Data Protection Officers: the controller and the processor shall designate a data protection officer inter alia, where the processing is carried out by a legal person and relates to more than 5000 data subjects in any consecutive 12-month period.

Data protection officers shall be bound by secrecy concerning the identity of data subjects and concerning circumstances enabling data subjects to be identified, unless they are released from that obligation by the data subject. The committee changed the criterion from the number of employees a company has (the Commission suggested at least 250), to the number of data subjects. DPOs should be appointed for at least four years in the case of employees and two in that of external contractors.

The Commission proposed two years in both cases.

Data protection officers should be in a position to perform their duties and tasks independently and enjoy special protection against dismissal. Final responsibility should stay with the management of an organisation.

The data protection officer should be consulted prior to the design, procurement, development and setting-up of systems for the automated processing of personal data, in order to ensure the principles of privacy by design and privacy by default.


Not anything new here, but reviewing it made me think about an interesting metric buried in there: "the controller and the processor shall designate a data protection officer inter alia, where the processing is carried out by a legal person and relates to more than 5000 data subjects in any consecutive 12-month period ... The committee changed the criterion from the number of employees a company has (the Commission suggested at least 250), to the number of data subjects"

First, I liked the old metric of 250 employees per data protection officer. It tracked with my experience of about the right size at which to start having a dedicated security officer. But changing it to the size of the pile of confidential data you're protecting is even more relevant.

When I was hired at my current job, we were a smallish company, but we were custodians of megatons of PII. And 5,000 sounds about right, if nothing else, for breach numbers: if the average cost is around $136 per person's records breached, then 5,000 x $136 = $680,000.

Okay, now we have our impact. The question is what the probability of a breach is and how much a dedicated DPO reduces that probability. Well, that probably varies from organization to organization, tho it'd be good to know some hard numbers. Something to munch on.
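
If you want to munch on it with numbers, here's the shape of the arithmetic. The 5,000 subjects and $136 per record come from above; the baseline breach probability and the reduction a DPO buys you are pure guesses you'd have to replace with your own:

```python
# Back-of-the-envelope value of a dedicated DPO. The record count and per-record
# cost come from the post; the probabilities are hypothetical placeholders.

records         = 5000
cost_per_record = 136
impact          = records * cost_per_record        # $680,000

p_breach           = 0.10    # guess: annual odds of a breach with no dedicated DPO
dpo_risk_reduction = 0.30    # guess: how much a DPO cuts that probability

loss_without = impact * p_breach
loss_with    = impact * p_breach * (1 - dpo_risk_reduction)

print(f"Expected annual loss without DPO: ${loss_without:,.0f}")
print(f"Expected annual loss with DPO:    ${loss_with:,.0f}")
print(f"Annualized value of the DPO:      ${loss_without - loss_with:,.0f}")
```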

The other thing I liked in the regs is that "Data protection officers should be in a position to perform their duties and tasks independently", which continues to support my position that infosec should not report into the IT hierarchy.

Monday, March 17, 2014

Make your security tools: DLP

After spending tens of thousands of dollars on commercial security solutions that did not meet our needs, our security team opted for a DIY approach. One of the first tools we wanted was a decent DLP.  We were very disappointed in the DLP solutions available, especially when it came to tracking confidential data elements across both Linux and Windows file systems. Many were hard to use, difficult to configure, and/or dragged along an infrastructure of servers, agents and reporting systems. We wanted something small, flexible, and dead simple.  At that point, we were either looking at going back to the well for more resources to get the job done or coming up with something crafty.  None of us were coders beyond some basic sysadmin scripting, but we decided to give it a shot.

The problem was that we potentially had confidential data lying around on several large file repositories.  Nasty stuff like name + SSN, birthdate, credit card, etc.  We tried several commercial and open source DLP scanners and they missed huge swaths of stuff.  What was particularly vexing is that our in-house apps were generating some of this stuff, but in our own format.  It was pure ASCII text, but the actual formatting of the data made it invisible to the DLP tools.  It was structured, but not in a way that any other tool could deal with.  Most of the tools didn't offer much flexibility in terms of configuration.  Those that did were limited to single-pass regex.

Our second problem was that we also wanted a way to cleanly scrub the data we found.  Not delete it, not encrypt it, but excise it like a tumor, with the precision of a surgeon.  We were tearing through log files and test data load files used by developers.  Some of these files came directly from customers who did not know to scrub out their own PII first.  We had the blessing of management to clip the Personal out of PII and anonymize it in place.  No tool on the market did that.

Luckily we knew what we were looking for, how it was structured, and what we wanted to do with it.  That allowed us to do contextual analysis: when you see these indicators, look here for these kinds of files.  Using Python, some hints from OpenDLP (one of the tools we had looked at), and a little Luhn testing, we did a first pass.
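
I can't share our script, but the first-pass idea is simple enough to sketch.  Think of the following as a stripped-down, hypothetical illustration of "regex candidates plus a Luhn check", not the real tool; the patterns and the directory walk are the textbook versions, not our formats:

```python
import re
from pathlib import Path

# Hypothetical first-pass patterns; our real data didn't match textbook formats,
# which is exactly why the later passes were needed.
SSN_RE  = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(candidate: str) -> bool:
    """Luhn checksum: filters out most digit strings that only look like card numbers."""
    digits = [int(d) for d in re.sub(r"\D", "", candidate)][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return len(digits) >= 13 and total % 10 == 0

def first_pass(root: str):
    """Yield (file, line_no, kind, match) for anything that smells like PII."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue
        for n, line in enumerate(lines, 1):
            for m in SSN_RE.finditer(line):
                yield (str(path), n, "ssn", m.group())
            for m in CARD_RE.finditer(line):
                if luhn_ok(m.group()):
                    yield (str(path), n, "card", m.group())

# Example: for hit in first_pass("/some/file/share"): print(hit)
```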

We got a ton of stuff back.  Almost none of it good.  This was not unexpected, as this was our experience with a lot of the DLP tools. 

So we then started a second pass of contextual and content analysis.  We dove in, looked at those false positives, and figured out what made them false.  The second-pass scan would weed out those cases with pattern matching and algorithms.  We rinsed, lathered and repeated with bigger and bigger data sets until we were hitting exactly what we wanted with no false positives. 

Next we added a scrub routine that replaced the exact piece of PII in a file with a unique nonsense data element.  For example, some of these files were being used as test loads by developers.  If we just turned all credit card numbers into 9s, their code would fail.  They also needed unique values for data analysis: if you turn a table of SSNs into every single entry being 99999, the test will fail.  So we selectively changed digits but maintained uniqueness.  I can't get into too much detail without giving away proprietary code, but you can read all about it here.
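
The general shape of the scrub, minus anything proprietary, looks something like this: a deterministic, format-preserving replacement that keeps uniqueness and remembers what it changed.  Everything below is a made-up sketch of the idea, not our code:

```python
import json
import random

_mapping = {}   # original value -> replacement, so the same input always scrubs the same way
_used = set()   # replacements handed out so far, to preserve uniqueness

def scrub_value(original: str) -> str:
    """Return a stable fake value with the same layout (digits stay digits)."""
    if original in _mapping:
        return _mapping[original]
    if not any(c.isdigit() for c in original):
        return original                      # nothing to scrub
    while True:
        fake = "".join(random.choice("0123456789") if c.isdigit() else c for c in original)
        if fake != original and fake not in _used:
            break
    _mapping[original] = fake
    _used.add(fake)
    return fake

def scrub_file(path: str, hits: list) -> None:
    """Replace every matched PII string in a file, in place."""
    with open(path, encoding="utf-8", errors="ignore") as fh:
        text = fh.read()
    for value in hits:
        text = text.replace(value, scrub_value(value))
    with open(path, "w", encoding="utf-8") as fh:
        fh.write(text)

def dump_mapping(log_path: str) -> None:
    """Write the original->replacement log. Protect it; it now holds the real PII."""
    with open(log_path, "w", encoding="utf-8") as fh:
        json.dump(_mapping, fh, indent=2)
```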

We also kept a detailed log of what was changed to what, so that we could un-ring that bell if it ever misfired.  And of course, we protected those log files since they now have confidential data elements in them.

What we ended up with was a single script that, given a file path, would just go to town on the files it found.  No agents, no back-end databases, no configuration. Just point and shoot. 

The beauty is that we knew what we were willing to trade off: speed for precision.  Our goal was the reduction of manual labor and better assurance.  Our code was clunky, ran in a slow interpreted language, and took hours to complete.  But it was also easy to modify, easy to pass around to team members, and the logic was very clear.  Adopting the release-early-and-often approach, we had something usable within weeks that proved more functional than the products on the market. 

The tool proved to be laser-precise in hunting down the unique PII data records in our environment, preventing costly and embarrassing data leaks.  After showing it around, we were given precious developer resources to clean up our code, add functionality, and fix a few little bugs.  It's been so successful as an in-house tool that our management will soon be releasing it as a software utility to go along with our product.


Thursday, February 20, 2014

Internal Vulnerability scanning

The hardest thing about vulnerability scanning is not the scanning itself. There are literally dozens of pretty decent scanning tools and vendors out there at way reasonable prices. The hard part is prioritizing the mountain of vulnerability data you get back. This is especially true if you’re scanning your inside network, which I highly recommend you do as frequently as possible. Our team runs scans nearly every other day, tho the scans are different (which I’ll get into), with the entire suite of scans completing once a week. I’m a big believer in getting an “attacker's eye view” of my network and using that as a component of my risk and architecture decision making.

 However, every scan seems to generate dozens and dozens of vulnerabilities per host. Multiply that by a good-sized network and you’ll find things are quickly unmanageable. If your organization is seeing only a few hits or none per host, then congratulations, you’re very lucky (either that or your scanner is malfunctioning). I live in the hyper-fast world where innovation, customer service, and agility (you can’t spell agility without Agile) are key profit drivers while InfoSec is not. So my team has a lot of stuff to wade through. Here’s how we deal with it.  

Multiple scans
 I do most of my scanning after hours so as not to disrupt the business and clog up the pipes. Yes, I have blown up boxes and switches doing vuln scans, despite a decade and a half of experience using these things. It happens. So, I do it at night. But that gives me limited time. Also, for risk management purposes, I want to get different perspectives on scans, which some scanners can do with a single deep scan, but with others it’s harder. There are some tools that let you aggregate your scans in a database and slice and dice there. I haven’t found one that I thought was worth the money… mostly because I like munging the data directly with my own risk models and none let me do that. If I have some spare time (ha!) I might write my own vuln database analysis tool (there’s a rough sketch of that idea after the list below). But for now, it works out easiest for me to run different scans on different days, and then look at the aggregate. Here are the types of scans I run:

1) Full-bore with credentials. The scanner has full administrative login creds for everything on the network. All the signatures are active and even some automated web app hacking is enabled. These can run so long that I have to break them up over several days to catch all of my enterprise (or buy even more scanners if my budget can handle it). It gives me the fullest grandest possible picture of what is messed up and where. Problem is that it also generates a ton of data.

2) Pivot scan with limited credentials. Now the scanner has the login creds of an ordinary user. These scans are much faster than the one above. The report tells me what my network looks like if a user’s workstation gets popped and an attacker is pivoting laterally, looking for soft targets. A very interesting perspective.

3) External scan with no credentials. Fast and quick, find everything that’s low-hanging fruit. I do these frequently.

4) Patch and default settings scan. Another fast and quick scan, look for missing patches and default creds and other dumb stuff. I do these frequently as well.

5) Discovery scan. Quick and fast network mapping to make sure nothing new has been added to the network. Also done frequently.  
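
And since I mentioned wanting my own vuln database analysis tool, here's roughly the shape of it. This sketch assumes you've already normalized each scanner's export into a simple CSV; the column names are hypothetical, not any vendor's format:

```python
import csv
import sqlite3

# Assumed CSV columns (hypothetical): host, scan_type, severity, cvss, finding, scan_date

def load_scan(db: sqlite3.Connection, csv_path: str) -> None:
    db.execute("""CREATE TABLE IF NOT EXISTS findings
                  (host TEXT, scan_type TEXT, severity TEXT,
                   cvss REAL, finding TEXT, scan_date TEXT)""")
    with open(csv_path, newline="") as fh:
        rows = [(r["host"], r["scan_type"], r["severity"],
                 float(r["cvss"] or 0), r["finding"], r["scan_date"])
                for r in csv.DictReader(fh)]
    db.executemany("INSERT INTO findings VALUES (?, ?, ?, ?, ?, ?)", rows)
    db.commit()

def worst_hosts(db: sqlite3.Connection, limit: int = 20):
    """Slice across all scan types: which hosts carry the most high-CVSS findings?"""
    return db.execute("""SELECT host, COUNT(*) AS hits, MAX(cvss) AS worst
                         FROM findings WHERE cvss >= 7
                         GROUP BY host ORDER BY hits DESC LIMIT ?""", (limit,)).fetchall()

# Usage: db = sqlite3.connect("vulns.db"); load_scan(db, "full_bore.csv"); print(worst_hosts(db))
```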

Break it down
Whether you’ve done one big scan and aggregated it, or stitched together your multiple scans, you can’t possibly have IT patch every single hole. Especially in a dynamic corporate environment such as ours. I long for the restricted deployment world of no-local-admins, certified install images and mandatory configuration compliance… but then that world isn't known for innovation or profit. So I have this pile of vulns to deal with. How do I break them down?

1) Take High-Med-Low/Red-Orange-Yellow/CVSS with a grain of salt. Yeah, a Purple Critical 11.5 scored vuln is probably bad. But there seems to be a lot of vulnerability score inflation out there. I need something I can work with. One approach is a points system: start with a CVSS score (or whatever you like) and add or subtract priority points based on the rest of these rules (there’s a quick sketch of this after the list).

 2) Vulnerabilities that have known exploits are high priority. If there’s a hole and a script kiddie can poke it, we need to fix it. We’re below the Mendoza Line for Security.

 3) Protocol attacks, especially on the inside, are lower priority. Yeah, man-in-the-middle or crypto-break attacks happen. But they’re less common than the dumb stuff (see previous).

 4) Extra attention to the key servers. Duh. But yes, your AD controllers, Sharepoints, databases, terminal servers and file shares need to be clean. Not only do they hold important goodies hackers want (like data or password databases), but if they go down, lots of folks can’t work. Bad news for the SLA and IT’s reputation.

5) Easy wins are high priority. This includes basic OS patches, fixing default passwords, turning off dumb services.

6) User workstation “Internet contact points” are scored higher as well. This means un-patched browsers, Java, Adobe readers, mail clients, etc. This is where malware comes into the organization. Lock them down.  
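
Here’s that points system as a toy sketch. The rules map to the list above; the weights are arbitrary and the field names are invented, so season to taste:

```python
# Toy priority scorer. The weights are arbitrary and the field names are invented.

KEY_SERVER_TAGS = {"ad", "database", "sharepoint", "fileshare", "terminal"}
CLIENT_CONTACT  = {"browser", "java", "adobe reader", "mail client"}

def priority(vuln: dict) -> float:
    score = vuln.get("cvss", 0.0)                              # rule 1: CVSS is just the starting bid
    if vuln.get("exploit_available"):
        score += 5                                             # rule 2: known exploit jumps the queue
    if vuln.get("category") == "protocol" and vuln.get("internal"):
        score -= 3                                             # rule 3: internal MITM/crypto can wait
    if vuln.get("asset_tag") in KEY_SERVER_TAGS:
        score += 3                                             # rule 4: key servers get extra attention
    if vuln.get("easy_fix"):
        score += 2                                             # rule 5: easy wins first
    if vuln.get("software") in CLIENT_CONTACT:
        score += 2                                             # rule 6: workstation Internet contact points
    return score

findings = [
    {"cvss": 9.8, "exploit_available": True, "asset_tag": "database", "easy_fix": True},
    {"cvss": 7.4, "category": "protocol", "internal": True},
]
for f in sorted(findings, key=priority, reverse=True):
    print(priority(f), f)
```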

Hand-check the important stuff
I don’t trust machines. It’s why I’m in this business. So on the really important systems, we hand-check critical things at least once a month. This means logging into the box and making sure anti-virus is running and updated, patches have been applied, local security policies are in place, and no suspicious local users have been added. We also do hand checks of key ACLs on routers, switches and firewalls. I wish I could say that these checks are superfluous, but unfortunately they’ve proven fruitful enough that we keep doing them. Scanners miss things for complicated reasons. We don’t check a lot of things this way, just the 10 to 15% of really critical servers and hosts.  

Find out where the money is
If you can afford it, I suggest looking into Data Loss Prevention (DLP) tools. Pairing up a scan for confidential data lying around on servers and workstations against a vuln scan is really helpful. Your idea of “important servers” and “work flows” changes when you see where things end up. There are a lot of DLP tools out there. I haven’t found one I liked. So we wrote our own. But that is a story for another day.  

Happy scanning