Friday, November 21, 2014

The Spoon Model

The spoon theory describes the daily life of people with medical conditions and their limited energy for seemingly everyday tasks. The model goes like this: each day you’re given a handful of spoons, and each activity costs some of them. When the spoons are used up, you need to lie down until the next day. The difference is that healthy people have an ever-renewing supply of spoons and can push themselves, while the medically challenged must work within their limited allotment.

“Most people start the day with unlimited amount of possibilities, and energy to do whatever they desire, especially young people. For the most part, they do not need to worry about the effects of their actions.”

Just the daily tasks of living (getting dressed, making breakfast, getting on the bus) cost spoons. Often, once these spoons are allotted, there aren’t many left for extra activities. Furthermore, a simple problem like skipping a meal or being too cold can reduce the spoon allocation to the point where even normal activity is beyond the budget. And pushing past or overspending the spoon budget can seriously reduce the number of spoons available for the next few days.

It’s a very good and highly recommended read for understanding what life with a chronic illness or disability is like. I also think it’s a good metaphor for the daily workload of an IT worker.

I think folks outside of IT (and especially management) assume IT staff are like healthy people with boundless energy. However, most IT shops are burdened with technical debt, stuck dealing with poorly installed or poorly implemented software and architecture. They only have so many spoons! So when we security folks come in with “You need to patch everything right now!”

Boom! All the spoons are gone. That means less time for other things that might affect your risk profile, like fixing broken anti-virus, monitoring and responding to security alerts, encrypting laptops, and removing accounts for terminated users. And this doesn’t count all the other things IT has to deal with that affect uptime, their users’ satisfaction and their own sanity.

Not every security professional seems to realize that IT has only so many spoons, and that only so many requests are going to be followed through on. We all need to plan carefully, lest we make things worse.

Thursday, September 11, 2014

How to interview security people

Good security people are in demand.  Emphasis on good.  We don’t need any more paper tigers toting certs and zero useful skills.  Sometimes you can spot these folks from a resume, but the last line of defense is the interview.

One problem I’ve seen for a lot of organizations filling their first security position is that the hiring team doesn’t know how to interview for security.  So you can end up with someone with a serious skill gap.  And unfortunately, as the security program is built and staffed up, this unqualified person then hires in more unqualified people.  As Rumsfeld said, “A’s hire A’s, B’s hire C’s.”  So it’s best to get this right at the beginning.

Evaluating security skills is tough.  So much so that my organization usually does dual interviews, sometimes triple: one for cultural/team fit (which is often the hardest and weeds out a lot of people fast), and then a separate interview on technical topics.

But again, security technical interviews are tough if you don’t already have those skills in-house and can’t bring in a consultant to help with hiring.  As Dan Geer says, “cybersecurity is perhaps the most intellectually demanding occupation on the planet.”  So how do you evaluate that?

So what do you do?  Well, let’s step back and look at a simple model of skills: the Dreyfus model of skill acquisition, which runs from novice through advanced beginner, competent and proficient, up to expert.

Now we have a basic idea of the spectrum.  The next important thing is to look closely at the actual skills needed to do the job.  For a lot of security roles, many of the necessary skills aren’t about deciphering malware, configuring firewalls, decoding IDS traces, or hacking web servers.  A lot of security people, and even people whose job title is simply “sysadmin,” need skills along the lines of risk judgement, security architecture, the spectrum of threats, and compliance.  Almost all of these topics are covered deeply in certifications and classes, except for good risk judgement.  And tellingly, that’s where I see a lot of security candidates falter.

Given that fundamental skill, a good interview question is simply: “Tell me how you do risk analysis.”

Another would be to give them an opportunity to demonstrate, with either a hypothetical question or asking them to tell you about a situation where they had to navigate the gray areas of risk.

Here’s an example of a hypothetical, and how I’d expect candidates at various skill levels to answer:

“A business unit just inked a new deal where we’re going to be retrieving our daily TPS reports from an SFTP server in the cloud.  The cloud provider is SSAE audited and is willing to work with us on security.  What risks do you report back to your Director of Ops about this connection?"

Novice
“Cloud?  First of all, that’s a huge issue.  And audited?  SSAE is like SAS-70, so that’s not bad.  But there are still a lot of security problems with the cloud.  That’s a big risk there.  FTP?  No wait, you said SFTP.  That means the connection is encrypted, okay.  I’m mostly concerned about malware.  I didn’t hear anything about anti-virus being used on the connection.  Just because the server is Linux doesn’t mean it can’t get a virus.”

Advanced beginner 
“Cloud is audited?  SSAE?  Type 1 or Type 2? Okay, what else? Cloud Security Alliance?  SFTP is okay, but how is it authenticating?  Password?  What are the password rules? How often is it rotated? And what’s in the file?  I'd like to nmap that SFTP server too."

Proficient - prioritizes importance of aspects, employs maxims
“First off, what’s in the file?  Is it just documents?  It sounds like it, but is there PII in them?  SFTP file transfer is okay, but is this automated?  If so, are we using SSH keys?  The cloud provider says SSAE; is that Type 1 or Type 2, and SOC 1, 2 or 3?  What other certifications or audits do they have?  Is that server hardened?”

Expert - "intuitive grasp of situations based on deep, tacit understanding"
“First off, I’m excited to hear the cloud provider is willing to work with us.  That tells us a lot about how well this will turn out.  Okay, on to the risk.  A lot depends on the nature of the files transferred.  If they have confidential data in them, then we need to look at a lot of things: is the file encrypted at rest in addition to the SFTP encryption, whatever that is?  I’d make sure the level of encryption AND authentication on the SSH session matches the need, not just for risk but also for compliance.  Same goes for the cloud provider.  They’re audited, but what’s the scope?  Does it include servers hosted by them or just their infrastructure?  Is that SFTP server covered?  And at what levels?  And I’d need to see the actual SSAE report so I can read it for scope, relevance and standards.”

Note how at the highest level, there’s an immediate realization that this whole project is being driven by a business purpose.  And that implies that your job as a security professional is to make it work in a safe and sane manner... not stomp all over some possibly important money-making process and shut down a project for compliance reasons.

Okay, another example:  “We have Jericho firewalls.  Tell me about your experience with them?”

“Jericho firewalls?  They suck. Remember the ram’s horn vulnerability they had two years ago? First thing I’d do is replace ‘em. And I know just how to do that.”

To me, this is the worst possible answer. This is someone who, without ever seeing our infrastructure or doing a risk analysis, has already decided what our highest priority is and is willing to commit resources to it. This is not a security professional I’d want on my team.

As someone who has never touched a Jericho firewall, I’d say something like: “Well, I’ve never actually heard of them… which is strange, since I have hands-on experience implementing and auditing many major firewalls (then I’d list the manufacturers and my experience/certs). But I’m confident I could figure out how they work.  Hmm, what would you be expecting me to do with them?”

Now in this case, I’ve qualified my experience and have shown that I need to open a dialog about the nature of the control before proceeding any further.  At this point, it gives me a chance to shine (or fog up) as I probe and analyze their firewall requirements.  And if you as an interviewer are worried about giving away too much internal information in an interview, use made-up or outdated information for the dialog.  Of course, you should have NDAed and vetted the candidate before getting to this point.

Assuming you have NDAed and done basic vetting of the candidate, a useful assessment is to have them review an old audit or vulnerability assessment report of yours in front of you.  Their initial impressions will tell you a lot about their security philosophy - Yes, everyone has a security philosophy.  This field is too new and too fluid for rigid "best practices" to remain relevant1.  Pay particularly close attention to the candidate's comments on compliance.  Ideally, you want attorney-level knowledge of your compliance requirements.  This gets back to the Dreyfus model.  A novice will know the basic compliance rules.  Someone more skilled will understand the "case law" and how the compliance requirement is interpreted in the real world.  A highly skilled individual will work from the intent of the rule and even know how the rule is subverted.

One of the last key elements in a strong security candidate, of any skill level, is their enthusiasm.  This is a tough and often thankless job.  If they've been in the job long enough, they're, as Dan Geer says, sadder but wiser.

I've seen a lot of burned-out security professionals, either as bitter as an old lemon rind or sleepwalking through the job.  One thing to look for is whether they have a "constant learning" attitude.  Ask what they're interested in learning next in security. Ask what the big challenges in the field are for them and how they plan to meet them.  Ask if they teach or mentor or write about security.  These are all good indicators of someone who enjoys the field and is trying to improve.

Whew. Once again this blog post has rambled on and on.  That's enough for now.  Got questions or comments on how you hire security people?  Fire away.

1 - And I realize that attitude in itself is a philosophy.

Thursday, August 7, 2014

Things used interchangeably that are not

I keep seeing security "professionals" mixing and matching terms that are not interchangeable.  I can understand this confusion from a user or a PHB, but not from a security professional.  I think conflating these terms should result in automatic revocation of whatever security certification that person is currently holding.  Considering how tricky security and assurance work already is, it'd be really nice if we all used the same terms for the most basic things we do.

The terms I most often see conflated or misused for each other are:

Privacy and Confidentiality
Privacy relates to a person; confidentiality relates to information about a person.  It gets awkward when folks ask for a privacy policy when they really mean a confidentiality policy.  A privacy policy talks about how I handle (collect, use, retain and disclose) someone’s data.  A confidentiality policy talks about how I protect it.

Vulnerability scan and Penetration test
You can often get a vuln scan as part of a pen-test, but they really aren't the same thing.  The tip-off should be the word "penetration" which means someone is actually breaking in instead of just looking at you.   One usually costs a lot more than the other as well.  Bonus: a port scan is part of a vulnerability scan, but not the whole thing.

Vulnerability/Threat/Impact and Risk
I'm a proud member of SIRA, where a bunch of nerds sit around arguing about different risk models and which fits/works best in which situation.  But you know what?  I'd be happy if the entire industry just started using the most basic formula for risk: Risk = Threat × Probability × Impact.  Sadly, what I see folks doing is:
  1. "We need to stop doing this because APTs are dangerous" -> Risk = Threat
  2. "We need to shut down email because half our messages have malware in them" -> Risk = Probability
  3. "We need to do something about DDOS because our site could go down." -> Risk = Impact
No.  You aren't thinking this through.  And you're confusing the users.  Stop it.
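To make the distinction concrete, here's a toy sketch of the full formula in action (all of the scores are invented for illustration):

```python
def risk(threat: float, probability: float, impact: float) -> float:
    """The basic formula: Risk = Threat x Probability x Impact."""
    return threat * probability * impact

# A scary threat (APTs) that is unlikely to target you scores lower...
apt = risk(threat=9, probability=0.1, impact=8)
# ...than a mundane threat that fires constantly and still hurts.
phishing = risk(threat=4, probability=0.9, impact=6)
```

Fixating on any single factor reorders those priorities; multiplying all three is what keeps the comparison honest.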

Disaster Recovery and Business Continuity
Again, the tip-off is in the words themselves.  Disaster recovery is about recovering the IT systems after a disaster.  Just the IT systems.  Business continuity involves recovering the entire business process.  BC can include DR but not the other way around.

2 factor and additional authentication
You know when you login to your web banking from a new computer and it suddenly asks you what high school you went to?  That's not 2-factor authentication... because that's just more of "something you know." It's layered or risk-based or adaptive authentication.  But it's not a different factor so it's not as strong.  So stop thinking that it is.
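In code terms, extra prompts only become a second factor when they come from a different category. A tiny sketch (the method names are made up; the three categories are the classic ones):

```python
# The classic factor categories: know / have / are.
FACTORS = {
    "password": "something you know",
    "security question": "something you know",  # same category as a password!
    "totp app": "something you have",
    "fingerprint": "something you are",
}

def is_two_factor(methods: list[str]) -> bool:
    """True only if the methods span at least two distinct factor categories."""
    return len({FACTORS[m] for m in methods}) >= 2
```

A password plus a security question never leaves "something you know," so it stays single-factor no matter how many questions get asked.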

What do you see security professionals mixing up all the time?

Wednesday, April 30, 2014

Great blog

So I stumbled across this blog post the other day and really liked it. If I wasn't so lazy, I'd rewrite it, replacing all the references to the development projects with security/risk mitigation projects.

But I am lazy, so read it yourself and make the replacement in your head.

It's a must-read for anyone in the security biz communicating or managing risk with the business folks (in other words, almost everyone in security).

No Deadlines For You! Software Dev Without Estimates, Specs or Other Lies

Seriously, go read it.  It's great.

Wednesday, March 26, 2014

7 areas outside of infosec to study

There are a lot of areas that most of us infosec people like to dabble in that are outside our required skill set. For example, it seems like every third security person has a set of lock picks and loves to pick locks. Unless you're a red teamer, admit that it's just a puzzle you like to play with and stop trying to impress us. Here are some areas just outside of infosec that I like to hone:

1. SEO. Becuz hackers use it to sneak malware into your organization. Information warfare is older school than “cyber warfare,” and information warfare is all about managing perception. Where to start? I recommend my neighbor, Moz.

2. Effective communication. That means learning to write well in email and in long form, and to educate. It means being able to speak effectively one-on-one, in a meeting, and when giving a speech. It means being clear, concise and consistent. It means respecting your audience and establishing rapport. Where to start? I recommend Manager Tools.

3. Project Management. Everything we do is a project. We can always be better at doing them. I’ve been managing projects for decades and I’m still not satisfied on how well things are run. I recommend Herding Cats.

4. Programming. I started in programming but rarely do it anymore. We work in technology. We give advice to developers. We work with sysadmins on scripting. We should at least have a good fundamental grasp of programming in a few major flavors: basic automation scripting, web apps, and short executables. I’d say you should at least be able to create something useful (beyond Hello World) in Perl, Bash, or PowerShell… plus something in Ruby/Python/Java.

5. Databases. Almost everything is built on a database. You should at least be able to write queries and understand how tables and indices work. It’s helpful to know a little more than how to do a SQL injection “drop tables” or “Select *”. You don’t need to become a DBA, but tinker with SQLite or MySQL. As I level up on item 4, I find myself doing more and more of number 5. They kinda go together.
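If you want a zero-install playground for this, Python ships with SQLite built in. A minimal sketch (the table and rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
cur.execute("CREATE INDEX idx_dept ON users (dept)")  # index on the lookup column
cur.executemany("INSERT INTO users (name, dept) VALUES (?, ?)",
                [("alice", "ops"), ("bob", "sec"), ("carol", "sec")])
# Parameterized queries -- the habit that makes SQL injection a non-issue.
cur.execute("SELECT name FROM users WHERE dept = ? ORDER BY name", ("sec",))
names = [row[0] for row in cur.fetchall()]
print(names)  # ['bob', 'carol']
conn.close()
```

Ten minutes of this and table/index/query stop being abstractions.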

6. Psychology. Since we can't solve all our security problems with money (cuz we don't have enough), we have to use influence to get things done. And we have to anticipate how controls will live or die in the real world. A good basic understanding of people beyond treating users as passive objects (or even worse, as rational actors) is required. A good starting place is Dan Ariely's Predictably Irrational: The Hidden Forces That Shape Our Decisions.

7. Behavioral economics (more psychology). If you ever wondered why I have a CISSP, do SSAE-16 audits, and keep an office shelf of security awards, it’s because I get visited by a lot of nervous customers and auditors entrusting me with their data. It’s signaling theory at work.

Note how almost half the things on my list are human-centric areas… because the people are always the hardest part of the job.

Wednesday, March 19, 2014

An interesting tidbit in the EU data protection regs:

The European Parliament has finally passed their big redesign of data protection regulation. Nothing too shocking in there, in light of the Snowden fallout. One little item caught my eye tho:

 Data Protection Officers: the controller and the processor shall designate a data protection officer inter alia, where the processing is carried out by a legal person and relates to more than 5000 data subjects in any consecutive 12-month period.

Data protection officers shall be bound by secrecy concerning the identity of data subjects and concerning circumstances enabling data subjects to be identified, unless they are released from that obligation by the data subject. The committee changed the criterion from the number of employees a company has (the Commission suggested at least 250), to the number of data subjects. DPOs should be appointed for at least four years in the case of employees and two in that of external contractors.

The Commission proposed two years in both cases.

Data protection officers should be in a position to perform their duties and tasks independently and enjoy special protection against dismissal. Final responsibility should stay with the management of an organisation.

The data protection officer should be consulted prior to the design, procurement, development and setting-up of systems for the automated processing of personal data, in order to ensure the principles of privacy by design and privacy by default.

Not anything new here, but reviewing it made me think about an interesting metric buried in there: the controller and the processor shall designate a data protection officer inter alia, where the processing is carried out by a legal person and relates to more than 5000 data subjects in any consecutive 12-month period ... The committee changed the criterion from the number of employees a company has (the Commission suggested at least 250), to the number of data subjects.

First, I liked the old metric of 250 employees per data protection officer. It tracked with my experience of about the right size for a company to start having a dedicated security officer. But changing it to the size of the pile of confidential data you're protecting is even more relevant.

When I was hired on in my current job, we were a smallish company but we were custodians of megatons of PII. And 5000 sounds about right, if nothing else, for breach numbers: If the average cost is around $136 per person's records breached, then 5000 x $136 = $680,000.

Okay, now we have our impact. The question is what the probability of a breach is, and how much a dedicated DPO reduces that probability. Well, that probably varies from organization to organization, tho it'd be good to know some hard numbers. Something to munch on.
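For the curious, the back-of-the-envelope expected-loss math looks like this (the breach probabilities are pure placeholders, which is exactly the gap in hard numbers I mean):

```python
records = 5000
cost_per_record = 136               # dollars, the average breach cost cited above
impact = records * cost_per_record  # $680,000 of exposure

# Hypothetical: suppose a dedicated DPO cuts annual breach probability 5% -> 3%.
loss_without_dpo = impact * 0.05
loss_with_dpo = impact * 0.03
dpo_annual_value = loss_without_dpo - loss_with_dpo
print(impact, round(dpo_annual_value))  # 680000 13600
```

If that (invented) two-point reduction is anywhere near realistic, the DPO covers a chunk of their salary on breach avoidance alone.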

The other thing I liked in the regs is Data protection officers should be in a position to perform their duties and tasks independently which continues to support my position that infosec should not report into the IT hierarchy.

Monday, March 17, 2014

Make your security tools: DLP

After spending tens of thousands of dollars on commercial security solutions that did not meet our needs, our security team opted for a DIY approach.  One of the first tools we wanted was a decent DLP.  We were very disappointed in the DLP solutions available, especially when it came to tracking confidential data elements across both Linux and Windows file systems.  Many were hard to use, difficult to configure, and/or dragged along an infrastructure of servers, agents and reporting systems.  We wanted something small, flexible, and dead simple.  At that point, we were either going back to the well for more resources to get the job done or coming up with something crafty.  None of us were coders beyond some basic sysadmin scripting, but we decided to give it a shot.

The problem was that we potentially had confidential data lying around on several large file repositories.  Nasty stuff like name + SSN, birthdate, credit card number, etc.  We tried several commercial and open source DLP scanners and they missed huge swaths of it.  What was particularly vexing was that our in-house apps were generating some of this data, but in our own format.  It was pure ASCII text, but the actual formatting made it invisible to the DLP tools.  It was structured, just not in a way that any other tool could deal with.  Most of the tools didn't offer much flexibility in configuration.  Those that did were limited to single-pass regex.

Our second problem was that we wanted a way to cleanly scrub the data we found.  Not delete it, not encrypt it, but excise it like a tumor, with the precision of a surgeon.  We were tearing through log files and test data load files used by developers.  Some of these files came directly from customers who didn't know to scrub out their own PII.  We had the blessing of management to clip the Personal out of PII and anonymize it in place.  No tool on the market did that.

Luckily, we knew what we were looking for, how it was structured, and what we wanted to do with it.  That let us do contextual analysis: when you see these indicators, look here for these kinds of files.  Using Python, some hints based on OpenDLP (one of the tools we had looked at), plus a little Luhn testing, we did a first pass.
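The Luhn test mentioned there is the simple checksum that credit card numbers carry; a compact version looks something like this (my illustration here, not our production code):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 2:
        return False
    total = 0
    # Double every second digit from the right; subtract 9 if it exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111-1111-1111-1111"))  # True: a well-known test card number
```

Since roughly one random 16-digit string in ten passes Luhn, it's a false-positive filter rather than proof of a real card number, which is why contextual analysis still matters.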

We got a ton of stuff back.  Almost none of it good.  This was not unexpected, as it matched our experience with a lot of the DLP tools.

So we then started a second pass of contextual and content analysis.  We dove in, looked at the false positives, and figured out what made them false.  The second-pass scan would weed out those cases with pattern matching and algorithms.  We rinsed, lathered and repeated with bigger and bigger data sets until we were hitting exactly what we wanted with no false positives.

Next we added a scrub routine that replaced the exact piece of PII in a file with a unique nonsense data element.  For example, some of these files were being used as test loads by developers.  If we just turned all the credit card numbers into 9's, their code would fail.  They also needed unique values for data analysis: if you turn a table of SSNs into every single entry being 99999, the test will fail.  So we selectively changed digits but maintained uniqueness.  I can't get into too much detail without giving away proprietary code, but you can read all about it here
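Since the real code is proprietary, here's my own toy sketch of the general idea: a keyed hash maps each real SSN to a stable, unique fake, so repeated values stay consistent across files (the key and the 9xx prefix convention are my inventions, not our actual scheme):

```python
import hashlib
import re

SECRET = b"rotate-me-per-run"  # hypothetical key; same key -> same mapping

def scrub_ssn(match: re.Match) -> str:
    """Replace an SSN with deterministic nonsense.

    Same input -> same output (so joins in test data keep working);
    different inputs -> different outputs (uniqueness preserved,
    barring hash collisions, negligible at this scale).
    """
    digest = hashlib.sha256(SECRET + match.group(0).encode()).hexdigest()
    digits = "".join(str(int(c, 16) % 10) for c in digest[:9])
    return f"9{digits[:2]}-{digits[2:4]}-{digits[4:8]}"  # 9xx marks it as fake

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
line = "user 123-45-6789 paid, user 987-65-4321 refunded"
scrubbed = SSN_RE.sub(scrub_ssn, line)
```

Pairing this with a protected log of original-to-fake mappings is what lets you un-ring the bell later if the scrubber ever misfires.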

We also kept a detailed log of what was changed to what, so that we could un-ring that bell if it ever misfired.  And of course, we protected those log files since they now have confidential data elements in them.

What we ended up with was a single script that given a file path, would just go to town on the files it found.  No agents, no back-end databases, no configuration. Just point and shoot. 

The beauty was that we knew what we were willing to trade off: speed for precision.  Our goal was the reduction of manual labor and better assurance.  Our code was clunky, ran in a slow interpreted language, and took hours to complete.  But it was also easy to modify, easy to pass around to team members, and the logic was very clear.  Adopting the release-early-and-often approach, we had something usable within weeks that proved more functional than the products on the market.

The tool proved to be laser-precise in hunting down the unique PII data records in our environment, preventing costly and embarrassing data leaks.  After showing it around, we were given precious developer resources to clean up our code, add functionality, and fix a few little bugs.  It's been so successful as an in-house tool that our management will soon be releasing it as a software utility to go along with our product.