Thursday, December 18, 2014

Our Gray Cyber Future

This post by Bruce Schneier kicked off an idea that's been rattling around in my head for a year or three... The future of cyber-security is indeed looking more and more gray.  Gray as in gray hat... as in more and more "bad guy" techniques being adopted by good guys.  We're going to see continued growth in offensive hacking techniques migrating from attacker to defender, just so defenders can keep up.  And bad guys are going to get even more bad.

As if things weren't already gray enough, with some security professionals jumping back and forth across the line from blackhat to whitehat and vice-versa. 

As Richard Thieme famously pointed out, in the near future the edge will become the center. To see the future of the mainstream, we only need look at the edge. So what is happening at the edge?

The Age of Mega-Threats

We're moving into a time of invisible cyber-wars between nation-states, NGOs and multinationals.  Not to mention large-scale industrial espionage, DDoS and revenge hacking, and major corporations getting whacked by semi-cloaked villains.


We can see the rise of true super villains - individuals who can cause massive cyber-damage.  Not just folks like Snowden, but the mega-botmasters and mercenary super-hackers-for-hire.  We're really only one good 0-day in the wrong hands away from someone shutting down the Internet for a few days.  Large-scale automation of attacks lets one person direct an attack of millions of bots or use them as their own private surveillance network.

Law Enforcement Response
Like it has in the face of terrorism and the war on drugs, LE is going to move away from the clean-cut traditional police methods and use more... let's say, aggressive "out-of-the-box" response techniques.  We started with the usual deceptive law enforcement tools: stings and informant-baited traps... then came deals with questionable folks, collaboration with Internet providers to spy on people, surveillance in the soft areas of the law, spyware injections, and now outright hacking and social engineering.

The Private Sector Response

We in the private sector don't have the legal authority to hack and lie outright, but we're still definitely adopting more and more gray hat techniques.  We now have massive private intelligence operations with private undercover agents in the hacker underground, honeypots to trap and collect intel, reputation tracking, and semi-secret threat sharing organizations.  As Dan Geer said, "We're all intelligence officers now."
But that's just the beginning.  There is a strong movement towards active response, and more outsourced security and tech giants doing “what’s necessary” to clean up the Internet.  There are also quieter semi-active responses going on: large-scale blacklist shunning, false data injections and active deception, and tar pitting.


The next step on the downward spiral would be cyber-privateers hired by victim companies to mete out retribution against attackers on the open Internet by means fair and foul.  And the bad guys and APTs stepping up their game to respond. With everyone else caught in the cross-fire.


PS: I know one can say this is reflective of our times, and you may be right.  But is that the future we want?


Friday, November 21, 2014

The Spoon Model

The spoon theory describes the daily life of people with medical conditions and their limited energy for doing seemingly everyday tasks. The model goes like this: each day you’re given a handful of spoons, and each activity costs you a spoon. When the spoons are used up, you need to lie down until the next day. The difference is that healthy people have an ever-renewing supply of spoons and can push themselves, while the medically challenged must work within their limited allotment.

“Most people start the day with unlimited amount of possibilities, and energy to do whatever they desire, especially young people. For the most part, they do not need to worry about the effects of their actions.”



Just the daily tasks associated with living (getting dressed, making breakfast, getting on the bus) will cost spoons. Often once these spoons are allotted, there aren’t many left for extra activities. Furthermore, a simple problem like skipping a meal or being too cold can reduce the spoon allocation to the point where even normal activity is beyond the budget. And pushing past or overspending the spoon budget can seriously reduce the number of spoons available for the next few days.

It’s a very good and highly recommended read for understanding what life is like with a chronic illness or disability. I also think it’s a good metaphor for the daily workload of an IT worker.

I think folks outside of IT (and especially management) assume IT staff are like healthy people with boundless energy. However, most IT shops are burdened with technical debt, dealing with poorly installed or poorly implemented software and architecture. They only have so many spoons! So when we security folks come in with “You need to patch everything right now!” 

Boom! All the spoons are gone. That means less time for other things that might affect your risk profile, like fixing broken anti-virus, monitoring & responding to security alerts, encrypting laptops, and removing accounts for terminated users. And this doesn’t count all the other things IT has to deal with that affect uptime, their user’s satisfaction and their own sanity.

I’m not sure every security professional remembers that IT has only so many spoons, and that only so many requests are going to be followed through on. We all need to plan carefully lest we make things worse.

Thursday, August 7, 2014

Things used interchangeably that are not

I keep seeing security "professionals" mixing and matching terms that are not interchangeable.  I can understand this confusion from a user or a PHB, but not from a security professional.  I think conflating these terms should result in automatic revocation of whatever the latest security certification that person is holding.  Considering how tricky security and assurance work already is, it'd be really nice if we all used the same terms for some of the most basic things we do.

The terms I most often see conflated or misused for each other are:


Privacy and Confidentiality
Privacy relates to a person; confidentiality relates to information about a person.  It gets awkward when folks ask for a privacy policy when they really mean a confidentiality policy.   A privacy policy would talk about how I handle (collect, use, retain and disclose) someone’s data.  A confidentiality policy talks about how I protect it.


Vulnerability scan and Penetration test
You can often get a vuln scan as part of a pen-test, but they really aren't the same thing.  The tip-off should be the word "penetration" which means someone is actually breaking in instead of just looking at you.   One usually costs a lot more than the other as well.  Bonus: a port scan is part of a vulnerability scan, but not the whole thing.




Vulnerability/Threat/Impact and Risk
I'm a proud member of SIRA, where a bunch of nerds sit around and argue about different risk models and which fits/works best in what situation.  But you know what?  I'd be happy if the entire industry just started using the most basic formula for risk: Risk = Threat × Probability × Impact (a quick sketch follows the list below).  Sadly, what I see folks doing is:
  1. "We need to stop doing this because APTs are dangerous" -> Risk = Threat
  2. "We need to shut down email because half our messages have malware in them" -> Risk = Probability
  3. "We need to do something about DDOS because our site could go down." -> Risk = Impact
No.  You aren't thinking this through.  And you're confusing the users.  Stop it.
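For the formula-minded, here's a minimal sketch of the point. The scales and numbers (threat rated 1-5, probability 0-1, impact in dollars) are my own illustrative assumptions, not anyone's standard; the only thing it shows is that all three terms have to be present before you've actually described a risk.

```python
# A minimal sketch of Risk = Threat x Probability x Impact. The scales here
# are illustrative assumptions only -- the point is that fixating on a single
# term (threat, probability, or impact alone) isn't a risk statement.

def risk_score(threat, probability, impact_dollars):
    """All three factors together; any one alone is not a risk."""
    return threat * probability * impact_dollars

# "APTs are dangerous" -- scary threat, but how likely and how costly for us?
apt = risk_score(threat=5, probability=0.02, impact_dollars=250_000)

# "Half our email has malware" -- high probability, low per-event impact.
email = risk_score(threat=2, probability=0.5, impact_dollars=5_000)

# "Our site could go down" -- big impact, but what's the real likelihood?
ddos = risk_score(threat=3, probability=0.1, impact_dollars=100_000)

for name, score in (("APT", apt), ("Email malware", email), ("DDoS", ddos)):
    print(f"{name:14} relative risk: {score:>9,.0f}")
```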

Disaster Recovery and Business Continuity
Again, the tip-off is in the words themselves.  Disaster recovery is about recovering the IT systems after a disaster.  Just the IT systems.  Business continuity involves recovering the entire business process.  BC can include DR but not the other way around.

2 factor and additional authentication
You know when you log in to your web banking from a new computer and it suddenly asks you what high school you went to?  That's not 2-factor authentication... because that's just more of "something you know." It's layered or risk-based or adaptive authentication.  But it's not a different factor, so it's not as strong.  So stop thinking that it is.

What do you see security professionals mixing up all the time?




Wednesday, April 30, 2014

Great blog

So I stumbled across this blog post the other day and really liked it. If I wasn't so lazy, I'd rewrite it, replacing all the references to the development projects with security/risk mitigation projects.

But I am lazy, so read it yourself and make the replacement in your head.

It's a must-read for anyone in the security biz communicating or managing risk to the business folks (in other words, almost everyone in security).

No Deadlines For You! Software Dev Without Estimates, Specs or Other Lies

Seriously, go read it.  It's great.

Wednesday, March 26, 2014

7 areas outside of infosec to study

There are a lot of areas just outside of our required skill set that most of us infosec people like to dabble in. For example, it seems like every third security person has a set of lock picks and loves to show them off. Unless you're a red teamer, admit that it's just a puzzle you like to play with and stop trying to impress us. Here are some areas just outside of infosec that I like to hone:

1. SEO - Becuz hackers use it to sneak malware into your organization. Information warfare is older school than “cyber warfare”, and information warfare is all about managing perception. Where to start? I recommend my neighbor, Moz

2. Effective communication. That means learning to write well in email, in long form, and to educate. It means being able to speak effectively one-on-one, in a meeting, and when giving a speech. It means being clear, concise and consistent. It means respecting your audience and establishing rapport. Where to start? I recommend Manager Tools.

3. Project Management. Everything we do is a project. We can always be better at doing them. I’ve been managing projects for decades and I’m still not satisfied with how well things are run. I recommend Herding Cats.

4. Programming. I started in programming but rarely do it anymore. We work in technology. We give advice to developers. We work with sysadmins on scripting. We should at least have a good fundamental grasp of programming in a few major flavors: basic automation scripting, web apps, and short executables. I’d say you should at least be able to create something useful (beyond Hello World) in Perl, Bash, or PowerShell… plus something in Ruby/Python/Java.

5. Databases. Almost everything is built on a database. You should at least be able to write queries and understand how tables and indices work. It’s helpful to know a little more than how to do a SQL injection “drop tables” or “Select *”. You don’t need to become a DBA, but tinker with SQLite or MySQL (there’s a tiny sketch of that at the end of this post). As I level up on item 4, I find myself doing more and more of number 5. They kinda go together.

6. Psychology. Since we can't solve all our security problems with money (cuz we don't have enough), we have to use influence to get things done. And we have to anticipate how controls will live or die in the real world. A good basic understanding of people beyond treating users as passive objects (or even worse, as rational actors) is required. A good starting place is Dan Ariely's Predictably Irrational: The Hidden Forces That Shape Our Decisions.

7. Behavioral economics (More psychology) If you ever wondered why I have a CISSP, do SSAE-16 audits, and have an office shelf of security awards, it’s because I get visited by a lot of nervous customers and auditors entrusting me with their data. And signaling theory.

Note how almost half the things on my list are human-centric areas… because the people are always the hardest part of the job.
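And since I mentioned tinkering with SQLite in item 5, here's the kind of five-minute exercise I mean: one throwaway table, one index, one query that asks an actual question instead of "Select *". The table and column names are invented for the example.

```python
import sqlite3

# A five-minute SQLite tinker: one table, one index, one query that does more
# than "SELECT *". Table and column names are made up for the demo.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE logins (username TEXT, src_ip TEXT, success INTEGER)")
con.executemany(
    "INSERT INTO logins VALUES (?, ?, ?)",
    [("alice", "10.0.0.5", 1), ("bob", "10.0.0.9", 0), ("bob", "10.0.0.9", 0)],
)

# An index is what keeps lookups on username fast once the table gets big.
con.execute("CREATE INDEX idx_logins_user ON logins (username)")

# Failed logins grouped by user -- a query with a question behind it.
for user, failures in con.execute(
    "SELECT username, COUNT(*) FROM logins WHERE success = 0 GROUP BY username"
):
    print(user, failures)

con.close()
```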

Wednesday, March 19, 2014

An interesting tidbit in the EU data protection regs:

The European Parliament has finally passed their big redesign of data protection regulation. Nothing too shocking in there, in light of the Snowden fallout. One little item caught my eye tho:

 Data Protection Officers: the controller and the processor shall designate a data protection officer inter alia, where the processing is carried out by a legal person and relates to more than 5000 data subjects in any consecutive 12-month period.

Data protection officers shall be bound by secrecy concerning the identity of data subjects and concerning circumstances enabling data subjects to be identified, unless they are released from that obligation by the data subject. The committee changed the criterion from the number of employees a company has (the Commission suggested at least 250), to the number of data subjects. DPOs should be appointed for at least four years in the case of employees and two in that of external contractors.

The Commission proposed two years in both cases.

Data protection officers should be in a position to perform their duties and tasks independently and enjoy special protection against dismissal. Final responsibility should stay with the management of an organisation.

The data protection officer should be consulted prior to the design, procurement, development and setting-up of systems for the automated processing of personal data, in order to ensure the principles of privacy by design and privacy by default.


Not anything new here, but reviewing it made me think about an interesting metric buried in there: the controller and the processor shall designate a data protection officer inter alia, where the processing is carried out by a legal person and relates to more than 5000 data subjects in any consecutive 12-month period ... The committee changed the criterion from the number of employees a company has (the Commission suggested at least 250), to the number of data subjects

First, I liked the old metric of 250 employees per data protection officer. It tracked with my experience of about the right size at which to start having a dedicated security officer. But changing it to the size of the pile of confidential data you're protecting is even more relevant.

When I was hired on at my current job, we were a smallish company, but we were custodians of megatons of PII. And 5000 sounds about right, if nothing else, for breach numbers: if the average cost is around $136 per person's records breached, then 5000 x $136 = $680,000.

Okay, now we have our impact. The question is: what is the probability of a breach, and how much does a dedicated DPO reduce that probability? Well, that probably varies from organization to organization, tho it'd be good to know some hard numbers. Something to munch on.
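Here's a back-of-the-envelope sketch of that question. The $136-per-record figure and the 5000 data subjects come from above; the DPO cost is a made-up placeholder, so treat this as the shape of the math rather than an answer.

```python
# Back-of-the-envelope only: the per-record cost and record count come from
# the post above; the DPO cost is an assumed placeholder.
RECORDS = 5_000
COST_PER_RECORD = 136                 # average breach cost per person's records
impact = RECORDS * COST_PER_RECORD    # = $680,000, the impact figure above

dpo_annual_cost = 120_000             # assumed fully-loaded cost of the role

# How much would a dedicated DPO need to reduce the annual probability of a
# breach for the role to pay for itself on breach costs alone?
break_even_reduction = dpo_annual_cost / impact

print(f"Impact if breached:               ${impact:,}")
print(f"Break-even probability reduction: {break_even_reduction:.1%} per year")
# Roughly 18% here -- and that ignores fines, reputation, and everything else
# a DPO does, which is why the honest answer still varies by organization.
```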

The other thing I liked in the regs is "Data protection officers should be in a position to perform their duties and tasks independently," which continues to support my position that infosec should not report into the IT hierarchy.

Monday, March 17, 2014

Make your security tools: DLP

After spending tens of thousands of dollars on commercial security solutions that did not meet our needs, our security team opted for a DIY approach. One of the first tools we wanted was a decent DLP.  We were very disappointed in the DLP solutions available, especially when it came to tracking confidential data elements across both Linux and Windows file systems. Many were hard to use, difficult to configure, and/or dragged along an infrastructure of servers, agents and reporting systems. We wanted something small, flexible, and dead simple.  At this point, we were either looking at going back to the well for more resources to get the job done or coming up with something crafty.  None of us were coders beyond some basic sysadmin scripting, but we decided to give it a shot.

The problem was that we potentially had confidential data lying around on several large file repositories.  Nasty stuff like name + SSN, birthdate, credit card, etc.  We tried several commercial and open source DLP scanners and they missed huge swaths of stuff.  What was particularly vexing is that our in-house apps were generating some of this stuff, but it was in our own format.  It was pure ASCII text, but the actual formatting of the data was making it invisible to the DLP tools.   It was structured, but not in a way that any other tool could deal with.   Most of the tools didn't offer much flexibility in terms of configuration.  Those that did were limited to single-pass regex.

Our second problem was that we also wanted a way to cleanly scrub the data we found.  Not delete it, not encrypt it, but excise it like a tumor, with the precision of a surgeon.  We were tearing through log files and test data load files used by developers.   Some of these files came directly from customers who did not know enough to scrub out their own PII.  We had the blessing of management to clip the Personal out of PII and anonymize it in place.  No tool on the market did that.

Luckily we knew what we were looking for, how it was structured, and what we wanted to do with it.  That allowed us to do contextual analysis... when you see these indicators, look here for these kinds of files.  Using Python, some hints based on OpenDLP (one of the things we looked at), and a little Luhn testing, we did a first pass.
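Our actual code stays in-house, but the first-pass idea looks roughly like this: a deliberately loose regex pulls out candidate card numbers, and a Luhn check throws away most of the random digit strings that aren't real cards. The pattern and sample data here are illustrative, not what we shipped.

```python
import re

# Sketch of the first pass only (not our production code): a loose regex finds
# candidate card numbers, then the Luhn checksum weeds out random digit runs.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # loose on purpose

def luhn_ok(digits):
    """Standard Luhn check: double every second digit from the right."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def first_pass(text):
    """Yield (offset, candidate) pairs that survive the Luhn test."""
    for match in CANDIDATE.finditer(text):
        if luhn_ok(re.sub(r"[ -]", "", match.group())):
            yield match.start(), match.group()

if __name__ == "__main__":
    sample = "order 4111 1111 1111 1111 shipped, ref 1234567812345678"
    for offset, hit in first_pass(sample):
        print(offset, hit)
```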

We got a ton of stuff back.  Almost none of it good.   This was not unexpected, as this was our experience with a lot of the DLP tools. 

So we then started a second pass of contextual and content analysis.  We dove in, looked at these false positives, and found what made them false.  The second-pass scan would weed out those cases with pattern matching and algorithms.   We rinsed, lathered and repeated with bigger and bigger data sets until we were hitting exactly what we wanted with no false positives. 

Next we added a scrub routine that replaced the exact piece of PII in a file with a unique nonsense data element.  For example, some of these files were being used as test loads by developers.  If we just turned all credit card numbers into 9's, their code would fail.   They also needed unique numbers for data analysis. If you turn a table of SSNs into every single entry being 99999, the test will fail.  So we selectively changed digits but maintained uniqueness.  I can't get into too much detail without giving away proprietary code, but you can read all about it here

We also kept a detailed log of what was changed to what, so that we could un-ring that bell if it ever misfired.  And of course, we protected those log files since they now have confidential data elements in them.
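I can't show the real scrubber, but the core idea is simple enough to sketch: every distinct value gets its own stable fake replacement with the same shape (so uniqueness survives), and every swap is logged so the bell can be un-rung. SSNs only here, and the file paths and names are invented.

```python
import csv
import random
import re

# Sketch of the scrub idea, not the proprietary tool: each distinct SSN maps to
# its own fake value with the same shape, so test loads keep their uniqueness,
# and every swap is logged so it can be reversed. Guard that log file -- it
# maps fake values back to real PII.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

replacements = {}   # real value -> fake value, stable across files in this run
used = set()        # keeps the generated fakes unique

def fake_ssn():
    while True:
        # Area numbers 900-999 are never issued as real SSNs, so the fakes
        # can't collide with genuine values.
        candidate = "%03d-%02d-%04d" % (
            random.randint(900, 999), random.randint(10, 99), random.randint(0, 9999))
        if candidate not in used:
            used.add(candidate)
            return candidate

def scrub_text(text):
    def swap(match):
        real = match.group()
        if real not in replacements:
            replacements[real] = fake_ssn()
        return replacements[real]
    return SSN.sub(swap, text)

def scrub_file(path, log_path="scrub_log.csv"):
    known_before = set(replacements)
    with open(path, encoding="utf-8", errors="replace") as f:
        cleaned = scrub_text(f.read())
    with open(path, "w", encoding="utf-8") as f:
        f.write(cleaned)
    # Log only the swaps introduced by this file, so the change can be undone.
    with open(log_path, "a", newline="", encoding="utf-8") as log:
        writer = csv.writer(log)
        for real in set(replacements) - known_before:
            writer.writerow([path, real, replacements[real]])
```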

What we ended up with was a single script that, given a file path, would just go to town on the files it found.  No agents, no back-end databases, no configuration. Just point and shoot. 

The beauty is that we knew what we were willing to trade off, which was speed, against precision.  Our goal was the reduction of manual labor and better assurance.   Our code was clunky, ran in a slow interpreted language, and took hours to complete.  But it was also easy to modify, easy to pass around to team members, and the logic was very clear.  Adopting the release-early-and-often approach, we had something usable within weeks that proved more functional than the products on the market. 

The tool proved to be laser-precise in hunting down the unique PII data records in our environment, preventing costly and embarrassing data leaks.  After showing it around, we were given precious developer resources to clean up our code, add functionality, and fix a few little bugs.  It's been so successful as an in-house tool that our management will soon be releasing it as a software utility to go along with our product.


Thursday, February 20, 2014

Internal Vulnerability scanning

The hardest thing about vulnerability scanning is not the scanning itself. There are literally dozens of pretty decent scanning tools and vendors out there at way reasonable prices. The hard part is prioritizing the mountain of vulnerability data you get back. This is especially true if you’re scanning your inside network, which I highly recommend you do as frequently as possible. Our team runs scans nearly every other day, tho the scans are different (which I’ll get into), with the entire suite of scans completing once a week. I’m a big believer in getting an “attacker's eye view” of my network and using that as a component of my risk and architecture decision making.

However, every scan seems to generate dozens and dozens of vulnerabilities per host. Multiply that by a good-sized network and you’ll find things are quickly unmanageable. If your organization is lucky enough that you’re seeing only a few hits or none per host, then congratulations, you’re very lucky (either that or your scanner is malfunctioning). I live in the hyper-fast world where innovation, customer service, and agility (you can’t spell agility without Agile) are key profit drivers while InfoSec is not. So my team has a lot of stuff to wade through. Here’s how we deal with it.  

Multiple scans
I do most of my scanning after hours so as not to disrupt the business and clog up the pipes. Yes, I have blown up boxes and switches doing vuln scans, despite a decade and a half of experience using these things. It happens. So, I do it at night. But that gives me limited time. Also, for risk management purposes, I want to get different perspectives on scans, which some scanners can do with a single deep scan, but with others it’s harder. There are some tools that let you aggregate your scans in a database and let you slice and dice there. I haven’t found one that I thought was worth the money… mostly because I like munging the data directly with my own risk models and none let me do that. If I have some spare time (ha!) I might write my own vuln database analysis tool. But for now, it works out easiest for me to run different scans on different days, and then look at the aggregate. Here are the types of scans I run:

1) Full-bore with credentials. The scanner has full administrative login creds for everything on the network. All the signatures are active and even some automated web app hacking is enabled. These can run so long that I have to break them up over several days to cover all of my enterprise (or buy even more scanners if my budget can handle it). It gives me the fullest, grandest possible picture of what is messed up and where. Problem is that it also generates a ton of data.

2) Pivot scan with limited credentials. Now the scanner has the login creds of an ordinary user. This scan is much faster than the one above. The report tells me what my network looks like if a user’s workstation gets popped and an attacker is pivoting laterally and looking for soft targets. A very interesting perspective.

3) External scan with no credentials. Fast and quick, find everything that’s low-hanging fruit. I do these frequently.

4) Patch and default settings scan. Another fast and quick scan, look for missing patches and default creds and other dumb stuff. I do these frequently as well.

5) Discovery scan. Quick and fast network mapping to make sure nothing new has been added to the network. Also done frequently.  

Break it down
Whether you’ve done one big scan & aggregated it, or stitched together your multiple scans, you can’t possibly have IT patch every single hole. Especially in a dynamic corporate environment such as ours. I long for the restricted deployment world of no-local-admins, certified install images and mandatory configuration compliance… but then that world isn't known for innovation or profit. So I have this pile of vulns to deal with. How do I break them down?

1) Take High-Med-Low/Red-Orange-Yellow/CVSS with a grain of salt. Yeah, a Purple Critical 11.5-scored vuln is probably bad. But there seems to be a lot of vulnerability score inflation out there. I need something I can work with. One approach is a points system: start with a CVSS score (or whatever you like) and add/subtract priority points based on the rest of these rules (see the sketch after this list).

2) Vulnerabilities that have known exploits are high priority. If there’s a hole and a script kiddie can poke it, we need to fix it. We’re below the Mendoza Line for Security.

3) Protocol attacks, especially on the inside, are lower priority. Yeah, man-in-the-middle or crypto-break attacks happen. But they’re less common than the dumb stuff (see previous).

4) Extra attention to the key servers. Duh. But yes, your AD controllers, Sharepoints, databases, terminal servers and file shares need to be clean. Not only do they hold important goodies hackers want (like data or password databases) but if they go down, lots of folks can’t work. Bad news for the SLA and IT’s reputation.

5) Easy wins are high priority. This includes basic OS patches, fixing default passwords, and turning off dumb services.

6) User workstation “Internet contact points” are scored higher as well. This means un-patched browsers, Java, Adobe readers, mail clients, etc. This is where malware comes into the organization. Lock them down.  
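Here's the kind of points system I mean, sketched in Python. The weights and the sample findings are invented for illustration; the point is that each rule above becomes a cheap, repeatable adjustment on top of the scanner's score.

```python
# Illustrative points system for the rules above -- the weights are invented,
# not a standard. Start from the scanner's CVSS score and nudge it up or down.

def priority(vuln):
    score = vuln.get("cvss", 5.0)            # rule 1: CVSS as a salted starting point
    if vuln.get("exploit_available"):        # rule 2: known exploit, bump it up
        score += 3
    if vuln.get("protocol_attack"):          # rule 3: MITM/crypto-break, nudge down
        score -= 2
    if vuln.get("key_server"):               # rule 4: AD, databases, file shares
        score += 2
    if vuln.get("easy_win"):                 # rule 5: patch, default password, dumb service
        score += 2
    if vuln.get("internet_contact_point"):   # rule 6: browsers, Java, readers, mail clients
        score += 2
    return score

findings = [
    {"host": "dc01",    "cvss": 7.5, "exploit_available": True, "key_server": True},
    {"host": "ws042",   "cvss": 6.8, "internet_contact_point": True, "easy_win": True},
    {"host": "core-sw", "cvss": 5.9, "protocol_attack": True},
]

for finding in sorted(findings, key=priority, reverse=True):
    print(f"{finding['host']:8} priority {priority(finding):.1f}")
```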

Hand-check the important stuff
I don’t trust machines. It’s why I’m in this business. So for the really important systems, we hand-check critical things at least once a month. This means logging into the box, making sure anti-virus is running and updated, patches have been run, local security policies are in place, and no suspicious local users have been added. We also do hand checks of key ACLs on routers, switches and firewalls. I wish I could say that these checks are superfluous, but unfortunately they’ve proven fruitful enough that we keep doing them. Scanners miss things for complicated reasons. We don’t check a lot of things this way, just the 10 to 15% of really critical servers and hosts.  
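Most of these hand checks are eyeballs and a checklist, but the "no suspicious local users" one is easy to script. Here's a minimal sketch for a Linux box, assuming a baseline file you maintain yourself (the path is made up).

```python
import pwd  # standard library, Unix only -- covers just the local-accounts check

# Minimal sketch: compare today's local accounts against a saved baseline and
# flag anything new. The baseline path is an invented example.
BASELINE = "/var/lib/handcheck/users.baseline"

def current_users():
    return {entry.pw_name for entry in pwd.getpwall()}

def new_users(baseline_path=BASELINE):
    with open(baseline_path) as f:
        known = {line.strip() for line in f if line.strip()}
    return current_users() - known

if __name__ == "__main__":
    added = new_users()
    if added:
        print("Accounts not in baseline:", ", ".join(sorted(added)))
    else:
        print("No new local accounts.")
```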

Find out where the money is
If you can afford it, I suggest looking into Data Leak Prevention (DLP) tools. Pairing up a scan for confidential data lying around on servers and workstations against a vuln scan is really helpful. Your idea of “important servers” and “work flows” changes when you see where things end up. There are a lot of DLP tools out there. I haven’t found one I liked. So we wrote our own. But that is a story for another day.  

Happy scanning

Wednesday, February 12, 2014

Top 5 ways organizations fail at managing third-party risk



Those blasted third-parties!  Turns out they’re to blame for Target's mishap.

Well, guess what?  We all know you can’t outsource the blame and Target is taking the hit for not managing their third-party risk very well.  Having spent the past 6 years as one of those blasted third-parties (and before that about the same amount of time as someone who audited third-parties for banks), I can tell you there are right ways to do this and wrong ways.

BTW, if you’re a PHB and prefer the “business friendly version” of this post, just read this article I wrote last summer for the financial industry.

So in my years of auditing and being audited, I have seen many, many, many irrational and ineffective choices made by auditor and auditee.   One of the worst cases (there are so many to choose from) as an auditor was when I assessed a third-party servicing the banking industry in downtown Manhattan who refused to answer any of my questions.  They failed and did not get the contract… despite their belief that they would, no matter what the audit report read.  Hmm... then there was that third-party we convinced to get out of the financial services industry because their security was so bad.   Sigh, sweet memories.

Oh, where was I… yeah, on with my rant list:

1. Wrong-fit assessment for the organization
If your third-party has direct physical access to your internal network, then a five-page spreadsheet questionnaire is not going to tell you enough.  If the company is producing software that is essential or is counting the money, then yeah, the audit should include some secure development practices.  If the company is a cloud provider or a hosting company, you probably need to include audits of disaster recovery and physical security.   These all seem obvious, but I’ve endured thirty-page questionnaires and hours of grilling about things that were mostly “not applicable” for our organization, while other more important issues were left wholly unexamined.


2. Over-reliance on the wrong certification
I’ve written about this a little bit before, but this is really a variant of #1.   The easiest miss I’ve seen is asking for PCI certification from companies that don’t process credit cards.  If you follow the letter of the rule for PCI and you don’t have credit cards, it’s a pretty low bar to jump over.   If you can’t tell the difference between SSAE-16 SOC-1, SOC-2 or SOC-3, then don’t use them to rate your third-parties. 

3. Sloppy scoping
The scope is where you begin, not an afterthought.  You need to understand what data and dependencies the third-party is responsible for and where the heck they are.   Two times out of three, the third-party does not even fully understand this.  You can’t do a risk analysis if you don’t know what and where the assets are.  You surely can’t do a useful assessment.  And once the scope is established and verified, then you can start looking at how hard the boundaries are between the in-scope and out-of-scope areas.

4. Fire and forget
Most organizations can’t afford to review their third-parties more than once a year.  Some only do it once every three years.  That means that for one or two days out of 365, someone is actually looking at the third-party.  How effective is that?  This is why I push for Type II audits, which cover at least six months of assessment and are often “rolling” so the review is constant.   I also like weekly or even daily vulnerability scans for IT posture assessment.  Threats change, infrastructure changes, compliance needs change.  Review should be as ubiquitous as it can be.

5. Lack of assessor skill
If the person doing the assessment doesn’t understand everything we’ve mentioned up until now, they’re not skilled enough to do the assessment.  A lot of folks doing third-party audits on behalf of large organizations are just business dev people with checklists that they submit back to infosec for review.  Fail.  A good auditor also knows when a control is appropriate and a risk acceptable, which is why I always prefer working with knowledgeable, experienced people rather than clueless newbs who ask all the wrong questions.

That’s it for today.  Maybe later I’ll list how I think you should do this right.