Sunday, December 4, 2016

7 Mistakes of InfoSec defense

I've been reflecting lately on the role of an InfoSec defender.  Having written a whole book on the subject forces one to think deeply about the day-to-day job of defense.  I've also been interviewing a lot of people looking to step into my current defender role.  I'm seeing a lot of enthusiasm and smart thinking out there. Good stuff.

Unfortunately, I've also seen a few stumbles that I'd like to soapbox on. These are things that seem like a good idea at first blush but, in reality, not so much.

This isn't an exhaustive list, and these may not even be the most prevalent problems in the industry, but in my experience, they're what I see cropping up most often.

1. Prioritizing controls based only on auditors' demands instead of risk. 
When you do this, you waste resources on possibly unnecessary or unimportant risks. Auditors don't know everything, and compliance frameworks don't fit every organization.  Beware the fallacy that a missing control is the equivalent of a risk.  I've seen it more than once: "We're buying a NAC system because it was a finding in the audit." Is that a significant risk? "Well, not really... I'm not sure. Maybe?"  Prioritize based on risk. 

2. Prioritizing controls based only on the Headline of the Week.  
Egad, IoT DDoS botnets of doom! We need new defenses.  Stop: is that an actual risk for your organization?  What is the impact of a DDoS? Do the math and figure it out. Sometimes the answer will surprise you.  
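The "do the math" above can be sketched as a classic annualized loss expectancy calculation. All the numbers below (downtime, revenue per hour, attack frequency) are invented for illustration, not data from any real organization:

```python
# Back-of-the-envelope DDoS risk math with made-up numbers.

def annualized_loss(loss_per_incident, incidents_per_year):
    """Annualized Loss Expectancy: single loss x annual rate of occurrence."""
    return loss_per_incident * incidents_per_year

# Suppose a DDoS knocks your site down for 4 hours at $2,500/hour of
# lost revenue, and you expect roughly one such attack every two years.
single_loss = 4 * 2500                      # $10,000 per incident
ale = annualized_loss(single_loss, 0.5)     # 0.5 incidents/year
print(f"Expected annual loss: ${ale:,.0f}")  # Expected annual loss: $5,000
```

If a DDoS mitigation service would cost, say, $36,000 a year to address a $5,000-a-year expected loss, the math just told you something the headlines didn't.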

3. Confusing Impact with Risk.
Impact is one factor of the risk equation.  An insider attack where someone steals all your source code will have a huge impact.  But is it a big risk? What are the odds of it happening?  How about an earthquake?  Don't fill up a risk report with just impact statements.   Risk is a combination of Impact and Probability of occurrence.

4. Confusing Frequency with Risk.
Similar to the previous item: just because something is very likely to happen doesn't mean it will have a big impact on your organization.  Many also forget about all the good controls they already have in place.  I've seen IT folks freak out about the high volume of SSH password brute-force attempts... against a server set up to use SSH keys.  Or high volumes of TCP port 1521 probes from the Internet when the database servers are buried behind three layers of firewalls and NAT. It's very unlikely that these common attacks will actually get very far.
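Putting items 3 and 4 together: risk only emerges when you multiply impact by likelihood, and either factor alone can mislead. A toy sketch, with all dollar figures and probabilities invented for illustration:

```python
# Rank risks by impact x likelihood rather than by either factor alone.
# Every number here is made up to show the shape of the exercise.

def risk_score(impact_dollars, annual_probability):
    """Expected annual loss: impact times probability of occurrence."""
    return impact_dollars * annual_probability

scenarios = {
    # Huge impact, tiny odds -- item 3's trap.
    "insider steals all source code": risk_score(5_000_000, 0.001),
    # Constant noise, near-zero impact (key-only SSH) -- item 4's trap.
    "SSH password brute force":       risk_score(1_000, 0.01),
    # Moderate on both axes.
    "phished employee credentials":   risk_score(250_000, 0.15),
}

for name, score in sorted(scenarios.items(), key=lambda kv: -kv[1]):
    print(f"{name}: expected annual loss ${score:,.0f}")
```

With these (invented) inputs, the phishing scenario tops the list, while both the scary-sounding insider theft and the noisy brute-force attempts fall well below it.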

5. Overprotecting the wrong thing.
The classic: I have more budget, so I'm going to get a new firewall. However, your old firewall is managing things pretty well, and you have other serious risks to tackle next.  Maybe you need to fix cross-site scripting in your web application or lock down physical access to the server room.  Not as fun and sexy as a new firewall, but you should make sure all your major risks are controlled before enhancing protection against a particular one. 

6. Deploying shelfware. 
You've got a big problem and a vendor offers a big solution. POs are cut and solutions are deployed.  Except they're too complicated to manage. Or too cumbersome for users to deal with.  Or they require too much overhead to keep running, or more admins than you have staff for.  This can also happen when IT and Security don't work together on a solution. Don't waste a bunch of money deploying a giant control that doesn't actually fit your organization. This has been discussed by others before.  

7. Attacking an intractable problem head-on with a supposedly simple solution.
I've seen folks go after the Big Whales of InfoSec risk with nothing but a fishhook and a rowboat. Everyone in the organization has "local admin"? Just take away all their rights; we'll roll out a policy and force it through.  Yeah, except your organization isn't homogeneous in use cases or deployed versions of operating systems.  This quickly turns into a morass of exceptions and workarounds... and before long the whole thing is abandoned.  
Want more examples?  How about fixing SQL injection on the web app? Tell the developers to just patch it.  Easy to fix; just get those programmers motivated.  Phishing emails? Time for more user training. It's never that easy. If it were, don't you think everyone else would be doing it? As Mencken said, for every complex problem there is an answer that is clear, simple, and wrong.  It's not always a straight line to victory, so think before you implement.

Sorry for the listicle title. I couldn't resist. :-)

Monday, November 28, 2016

Tuesday, October 25, 2016

The future

I just got back from speaking at a conference in Palm Springs.  On the plane ride, I read the most excellent book Superforecasting: The Art and Science of Prediction by Philip E. Tetlock and Dan Gardner.  Great book if you're doing risk analysis or threat analysis work.

One thing that really got under my skin was a reprint of a letter from Donald Rumsfeld to the President - "Thoughts for the 2001 Quadrennial Defense Review".  It's a PDF download, so here is the heart of the letter:


If you had been a security policy-maker in the world’s greatest power in 1900, you would have been a Brit, looking warily at your age-old enemy, France.

By 1910, you would be allied with France and your enemy would be Germany.

By 1920, World War I would have been fought and won, and you’d be engaged in a naval arms race with your erstwhile allies, the U.S. and Japan.

By 1930, naval arms limitation treaties were in effect, the Great Depression was underway, and the defense planning standard said “no war for ten years.”

Nine years later World War II had begun.

By 1950, Britain no longer was the world’s greatest power, the Atomic Age had dawned, and a “police action” was underway in Korea.

Ten years later the political focus was on the “missile gap,” the strategic paradigm was shifting from massive retaliation to flexible response, and few people had heard of Vietnam.

By 1970, the peak of our involvement in Vietnam had come and gone, we were beginning détente with the Soviets, and we were anointing the Shah as our protégé in the Gulf region.

By 1980, the Soviets were in Afghanistan, Iran was in the throes of revolution, there was talk of our “hollow forces” and a “window of vulnerability,” and the U.S. was the greatest creditor nation the world had ever seen.

By 1990, the Soviet Union was within a year of dissolution, American forces in the Desert were on the verge of showing they were anything but hollow, the U.S. had become the greatest debtor nation the world had ever known, and almost no one had heard of the internet.

Ten years later, Warsaw was the capital of a NATO nation, asymmetric threats transcended geography, and the parallel revolutions of information, biotechnology, robotics, nanotechnology, and high density energy sources foreshadowed changes almost beyond forecasting.

All of which is to say that I’m not sure what 2010 will look like, but I’m sure that it will be very little like we expect, so we should plan accordingly.



This really got me thinking about where we will be in warfare, especially considering the recent DNS DDoS attacks and a possible cyber cold war.  Really makes you think.

Monday, October 17, 2016

Third parties and SSAE 18

My new book talks about building a security program that can pass a number of audits, including the SSAE 16.  Now comes news of a new standard from the AICPA Auditing Standards Board (ASB), SSAE 18, which will replace SSAE 16 in 2017.
Does that mean you need to redo everything? No. I wrote this book with the idea that technology, compliance, and threats will evolve over time.  The advice is designed to be timeless, not timely.

There are some changes in how the auditors will conduct and write their opinions, but really only one thing affects the audited organization: increased scrutiny of the performance of subservice organizations.  This is another way of talking about third-party security.  I have an entire chapter covering that subject in the book.

So what about third-party security?   Well, looking at the past three months of data from the California Attorney General's breach records, I get the following:



Out of the 51 incidents examined, 6 were directly attributable to a third party's security. Is 12% significant?  Maybe not a top risk, but it's something to worry about.
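For what it's worth, here's the arithmetic behind that 12%, plus a rough Wilson score 95% confidence interval, a reminder that a proportion estimated from only 51 incidents carries a wide margin of error:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 -> ~95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

p = 6 / 51
lo, hi = wilson_interval(6, 51)
print(f"Point estimate: {p:.1%}")  # Point estimate: 11.8%
print(f"95% interval: {lo:.1%} to {hi:.1%}")
```

The interval works out to roughly 6% to 23%, so "about 12%, give or take" is the honest reading of three months of breach records.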

So, what is going on with third-party security? Protiviti did a Vendor Risk Management Benchmark Study in 2015 and concluded that "Third party risk management is immature."

Furthermore, they went on to comment that of all the third-party risk management programs out there, the most mature are within financial services organizations.  Good to know, which leads us to ask: how well are financial services organizations managing third-party risk? 
Well, the New York State Department of Financial Services issued an "Update on Cyber Security in the Banking Sector: Third Party Service Providers." In this report, they noted that fewer than half of the examined financial services companies do on-site assessments, and that there is a lot of variation among the programs themselves.

Having been on the pointy end of these assessments for nearly a decade, I concur with these findings.  I've seen banks assess vendor security by a wide variety of methods.
They all seem to have varying strengths (accuracy, low cost, speed) and weaknesses (lack of accuracy, difficulty).  For questionnaires, the actual questions always seem to revolve around the standard ISO 27002 control sets.  Maybe these are sufficient, but they do fall victim to best-practicism.

Whatever third-party security assessment you use, doing something is better than doing nothing.  And if you're going to pursue an SSAE 18 attestation, you should invest in a good method.



Monday, October 10, 2016

Assume breach as a foundation of a security program

This picture below was excluded from my new book, IT Security Risk Control Management: An Audit Preparation Plan.



The publishers thought it wouldn't look very good in grayscale print. The story that goes with it is still there in Chapter 2, Assume Breach.

The concept of Assume Breach has been with us for over twenty years and I've been blogging here about it since 2008.

Assume Breach simply means: don't count on your security defenses to keep the bad guys out. This picture is of the wall of a supposedly impregnable fortress after its first real challenge from new technology.

Quinn Norton also coined a great corollary, called Norton's Law, which states that all data, over time, approaches deleted, or public.

In the book, the Assume Breach concept forms the foundation of a security program. What does this mean for defenders? It means that if you're going to be breached, you need to know what can be sacrificed and what must be protected at all costs. That implies you understand your organization, its data flows, and what is truly important for survival. It also means you need a clear idea of the threats and vulnerabilities facing you. Lastly, Assume Breach means being prepared to respond adequately to incidents, survive them, and grow stronger because of them.

It's the opposite of the common rookie thinking of “That’ll never happen in a million years!” or "Why would anyone do that?" Instead, ask: "When the inevitable happens, what will the damage look like, and how will we react?" Assume Breach forces you to focus on what matters and prioritize accordingly. Not a bad way to build a security program.



Saturday, October 1, 2016

The "soft side" of security can be the hardest

I just watched Leigh Honeywell's talk on "Building Secure Cultures" on the YouTubez. (BTW, it is a must-watch for anyone remotely involved in security.) I've been a big fan of Leigh's work, and she lays down a lot of practical and effective advice.  Her talk also struck a chord with me and my recent book on how to build a successful security program that can pass audit.

For one, I felt great that she was also emphasizing empathy and coaching when delivering security advice. She, like me, has seen how counterproductive the elitism, abrasiveness, and condescension (and just plain rudeness) that have somehow become associated with much of the security industry can be.   I especially liked how she called out "feigning surprise," an insulting practice I too have been guilty of.

"What? You didn't patch it?"

Her talk raises a powerful point: feigning surprise serves no useful purpose beyond belittling the person seeking advice.  Remember, we want people to bring their security problems to us and report suspicious things.

Those of you who have read my book may have noticed a running theme of working to see things from other people's perspectives.  Yes, it's real work.  In fact, Leigh touches on that in her talk as well (if you haven't watched it, you should).  I've said it before: the hard part of security is sometimes the soft parts.  By that I mean managing our feelings.  Sometimes we have to suppress our fear and anger and present a positive face despite what may be a valid emotional response.  This is work -- hard work.  There's even a term for it: Emotional Labor.

Working in security, especially if you're trying to actively improve things in an organization, takes emotional labor.  Many of us are geeks, having worked our way up from the techie trenches to meet the challenge of security work.  The term geek itself should tell you something about our innate people skills and our limits on managing our external personas.  Nevertheless, these soft skills are force multipliers we can leverage to get effective security work done.  I've woven practical advice on how to do this into a number of chapters. It's nice to see it called out on its own as a critical success factor in security work.  Empathy is powerful in designing security solutions - how would a non-security techie react to what you think is obvious?  I'm happy to see work now starting to blossom in this area.

Being able to modulate our own external outputs is critical not just in social engineering, but in being heard and acknowledged by others.  Yes, listening to the other person before delivering your advice levels up your ability to create meaningful change.

As I said, none of this is easy. But it's definitely worth checking out.


Monday, September 26, 2016

IT Security Risk Control Management: An Audit Preparation Plan - PUBLISHED!


My new book is officially published!

It is available in both electronic and paper form.

After many months of hard work, I am so happy to see the fruits of my labor.


Wednesday, July 27, 2016

IT Security Risk Control Management: An Audit Preparation Plan

I've been quiet for a long, long while.  It hasn't been because I didn't have anything to say.  On the contrary, I've been pouring it all into my soon-to-be-released book, IT Security Risk Control Management: An Audit Preparation Plan.
 
Before you ask, I didn't come up with the title; the publisher did. The book is aimed at newly minted security professionals or those wanting to step into a security role. It covers how to build a security program from scratch, do the risk analysis, pick controls, implement the controls (in such a manner that they actually work), and then be able to pass an audit.  I specifically chose SSAE-16/ISAE-3402 (SOC 1, 2, and 3), PCI DSS, and ISO 27001 as my audit candidates, as they are the most common globally.
 
It'll be out in early October, but you can pre-order now.
 
I'll be writing more as we get closer to the publication date.