Monday, December 27, 2010

Security Douchanomics


Hopefully this decade will be the last of prevalent Security Douchanomics. What do I mean by Security Douchanomics? It is the shortcutting of the hard work of security economics (analysis of data, discussion, trade-offs) by instead using the infosec bully pulpit to cram a simplistic reason down everyone's throat to ensure compliance. While this strategy can work in the short term, it is we in the infosec industry who must suffer the long-term degradation of authority and respect because of this doucherie. Worse, Security Douchanomics can foster adversarial relationships between security teams and the rest of the business. The pronouncements seem inflated or unrealistic, the business pushes back, and everybody loses.

Hyper-FUDding to sell security (koff, koff, APT) is one glaring example of Security Douchanomics. But there are more subtle, more institutionalized, more palatable douchtastic examples out there.

Specific examples:

   1. Flatly declaring some new technology insecure without discussing nuances, trade-offs, or specific risks. (Cloud has been popular for this)

   2. Flatly declaring a technology obsolete and insecure because it is old, without discussing nuances, trade-offs, or specific risks. (Windows XP has been popular for this)

   3. Flatly declaring ANY technology "secure" or "insecure" without discussing nuances, trade-offs, or specific risks (or what "secure" even means in the particular context)

   4. Arbitrarily high risk ratings for nearly any vulnerability or audit exception found. Sometimes I think this is done to make the assessors look good (see how badass I am that I found this super-s3kr1t 0-day hole that can pwnzr you?). Of course, this makes the defenders look bad, and then they usually push back, leading to the adversarial cycle of pain that is common to Security Douchanomics.

   5. Blindly enforcing best practices as if these one-size-fits-all (and in many instances, cargo cult) processes are the answer to the entire world's security ills, regardless of cost or proven effectiveness.

   6. Using misleading/confusing graphics or statistics to convey risk metrics to a non-technical audience.  My favorite is the vulnerability scan that shows huge bar graphs with counts of "low" vulnerabilities, which usually are things like "server is listening on port 80" and "Scanner has identified site running Apache."  But bigger is worser, right? (see #4)

   7. Specious security reasoning. My favorite: "If a person has financial problems, they would be very motivated to steal from the company, so we can't hire anyone with a bad credit check." Uh huh, so can we please talk about implementing least privilege then?

I'm sure there are plenty I'm missing. These are just the ones that came to mind this morning. Feel free to add your own or comment on mine.

I'd like to think that most of the time Security Douchanomics comes from ignorance or laziness rather than intentional misdirection. And for most of this decade, Security Douchanomics has been as effective as anything else (that is to say, pretty ineffective, but it was the only tool many people had). But whatever the cause, the end result is the same. And to those security practitioners who fall back on Security Douchanomics instead of doing your homework: you need to step up your game and do better. We as an industry deserve better.

Friday, October 1, 2010

VB2010

I just attended my first Virus Bulletin conference. Luckily it was in Vancouver, just a few hours north of Seattle, so it was an easy drive. This was also my first time in Vancouver for more than a few hours, and I can say that it is a very beautiful, friendly, and modern city with some fantastic food. We also had a nice room with a fantastic view.

So the conference.

The keynote was by Nick Bilogorskiy of Facebook. He got into all the evil ways your FB account can be jacked and what the crooks would do with it. He got into Koobface a bit, with hints that those responsible are in the cross-hairs. Graham Cluley blogged a nice summary here. Take-away: if you must use FB, make sure you use the built-in tools to warn you if your account profile is altered.

First up after the keynote in the Corporate track was Ray Pompon, who is quite the handsome and intelligent fellow.  He did a fantastic job of breaking down how the FBI takes down a malware author.    He had to start late because of the keynote but he made his points well during both the talk and the Q&A.   (disclosure - I am Ray Pompon and this review might be a little biased)

Paul Baccas of Sophos got in some good PDF malware analysis and provided the perfect set-up for an Adobe joke when he asked "Is there anyone from Adobe here?" with the response from the crowd: "It's a security conference!" Adobe is indeed the new "Microsoft" when it comes to being a security whipping boy. The more things change, the more they stay the same.

Websense's Dan Hubbard did a fantastic job of scaring the crap out of me with his breakdown of how easy it is to juice search engine results and plant fake news with links to malicious sites. I highly recommend reading his slides if/when they are available.

I stayed late and caught a great vendor presentation from ESET on under-reporting in the financial sector. The big problem is that banks tend to record stolen customer account fraud as "other" on the SARs. Of course banks are incentivized to point the blame finger outside their institution (and in this case, it's partially justified), but in the end, everyone loses. For more on bank shenanigans regarding misrepresenting risk, please see The Headlines for the Past Two Years.

Gunter Ollmann's talk on measuring botnet numbers was great. TL;DR: botnet numbers are misrepresented. Why? First, the botnet operators themselves lie, for obvious monetary reasons. Second, what is considered a bot? There are lots of categories, and they are not created equal: 1) infected victims (the usual number reported), which may not even have working rootkits; 2) members: infected and rootkitted, but not under C&C; 3) taskable: the subset of members under C&C, but where control is time- or function-limited; and finally 4) fully controlled zombies. Each category is often an order of magnitude smaller than the previous one. There's a meta-lesson there too: never take simple numbers at face value. You need to dig deeper and understand what is being measured and how.
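
Just to make that shrinkage concrete, here's a back-of-the-envelope sketch. The starting count and the 10x factors are made up for illustration; they are not Ollmann's actual figures:

```python
# Illustrative only: if each botnet category is roughly an order of
# magnitude smaller than the last, the headline number shrinks fast.
reported_infected = 1_000_000           # "infected victims" - the press-release number
members = reported_infected // 10       # infected and rootkitted, but not under C&C
taskable = members // 10                # under C&C, but time- or function-limited
fully_controlled = taskable // 10       # zombies an operator can actually use

for label, count in [("infected", reported_infected), ("members", members),
                     ("taskable", taskable), ("fully controlled", fully_controlled)]:
    print(f"{label:>16}: {count:,}")
# A "million-bot" botnet may offer only ~1,000 fully controlled machines.
```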

This led me to conclude just how generally misrepresented and misunderstood our numbers are in infosec. Botnet numbers are inflated. Bank customer fraud is under-reported. Malware victims are under-reported (my talk). We security folk have a serious problem here: not just a lack of actionable intelligence, but these bad numbers undermine our already shaky credibility with the business types. Take heart, there are solutions out there. Alex, I'm looking at you and your VERIS.

Speaking of misunderstood, there was the Symantec Stuxnet talk. Granted, these guys did a great job of forensic reverse engineering on the SCADA payload embedded in the rootkit. You've probably seen all the tweets, posts, and video from the presentation, so I won't add much more. Suffice to say that it was all very exciting to have news cameras rolling and an excited crowd… only to be confused and deflated (ha) by a "theoretical" demo of an attack with some bizarre speculation thrown into the mix. I wish more infosec folks would study basic intelligence analysis techniques before they attempt to speak in public about such matters.

It also gave me pause to think about Stuxnet and what it means. It is indeed a very sophisticated piece of weaponized software. This was no mere criminal malware; it was almost certainly the work of a (cough, cough) APT. Heck, even the United States could be the APT in this case. But what does this say about the future of malware? Will we security folks be ducking and cleaning up the blowback and friendly fire of APTs shooting high-powered malware at each other? Hey, we're all on the same Internet and it's all inter-connected. Can we at least agree to play nice at a governmental level? KTHX

Buried in all this, there was a diamond-in-the-rough talk by Safensoft on ATM malware defenses. The talk was the defensive response to the Barnaby Jack talk on jackpotting an ATM. Turns out that ATMs are heavily used in Russia for many things, including bill payment for consumers. In Russia, ATM takes your money. This makes them more heavily relied upon. And of course, a lot of the ATMs are just Windows XP SP2 boxen with some ATM code running on them… and many are on a network. Based on this, it was no surprise to find that lots of Russian ATMs were "jackpotted" in 2009. So Barnaby Jack wasn't just doing bleeding-edge proof-of-concept; he was reporting "old news". Safensoft, a traditional anti-piracy company, was forced to use a different malware defense approach because ATM hardware was too slow for the usual AV big-blacklist-of-doom approach. Instead, they went with a whitelist focus with heavy integrity checking around program flow. Sounds like a road map for the future of general AV to me.
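
For flavor, the core of a whitelist approach fits in a few lines. This is a toy sketch of the general idea (default-deny execution by file hash), not anything resembling Safensoft's actual product:

```python
import hashlib

# Toy default-deny policy: only binaries whose SHA-256 hash is on the
# allow-list may run. The entry below is a placeholder, not a real hash.
ALLOWED_HASHES = {
    "0f9c2a1d...placeholder...",  # e.g., the vendor's signed ATM application
}

def may_execute(path: str) -> bool:
    """Return True only if the file's hash appears on the whitelist."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() in ALLOWED_HASHES
```

The appeal for a slow XP box is obvious: one hash check against a short list beats scanning every file against millions of blacklist signatures.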

General chatting at vendor booths and with other delegates revealed an interesting new fact to me. As I'm not a deep malware guy, I did not realize just how few anti-X engines are out there. There are the big guys like Symantec, McAfee, etc., and then a lot of OEM and engine licensing going on with other companies on top of that. It does make me fear a bit of a monoculture vulnerability, but on the other hand, blacklist collection is tough, tough work.

Other conference bonuses:
- Gratuitous use of 80's music on hotel speakers between talks

- Lots of cool accents - Russian, Cockney, Hindi, Irish, Chinese.

- Lots of cool people attached to those accents.  It was a pleasure to meet so many smart and funny geeks in the malware field from all over the world. 

- Hordes of Microsofties attending - their first full year with a real AV product. Yet overall, their talks were pretty tame. One of the presenters actually did a magic trick during his talk, but it was still a psych-101 talk aimed at novice infosecers.

- The Stuxnet balloon pop / May-9-1979 press by Symantec provided rich fodder for jokes… which the Symantec folks laughed along with like good sports.

- A cool presenter gift from the VB folks

Tuesday, September 21, 2010

Things I hate about security reports, a rant



This post is by request from @shrdlu, and how can I say no to that?

I am frequently dismayed by the quality (or lack thereof) of what we security professionals choose to present outside our little geeky enclave. I’ve covered some of this before when talking about pen-testing / vuln assessment.

Sadly, it hasn’t improved much. I am frequently put in the position of having to apologize for our profession’s inability to craft a document that anyone other than a security professional would consider a “business document”*. That doesn’t even cover the persuasiveness (or lack thereof) of most “security recommendations”. The icing on the cake is that these documents are often the work product of consulting engagements costing tens of thousands of dollars. When someone spends thirty grand on a pen-test or a firewall recommendation, the value of the work done needs to show in the document. And I’m not talking about glossy color graphics. I’m talking about clarity, relevance, and clear reasoning.

You wonder why the executives ignore us? This is one big reason.

Now, I’ll just grab a random VENDOR$ report off my desk here and get into some specifics.


Your template makes you look lazy. And the fact that you used it improperly makes you look sloppy.

It’s got hooks for things that I didn’t buy, yet there are orphaned headers and text left in from them. It’s an awkward one-size-fits-all affair. Does the advice you dispense also fall into that category? I’m tempted to believe it does.

Executive summaries that aren’t summaries and aren’t written for executives

Here’s how the current exec summary reads:

1. Client hired Consultant to do job XYZ
2. Consultant did job XYZ using generic technical process blahblahblah
3. More detail on generic technical process blahblahblah
4. Job XYZ was done on date ABC, the end.

Huh?  What is this a summary of?  The proposal?  Here’s how I would expect it to read:

1. Client hired Consultant to do job XYZ, and the job was performed on date ABC
2. Consultant found MOST-HEINOUS-FINDING1, explained in one sentence of non-technical language covering likelihood and impact (repeat as necessary); or, Consultant found no significant vulnerabilities and the security of Client appears sufficient in comparison to comparable organizations
3. Consultant also found OTHER-FINDINGS, but they aren’t that important because of low likelihood or low impact
4. We’re not perfect and were given constraints in our testing; other vulns could be there, so please plan accordingly

Chart junk

Graphics, diagrams and charts that convey almost no useful information or are so confusing that they actually detract from the report.   More common than not in technical reports.  Sadly.  Do yourself a favor and read some Tufte.

Technical Tables of Torpor

Trying to read through most tables in reports usually causes blurry vision, dizziness, and finally sleep. Sweet, sweet sleep. The purpose of a table in a report (especially if non-techies are going to see it) is to make your reasoning clear, to invite easy comparisons, or to clarify a difficult concept. Think about what you want to convey with a table before you start slapping text and numbers into boxes. What decisions do you want the reader to make using the table? (Besides being impressed with your ability to cite lots of data.) Then eliminate everything else that doesn’t need to be there.

Apparently arbitrary ratings

There are long strings of “high” attached to things like “Total risk” or “Cost to mitigate”. Executives wonder if this is canned bs (yes, it is) or whether it was calculated for their organization in a meaningful way (likely not). This just makes us want to see how you came up with the ratings. And often those details aren’t there. How did you decide that this is a “Magenta priority” and the probability is “Unlikely”? What does that mean anyway? Where are you getting your data? (Out of your posterior cavity, I bet.)

Frontloading reams of technical detail

Technical detail needs to be there. It falls under the category of showing your work and how you came to your conclusions. But put this stuff in the back. No one wants to wade through it on the first reading of the report. It gives me the nagging suspicion that you’re trying to impress me with your technical prowess. Hint: good work should not need to call attention to itself. When it tries to, I suspect it’s the opposite of good work.

Qualitative Quantitative

The security person’s trap: mixing and matching quantitative (real numbers) and qualitative (subjective wild guesses) measures. Both have their place (as long as they’re explained), but when they’re mixed together, or worse, multiplied together, it just sets my teeth on edge. And it confuses things: anyone who looks closely at whatever is being measured is going to ask, “What exactly is being measured here?” Cut it out.
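
Here’s a contrived illustration of the multiplication trap. The labels, the number mappings, and the scores are all invented; the point is that arithmetic on ordinal labels produces numbers that look rigorous and mean nothing:

```python
# Made-up ordinal scales, mapped to numbers so they can be "multiplied".
LIKELIHOOD = {"unlikely": 2, "possible": 3, "likely": 4}
IMPACT = {"low": 1, "medium": 3, "high": 5}

risk_a = LIKELIHOOD["unlikely"] * IMPACT["high"]    # 2 * 5 = 10
risk_b = LIKELIHOOD["likely"] * IMPACT["medium"]    # 4 * 3 = 12

# Is risk_b really 20% "riskier" than risk_a? The scales are ordinal, so
# the products are not meaningful quantities, just precise-looking junk.
print(risk_a, risk_b)
```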

Lack of examples

Whatever you’re doing, the more real-world examples you cite, the more credibility you gain. Screen shots, legal citations, news clippings, hacker emails, quotes, whatever. Put them in the report. Cherry-pick a few and put the rest in the back (again, don’t frontload).


* Before you say it, let me add that if an organization spends a bunch of money on a security report, you can bet your sweet weasel that someone in a suit and tie is going to at least look it over.  So don’t go playing the “these reports aren’t meant for non-techies” card on me.  In any case, I’m a techie and think these reports are terribly written. So there.

Wednesday, May 12, 2010

Why do I do this?

I've watched this Simon Sinek TED talk three times in as many days, and it's given me a lot of food for thought.

He talks about the power of why, as in why you do something. I'm not one for new-agey happy talk and platitude pushing. Some of these kinds of speakers remind me of the Sphinx in Mystery Men. But this talk really got to me. It made me think about why I do what I do. Why am I in infosec? Most days, it's a humiliating, painful grind.

So far, I've come up with: I believe that most cyber-crime can be avoided.

Everything I've done in the past ten years stacks up behind this belief. I've consulted on security. I've sold security. I've lectured to infosec students and laymen alike. I've engineered. I've mentored. I even write a web comic about security.


I know there are some people in infosec because of the money, or the challenge, or even the (false sense of) power. Maybe I feel a little bit of all those things, but mostly I think that this hacking crap is far worse than it should be. And I want to do something about that.

Friday, March 26, 2010

VB 2010

Presenting at VB 2010 in Vancouver. Here's the program. My talk will be on "Case study - successes and failures apprehending malware authors".

Tuesday, February 23, 2010

Does past behavior predict future behavior for finding vulnerabilities?

I'm looking at my risk model for an application and am faced with a question about whether past vulnerabilities are a relevant statistic to examine or not.

For example, say I'd found three buffer overflow weaknesses in Application X in the past and had them fixed. Is the likelihood of more buffer overflow weaknesses higher, lower, or the same?

Off the top of my head, the arguments are:

"Yes, more likely" - the programmers made this mistake several times already, they'll make more. This is the argument the auditors will probably make.

"No, less likely" - the programmers realized the error of their ways and removed all or most of the buffer overflow weaknesses in the entire application. This is the argument the development team will probably make.

"It depends" - Vulnerabilities are a series of independent events or this variable by itself is insufficient to determine predictability.

I'm sure someone's done some analysis in this area, probably with software bugs. It probably involves Markov chains and a lot of math.
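
If I wanted to formalize the "yes, more likely" argument, the simplest version might be a beta-binomial update, where each past review either finds an overflow or doesn't. All the numbers below are invented, and the model quietly assumes the team never learns, which is exactly what the "it depends" answer disputes:

```python
# Toy beta-binomial model: treat "a review finds a buffer overflow" as a
# coin flip with unknown bias, and update the bias on past findings.
# Every number here is made up for illustration.
prior_alpha, prior_beta = 1, 9     # prior: overflows turn up in ~10% of reviews

found, reviews = 3, 5              # Application X: 3 of 5 past reviews found one

post_alpha = prior_alpha + found
post_beta = prior_beta + (reviews - found)

# Posterior predictive: chance the next review finds another overflow
p_next = post_alpha / (post_alpha + post_beta)
print(f"Estimated chance of another overflow: {p_next:.0%}")   # ~27%
```

Even this toy version shows why the raw count alone can't settle the argument between the auditors and the developers: the answer lives in the prior and in whether the trials are really independent and exchangeable.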

Intuitively, I'm inclined to go with the "it depends" answer and throw this measure out of my risk model, unless someone says otherwise.