Tuesday, November 4, 2008

What is good pen-testing?

Over my last couple of decades in InfoSec, I've been both a consumer and a provider of pen-tests. As a consumer, I've seen quite a few tests that I've simply thrown back in the consultant's face and demanded a do-over, mostly because the data was so inaccurate that it was unusable. I'm writing this to raise the bar on what is being offered. And rather than curse the lameness, I'm lighting a candle.

Wait, what do I mean by pen-test? Well, I'm lumping a wide spectrum into this post. So when I say pen-testing, assume I could also mean vulnerability testing/scanning, perimeter scanning, or web app vulnerability analysis. For me, it all falls into the bucket of technical risk analysis. Some tests are done for certification purposes (PCI, Cybertrust), but I'm talking about actual value. And that means providing a technical risk analysis.

Let me decompose this technical risk analysis business. A risk analysis requires several pieces, namely: 1) asset identification, 2) threat analysis, 3) vulnerability analysis, 4) impact analysis, and 5) control effectiveness analysis. Most pen-testers deliver a vulnerability analysis. Good testers do the threat analysis and ask for the asset identification before they start. The best spend a lot of time upfront to help figure out potential impacts and do a decent job on control effectiveness analysis. And yes, this means you should spend a bunch of time talking and analyzing before doing. Trust me, it pays off.
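To make that concrete, here's a minimal sketch in Python of what a single finding looks like when all five pieces get captured instead of just the vulnerability. The field names and sample values are mine, purely for illustration, not from any standard:

    # One finding with all five pieces of the risk analysis captured.
    # Names and values below are illustrative placeholders.
    from dataclasses import dataclass

    @dataclass
    class Finding:
        asset: str                  # 1) what was tested
        threat: str                 # 2) who or what would exploit it
        vulnerability: str          # 3) the technical flaw itself
        impact: str                 # 4) business consequence if exploited
        control_effectiveness: str  # 5) whether existing controls blunt it

    finding = Finding(
        asset="External mail server (192.0.2.25)",
        threat="Unauthenticated attacker on the Internet",
        vulnerability="SMTP daemon version with a known remote root exploit",
        impact="Loss of company email; pivot point into the internal network",
        control_effectiveness="Patch available but not applied; no IPS coverage",
    )

If your report can't fill in all five fields for a finding, you've delivered a vulnerability scan, not a risk analysis.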

Of course, it's a safe assumption that an external pen-tester will get some of these things wrong: either their guesses about impacts to the organization ("hey, we don't rely on that service, so who cares if it's DoSed") or, more likely, the vulnerability assessment itself. Specifically, many pen-testers run scanning tools, jam the results into a report, and call it done. Well, a lotta scanning tools do basic banner grabs on listening services and guess at the vulnerability from there.

A good pen-test report always shows its work. When I was a consultant, what I often wrote in my reports was something along the lines of "We initiated a TCP connect on port 5678 to IP address XYZ, sent along packet stream MNOP, and got back result ABC. This matches vulnerability #123. We verified by doing PQR and then reviewed JKL, which leads us to believe this is a high-risk root compromise vulnerability." And of course, by showing your work, the client can reproduce your results and retest later to see if the hole is closed. Bonus points for including source code for scripts in the report to facilitate easy test duplication. If you can't be accurate, at least be transparent. And if you're extremely inaccurate, the act of writing up your assumptions should be a red flag for you.
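To show what I mean by easy duplication, here's a bare-bones sketch of the kind of reproduction script I'd attach. The host, port, and probe bytes below are placeholders, not a real finding:

    # Reproduce a banner-grab finding: connect, send the probe, record
    # the raw response. Host, port, and probe are placeholders.
    import socket

    HOST = "192.0.2.10"  # target from the engagement scope (placeholder)
    PORT = 5678          # the listening service in question (placeholder)

    def grab_banner(host, port, probe=b"\r\n", timeout=5):
        """Connect, send a probe, and return whatever the service answers."""
        s = socket.create_connection((host, port), timeout=timeout)
        try:
            s.sendall(probe)
            return s.recv(1024)
        finally:
            s.close()

    if __name__ == "__main__":
        # Print the raw bytes so the client can diff them after remediation.
        print(repr(grab_banner(HOST, PORT)))

The script itself is trivial; the point is that the client can rerun it verbatim after remediation and confirm the response changed.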

All good pen-test reports will be shared everywhere by the client. Sometimes they will go to the client's clients, and almost always they will go up the management chain. Keep this in mind when you are writing the analysis. Don't be glib, don't hide your assumptions, provide concrete proof, and don't ever assign blame. Contrary to what your Eighth Grade English teacher told you, passive voice is your friend. "This server was found to contain a database dump full of social security numbers. I have no idea how it got there or who fcked up, but this is what we found on this day, at this time, using this tool. Oh, and here's a screenshot to prove that I'm not lying."

Also, you need to include an executive overview. Not only is this the shortest part of the report, usually 3-4 paragraphs, but it is also the hardest to write. I usually follow the BLUF method when writing these: Bottom Line Up Front. Open with a "we did a scan on this date/time against these assets. We found one serious vulnerability, which we alerted staff about during the test, and it was immediately corrected." Hint: make the client look good whenever the opportunity presents itself. These people sign your checks. Continuing: "We also found several other low risks..." Then the next paragraph summarizes those risks in as plain language as you possibly can - be sure to speak to the likelihood of exploit and the potential business impacts. Executives will only read the summary, yet they will be making decisions to spend money based on it (possibly spending money to hire you to fix the problems or to scan again later). Be concise and remember your audience. I usually spend 6-8 hours writing the summary, and I spend the entire engagement thinking about what will go into it.

If there is a compliance requirement (and bingo, there almost always is), there should be a separate section highlighting the gaps found there as well. Usually a client considers this a high priority (we need to pass PCI!), but I've found the compliance junk provides the least value. Heck, the reason pen-testing shows up in most compliance requirement lists is to ensure that a good technical risk analysis gets done regularly. But anyway, it is usually a requirement, so there you go. And if it's a key requirement, then the results here should also bubble up to the top of the executive overview - "Blomo corp. appears to be 95% compliant with RHINO rules, with the exception of the weak password on the mail server."

Okay, I've ranted enough. I just wanted to pop off about my experiences and expectations regarding pen-testing. Who knows, maybe in a few years I'll go back into consulting and do these again. Until then, I'll just keep rewarding the competent professionals with more business and keep shunning the lamers.
