Tuesday, February 23, 2010

Does past behavior predict future behavior for finding vulnerabilities?

I'm looking at my risk model for an application and I'm faced with a question about whether past vulnerabilities are a relevant statistic to examine or not.

For example, say I'd found three buffer overflow weaknesses in Application X in the past and had them fixed. Is the likelihood of more buffer overflow weaknesses higher, lower, or the same?

Off the top of my head, the arguments are:

"Yes, more likely" - the programmers made this mistake several times already, they'll make more. This is the argument the auditors will probably make.

"No, less likely" - the programmers realized the error of their ways and removed all or most of the buffer overflow weaknesses in the entire application. This is the argument the development team will probably make.

"It depends" - Vulnerabilities are a series of independent events or this variable by itself is insufficient to determine predictability.

I'm sure someone's done some analysis in this area, probably with software bugs. It probably involves Markov chains and a lot of math.
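If I wanted a back-of-the-envelope version of that kind of analysis, one naive framing is a Bayesian update: treat each past audit as a trial and ask how likely the next one is to turn up another overflow. The sketch below is purely my own hypothetical illustration with made-up numbers, not anything from the literature:

```python
# Hypothetical sketch, not a real study: a crude Beta-Bernoulli update.
# Treat each past audit as a trial; "success" = at least one buffer overflow found.
# All counts below are invented for illustration only.

def posterior_mean(alpha: float, beta: float, successes: int, failures: int) -> float:
    """Posterior mean of a Beta(alpha, beta) prior after observing the trials."""
    return (alpha + successes) / (alpha + beta + successes + failures)

# Weak prior: no strong belief either way about finding an overflow per audit.
alpha, beta = 1.0, 1.0

# Made-up history: 4 audits, 3 of which turned up a buffer overflow.
successes, failures = 3, 1

p_next = posterior_mean(alpha, beta, successes, failures)
print(f"Estimated chance the next audit finds a buffer overflow: {p_next:.0%}")
# With these invented numbers the history pushes the estimate toward "more likely" --
# but only because the model already assumes past and future audits are related,
# which is exactly the assumption in question.
```

Even that toy version shows the catch: the answer you get depends entirely on whether you assume the trials are related in the first place.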

Intuitively, I'm inclined to go with the "it depends" answer and throw this measure out of my risk model, unless someone says otherwise.