by David Mortman on July 16, 2009
Following up on my previous post, here’s Part 2, “The Factors that Drive Probable Use”. This is the meat of our model. Follow-up posts will dig deeper into Parts 1 and 2. At Black Hat we’ll be applying this model to the vulnerabilities that are going to be released at the show.
But before we do our digging, two quick things about this model and post. First, the model is a “knowledge” model (a term Alex just made up, btw, to describe the difference between Bayesian and frequentist approaches to statistical analysis). It measures a state of knowledge rather than a state of nature. That means this is not a probabilistic model that attempts to fill in frequency counts for some population (the way many people expect traditional statistical analysis to work); rather, it measures a degree of belief. Second, the reason we did this was twofold: 1.) establishing accurate information about rate of adoption for what we want to study is difficult; 2.) if we do get to the point where we *can* measure rate of adoption in the wild, there is a proof that a strong knowledge model can never be worse than a frequentist model, and that sort of data will only increase the accuracy of the knowledge model’s results.
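To make the “degree of belief” idea concrete, here is a minimal sketch of the Bayesian flavor of reasoning the model uses. This example is ours, not part of the model: the function names and every number in it are invented purely for illustration, using a standard Beta-Bernoulli belief update.

```python
# Illustrative sketch only: a Bayesian "degree of belief" update, as opposed
# to a frequentist count. All numbers here are made up for illustration.
# We represent belief that a given exploit will be used as a Beta
# distribution, updated as observations (used / not used) arrive.

def update_belief(alpha, beta, used, not_used):
    """Conjugate Beta-Bernoulli update: returns the new (alpha, beta)."""
    return alpha + used, beta + not_used

def mean_belief(alpha, beta):
    """Expected probability of use under the current Beta(alpha, beta) belief."""
    return alpha / (alpha + beta)

# Start with a weak, uninformative prior: Beta(1, 1), i.e. mean belief 0.5.
alpha, beta = 1.0, 1.0

# Suppose we later observe 3 incidents using the exploit and 1 that didn't.
alpha, beta = update_belief(alpha, beta, used=3, not_used=1)
print(mean_belief(alpha, beta))  # 4/6, roughly 0.667
```

The point of the sketch: even with no field data at all, the model still produces a usable answer (the prior), and any data that does arrive only refines it, which is the “can never be worse” property mentioned above.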
Finally, we’re using this model in this post with “exploit code” to describe how much we believe a specific exploit will be used by the aggregate threat community. If you recall the Gartner hype cycle graph, this model is an attempt to identify and “measure” the factors that drive the curve of “adoption”. It should be noted that the model might work well not just for describing a belief that exploit code will be widely adopted, but also for describing a belief that a given weakness (what is commonly referred to as a vulnerability) will have exploit code developed for it. The authors intend to play with the model for both purposes.
On To The Model: The Mortman/Hutton Model for Expectation of Exploit Use
So here it is:
In this model, two significant factors essentially determine the fate of the exploit code. First, to grossly oversimplify the real-world factors that drive exploit adoption, the exploit must be useful. And in order to be useful, the exploit needs valuable systems to penetrate and people to use it. We break usefulness down into two concepts: Saturation of Vulnerable Technology and Exploit Utility.
Saturation of Vulnerable Technology (SVT)
If an exploit is going to be developed or used, there actually need to be systems to attack. Saturation of Vulnerable Technology (SVT) reflects that requirement. SVT represents the aggregate number of vulnerable systems and is driven by two main factors: how significant market penetration is for the technology (Windows = a lot, BeOS = a little), and the ability to compensate for the vulnerability. That ability to compensate breaks down into two sub-elements: the ability to repair (patch the system, for example) and/or the ability to apply compensating controls.
SVT is “measured” as the probable number of systems without the ability to compensate. It is a combination of the number of systems with the initial vulnerability and the probable percentage of that population that is currently vulnerable.
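The post proposes no arithmetic for SVT, but one way to sketch it under the definitions above is the following. Every function name, parameter, and number here is our own illustrative assumption, not part of the model as published.

```python
# Hypothetical sketch of SVT as described above: the probable number of
# systems that remain vulnerable and cannot compensate. Names and numbers
# are illustrative assumptions only.

def svt(install_base, pct_vulnerable, pct_can_repair, pct_can_mitigate):
    """Probable count of vulnerable systems without the ability to compensate.

    install_base     -- total deployed systems running the technology
    pct_vulnerable   -- fraction of the install base with the weakness
    pct_can_repair   -- fraction able to repair (e.g., patch)
    pct_can_mitigate -- fraction of the unrepaired able to apply
                        compensating controls
    """
    vulnerable = install_base * pct_vulnerable
    unrepaired = vulnerable * (1 - pct_can_repair)
    return unrepaired * (1 - pct_can_mitigate)

# A widely deployed technology with slow patching:
print(svt(100_000_000, 0.60, 0.50, 0.20))  # 24,000,000 probable targets
```

Note how the “Windows = a lot, BeOS = a little” factor enters only through the install base; everything else is about the population’s ability to compensate.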
Exploit Utility
If SVT attempts to measure the aggregate attack surface available to the exploit code, then Exploit Utility attempts to measure how useful the exploit code is to the aggregate threat community. Exploit Utility depends on two factors: whether the systems the exploit will be used against can be expected to return value to the threat community, and how pervasive the exploit code is within that community. We call these the Expected Value of Vulnerable Systems (EVoVS) and the Probability of Code Dissemination, respectively.
The premise behind the Expected Value of Vulnerable Systems is simply that exploit code will only be used if the attacker can expect to gain something from the attack. This value can be expressed in the Information a system offers (in volume and expected market return), the Resources it offers (in power/space/bandwidth), or the Access it allows (notably, to other systems). EVoVS can be measured as a range of value relative to the broader distribution of computing assets available as targets to the threat community.
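One possible way to sketch EVoVS along the three value dimensions named above is a weighted blend. The weights, scores, and blending scheme are entirely our own assumptions; the post itself only names the dimensions.

```python
# Hypothetical sketch of EVoVS as a weighted blend of the three value
# dimensions the post names. Weights and scores are invented for
# illustration and are not part of the published model.

def evovs(information, resources, access, weights=(0.4, 0.3, 0.3)):
    """Relative expected value of a vulnerable system; each input is 0-1.

    information -- volume / expected market return of the data it holds
    resources   -- power, storage, and bandwidth it offers
    access      -- reach into other systems it allows
    """
    w_i, w_r, w_a = weights
    return w_i * information + w_r * resources + w_a * access

# A database server: lots of data, modest resources, good network reach.
print(evovs(information=0.9, resources=0.4, access=0.7))  # roughly 0.69
```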
The premise behind the Probability of Code Dissemination is simply that how much an exploit gets used (in aggregate) is a function of how “available” it is. This could be stated simply as the proportion of commonly available toolkit packages that include the exploit code. An expected Rate of Exploit Adoption could then be said to depend on two factors: the ease of use (the skills and resources required to use the exploit) and the nature of the exploit author (secretive vs. non-secretive).
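Putting the pieces together, a hypothetical scoring sketch might combine SVT and Exploit Utility into one relative expectation of use. The post proposes no arithmetic; the multiplicative form here, and every name and number in it, is our own illustrative assumption.

```python
# Hypothetical sketch combining the model's factors into a single relative
# score for expected exploit use. The multiplicative form and all values
# are illustrative assumptions, not part of the published model.

def exploit_utility(expected_value, p_dissemination):
    """EVoVS (relative target value, 0-1) times the probability the
    exploit code is widely available to the threat community (0-1)."""
    return expected_value * p_dissemination

def expectation_of_use(svt_fraction, utility):
    """Relative expectation of use: attack surface times usefulness."""
    return svt_fraction * utility

# Easy-to-use exploit for a widespread, valuable target, already in toolkits:
high = expectation_of_use(svt_fraction=0.8,
                          utility=exploit_utility(0.9, 0.9))

# Hard-to-use exploit for a niche platform, held by a secretive author:
low = expectation_of_use(svt_fraction=0.05,
                         utility=exploit_utility(0.4, 0.1))

print(high, low)  # the first score should dwarf the second
```

Even this crude sketch captures the intuition the model is after: an exploit for a rare platform held by a secretive author scores orders of magnitude below one that is widely applicable and widely disseminated.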
So there it is. Developed at several different Columbus-area coffee shops, over lots of chicken wings and beer after work. Now, many of us have been privy (or hostage) to discussions around the whats and whys of “some sort of vulnerability/exploit was just announced by research team/vendor”. If anything, hopefully this model will provide a rational basis for coming to a joint conclusion about why we should or shouldn’t care. This model is released under a Creative Commons Attribution-NonCommercial-ShareAlike license. That means you have to tell people where you found it; if you mess with it, please give back to the community; and please don’t go charging anyone for its use.