Richard Bejtlich’s Quantum State

by alex on May 14, 2009

Is Statistically Mixed?

Richard Bejtlich (whom I do admire greatly in almost all of his work) just dug up a dead horse and started beating it with the shovel, and I just happen to have this baseball bat in my hands, and we seem to be entangled together on this subject, so here goes:

I think Richard Bejtlich’s current post about precision and risk assessment is mostly right on.  In it, Richard gives us a long quote from Charlie Munger of Berkshire Hathaway.  I happen to think Munger is totally awesome, and agree with him completely in what he’s saying there.  Funny thing is, I think Richard and I are deriving different knowledge from that same information (which is just dripping heavily with irony, given Richard’s choice of post title).

See, I don’t think you should infer what I think you’re meant to infer from Richard’s blog post.  So with no ill will or malice intended towards Richard, let’s have some fun and discover why I think that!

“Men With Spreadsheets” Is Not (just?) A Neo-New Wave Band

First, Munger is basically repeating the Buffett line of “beware of geeks with spreadsheets”.  Which, as a concept, is loosely analogous to telling someone traveling down the highway at 70 miles an hour to “beware of drivers in cars”.  Spreadsheets are the basic common denominator of business language.  Be it a simple monthly budget, the much-maligned Value at Risk, or some large-scale interrelated financial exercise, the information on the spreadsheets is simply the language into which we abstract our world in order to make decisions (1).  Munger describes Man With Spreadsheet Syndrome: the trouble starts when financial analysts start to say, “Since I have this really neat spreadsheet, it must mean something…”

Having seen so many bad InfoSec risk models in the past several years, I can only agree with Charlie.  Beware.  Most of the time, the spreadsheets aren’t the problem.  It’s the quality of the information on them, or the fact that someone tries to extrapolate more knowledge from the results of a model than they should.  But more on Munger and what he really believes, in his own words, below.

Accuracy vs. Precision, The Art vs. Science Information Security Zombie Meme, and WAGing Sciences

“if it is impossible to deduce a wave equation strictly logically, then the formal steps carrying on to it, are, as a matter of fact, only witty guesses” (Max Born)

Like National Socialists, some Libertarians, or NeoConservatives do in the political spectrum from left to right, physics tends to bend the spectrum back on itself.  That is, as Charlie Munger intimates, physics can be a very precise science.  Until you get to Heisenberg and/or Schrödinger, that is; then all bets are off (or on, depending on your position of observation).  Once you hit that level it’s, well, a lot of guessing.  Very structured and at times accurate guessing, maybe even “witty”.  But if “guess” is defined as “estimate or suppose (something) without sufficient information to be sure of being correct”, then when scientists establish estimates around parameters in creating a theory or hypothesis, we might crudely call those estimates “guesses”.  An estimate is not the “value”.  You might say that only God *knows* the value; we mortals can only begin to understand the uncertainty around the estimate.

The Guessing Game

Now do me a favor.  Think about various fields of science, and tell me how many of them, especially given the rate of change in accepted theories over the past 250 years or so, operate even though they don’t (or didn’t in the 18th century) have sufficient information to be sure of being correct.

You know what?  Though Richard uses it like a dirty word, I’ll be happy to embrace the word “guess” at this point.  I like “hypothesis” or “theory” better, but I find that the only way to overcome manipulation of linguistic connotation is to embrace the intended insult.  And one of the reasons I’m comfortable here (besides being in such good company) is that, as anyone who has spent the last couple of years reading what I’ve been writing about risk and risk analysis will tell you (what is wrong with you people, anyway?), I’ve been saying:

Risk is a piece of knowledge, and as such it’s not something to be found intrinsically in nature that you can engineer.  Like the concept of speed, it is a derived value that describes a combination of natural states.  Therefore, the hope in risk analysis is not precision, it’s accuracy.

(Diagram by Jack Jones: one shot group is tightly clustered but off target, i.e. precise but not accurate; the other is scattered around the bullseye, i.e. not precise, but accurate.)

As the diagram Jack Jones built there suggests, precision and accuracy are different subjects, with different meanings and different goals.  In fact, when I do a FAIR analysis, I use quantitative numbers intentionally *not* to provide precise posterior metrics, but to develop (hopefully accurate) imprecise scatterplots.

Again, those who know me know that I’m really not a “quant” or a “qual” but a “visual”: someone who is comfortable with the use of different scales of measure.  Because, as it turns out, that guessing those physicists do?  They do it using Bayes’ Theorem, and Bayes (said to be a quantitative extension of Aristotelian logic) sort of warps our perceived difference between many things, including quantitative and qualitative expression.

It might be helpful to think of it this way: we really can’t use frequentist statistics (the kind people my age would learn in undergrad classes), because we’re describing not something that occurs naturally that we can count, but something that is synthesized from nature.  Risk is a derived value around likelihood and impact.  Which is fine!  And it’s fine especially when we frame likelihood and impact in real-world notions (expected events per annum, and expected dollars lost per event, like FAIR does).  This framing in real-world contexts allows us to go back and test our theories and models and revise them (you know, science).
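Framing likelihood and impact as expected events per annum and expected dollars lost per event makes the idea easy to sketch in code.  The distributions and parameters below are illustrative assumptions of mine, not FAIR’s actual calibration method; the point is simply that the output is an (imprecise) distribution of annual loss, not one precise number.

```python
import math
import random

def simulate_annual_loss(trials=10_000, mean_events_per_year=2.0,
                         median_loss=50_000.0, loss_sigma=1.0, seed=42):
    """Monte Carlo sketch of annualized loss exposure.

    Illustrative assumptions (not FAIR's calibrated ranges):
      - events per year ~ Poisson(mean_events_per_year)
      - loss per event  ~ lognormal(ln(median_loss), loss_sigma)
    Returns a sorted list of simulated annual losses.
    """
    rng = random.Random(seed)
    mu = math.log(median_loss)
    losses = []
    for _ in range(trials):
        # Draw a Poisson event count by CDF inversion (fine for small means).
        p = rng.random()
        k = 0
        pmf = cdf = math.exp(-mean_events_per_year)
        while p > cdf:
            k += 1
            pmf *= mean_events_per_year / k
            cdf += pmf
        # Sum a lognormal loss for each of the k events this year.
        losses.append(sum(rng.lognormvariate(mu, loss_sigma) for _ in range(k)))
    return sorted(losses)

losses = simulate_annual_loss()
p50 = losses[len(losses) // 2]
p95 = losses[int(len(losses) * 0.95)]
print(f"median annual loss ~ ${p50:,.0f}, 95th percentile ~ ${p95:,.0f}")
```

The interesting output here is the spread between the median and the tail, which is exactly the kind of accurate-but-imprecise picture a scatterplot gives you.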

Quantum Entanglement: Probability, Risk, Munger, & Rationality.  Or, Charlie Munger and 20 Bayesians Walk Into A Bar….

And they all come out good friends and in intellectual agreement?

So I don’t want to turn this post into too much of a primer on knowledge and nature, measurement and theory creation, but let me cover one last important Bayesian concept (it’s not really a Bayesian concept, per se; it’s just that, like many things, the use of numbers in Bayes helps formalize the process into something accurate).

Model Selection

In order to derive knowledge, the Bayesian must know that the model they have is the best one to use.  Thus, the concept of Bayesian model selection is really cool and very important.  Here’s a great man giving us an example of what I mean in a very pragmatic, real-world sense.  Do take a second and read the following:

“You’ve got to have models in your head.  And you’ve got to array your experience – both vicarious and direct – on this latticework of models.  You may have noticed students who just try to remember and pound back what is remembered.  Well, they fail in school and in life.  You’ve got to hang experience on a latticework of models in your head.

What are the models?  Well, the first rule is that you’ve got to have multiple models – because if you just have one or two that you’re using, the nature of human psychology is such that you’ll torture reality so that it fits your models, or at least you’ll think it does.  You become the equivalent of a chiropractor who, of course, is the great boob in medicine.

It’s like the old saying, “To the man with only a hammer, every problem looks like a nail.”  And of course, that’s the way the chiropractor goes about practicing medicine.  But that’s a perfectly disastrous way to think and a perfectly disastrous way to operate in the world.  So you’ve got to have multiple models.

And the models have to come from multiple disciplines – because all the wisdom of the world is not to be found in one little academic department.  That’s why poetry professors, by and large, are so unwise in a worldly sense.  They don’t have enough models in their heads.  So you’ve got to have models across a fair array of disciplines.

You may say, “My God, this is already getting way too tough.”  But, fortunately, it isn’t that tough – because 80 or 90 important models will carry about 90% of the freight in making you a worldly-wise person.  And, of those, only a mere handful really carry very heavy freight.

So let’s briefly review what kind of models and techniques constitute this basic knowledge that everybody has to have before they proceed to being really good at a narrow art like stock picking.

First there’s mathematics.  Obviously, you’ve got to be able to handle numbers and quantities – basic arithmetic.  And the great useful model, after compound interest, is the elementary math of permutations and combinations.  And that was taught in my day in the sophomore year in high school.  I suppose by now in great private schools, it’s probably down to the eighth grade or so.

It’s very simple algebra.  It was all worked out in the course of about one year between Pascal and Fermat.  They worked it out casually in a series of letters.

It’s not that hard to learn.  What is hard is to get so you use it routinely almost every day of your life.  The Fermat/Pascal system is dramatically consonant with the way that the world works.  And it’s fundamental truth.  So you simply have to have the technique.

Many educational institutions – although not nearly enough – have realized this.  At Harvard Business School, the great quantitative thing that bonds the first-year class together is what they call decision tree theory.  All they do is take high school algebra and apply it to real life problems.  And the students love it.  They’re amazed to find that high school algebra works in life….

By and large, as it works out, people can’t naturally and automatically do this.  If you understand elementary psychology, the reason they can’t is really quite simple: The basic neural network of the brain is there through broad genetic and cultural evolution.  And it’s not Fermat/Pascal.  It uses a very crude, shortcut-type approximation.  It’s got elements of Fermat/Pascal in it.  However, it’s not good.

So you have to learn in a very usable way this very elementary math and use it routinely in life – just the way, if you want to become a golfer, you can’t use the natural swing that broad evolution gave you.  You have to learn to have a certain grip and swing in a different way to realize your full potential as a golfer.

If you don’t get this elementary, but mildly unnatural, mathematics of elementary probability into your repertoire, then you go through a long life like a one-legged man in an ass-kicking contest.  You’re giving a huge advantage to everybody else.

One of the advantages of a fellow like Buffett, whom I’ve worked with all these years, is that he automatically thinks in terms of decision trees and the elementary math of permutations and combinations….”

– Charlie Munger

Yes, the same Charlie Munger whom Richard is trying to warp into his own perspective on risk analysis.  In fact, there are even better quotes on rationality and how this ad-hoc Bayesianesque model selection allows Munger to be rational and identify bias, but this post was probably TL;DR about 500 words ago, so I’ll spare you.
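For the curious, the model selection idea Munger gestures at can be made concrete with Bayes’ Theorem: weigh competing models by how well each one predicted the data you actually observed.  The toy setup below (two hypothesized Poisson incident rates) is entirely my own illustration, not anything from Munger or Richard.

```python
import math

def poisson_pmf(k, lam):
    """Probability of observing k events under a Poisson(lam) model."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def model_posteriors(observed_counts, candidate_rates, prior=None):
    """Posterior probability of each candidate model (a Poisson rate),
    given observed yearly incident counts.  Equal priors by default."""
    if prior is None:
        prior = [1.0 / len(candidate_rates)] * len(candidate_rates)
    # Likelihood of the whole data set under each model.
    likelihoods = [
        math.prod(poisson_pmf(k, lam) for k in observed_counts)
        for lam in candidate_rates
    ]
    # Bayes: posterior ∝ prior × likelihood, normalized by the evidence.
    evidence = sum(p * l for p, l in zip(prior, likelihoods))
    return [p * l / evidence for p, l in zip(prior, likelihoods)]

# Two competing models of incident frequency: about 2/year vs about 5/year.
counts = [1, 3, 2, 2]   # hypothetical observed incidents per year
post = model_posteriors(counts, [2.0, 5.0])
print(f"P(rate=2 | data) = {post[0]:.3f}, P(rate=5 | data) = {post[1]:.3f}")
```

With those observations, the 2-per-year model dominates; the point is that the data, not our attachment to a model, gets the vote.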


But if quantum mechanics isn’t physics in the usual sense — if it’s not about matter, or energy, or waves, or particles — then what is it about? From my perspective, it’s about information and probabilities and observables, and how they relate to each other.

– Scott Aaronson

But if information risk management isn’t security in the usual sense – if it’s not about vulnerabilities, threats, controls or assets – then what is it about?  From my perspective, it’s about information and probabilities and observables, and how they relate to each other.

– Alex Hutton, with all apologies to Scott who has already been famously plagiarized to heck.

Finally, there is one other proof I’d like to offer you, my rational reader, as to why I think Richard is wrong.  See, I’m kind of confused as to what “his side” of the argument represents!

The “Not Using Information To Make Decisions” side?  The Best Suggested Practices side?  Information Witch-Doctory?  Information Security Professionals for Really Crappy Analysis?

The problem in suggesting that you take scientific method out of risk analysis is that Information Security then becomes a faith-based initiative.  If you’re good with that, nothing I’m gonna do or say here is gonna change you.  Create your secret societies, get your handshakes down, buy yourselves hooded dresses, select your Pope or whatever, and be done with it.

If you’re not fine with understanding your job based on the innards of sheep or the positions of celestial bodies, then sign up.  We’re the New School of Information Security.  We haven’t got all the answers, but we try to keep it rational, and we have some ideas about what needs to be done in order to derive real scientific progress in our field.

High Falootin’ “I Don’t Have Time For This” Pragmatism:

So one thing I always hear when talking about risk analysis is the demand it creates on our resources.  Will you, the New School practitioner, need more than a “gut” assessment?  Sometimes, sometimes not.  Like Munger, whose decision-making accuracy is honed by his ad-hoc Bayesianish model selection, once we have an accurate model we may or may not derive value from more than “back of a napkin” risk analysis.  But the models, and the analysis done as we learn and apply the models, do help us identify bias and become rational.  You know, those concepts the Bayesian Rationalists feast on.  Oh, and apparently Charlie Munger, too.

But honestly, to the next person that says risk analysis is too hard but then uses CVSS2, some GRC application with an absolutely terrifying user experience/interface, or does an OCTAVE enterprise “so-called risk assessment” (see Richard, we can both use Newspeakishy terms in our blogs!) with its boil-the-ocean threat/vulnerability pairing…  I’ll say that these things are not only less *correct* (both in formal theory and in results) than proper risk analysis, but they are much less informative when used in decision making, while taking up probably the same (or less) amount of time.  Oh, you might have to practice some at first, especially since we’ve all been ingrained with such bad perspectives on things like probability theory and risk modeling (CISSP exam, I’m looking at you).


So Richard, please feel free to go on saying “can’t, can’t, can’t”.  I’ll just leave it with this: I’ve said my piece.  From here on out, I’d like to challenge you to start discussing what we need before you will say “can, can, can”.  Because frankly, I do get tired of repeating myself, and I’m sorry if, for longtime readers, I’m repeating myself.  It’s difficult for me to understand my rate of entropy (apologies for yet another, and probably the last, semi-transparent and admittedly only semi-clever quantum physics reference in this blog post).  But these “can, can, can” sorts of discussions are the ones I think we should take part in.

Meanwhile, these “can’t” discussions don’t really bother me, because I think discussion is healthy for the industry, and I’m really confident that I don’t know a lot of stuff.  And ultimately, I not only have to be aware that there is a non-zero chance that some folks who don’t agree with me might turn out to be right, but I feel the need to repeatedly entertain that possibility (it’s an intellectual honesty thing).  So thanks for digging up the dead horse, Richard.  I just hope it wasn’t actually simultaneously dead and alive when we started beating on it (oops, I guess my bad-quantum-physics-reference frequency model was wrong again)!


Finally, let me encourage you, my friend.  Go forth with Metasploit and spreadsheet in hand, and do good things.  Create your theories, hypothesize away, and test, test, test.  Use FAIR or whatever you’d like to talk about the risk you think an asset has.  Then find information from log and event monitoring tools and other sources that helps you identify Threat Event Frequency.  Have your Blue Team test the amount of “force” they can apply to an asset’s controls before they break (you know, reach a State of Vulnerability).  And be comfortable in the knowledge that, for the first time, you’re doing something unique in your field by applying the scientific method, something the great physicists of the 20th century would probably be very proud of you for doing.
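That test-and-revise loop can be sketched as a simple conjugate Bayesian update: start with a prior estimate of Threat Event Frequency, then revise it with the counts your monitoring tools actually observed.  The Gamma-Poisson form and the numbers here are an illustrative assumption of mine, not part of FAIR proper.

```python
def update_tef(prior_alpha, prior_beta, observed_events, observation_years):
    """Gamma-Poisson conjugate update for Threat Event Frequency (TEF).

    Prior: TEF ~ Gamma(alpha, beta), i.e. prior mean of alpha/beta events/year.
    Data:  observed_events over observation_years (Poisson process assumed).
    Returns the posterior (alpha, beta) and the posterior mean TEF.
    """
    alpha = prior_alpha + observed_events
    beta = prior_beta + observation_years
    return alpha, beta, alpha / beta

# Prior belief: roughly 4 events/year, encoded as Gamma(4, 1).  The logs
# then show 6 events over 3 years, so the estimate gets revised downward.
a, b, mean_tef = update_tef(4.0, 1.0, observed_events=6, observation_years=3)
print(f"posterior mean TEF = {mean_tef:.2f} events/year")
```

The posterior mean here lands at 2.5 events/year, between the prior guess and the observed rate of 2/year, which is exactly the “create a theory, test it, revise it” loop described above.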


(1) I’ve said before that, in the current state, as long as models are valid, we put too much emphasis on the difference between the qualitative and quantitative labels.  It’s like talking about the difference between oral description and visual representation.  But that’s another blog post for another day.



Alex –

You just get better and better!



by Patrick on May 15, 2009 at 3:50 pm.

Great post Alex. I tried to add my 2 cents over on without being too redundant.


by Jack on May 15, 2009 at 10:04 pm.

Nice post, Alex. I think you and I met online after Richard’s 2007 push against FAIR. He seems to have a completely different worldview. From his posts, I don’t understand what his alternative to assessing risk (guessing about the future) would be. It’s a requirement, since we don’t know the future.

by Jon Robinson on May 22, 2009 at 12:32 am.
