I’d argue that you’ve fallen into a bit of a false dichotomy at such an early stage of the project that you run the risk of missing the genuinely interesting stuff that’s going on in this space.

The model of “big for-profit company is the good guy protecting sensitive data” and “evil black hat ‘hackers’ are the bad guys trying to steal data” might have been useful in, say, 1985. But imposing that false simplicity on today’s online ecosystem completely ignores the real topological patterns of security innovation, security threats, and the forced evolutionary upgrades of mutualistic toolsets.

I’d suggest that starting with a network-centric (in the mathematical sense) model of online ecosystems is a more productive path for your stated security goals. Online, there are “predators” and “prey” in various niches and sub-niches within the larger network of data connectivity and sharing. Sometimes the “predators” are governments and the “prey” are citizens; sometimes it’s media cartels as predators and distributed groups of like-minded activists as prey. Sometimes the tables turn, and Anonymous is stalking Scientology with security-busting tools. And then there’s Wikileaks… bit of a complex variable, there.

It is within this shifting, fluid, dynamic structure that the really accelerated and creative evolution of security exploits and defenses actually happens. By the time the new stuff gets dumbed down to the point where some overgrown helpdesk monkey in a bloated private corporation gets convinced by a slick salesman, working for a proprietary software company, to “buy” limited access to non-source binaries, the actual cutting edge that developed the core innovation in that repackaged tool has long since moved on to newer, better, faster, smarter stuff.

The Platonic example of this is so-called “fast flux DNS” tools. The trajectory of development, through deployment, through expansion, and eventual pick-up by commercial entities is more or less textbook in how it’s played out. However, if you just look for “good guy big companies” and “bad guy scary hackers,” you’ll miss not only the forest for the trees, but also the trees themselves and maybe even the idea of plants in general.
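For readers who haven’t met the technique, here’s a toy sketch of the single-flux idea: one hostname maps to a rotating subset of a large pool of front-end hosts, advertised with a very short TTL so resolvers must re-query constantly. Everything here (class name, pool, parameter values) is my own illustration, not any real tool’s API.

```python
# Toy single-flux simulation; all names and numbers are illustrative.
import random

class FastFluxResolver:
    """One hostname resolves to a rotating subset of a large pool of
    (typically compromised) front-end hosts, served with a very short
    TTL. Taking down any single IP barely dents availability, which is
    why the pattern frustrates conventional takedown efforts."""

    def __init__(self, ip_pool, answers_per_query=5, ttl_seconds=180):
        self.ip_pool = list(ip_pool)
        self.answers_per_query = answers_per_query
        self.ttl_seconds = ttl_seconds

    def resolve(self, hostname):
        # Every query gets a fresh random subset of the pool.
        ips = random.sample(self.ip_pool, self.answers_per_query)
        return [(hostname, ip, self.ttl_seconds) for ip in ips]

# A pool of 1000 fake "compromised host" addresses.
pool = [f"10.0.{i // 256}.{i % 256}" for i in range(1000)]
resolver = FastFluxResolver(pool)
answers = resolver.resolve("flux.example.com")
```

The commercial pick-up described above is essentially vendors detecting this rotation pattern after the fact, long after the technique itself had matured.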

Chief Technology Officer
Baneki Privacy Computing

by Fausty on November 24, 2009 at 5:06 am. Reply #

Great comments, Fausty.

Yes, the way I worded the description does fall into the “good guys vs. bad guys” mental trap. I hate that, myself, when other folks in the InfoSec community do that. And now I’ve done it myself. D’oh!

But we do have to start somewhere, and starting with a simple set of models and scenarios makes the problem more tractable and easier to digest. Starting with simple host-parasite and predator-prey models, and then expanding from there, seems like a productive research path. (I have to fight my own urges to add complexity!)
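To make that starting point concrete, here is a minimal sketch of the classic Lotka-Volterra predator-prey equations, with “prey” read as vulnerable hosts and “predators” as active attackers. The framing and every parameter value are my own illustration, chosen only to produce oscillation, not the project’s actual model.

```python
# Forward-Euler integration of Lotka-Volterra predator-prey dynamics.
# All parameter values are illustrative.
def lotka_volterra(prey, pred, steps, dt=0.01,
                   alpha=1.1,   # prey growth rate
                   beta=0.4,    # predation rate
                   delta=0.1,   # predator gain per prey consumed
                   gamma=0.4):  # predator death rate
    history = []
    for _ in range(steps):
        d_prey = alpha * prey - beta * prey * pred
        d_pred = delta * prey * pred - gamma * pred
        prey += d_prey * dt
        pred += d_pred * dt
        history.append((prey, pred))
    return history

# Start away from equilibrium so both populations cycle.
traj = lotka_volterra(prey=10.0, pred=5.0, steps=5000)
```

The appeal as a baseline is exactly its simplicity: two coupled equations, a handful of rates, and already you get the boom-bust cycles that feel familiar from exploit/patch dynamics.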

What would be fantastic would be to create a set of models that works for both good guy vs. bad guy scenarios and also for the more fluid/ambiguous/emergent network-centric scenarios.

When I was formulating this research project, I actually had the “network-centric” scenarios in mind, as you describe. I didn’t include them in the description, partly to keep it short. But it’s the network-centric scenarios that are a prime motivation for using methods and tools from computational social sciences rather than relying solely on dynamical system models drawn from standard evolutionary ecology (i.e. systems of ordinary or partial differential equations).
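As an illustration of why the network-centric view calls for different machinery than ODEs, here is a tiny agent-based sketch, entirely my own invention: compromise spreading over a random contact graph while defenders patch as they go. The per-node heterogeneity is exactly what a mean-field differential-equation model averages away.

```python
# Toy agent-based compromise/patch model on a random graph.
# All structure and rates are illustrative.
import random

random.seed(1)
N = 200
# Random graph: each node links to ~4 random peers.
neighbors = {i: set() for i in range(N)}
for i in range(N):
    for j in random.sample(range(N), 4):
        if j != i:
            neighbors[i].add(j)
            neighbors[j].add(i)

compromised = {0}   # patient zero
patched = set()
for step in range(50):
    # Attack phase: each compromised node tries its neighbors.
    newly = set()
    for node in compromised:
        for peer in neighbors[node]:
            if peer not in compromised and peer not in patched:
                if random.random() < 0.2:   # exploit success rate
                    newly.add(peer)
    # Defense phase: patch a few random compromised hosts per step.
    for node in random.sample(sorted(compromised),
                              min(2, len(compromised))):
        compromised.discard(node)
        patched.add(node)
    compromised |= newly
```

Whether the outbreak races ahead of the patchers here depends on graph topology, not just average rates, which is the kind of question agent-based and network methods are built for.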

Taking on network-centric scenarios has the added benefit of making our models applicable to a wide range of domains outside of information security, strictly defined. That’s a good thing, because it means more sponsors and “powers that be” will be interested in the results.

I’m glad you brought up “fast flux DNS.” I’ll dig into it more to see whether it would make a good case study for this research.

Thanks again for your insights.


P.S. While we are discussing network-centric scenarios, I’m also keenly interested in modeling the dynamics of the “grey hats”: all the poor users, managers, administrators, etc. who have to implement and use the security technologies and policies while trying to do their job or seeking personal pleasure. They are in their own “arms race” of sorts (or maybe “competitive race” is a better fit) with fast-changing information technologies (that they want or try to use), changing threats (that they may or may not understand, or may misperceive), and changing security policies. Regarding security, the utility function of the “grey hats” is very much dominated by avoiding and minimizing their security effort and costs, which can lead to rational or irrational non-compliance, to over-compliance (“ban everything!”), and also to falling prey to social engineering (e.g. downloading fake AV). Thus their lethargy and under-compliance, on the one hand, or their zealotry and anxiety-driven over-reactions, on the other, can cause the “grey hats” to unwittingly aid the “black hats” (however those are defined).
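That trade-off can be caricatured in a few lines. This is a hypothetical toy, with every function name and parameter value invented for illustration: a “grey hat” picks a compliance effort to balance productivity lost to security friction against the breach losses they *perceive*, which may over- or under-state the real risk.

```python
# Toy grey-hat utility model; all names and values are illustrative.
def grey_hat_utility(c, productivity=10.0, friction=4.0,
                     breach_loss=20.0, base_risk=0.3,
                     perceived_risk=1.0):
    """Utility of compliance effort c in [0, 1]: security friction
    eats productivity, while compliance reduces the (possibly
    misperceived) probability of a costly breach."""
    p_breach = base_risk * perceived_risk * (1.0 - c)
    return productivity - friction * c - breach_loss * p_breach

def best_compliance(perceived_risk, grid=101):
    # Brute-force search over a grid of compliance levels.
    levels = [i / (grid - 1) for i in range(grid)]
    return max(levels,
               key=lambda c: grey_hat_utility(c,
                                              perceived_risk=perceived_risk))

# Under-perceived risk -> rational non-compliance;
# over-perceived risk -> "ban everything" zealotry.
lazy = best_compliance(perceived_risk=0.5)
zealot = best_compliance(perceived_risk=2.0)
```

Even this caricature reproduces the two failure modes in the P.S.: the same rational rule yields zero compliance when the threat is underestimated and maximal lockdown when it is overestimated.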

As far as I know, no one in the Economics of Information Security community has modelled all these dynamics at the same time, even in the simple attacker-defender frame. Should be fun!

by Russell on November 25, 2009 at 6:01 pm. Reply #

[…] and similar.” Read the rest of the post and drop him a line if you are interested: Information Security as an Evolutionary Arms Race – Research Collaborators Wanted […]

by Interesting Information Security Bits for 11/24/2009 | Infosec Ramblings on November 25, 2009 at 12:21 am. Reply #
