29. November 2010 · Comments Off on What is Information Security: New School Primer « The New School of Information Security · Categories: blog

What is Information Security: New School Primer « The New School of Information Security.

I would like to comment on each of the three components of Alex’s “primer” on Information Security.

First, InfoSec is a hypothetical construct. It is something we can all talk about, but it is not directly observable, and therefore not measurable the way, say, speed is, which we can express in km/hr. "Directly" deserves emphasis because there are many hypothetical constructs of subjective value for which we do create measurements and measurement scales, in order to achieve a state of (high) intersubjectivity between observers (I don't like the Wikipedia definition; I use the term to mean that you and I can understand the same thing in roughly the same way).

Clearly, InfoSec cannot be measured like speed, acceleration, or weight, so I agree with Alex's classification.

Second, security is not an engineering discipline, per se. Our industry treats it as one because most of us come from that background, and because the easiest way to try to become "more secure" is to buy a new engineering solution (hence security product marketing). But the bankruptcy of this way of thinking shows up in both our budgets and our standards. A security management approach focused solely on engineering fails primarily because of the "intelligent," adaptable attacker.

Again, clearly InfoSec involves people and is therefore more than a purely engineering exercise like building a bridge. On the other hand, consider the statistics from the Verizon Business 2010 Data Breach Investigations Report (page 3): 85% of the analyzed attacks were not considered highly difficult. In other words, if "sound" security engineering practices were applied, the number of breaches would decline dramatically.

This is why we at Cymbel have embraced the SANS 20 Critical Security Controls for Effective Cyber Defense.

Finally, InfoSec is a subset of Information Risk Management (IRM). IRM takes what we know about "secure" and adds concepts like probable impacts and resource allocation strategies. This can be confusing to many because of the many definitions of the word "risk" in the English language, but that's a post for a different day.

This is the part of Alex’s primer with which I have the most concern – “probable impacts.” The problem is that estimating probabilities with respect to exploits is almost entirely subjective, and there is still far too little available data to support those estimates. On the other hand, there is enough information about successful exploits and threats in the wild to give infosec teams a plan for moving forward, such as the SANS 20 Critical Controls.

My biggest concern is Alex referencing FAIR, Factor Analysis of Information Risk, in a positive light. From my perspective, any tool that can generate wildly different results when used by two independent groups, sitting in different rooms, to analyze the same environment is simply not valid. Richard Bejtlich, in 2007, provided a thoughtful analysis of FAIR here and here.

Bejtlich shows that FAIR is just a more elaborate version of ALE, Annual Loss Expectancy. For a more detailed analysis of the shortcomings of ALE, see Security Metrics, by Andrew Jaquith, page 31. In summary, the problems with ALE are:

  • The inherent difficulty of modeling outliers
  • The lack of data for estimating probabilities of occurrence or loss expectancies
  • The sensitivity of the ALE model to small changes in assumptions
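That last point, sensitivity to assumptions, is easy to demonstrate with the textbook ALE formula (ALE = SLE × ARO). The figures below are hypothetical, chosen only to show how a subjective frequency estimate dominates the result:

```python
def ale(sle: float, aro: float) -> float:
    """Annual Loss Expectancy: Single Loss Expectancy (cost of one
    incident) times Annualized Rate of Occurrence (incidents/year)."""
    return sle * aro

# Two analysts assess the same breach scenario. They agree on the cost
# of a single incident, but one judges it a "once in 10 years" event
# and the other a "once in 2 years" event.
sle = 500_000.0                 # hypothetical cost of one breach, USD
ale_low  = ale(sle, 1 / 10)     # ARO = 0.1  ->  ALE =  50,000
ale_high = ale(sle, 1 / 2)      # ARO = 0.5  ->  ALE = 250,000

# A 5x difference in a subjective frequency estimate yields a 5x
# difference in the "measured" risk -- with no new data involved.
print(ale_low, ale_high)
```

This is exactly why two groups in different rooms can produce wildly different numbers from the same environment: the output is only as objective as the frequency guesses feeding it.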

I am certainly not saying that there are no valid methods of measuring risk, just that I have not seen any that work effectively. I am intrigued by Douglas Hubbard’s theories as expressed in his two books, How to Measure Anything and The Failure of Risk Management. Is anyone using them? I would love to hear your results.

I look forward to Alex’s post on Risk.