Saturday, 28 June 2008

More Security Metrics

"At the end of the day it is not the metrics or the measures themselves that make the difference; it is their ability to effect positive change."
Agreed. To do this, there needs to be some basis in fact.

"if it doesn't meet at least some of the above criteria then it is simply an exercise in academia"
Agreed as well. Most of the issues I see are more a lack of trust in what is there. What I am proposing to deploy in later stages is a quantitative economic risk model: one that will calculate expected and forecast risk in dollar terms, using financial language where needed, IT speak where needed, etc.

I have the unusual ability to understand most professions, as I collect professions and degrees as a type of hobby. Quantitative heteroskedastic risk models are going to feature more and more. Strangely enough, they already do for a large part of what we are ignoring.

Basel II requires quantitative measurement, and the calculation of IP value is generally based on these methods. Treasury functions run by a firm's CFO are commonly based in these terms. Even marketing people are starting to deploy quantitative analysis.

IT people do not need to understand all the ins and outs of the math, just that it is verifiable.

The model should eventually arrive at a dollar amount where that is a valid measure, and an accepted ordinal measure where necessary.

Hence the call for something simple and consistent to start with, such as the CIS tool output. Coupled with a number of nominal and categorical fields, this data could be modelled into a decent hazard/survival model, which could then be converted into a quantitative financial calculation of expected loss. The other output is an expected time value on systems: a survival metric of the expected distribution of compromise and recovery.
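As a minimal sketch of what such a survival metric could look like, the snippet below assumes a constant-hazard (exponential) time-to-compromise model; the hazard rate, forecast horizon and per-event impact are illustrative assumptions, not calibrated figures:

```python
import math

# Hypothetical inputs for illustration only: an exponential hazard
# (constant rate) that would in practice be fitted to observed data.
hazard_rate = 0.8          # assumed compromises per system-year
horizon_years = 1.0        # forecast window
impact_per_event = 50_000  # assumed dollar loss per compromise

# Survival function S(t) = exp(-lambda * t): probability the
# system remains uncompromised through time t.
p_survive = math.exp(-hazard_rate * horizon_years)
p_compromise = 1.0 - p_survive

# Expected time to compromise under the exponential model.
expected_time_years = 1.0 / hazard_rate

# Expected loss over the horizon, converted to dollar terms.
expected_loss = p_compromise * impact_per_event

print(f"P(compromise within {horizon_years:.0f}y): {p_compromise:.3f}")
print(f"Expected time to compromise: {expected_time_years:.2f} years")
print(f"Expected loss: ${expected_loss:,.0f}")
```

A real deployment would replace the constant hazard with a fitted distribution (e.g. Weibull) driven by the nominal and categorical fields mentioned above.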

I do this for modelling PCI and POS risk now (in the real world) with good results (95% confidence or greater), so I fail to see why it cannot be extended beyond PCI systems.

The output I generally report to boards is a cost distribution. That is:
· Max Cost and likelihood
· Expected cost
· 95% CI predictive cost range

This can then be compared with alternatives: add a firewall and the risk = x.
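The three figures above (maximum cost, expected cost, 95% predictive range) fall naturally out of a simulated cost distribution. The sketch below assumes a simple Monte Carlo model where an incident occurs with some annual probability and its impact is roughly normal; all probabilities and dollar figures are invented for illustration:

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is repeatable

def simulate_annual_cost(p_event, impact_mean, impact_sd, trials=100_000):
    """Monte Carlo sketch: annual loss = event indicator x random impact.
    All parameters are illustrative assumptions, not calibrated figures."""
    costs = []
    for _ in range(trials):
        if random.random() < p_event:
            costs.append(max(0.0, random.gauss(impact_mean, impact_sd)))
        else:
            costs.append(0.0)
    costs.sort()
    return {
        "expected": statistics.mean(costs),          # expected cost
        "p95_low": costs[int(0.025 * trials)],       # 95% predictive range
        "p95_high": costs[int(0.975 * trials)],
        "max": costs[-1],                            # max simulated cost
    }

# Compare alternatives: baseline vs. a control (e.g. a firewall)
# that is assumed to cut the annual event probability.
baseline = simulate_annual_cost(p_event=0.30, impact_mean=80_000, impact_sd=20_000)
with_fw  = simulate_annual_cost(p_event=0.10, impact_mean=80_000, impact_sd=20_000)

print("baseline expected cost:     ", round(baseline["expected"]))
print("with firewall expected cost:", round(with_fw["expected"]))
```

The difference between the two expected costs gives a dollar figure to weigh against the cost of the control itself.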

A) identify your assets
B) identify the asset value for the enterprise
C) identify the risks to the asset
D) identify the impact and probability of these risks
E) calculate the risk per asset
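Steps A through E can be sketched as a small calculation. The asset names, values, probabilities and impact fractions below are purely hypothetical, chosen only to show the mechanics:

```python
# A/B: identify assets and their economic value to the enterprise
# (all names and figures are illustrative assumptions).
assets = {
    "customer_db": {"value": 500_000},
    "web_server":  {"value": 120_000},
}

# C/D: risks per asset as (name, annual probability, impact fraction),
# where the impact fraction is the share of asset value lost per event.
risks = {
    "customer_db": [("data breach", 0.05, 0.60), ("ransomware", 0.10, 0.30)],
    "web_server":  [("defacement", 0.20, 0.10)],
}

# E: expected annual loss per asset = sum(prob * impact * value).
for name, info in assets.items():
    ale = sum(p * frac * info["value"] for _, p, frac in risks[name])
    print(f"{name}: expected annual loss = ${ale:,.0f}")
```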

B is economic value. This is a finance calculation, not an accounting one.
D is based on opportunity cost and the NPV and IRR of the asset/associated project.

There is a time basis to money. NPV (net present value) is a good means of determining value without being too complex. IRR is the internal rate of return. (I also did finance.)
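For readers without the finance background, here is a small self-contained sketch of both measures. The cash flows are a made-up example (an upfront cost followed by annual benefits); NPV discounts each year's flow back to today, and IRR is found here by simple bisection as the rate at which NPV crosses zero:

```python
# Hypothetical cash flows for an asset/project: year 0 outlay,
# then four years of benefits (illustrative figures only).
cash_flows = [-100_000, 30_000, 40_000, 45_000, 30_000]

def npv(rate, flows):
    """Net present value: discount each year's flow back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=-0.99, hi=10.0, tol=1e-7):
    """Internal rate of return via bisection: the rate where NPV = 0.
    Assumes one sign change in the flows (outlay then benefits)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(f"NPV at 10%: ${npv(0.10, cash_flows):,.0f}")
print(f"IRR: {irr(cash_flows):.1%}")
```

A positive NPV at the firm's discount rate (equivalently, an IRR above that rate) is the usual signal that the asset or project creates value.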

Impact in D is the loss associated with the asset being removed or tarnished.

In many large firms, the finance and treasury groups will generally determine project metrics. Most firms buy risk betas rather than calculating their own.


