Saturday, 13 February 2010

The economics of information security (2)

If we plot security expenditure (y) against investment over time (x), with the result (z) as expected returns (or profit), we see that there are expenditure inflection points.

[Figure: expected returns (z) as a function of security expenditure (y) and investment over time (x)]

What we see is that spending too much on security limits profit, while spending too little also has a negative effect on profit.

This is where risk analysis comes into its own. The idea is to choose an optimal level of security expenditure, one that limits losses. Money should be spent on security until the last dollar returns at least a dollar in mitigated expected loss.

Once the expenditure of a dollar returns less than a dollar, the incremental investment is wasted.
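As a rough sketch of this marginal rule (the loss curve and the figures below are invented purely for illustration, not taken from any real organisation):

    # A minimal sketch of the marginal-return rule for security spend.
    # The loss-mitigation curve and all dollar figures here are hypothetical.

    def expected_loss(spend):
        """Hypothetical expected annual loss (in dollars) after spending `spend` on security.
        Diminishing returns: each extra dollar mitigates less loss than the one before."""
        baseline_loss = 1_000_000.0          # assumed expected loss with no security spend
        return baseline_loss * 0.5 ** (spend / 150_000.0)

    def optimal_spend(step=1_000.0, budget_cap=2_000_000.0):
        """Increase spend in small steps until the last step mitigates less loss than it costs."""
        spend = 0.0
        while spend < budget_cap:
            marginal_benefit = expected_loss(spend) - expected_loss(spend + step)
            if marginal_benefit < step:      # the next dollar returns less than a dollar
                break
            spend += step
        return spend

    if __name__ == "__main__":
        s = optimal_spend()
        print(f"Optimal spend: ${s:,.0f}, residual expected loss: ${expected_loss(s):,.0f}")

The stopping rule is exactly the statement above: keep spending while the last increment mitigates at least as much expected loss as it costs.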

This is of course highly simplified. The reality is that any organisation is a conglomeration of choices. Each of these choices involves a decision against other options, as well as trade-offs against other uses of the funds.

Risk is not simply about the total IT security budget, but involves the optimum mix of choices.

Friday, 12 February 2010

Risk and more

I shall start with an apology. 

I have allowed myself to be drawn into a series of flames and online posts with Tim Mullen. This happened because I am passionate about this topic: risk and economics and how they apply to security. I do apologize for allowing myself to be drawn into a flame war with Tim, something he is far better at than I am.

Risk and economics matter to security. Like it or not, money is a limited resource, and spending it on the measures that return the most effective results matters. Actually, it matters a great deal. Going to management with another request for more money means taking funds from somewhere else where they may be better utilised.

In a few weeks I am submitting a series of papers on risk modeling. These are being submitted to IEEE and other peer-reviewed venues. Together, they form the foundation of an expert system. As Tim and others assert, the use of mathematically based systems is not perfect. That is what probability means. I have not aimed at perfection; that is a fool's errand. I have aimed at economic optimality, the best result for the best economic return. This can be argued in a heated debate, but the matter is not about rhetoric. It is about science, and science is not about debate. It is not about consensus. It is about getting to the truth no matter how many people do not like the answers.

These papers will be public domain. At this point, the answer is simple: the assertions I make in them can be tested. I do not assert that they will lead to perfect calculations of what will occur. If that were true, it would not be risk. By its very definition, risk is a probabilistic function. Many people in the industry seem to forget this.

Formally, a (strong/weak) Pareto optimum in economics is a maximal element for the partial order relation of Pareto improvement. To put this simply (something I do not always do), economic efficiency requires that an allocation is made such that no other allocation is "better". That is, it is not feasible to find a better solution for the same cost. One hence maximizes the utility function for the resources in question. This is always a trade-off. Having more security results in lower growth. Lower growth can result in less funds being available. Less funding results in less security. The answer is to balance security and the factors that lead to growth, as this allows for growth in the security risk expenditure as well.
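As a rough formalisation (my shorthand here, not a quotation from the papers): an allocation x* from the feasible set X is Pareto optimal when no other feasible allocation improves at least one objective without worsening another:

\[
x^{*} \in X \ \text{is Pareto optimal} \iff \nexists\, x \in X \ \text{such that}\ u_i(x) \ge u_i(x^{*})\ \forall i \ \text{and}\ u_j(x) > u_j(x^{*})\ \text{for some } j,
\]

where u_i is the utility function of objective (or party) i. In the security setting the u_i include both growth and security, which is why the trade-off above cannot be escaped, only balanced.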

An expert system does not have to be perfect to have value. It needs to be better than what we do now. What we do now is commonly no better than taking one number that an expert makes up and multiplying it by another made-up number. A system that works within a confidence bound will, by definition, miss some instances of attack. The difference is that the number of errors can also be predicted. You may not know which system gets compromised, but you can estimate how many will be compromised over a time period. For an organisation, this has value.
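As a minimal sketch of that last point (the fleet size and per-system compromise probability are assumed numbers, chosen only to show the calculation): if each of N comparable systems is compromised independently with probability p over the period, the count of compromises is approximately Poisson, and a planning bound can be attached to it.

    # Minimal sketch: predicting how many systems are compromised over a period.
    # The inputs (100 servers, 3% per-system compromise probability) are assumed for illustration.
    from math import exp, factorial

    n_systems = 100          # number of comparable systems (assumed)
    p_compromise = 0.03      # per-system probability of compromise over the period (assumed)

    expected = n_systems * p_compromise   # expected number of compromises

    # Poisson approximation to the binomial count: P(K = k) = e^-mu * mu^k / k!
    def poisson_pmf(k, mu):
        return exp(-mu) * mu ** k / factorial(k)

    # Smallest k such that P(K <= k) >= 0.95: an upper planning bound on compromises.
    cumulative, k = 0.0, 0
    while cumulative < 0.95:
        cumulative += poisson_pmf(k, expected)
        k += 1

    print(f"Expected compromises: {expected:.1f}; plan for up to {k - 1} with 95% confidence")

The individual victims are unknown in advance, but the count of compromises over the period carries a predictable bound, and that bound is what can be budgeted against.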

This matters because management can see the results and make a choice based on reason. Some servers will be compromised, but the cost of this occurring can be planned for, and if the cost of a compromise is less than the cost of the fix, then the fix is not economically effective.

"Everybody knows that you can't model risk".
Once, everybody knew that the earth was the centre of the universe, and that the stars were just holes in the carpet of the sky. Rhetoric has no scientific value. Some people, such as Tim, may use it in a demagogic manner to cover the facts. This is a common political attack. The issue is that it has no alignment with truth. Truth is based on fact. The scientific method is a valid measure, and little else is.

So, people will slur me, attack my character, and do whatever else seems fit. The end result is that I shall publish later this year, in peer-reviewed journals and conferences. Like other papers I have published, some people will denounce these, but they will not do so through science.

I cannot win at a flame war nor against rhetoric. I am not inclined to be a sophist. The simple answer will come from testing the models and systems I shall be publishing. If they do better than existing risk guessing, they are valuable. If they save money, they are valuable.

Tim has not considered the value of security. This is dangerous, as it results in a misapplication of funds. Many do not understand this, and many in the information security industry do not care.

I am used to being slurred and attacked personally. It is simpler than actually checking the facts or what I propose. Economics does have a place in security.

My allowing myself to be drawn into such a debate simply lowers myself to the level of the other party. I cannot win any debate this way. Nor would I want to.

Qualifications.
SANS has online training. I have done some of their online courses. I have also done many courses at conferences both here in Australia and in the US. I have sat the same exams and the results make no difference whether I attended the course in person or by distance. I have sat three GSE exams. I wish that more people would do these. They are difficult and they do challenge you, but there are many people that could take the test and pass.

I hold at present more GIAC certifications than any other person.

I will not apologize for this. I invested in training and certification from SANS/GIAC for one reason: it is the best training from the best instructors anywhere in the world, hands down.

It is true, I do not list all of my qualifications and certifications. I have over 100 IT certifications, from SANS to Microsoft to Cisco and many others. I do not place value in what I can sell myself for by doing more. What I do get is knowledge, which I have applied to the risk systems I have been working on for over a decade now. I am good at math, but I have also spent decades learning risk and information security. What I have learnt is that those of us who are experts are also extremely biased. Our biases towards one system or another, and the heuristics these result in, skew our opinions. We do not provide the best answers; we are biased. This is the reason for an expert risk system. It is not biased, and it is repeatable. Any person can run the same calculations and come up with the same answers, and these can be tested empirically. I do not seek to be liked. I seek an efficient risk system; this means less loss and more money where it is actually needed.

As for my degrees.

I have recently completed my Masters degree in statistics. This was from the University of Newcastle, a highly respected research university. My paper was on methods for analyzing the homogeneity of variance. I also covered heteroscedasticity and time series modeling for risk. Generally, these skills are deployed in risk modeling for financial instruments. In that field I could earn far more, but it is not my passion.

I have several degrees from CSU. I am working on my 4th masters degree with them (a Masters degree in systems development - coding). I am also completing my second doctorate with them. I have posted the details of this PhD on my blog in the past. I also do not apologize for this. I have done these degrees by distance, that much is true. At present I graduate an average of one post-graduate degree a year. I also work full time to the tune of 60 hours a week and give further time to my church and charity. I also teach on top of this.

Outside of professional environments, I am socially isolated for much of the time. I do not know how to handle people like Tim, and they beat me constantly and consistently. I like to think that rationality means something. I like to believe in the inherent goodness of people. This, coupled with little social interaction outside academic or professional circles, leaves me at a disadvantage.

Charles Sturt University is a "real" university. Distance degrees are valid. I may not physically attend campus much, but I do use modern technologies to video conference with my supervisors and peers. I support these and am teaching at CSU for this reason.

Doing a degree by distance can be more difficult than doing it on campus. You have to pace yourself and there are far fewer prompts. I did not use to tell people of my academic leanings, as IT is for many a field averse to formal learning. There are many (generally unqualified) people who look down on those who have taken the time to study. I will remain proud of my study, my universities and my teachers.

Tuesday, 9 February 2010

Modelling Risk

Many people feel that it is not feasible to model risk quantitatively. This is of course blatantly false. In the past, many of the calculations have been computationally infeasible at worst and economically costly at best. This has changed. The large volume of computational power now available, coupled with novel stochastic methods, has resulted in an economically viable means of calculating risk quantitatively with a high degree of accuracy. Risk can be measured as a function of time (as survival time), finance (or monetary value) or any number of other processes.

As an example, a recent question about the security of SMS-based banking applications was posed on the Security Focus mailing list.

The reality is that any SMS banking application should be a composite of multiple mechanisms. For a system that uses an SMS response together with a separate channel (such as a web page over SSL), the probability that the banking user is compromised and a fraud is committed, P(Compromise), can be calculated as:

P(Compromise) = P(C.SMS) x P(C.PIN)

Where: P(C.SMS) is the probability of compromising the SMS function and P(C.PIN) is the probability of compromising the user authentication method.

P(C.PIN) is related to the security of the GSM system itself without additional input. P(C.SMS) and P(C.PIN) are statistically independent and hence we can simply multiply these two probability functions to gain P(Compromise).

The reason for this is that (at present) the SMS and web functions are not the same process and compromising one does not aid in compromising another. With the uptake of 4G networks this may change and the function will not remain as simple.

The probability that an SMS-only system is compromised is simply P(C.SMS); the security of such a system is far lower than that of a system that deploys multiple independent methods.
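As an illustrative calculation (the probabilities below are assumed values, not measurements of any real system):

    # Illustrative only: the input probabilities are assumed, not measured values.
    p_c_sms = 0.05   # assumed probability of compromising the SMS channel over the period
    p_c_pin = 0.02   # assumed probability of compromising the user authentication method

    # Independent channels: both must be compromised for the fraud to succeed.
    p_compromise_two_factor = p_c_sms * p_c_pin   # 0.001
    p_compromise_sms_only = p_c_sms               # 0.05

    print(f"Two-factor P(Compromise): {p_compromise_two_factor:.4f}")
    print(f"SMS-only   P(Compromise): {p_compromise_sms_only:.4f}")

Under independence, the two-factor compromise probability is the product of the two channel probabilities, which is why it sits well below the SMS-only figure.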

For each application, we can use Bayes' theorem to model the number of vulnerabilities and the associated risk. For open ports, we can use the expected reliability of the software together with the expected risk of each individual vulnerability to model the expected risk of the application. For instance, we could model P(C.SMS) using this method:

\[
P(C.SMS) \;=\; 1 - R_{SMS}(t)
\]

alternatively;

\[
P(C.SMS) \;=\; 1 - \prod_{i=1}^{n} \bigl(1 - P(V_i)\bigr),
\]

where R_{SMS}(t) is the reliability (survival) function of the SMS application and P(V_i) is the expected risk of individual vulnerability i.

Over time, as vulnerabilities are uncovered, the system has a growing number of known issues. Hence, the confidence in the product decreases with time as a function of the SMS utility alone. This also means that mathematical observations can be used to produce better estimates of the number of vulnerabilities and attacks as more are uncovered.

It is thus possible to observe the time that elapses since the last discovery of a vulnerability. This value depends on the number of vulnerabilities in the system and the number of users of the software. The more vulnerabilities, the faster the rate at which flaws are discovered. Likewise, the more users of the software, the faster the existing vulnerabilities are found (through both formal and adverse discovery).

If we let E stand for the event that a vulnerability is discovered between times T and T+h when there are n vulnerabilities in the software, then for a small interval h:

\[
P(E \mid n) \;=\; e^{-n\lambda T} - e^{-n\lambda (T+h)} \;\approx\; n\lambda h\, e^{-n\lambda T},
\]

where λ is the rate at which an individual vulnerability is discovered.

Where a vulnerability is discovered between times T and T+h, we can use Bayes' Theorem to compute the probability that we have n bugs. Taking a Poisson prior on the number of bugs, \(P(n) = e^{-\mu}\mu^{n}/n!\), where μ is the expected number of defects at release:

\[
P(n \mid E) \;=\; \frac{P(E \mid n)\,P(n)}{\displaystyle\sum_{m=1}^{\infty} P(E \mid m)\,P(m)}
\]

From this we see that:

\[
P(n \mid E) \;=\; \frac{n\lambda h\, e^{-n\lambda T}\,\dfrac{e^{-\mu}\mu^{n}}{n!}}{\displaystyle\sum_{m=1}^{\infty} m\lambda h\, e^{-m\lambda T}\,\dfrac{e^{-\mu}\mu^{m}}{m!}}
\]

By summing the denominator, we can see that if we observe a vulnerability at time T after the release and the decay constant for defect discovery is λ, then the conditional distribution for the number of remaining defects is a Poisson distribution with expected number of defects \(\mu e^{-\lambda T}\).

Hence:

\[
P(n \mid E) \;=\; \frac{\bigl(\mu e^{-\lambda T}\bigr)^{n-1}}{(n-1)!}\; e^{-\mu e^{-\lambda T}}, \qquad n \ge 1
\]
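As a quick numerical sanity check of this posterior (the prior mean μ, decay constant λ and observation time T below are assumed values, used only to exercise the formula):

    # Numerical check of the posterior over the number of defects, using assumed parameters.
    from math import exp, factorial

    mu, lam, T = 10.0, 0.5, 2.0      # assumed: prior mean defects, discovery decay rate, observation time
    h = 1e-4                          # small window for the discovery event

    def prior(n):                     # Poisson prior on the number of defects at release
        return exp(-mu) * mu ** n / factorial(n)

    def likelihood(n):                # P(E | n): first discovery falls in (T, T+h]
        return exp(-n * lam * T) - exp(-n * lam * (T + h))

    # Posterior by direct application of Bayes' theorem (truncated at a large n).
    N_MAX = 100
    evidence = sum(likelihood(m) * prior(m) for m in range(1, N_MAX))
    posterior = [likelihood(n) * prior(n) / evidence for n in range(1, N_MAX)]

    posterior_mean = sum(n * p for n, p in zip(range(1, N_MAX), posterior))
    print(f"Posterior mean defects: {posterior_mean:.3f}")
    print(f"1 + mu*exp(-lam*T)    : {1 + mu * exp(-lam * T):.3f}")

The brute-force posterior mean matches 1 + μe^{-λT}: the defect just observed plus a Poisson-distributed count of those remaining.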

The reliability function (also called the survival function) represents the probability that a system will survive to a specified time t. Reliability is expressed as either MTBF (mean time between failures) or MTTF (mean time to failure); the choice of term depends on the system being analysed. In the case of system security, it relates to the time that the system can be expected to survive when exposed to attack. This function is hence defined as:

\[
R(t) \;=\; 1 - F(t)
\]

The function F(t) above is the probability that the system will fail within time t. As such, it is the failure distribution function (also called the unreliability function). The randomly distributed life of the system can be represented by a density function f(t), and thus the reliability function can be expressed as:

\[
R(t) \;=\; 1 - F(t) \;=\; 1 - \int_{0}^{t} f(x)\,dx \;=\; \int_{t}^{\infty} f(x)\,dx
\]

The time to failure of a system under attack can be expressed as an exponential density function:

\[
f(t) \;=\; \frac{1}{\theta}\, e^{-t/\theta}, \qquad t \ge 0
\]

where θ is the mean survival time of the system when in the hostile environment and t is the time of interest (the time over which we wish to evaluate the survival of the system). Together, the reliability function R(t) can be expressed as:

\[
R(t) \;=\; e^{-t/\theta}
\]

The mean (θ), or expected life of the system under hostile conditions, can hence be expressed as:

\[
\theta \;=\; \int_{0}^{\infty} t\, f(t)\,dt \;=\; \int_{0}^{\infty} R(t)\,dt \;=\; M
\]

where M is the MTBF of the system or component under test and λ is the instantaneous failure rate. Mean life and failure rate are related by the formula:

\[
\lambda \;=\; \frac{1}{M} \;=\; \frac{1}{\theta}
\]

The failure rate for a specific time interval can also be expressed as:

\[
\lambda(t_1, t_2) \;=\; \frac{R(t_1) - R(t_2)}{(t_2 - t_1)\,R(t_1)}
\]
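A small sketch tying these quantities together (the mean survival time is an assumed figure, not a measurement):

    # Sketch of the exponential reliability model; theta is an assumed mean survival time.
    from math import exp

    theta = 90.0                      # assumed mean survival time under attack, in days (M = theta)
    failure_rate = 1.0 / theta        # instantaneous failure rate, lambda = 1/M

    def reliability(t):
        """R(t) = e^(-t/theta): probability the system survives beyond time t."""
        return exp(-t / theta)

    def interval_failure_rate(t1, t2):
        """Average failure rate over (t1, t2): (R(t1) - R(t2)) / ((t2 - t1) * R(t1))."""
        return (reliability(t1) - reliability(t2)) / ((t2 - t1) * reliability(t1))

    print(f"R(30 days)  = {reliability(30):.3f}")
    print(f"R(180 days) = {reliability(180):.3f}")
    print(f"Failure rate over days 30-60: {interval_failure_rate(30, 60):.4f} per day")
    print(f"Instantaneous failure rate  : {failure_rate:.4f} per day")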

Combining R(t) = 1 - F(t) with the posterior for the number of vulnerabilities derived above, we can see that the reliability of the SMS function can be expressed as:

\[
R_{SMS}(t) \;=\; e^{-\lambda_{SMS}\, t},
\]

where λ_SMS is the compromise rate of the SMS function implied by its expected number of vulnerabilities.

What this means is that the SMS-only function has a limit of R(t) = 0 as t → ∞. That is, the longer the application is running, the less secure it is.

Adding an independent second factor goes some way to mitigating this issue, as long as the combined R(t) does not tend to 0 as t → ∞ as quickly, and resists compromise more effectively than the SMS function by itself.
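To illustrate (the compromise rates below are assumed for the sake of the example): with two independent factors, both channels must be compromised for a fraud to occur, so the combined survival curve decays far more slowly than the SMS-only curve.

    # Illustrative comparison of SMS-only vs. two-factor survival; rates are assumed values.
    from math import exp

    lam_sms = 1.0 / 90.0     # assumed compromise rate of the SMS channel (per day)
    lam_pin = 1.0 / 365.0    # assumed compromise rate of the second, independent factor (per day)

    def r_sms(t):
        """Survival (non-compromise) probability of the SMS-only system at time t."""
        return exp(-lam_sms * t)

    def r_combined(t):
        """Both independent factors must be compromised for a fraud, so
        F_combined(t) = F_sms(t) * F_pin(t) and R_combined(t) = 1 - F_combined(t)."""
        f_sms = 1.0 - exp(-lam_sms * t)
        f_pin = 1.0 - exp(-lam_pin * t)
        return 1.0 - f_sms * f_pin

    for days in (30, 90, 365, 1095):
        print(f"t={days:4d} days  R_sms={r_sms(days):.3f}  R_combined={r_combined(days):.3f}")

Even here the combined R(t) still tends to 0 eventually; the gain is in how slowly it gets there.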