Many people feel that it is not feasible to model risk quantitatively. This is false. In the past, many of the necessary calculations were computationally infeasible at worst and economically costly at best, but this has changed. The large volume of computational power now available, coupled with novel stochastic methods, has made it economically viable to calculate risk quantitatively with a high degree of accuracy. Risk can be measured as a function of time (as survival time), of finance (monetary value) or of any number of other processes.

As an example, a question as to the security of SMS-based banking applications was recently posed on the Security Focus mailing list.

The reality is that any SMS banking application should be a composite of multiple applications. In a system that couples an SMS response with a separate channel (such as a web page over SSL), the probability that the banking user is compromised and a fraud is committed, P(Compromise), can be calculated as:

P(Compromise) = P(C.SMS) x P(C.PIN)

where P(C.SMS) is the probability of compromising the SMS function and P(C.PIN) is the probability of compromising the user authentication method.

P(C.PIN) is related to the security of the GSM system itself without additional input. Since P(C.SMS) and P(C.PIN) are statistically independent, we can simply multiply these two probabilities to obtain P(Compromise).
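As a minimal sketch, the independence calculation above can be written out directly. The two probability values here are hypothetical placeholders for illustration, not measured compromise rates:

```python
# Hypothetical per-period compromise probabilities (illustrative only).
p_c_sms = 0.05   # assumed probability of compromising the SMS function
p_c_pin = 0.02   # assumed probability of compromising the authentication method

# The two events are statistically independent, so the joint
# probability of compromising both factors is the product.
p_compromise = p_c_sms * p_c_pin
print(p_compromise)  # ~0.001, far lower than either factor alone
```

Note that the product is never greater than either factor, which is the whole argument for the composite design.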

The reason for this is that (at present) the SMS and web functions are not the same process, and compromising one does not aid in compromising the other. With the uptake of 4G networks this may change, and the function will not remain so simple.

The probability that an SMS-only system can be cracked is simply P(C.SMS), and this is far higher than that of a system that deploys multiple methods.

For each application, we can use Bayes' theorem to model the number of vulnerabilities and the associated risk. For open ports, we can use the expected reliability of the software, together with the expected risk of each individual vulnerability, to model the expected risk of the application.


Over time, as vulnerabilities are uncovered, the system accumulates a growing number of known issues. Hence, confidence in the product decreases with time as a function of the SMS utility alone. This also means that mathematical observation can be used to produce better estimates of the number of vulnerabilities and attacks as more are uncovered.

It is thus possible to observe the time that elapses since the last discovery of a vulnerability. This value depends on the number of vulnerabilities in the system and on the number of users of the software. The more vulnerabilities there are, the faster the discovery rate of flaws. Likewise, the more users of the software, the faster the existing vulnerabilities are found (through both formal and adverse discovery).
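The discovery-rate argument above can be sketched numerically. This assumes discovery behaves as an exponential process whose rate is proportional to both the number of residual vulnerabilities and the number of users; the base rate and the helper `expected_time_to_next` are illustrative assumptions, not an established model:

```python
def expected_time_to_next(n_vulns, n_users, base_rate=1e-4):
    # Assumed model: each (vulnerability, user) pair contributes the same
    # small discovery rate, so the total rate is base_rate * n_vulns * n_users
    # and the expected wait to the next discovery is its reciprocal.
    return 1.0 / (base_rate * n_vulns * n_users)

small = expected_time_to_next(n_vulns=10, n_users=1_000)
large = expected_time_to_next(n_vulns=100, n_users=100_000)
print(small, large)  # the larger, busier system surfaces flaws far sooner
```

More vulnerabilities or more users both shorten the expected wait, matching the qualitative claim above.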

If we let E stand for the event in which a vulnerability is discovered between times T and T+h when there are n vulnerabilities in the software, and each vulnerability is discovered independently at an exponential rate λ, then for small h:

P(E | n) = nλe^(-nλT) x h

We can then use Bayes' Theorem to compute the probability that we have n bugs:

P(n | E) = P(E | n) x P(n) / Σ_k P(E | k) x P(k)

From this we see that, taking a Poisson prior with mean Λ for the number of defects n:

P(n | E) = (Λe^(-λT))^(n-1) x e^(-Λe^(-λT)) / (n-1)!

By summing the denominator we can see that if we observe a vulnerability at time T after the release and the decay constant for defect discovery is λ, then the conditional distribution for the number of remaining defects (n-1) is a Poisson distribution with expected number of defects Λe^(-λT).

Hence, the expected number of defects given a discovery at time T is:

E[n | E] = 1 + Λe^(-λT)

The reliability function (also called the survival function) represents the probability that a system will survive a specified time t. Reliability is expressed as either MTBF (mean time between failures) or MTTF (mean time to failure); the choice of term depends on the system being analysed. In the case of system security, it relates to the time that the system can be expected to survive when exposed to attack. This function is hence defined as:

R(t) = 1 - F(t)    (x.x1)

The function F(t) in x.x1 is the probability that the system will fail within the time 't'. As such, this function is the failure distribution function (also called the unreliability function). The randomly distributed expected life of the system (t) can be represented by a density function f(t), and thus the reliability function can be expressed as:

R(t) = 1 - F(t) = 1 - ∫[0,t] f(x) dx

The time to failure of a system under attack can be expressed as an exponential density function:

f(t) = (1/θ)e^(-t/θ)

where θ is the mean survival time of the system in the hostile environment and t is the time of interest (the time over which we wish to evaluate the survival of the system). Together with the above, the reliability function R(t) can be expressed as:

R(t) = e^(-t/θ)
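A small sketch of the exponential survival model above; the value of θ here is an assumed illustrative figure, not a measured survival time:

```python
import math

theta = 30.0  # assumed mean survival time under attack (e.g. days)

def density(t):
    # f(t) = (1/theta) * exp(-t/theta)
    return (1.0 / theta) * math.exp(-t / theta)

def reliability(t):
    # R(t) = integral of f from t to infinity = exp(-t/theta)
    return math.exp(-t / theta)

# R(0) = 1 (certain survival at release); after one mean lifetime,
# survival probability has fallen to e^-1.
print(reliability(0.0), reliability(theta))  # 1.0, ~0.3679
```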

The mean (θ) or expected life of the system under hostile conditions can hence be expressed as:

θ = ∫[0,∞] R(t) dt = M

where M is the MTBF of the system or component under test and λ is the instantaneous failure rate. Mean life and failure rate are related by the formula:

λ = 1/M

The failure rate for a specific time interval (t1, t2) can also be expressed as:

λ(t1, t2) = [R(t1) - R(t2)] / [(t2 - t1)R(t1)]
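The interval failure rate can be sketched for the exponential model, taking λ = 1/M as above. M is an illustrative value; the point is that as the interval shrinks, the interval rate approaches the instantaneous rate λ:

```python
import math

M = 30.0        # assumed MTBF (illustrative)
lam = 1.0 / M   # instantaneous failure rate

def R(t):
    return math.exp(-lam * t)

def interval_failure_rate(t1, t2):
    # [R(t1) - R(t2)] / [(t2 - t1) * R(t1)]
    return (R(t1) - R(t2)) / ((t2 - t1) * R(t1))

# For a short interval the result is close to lam itself.
print(interval_failure_rate(10.0, 10.001), lam)
```

For the exponential distribution the failure rate is constant (memoryless), which is why the interval estimate matches λ wherever the interval is placed.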

As λ = 1/M and R(t) = e^(-λt), we can see that the reliability of the SMS function can be expressed as:

R(t) = e^(-t/M)

What this means is that the SMS-only function has a limit of R(t) = 0 as t → ∞. The longer the application is running, the less secure it is.

Adding an independent second factor goes some way towards mitigating this issue, provided that the second factor's R(t) does not tend to 0 as t → ∞, or at least that it decays more slowly than the SMS function itself.
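The two-factor argument can be sketched by comparing the SMS-only compromise probability with the composite system's, assuming independent factors so that the compromise probabilities multiply. Both mean survival times here are hypothetical:

```python
import math

M_sms, M_pin = 30.0, 90.0  # assumed mean survival times (illustrative)

def p_compromise_sms_only(t):
    # 1 - R_sms(t): the SMS-only system fails once its single factor does.
    return 1 - math.exp(-t / M_sms)

def p_compromise_two_factor(t):
    # Both independent factors must be compromised:
    # (1 - R_sms(t)) * (1 - R_pin(t))
    return (1 - math.exp(-t / M_sms)) * (1 - math.exp(-t / M_pin))

for t in (10.0, 100.0, 1000.0):
    print(t, p_compromise_sms_only(t), p_compromise_two_factor(t))
```

At every t the composite probability is no greater than the SMS-only one, since it is the SMS-only figure multiplied by a factor at most 1; but both still tend to 1 as t → ∞, which is why the decay rate of the second factor matters.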