Sunday, 14 February 2010

Reliability

I have noticed a severe lack of understanding of reliability and survival modelling among people in the IT security community over the last few days. As such, I am going to explain a few terms here to minimise (a little) some of this confusion.

Time Degrading Functions
A time degrading reliability function is one that approaches a minimal limit as time increases. That means that the longer the system runs, the more unreliable it becomes. In information security parlance, this means that as the number of users and the running time increase, the security of the system decreases (in some cases to zero).
Hence:

  • R(t) → 0 as t → ∞
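
As a minimal sketch of this behaviour, assuming an exponential form for the reliability function (the post only fixes the limiting behaviour, so the exponential shape and the failure rate below are illustrative assumptions):

```python
import math

def reliability(t: float, failure_rate: float = 0.05) -> float:
    """Illustrative time degrading reliability function R(t) = exp(-lambda * t).

    As t grows, R(t) approaches 0: the longer the system runs,
    the less likely it is to still be operating securely.
    """
    return math.exp(-failure_rate * t)

if __name__ == "__main__":
    for t in (0, 10, 50, 100, 500):
        print(f"R({t}) = {reliability(t):.4f}")
```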
Limiting Functions
Software vulnerabilities are a limited (bounded) function. In any piece of software code there is a finite number of bugs. These are not known in advance, but they can be estimated. Unlike a time degrading function, the discovery of each bug lowers the number of remaining bugs and increases the expected time to discovery of the next.

In this case, the longer the software is used and the more users it has, the fewer vulnerabilities will remain (see the sketch below).
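
A rough simulation of this depletion effect, assuming a fixed hypothetical pool of bugs and an independent per-user chance of hitting each remaining bug per period; the pool size and probabilities are illustrative, not estimates:

```python
import random

def simulate_discovery(total_bugs: int = 100,
                       users: int = 50,
                       p_find: float = 0.001,
                       periods: int = 200,
                       seed: int = 1) -> list[int]:
    """Simulate discoveries from a finite bug pool.

    Each period, every user has probability p_find of finding each
    remaining bug. As the pool shrinks, discoveries per period fall
    and the expected wait until the next discovery grows.
    """
    rng = random.Random(seed)
    remaining = total_bugs
    found_per_period = []
    for _ in range(periods):
        # Chance that at least one of the users finds a given remaining bug.
        p_any = 1 - (1 - p_find) ** users
        found = sum(rng.random() < p_any for _ in range(remaining))
        remaining -= found
        found_per_period.append(found)
    return found_per_period

if __name__ == "__main__":
    counts = simulate_discovery()
    print("first 10 periods:", counts[:10])
    print("last 10 periods: ", counts[-10:])
```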

The issue, of course, is the rate at which new vulnerabilities are introduced through patching. If the rate of adding bugs through vulnerability patching is less than one additional expected bug for each patched bug (at the mean), then the software will tend over time to become more secure.
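
To make the "less than one new bug per fix" condition concrete: if each patched bug introduces on average p new bugs, the expected total number of bugs ever present is a geometric series, N₀(1 + p + p² + ...) = N₀ / (1 − p), which is finite only when p < 1. A small sketch of that arithmetic (the values of N₀ and p below are made up for illustration):

```python
def expected_total_bugs(initial_bugs: float, bugs_per_fix: float) -> float:
    """Expected total bugs ever introduced when each fix adds
    `bugs_per_fix` new bugs on average (sum of a geometric series)."""
    if bugs_per_fix >= 1:
        return float("inf")  # patching never catches up
    return initial_bugs / (1 - bugs_per_fix)

if __name__ == "__main__":
    for p in (0.2, 0.5, 0.9, 1.0):
        print(f"p = {p}: expected total bugs = {expected_total_bugs(100, p)}")
```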

The complication is that old software becomes obsolete over time and is replaced with new, and hence buggy, software.

Finding software bugs can hence be modelled as a Cobb-Douglas function with decay over time (x-axis) and an input rate based on the number of users (y-axis). This means the rate at which bugs are discovered increases as the number of users of the software increases. It also means that, over time, fewer vulnerabilities remain in the software and the chance of a new vulnerability being discovered decreases.
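
A sketch of such a discovery-rate surface, assuming a Cobb-Douglas term in the number of users combined with exponential decay over time; the exponents and constants below are illustrative assumptions rather than fitted values:

```python
import math

def discovery_rate(t: float, users: float,
                   scale: float = 1.0,
                   user_elasticity: float = 0.6,
                   decay: float = 0.02) -> float:
    """Bug discovery rate modelled as a Cobb-Douglas function of users
    with decay over time:

        rate(t, u) = scale * u**user_elasticity * exp(-decay * t)

    More users raise the rate; as time passes and the remaining
    pool of vulnerabilities shrinks, the rate falls toward zero.
    """
    return scale * (users ** user_elasticity) * math.exp(-decay * t)

if __name__ == "__main__":
    for t in (0, 50, 200):
        for u in (10, 1000):
            print(f"t={t:>3}, users={u:>4}: rate={discovery_rate(t, u):.3f}")
```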
