Tuesday, 4 November 2008

Security Risk Metrics

To measure residual business risk we need to assess a number of areas, and these should be measured using quantitative methods. Subjective (qualitative) evaluations of risk are based on perception. These have their place, but primarily in measuring the perception of risk rather than the risk itself.

The more common risk frameworks are problematic in that they rely heavily on personal judgment. The correlation between personal judgments is skewed by both the ethos and pathos of the argument; the logos (logical reasoning) is lost in the heart.

We need to break risk down into its components. Let’s start with the fundamentals and work up. If we begin with the definition of risk (risk = the probability of a threat source exploiting a vulnerability), we have a few places to start.

It is essential to create models that reflect the functions for each of these aspects.
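As a minimal sketch of what such a model looks like at its simplest, the snippet below multiplies out the components of the definition above into an annualized loss figure. The function name and the probabilities are illustrative placeholders, not values from any real data set, and the independence assumption is a simplification:

```python
# Hypothetical illustration: residual risk as the product of the
# probability that a threat source acts, the probability that the
# exploit succeeds against the vulnerability, and the expected loss.
# Assumes the two probabilities are independent (a simplification).
def residual_risk(p_threat, p_exploit, expected_loss):
    """Annualized residual risk under an independence assumption."""
    return p_threat * p_exploit * expected_loss

# e.g. a 30% chance of attack, a 10% chance the exploit succeeds,
# against an asset whose compromise costs $200,000:
risk = residual_risk(0.30, 0.10, 200_000)
print(round(risk, 2))
```

The point of the models discussed below is to replace the guessed probabilities in a sketch like this with values estimated from observed data.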

Threats and threat modeling

There are several data sources available that could supply the raw data necessary for the creation of a viable quantitative risk framework for security:

  1. DShield
  2. The Internet Storm Center
  3. CERTs
  4. Internal metrics, etc.
What is needed to create these models is to classify the data into manageable classes. By creating a set of categories that can be used to model an organization, and the same for the attacker, the data from these sources can be divided in a way that provides a suitable framework. These would be organizational classifications to which the ongoing risk can be aligned. This will allow both predictive modeling (GARCH, ARIMA and other methods for heteroscedastic time-series data) and point estimates (the existing risk; the risk if we do X).
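As a toy illustration of the predictive side, the sketch below fits a simple AR(1) model by least squares and produces a one-step-ahead forecast. A real analysis would use a full ARIMA or GARCH fit from a statistics package, and the weekly attack counts here are invented, not drawn from DShield or any other source:

```python
# Toy sketch: fit an AR(1) model x_t = phi * x_{t-1} + e_t to a
# (fabricated) series of weekly attack counts by least squares,
# then forecast the next observation.
def ar1_forecast(series):
    pairs = list(zip(series[:-1], series[1:]))   # (x_{t-1}, x_t) pairs
    num = sum(prev * cur for prev, cur in pairs)
    den = sum(prev * prev for prev, _ in pairs)
    phi = num / den                              # least-squares AR(1) coefficient
    return phi, phi * series[-1]                 # coefficient, one-step forecast

weekly_attacks = [120, 132, 118, 140, 151, 149, 160]  # invented data
phi, next_week = ar1_forecast(weekly_attacks)
print(round(phi, 3), round(next_week, 1))
```

The same fitted model can also answer the point-estimate questions: the "risk if we do X" is the forecast re-run against a series adjusted for the proposed change.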

An organization itself needs to be categorized. To characterize an organization’s systems, network, applications etc., it is necessary to:
  1. Identify the access points into the network (e.g. gateways, remote access, etc.);
  2. Determine growth and future business needs;
  3. Make allowances for legacy systems that may affect the security design;
  4. Allow for business constraints (e.g. cost, legal requirements, existing access needs, etc.);
  5. Identify the known threats to, and the visibility of, the organization.
To do this, we need to look at both the technological and the business needs of that organization. We can start by mapping the risk into qualitative classes to ease people into more complete models involving distributions. At the least, these classes will be based on an unbiased and non-subjective analysis.
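This mapping from a quantitative result back into familiar qualitative classes can be a simple thresholding step. The class labels and cut-off values below are arbitrary placeholders; in practice they would be derived from the fitted distributions rather than picked by hand:

```python
# Sketch of mapping a quantitative risk score (here, an annualized
# loss figure) onto the qualitative classes people are used to.
# The thresholds are illustrative placeholders only.
def risk_class(annualized_loss):
    thresholds = [(10_000, "Low"), (100_000, "Moderate"), (1_000_000, "High")]
    for limit, label in thresholds:
        if annualized_loss < limit:
            return label
    return "Critical"

print(risk_class(6_000))     # Low
print(risk_class(250_000))   # High
```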

Administrative Steps
Administrative processes impact operational issues and as such need to be noted, in particular areas such as policies and procedures. Some of the areas to consider in analyzing the administrative controls of an organization would include:
  1. Determine the organization’s (business) goals
  2. Determine the organization’s structure
  3. Determine the organization’s geographical layout
  4. Determine current and future staffing requirements
  5. Determine the organization’s existing policies and politics
Technical Steps
Applications and systems generally do not act in isolation. Consequently it is necessary to consider more than just the primary application; you also need to investigate how the application interacts with other systems. Some of the things to check include:
  1. Identifying Applications
  2. Map information flow requirements
  3. Determine the organization’s data-sharing requirements
  4. Determine the organization’s network and server traffic and access requirements
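One way to make the information-flow and data-sharing steps above concrete is to record the flows as a directed graph and then enumerate what each application can ultimately reach. This is an illustrative sketch only; the application names are invented:

```python
# Representing application information flows as a directed graph so
# that data-sharing requirements can be enumerated mechanically.
# The application names below are hypothetical examples.
flows = {
    "web_frontend": ["app_server"],
    "app_server": ["database", "payment_gateway"],
    "database": [],
    "payment_gateway": [],
}

def reachable(graph, start):
    """All systems that data originating at `start` can flow to."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(reachable(flows, "web_frontend")))
```

A map like this also makes the access-point question answerable directly: any node with an edge from outside the organization is an access point.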
Characterization
In characterizing an organization we have a number of stages that will quickly help to determine the risk stance taken. This means looking at the various applications and protocols deployed within the organization. For instance, have internal firewalls been deployed? Does centralized antivirus exist within the organization? The stages of characterization are generally conducted in the opposite order to a review: rather than starting with policy, this type of characterization starts with applications and works upward to see how well these fulfill the organization’s vision. The areas we need to consider are:
  1. Identify applications
  2. Identify network protocols
  3. Document the existing network
  4. Identify access points
  5. Identify business constraints
  6. Identify existing policy and procedures
  7. Review existing network security measures
  8. Summarize the existing security state of the organization
This information is vital to understanding an organization’s requirements:
  • what it needs to be able to do to conduct its business,
  • what the system’s security should be set to permit, deny, and log, and
  • from where and by whom.
Next we need to consider the vulnerability and the attack. This assessment is derived from survival models. I am hoping that we get a positive result from requests to the Honeynet Project, but there are alternatives even without these.

Applying a common framework, such as the CIS application and OS benchmarks, to selected configurations will allow a more scientific determination of risk levels. These can be aligned with honeypot data and mapped to DShield results.

Ease of Resolution and Ease of Exploitation
Some problems are easier to fix than others; some are harder to exploit. Knowing how difficult a problem is to solve will aid in assessing the amount of effort necessary to correct it. The following are given as human classifications, but the idea would be to use survival data: release a vulnerability into the wild and measure how long it takes to be exploited. This would be a continuous function, but we can set values at points along it to make it easy to explain to people.
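The standard tool for exactly this kind of time-to-event data is the Kaplan-Meier survival estimator, sketched below in plain Python. The times and censoring flags are invented for illustration; a real study would use observed time-to-exploit data, with censoring marking vulnerabilities not yet exploited when observation ended:

```python
# Sketch of a Kaplan-Meier survival estimate for "time (in days)
# until a vulnerability is exploited in the wild". The data below
# are invented; `observed` is False where the observation period
# ended before an exploit appeared (right-censoring).
def kaplan_meier(times, observed):
    # at tied times, events are processed before censorings
    data = sorted(zip(times, observed), key=lambda p: (p[0], not p[1]))
    at_risk, surv, curve = len(data), 1.0, []
    for t, event in data:
        if event:
            surv *= (at_risk - 1) / at_risk   # step the survival curve down
            curve.append((t, surv))
        at_risk -= 1
    return curve

days     = [3, 5, 5, 8, 12, 20]
observed = [True, True, False, True, True, False]
for t, s in kaplan_meier(days, observed):
    print(t, round(s, 3))
```

Reading a point off the resulting curve gives exactly the classification thresholds described below, but derived from data rather than from opinion.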

The following are common classifications in risk analysis. At present, however, they are subjective and biased metrics.

Trivial Resolution
The vulnerability can be resolved quickly and without risk of disruption.

Simple
The vulnerability can be mitigated through a reconfiguration of the vulnerable system, or by a patch. The risk through a disruption of services is present, but diligent direct effort to resolve the problem is acceptable.

Moderate
The vulnerability requires a patch to mitigate and poses a significant risk. For instance, an upgrade may be required.

Difficult
The mitigation of the vulnerability requires an obscure patch, requires source code editing, or is likely to result in an increased risk of service disruption. This type of problem is impractical to solve for mission-critical systems without careful scheduling.

Infeasible
A fix is infeasible when the vulnerability stems from a design-level flaw. This type of vulnerability cannot be mitigated by patching or reconfiguring the vulnerable software. It is possible that the only way of addressing the issue is to stop using the vulnerable service.

Trivial Exploitation
The vulnerability can be exploited quickly and with a low risk of detection.

How to improve these...
The answer is to start creating hazard/survival distributions for the various attacks and threats. Using data that is readily available, complex but accurate mathematical forecasting and modeling can be conducted even now.
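The simplest such distribution is the constant-hazard (exponential) model, for which the maximum-likelihood hazard rate has a closed form. The sketch below fits it to a handful of invented times-to-compromise; real data would come from honeypot or incident records, and a richer model (Weibull, or the non-parametric estimate above) would usually be preferred:

```python
# Sketch: fitting a constant-hazard (exponential) survival model to
# observed times-to-compromise. Under this model the maximum-
# likelihood hazard rate is simply events / total exposure time.
# The observations below are invented.
def exponential_hazard(times_to_compromise):
    return len(times_to_compromise) / sum(times_to_compromise)

days = [3.0, 5.0, 8.0, 12.0, 20.0]   # invented times, in days
lam = exponential_hazard(days)        # estimated exploits per day
print(round(lam, 4))
print(round(1 / lam, 1))              # expected days until compromise
```

A rate like this, estimated per threat class and per organizational category, is precisely the quantitative input the frameworks above are missing.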
