Friday, 7 November 2008

The Fundamental Maxims of Security

There are a number of maxims for the creation of a secure system in information technology. The question is where these come from and what they are.

The paper, "The Protection of Information in Computer Systems" by J. H. Saltzer and M. D. Schroeder [Proc. IEEE 63, 9 (Sept. 1975), pp. 1278-1308] was the watershed paper on this topic and the origins of he maxims that we take for granted today.

These maxims are the fundamentals of information security:

  1. Economy of mechanism: Keep the design as simple and small as possible.
  2. Fail-safe defaults: Base access decisions on permission rather than exclusion.
  3. Complete mediation: Every access to every object must be checked for authority.
  4. Open design: The design should not be secret.
  5. Separation of privilege: Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key.
  6. Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job.
  7. Least common mechanism: Minimize the amount of mechanism common to more than one user and depended on by all users.
  8. Psychological acceptability: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly.
Point 8 is commonly overlooked. To make a security system work, it needs to be accepted by the people using it. If we make a system too complex it will fail. If people perceive it as impeding their ability to do their job, they will find a way to bypass it.
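To make a couple of these concrete, here is a minimal sketch in Python of maxims 2 and 3 (fail-safe defaults and complete mediation). The permission table and resource names are mine, invented purely for illustration; they are not from the paper.

# Hypothetical illustration of maxims 2 and 3: fail-safe defaults and
# complete mediation. The permission table and resource names are invented.

PERMISSIONS = {
    # (user, resource) -> set of allowed actions; anything absent is denied
    ("alice", "/payroll/report.csv"): {"read"},
    ("bob", "/payroll/report.csv"): {"read", "write"},
}

def access_allowed(user: str, resource: str, action: str) -> bool:
    """Complete mediation: every access goes through this single check.
    Fail-safe default: if no explicit permission exists, the answer is no."""
    return action in PERMISSIONS.get((user, resource), set())

# Unknown users, resources or actions fall through to the default deny,
# never to an implicit allow.
assert access_allowed("alice", "/payroll/report.csv", "read")
assert not access_allowed("alice", "/payroll/report.csv", "write")
assert not access_allowed("mallory", "/payroll/report.csv", "read")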

These maxims are listed in the section of the paper called "Design Principles". This section begins by stating: "Whatever the level of functionality provided, the usefulness of a set of protection mechanisms depends upon the ability of a system to prevent security violations. In practice, producing a system at any level of functionality (except level one) that actually does prevent all such unauthorized acts has proved to be extremely difficult. Sophisticated users of most systems are aware of at least one way to crash the system, denying other users authorized access to stored information. Penetration exercises involving a large number of different general-purpose systems all have shown that users can construct programs that can obtain unauthorized access to information stored within. Even in systems designed and implemented with security as an important objective, design and implementation flaws provide paths that circumvent the intended access constraints. Design and construction techniques that systematically exclude flaws are the topic of much research activity, but no complete method applicable to the construction of large general-purpose systems exists yet. This difficulty is related to the negative quality of the requirement to prevent all unauthorized actions".

The proponents of a few applications (such as port knocking) should go over these maxims; they might realize that they are not meeting several of them.

Wednesday, 5 November 2008

Security & Economics

Honestly, I find it difficult to understand why people do not grasp why errors and low-quality software occur.

A comment was made as a question on Security Focus:
Why isn't Quality Assumed?
Why isn't Security Assumed?
Why are these concepts thought of as add-ons to applications and services?

Why do they need to be specified, when they should be taken for granted?
- Input Validation
- Boundary Conditions
- Encrypt Data as necessary
- Least Privilege Access
- White lists are better than Black lists
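As a small illustration of the input validation and white-list points, a check that accepts only known-good input is simpler than one that tries to enumerate everything bad. This is a hypothetical sketch; the field name and pattern are assumptions of mine, not from the original comment:

import re

# Hypothetical whitelist validation: accept only input matching a known-good
# pattern, rather than trying to blacklist every dangerous character.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def valid_username(value: str) -> bool:
    """Whitelist check: 3-32 characters, letters, digits and underscore only."""
    return bool(USERNAME_RE.fullmatch(value))

print(valid_username("craig_wright"))    # True  - matches the whitelist
print(valid_username("admin'; DROP--"))  # False - rejected without a blacklist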

It is simple economic theory; we are talking high-school level. Think about it for a moment and you will come to understand.

First, think of a few things in life outside IT. I will pose a few
questions and see if you can answer them:

  • Are all cars of the same quality? Why do you pay more for a Lexus than for a Hyundai?
  • Do you have to take out insurance on a trip?

Now some that are a little closer to home:
  • Are all door locks of the same quality?
  • Do all houses come with dead-bolts and alarm systems?
  • Do all cars have LoJack installed?
  • Do all windows on all houses have quality locks?
  • Are all windows made of Lucite (which is child-proof)?

The simple answer is that quality varies with cost. If you want more, you pay more. This is honestly a simple exercise. Quality software does exist. If you like, you can go to the old US TCSEC ("Orange Book") standards and have a class A1 software verification. Except that your copy of Windows XP or Vista will then cost $10,000+.

I do code reviews. They are needed both to verify the findings from the static analysis software used to test code and to gain a higher level of assurance. Even then, this is not perfect, as modeling complex interactions is more time-consuming and error-prone.

I can do around 190 to 220 lines of code an hour on a good day for a language such as C, and less for assembly. My rates are charged hourly. An analysis of XP would take over 50,000 man-hours at this level. This excludes the fixes. It excludes the add-ons.
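As a rough back-of-the-envelope check of that figure (the review scope below is an assumption chosen purely for illustration):

# Back-of-the-envelope estimate of manual code review effort.
# The scope figure is an assumption for illustration only.
lines_in_scope = 10_000_000   # assumed review scope, a fraction of the full OS
review_rate = 200             # lines of code per hour (mid-range of 190-220)

hours = lines_in_scope / review_rate
print(f"Review effort: {hours:,.0f} person-hours")  # 50,000 person-hours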

How many million lines of code are in Vista?

You get what you pay for.

Tuesday, 4 November 2008

Security Risk Metrics

To measure residual business risk we need to assess a number of areas. These should be measured using quantitative methods. Subjective evaluations of risk (qualitative measures) are based on perception; they have their place, in that they aid in measuring the perception of risk.

The more common risk frameworks are problematic in that they rely heavily on personal judgment. The correlation between personal judgments is skewed by both the ethos and the pathos of the argument. The logos (logical thought) is lost in the heart.

We need to break risk down into its components. Let’s start with the fundamentals and work up. If we start by decomposing risk (risk = the probability of a threat source exploiting a vulnerability), we have a few places to begin.
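As a minimal sketch of that decomposition (the probabilities and the loss figure below are placeholders, and multiplying through by an expected loss is a common extension of the definition above rather than anything claimed in this post):

# Hypothetical decomposition of risk into components.
# risk = P(threat event occurs) * P(vulnerability exploited | threat) * expected loss
p_threat = 0.30          # placeholder: annual probability a threat source targets us
p_exploit = 0.10         # placeholder: probability the vulnerability is then exploited
expected_loss = 500_000  # placeholder: loss in dollars if the exploit succeeds

annualised_risk = p_threat * p_exploit * expected_loss
print(f"Annualised risk exposure: ${annualised_risk:,.0f}")  # $15,000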

It is essential to create models that reflect the functions for each of these aspects.

Threats and threat modeling.

There are several data sources available that could supply the raw data necessary for the creation of a viable quantitative risk framework for security:

  1. DShield
  2. The Internet Storm Center
  3. CERTs
  4. Internal metrics, etc.
What is needed to create these models is to classify the data into manageable classes. By creating a set of categories that can be used to model both the organization and the attacker, the data from these sources can be divided into categories that provide a suitable framework. These would be organizational classifications to which we can align the ongoing risk. This will allow both predictive modeling (GARCH, ARIMA – heteroscedastic time series data) and point processes (the existing risk, the risk if we do X).
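As a minimal sketch of the point-process side (the daily counts are fabricated stand-ins for what a feed such as DShield or internal sensors would supply; a homogeneous Poisson process is about the simplest model that could be fitted):

import math

# Hypothetical daily counts of hostile connections against one service class,
# standing in for data from a feed such as DShield or internal sensors.
daily_attack_counts = [3, 0, 1, 4, 2, 0, 5, 1, 2, 3, 0, 1, 2, 4]

# Model attacks as a homogeneous Poisson process: the maximum-likelihood
# estimate of the daily rate is simply the sample mean.
rate = sum(daily_attack_counts) / len(daily_attack_counts)

# Probability of at least one attack on a given day under that model.
p_any_attack = 1 - math.exp(-rate)

print(f"Estimated attack rate: {rate:.2f} per day")
print(f"P(at least one attack in a day): {p_any_attack:.2%}")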

An organization itself needs to be categorized. To characterize an organization’s systems, network, applications etc., it is necessary to:
  1. Identify the access points into the network (i.e. gateways, remote access etc);
  2. Determine growth and future business needs;
  3. Make allowances for legacy systems which may affect the security design;
  4. Allow for business constraints (i.e. cost, legal requirements, existing access needs etc);
  5. Identify the known threats to, and the visibility of, the organization.
To do this, we need to look at both the technological needs and the business needs of that organization. We can start by mapping the risk into qualitative classes to ease people into more complete models involving distributions. At the very least, however, these will be classes based on an unbiased and non-subjective analysis.

Administrative Steps
Administrative processes impact operational issues and as such need to be noted, in particular areas such as policies and processes. Some of the areas to consider in analyzing the administrative controls of an organization include:
  1. Determine the organization's (business) goals
  2. Determine the organization's structure
  3. Determine the organization's geographical layout
  4. Determine current and future staffing requirements
  5. Determine the organization's existing policies and politics
Technical Steps
Applications and systems generally do not act in isolation. Consequently, it is necessary to consider more than just the primary application: you also need to investigate how the application interacts with other systems. Some of the things to check include:
  1. Identify applications
  2. Map information flow requirements
  3. Determine the organization's data-sharing requirements
  4. Determine the organization's network and server traffic and access requirements
Characterization
In characterizing an organization we have a number of stages that will quickly help to determine the risk stance taken. This means looking at the various applications and protocols deployed within the organization. For instance, have internal firewalls been deployed? Does centralized antivirus exist within the organization? The stages of characterization are generally conducted in the opposite order to a review. Rather than starting with policy, this type of characterization starts with the applications and works upward to see how well these fulfill the organization’s vision. The areas we need to consider are:
  1. Applications
  2. Network protocols
  3. Document the existing network
  4. Identify access points
  5. Identify business constraints
  6. Identify existing policies and procedures
  7. Review existing network security measures
  8. Summarize the existing security state of the organization
This information is vital to be able to understand an organization’s requirements:
  • What you need to be able to do to conduct your business,
  • What the system’s security should be set to permit, deny, and log, and
  • From where and by whom.
Next we need to consider the vulnerability and the attack. This is derived from survival models. I am hoping that we get a positive result from requests to the Honeynet Project, but there are alternatives even without this data.

Using a common framework, such as the CIS application and OS risk levels applied to selected configurations, will allow a more scientific determination. These can be aligned with honeypot data and mapped to DShield results.

Ease of Resolution and Ease of Exploitation
Some problems are easier to fix than others, and some are harder to exploit. Knowing how difficult it is to solve a problem will aid in assessing the amount of effort necessary to correct it. The following are given as human classifications, but the idea would be to use survival data: release a vulnerability into the wild and measure how long it takes to be exploited. This would be a continuous function, but we can set values at points to make it easy to explain to people.

The following are common classifications in risk analysis. However, at present they are subjective and biased metrics:

Trivial Resolution
The vulnerability can be resolved quickly and without risk of disruption.

Simple
The vulnerability can be mitigated through a reconfiguration of the vulnerable system, or by a patch. The risk of a disruption of services is present, but the diligent, direct effort needed to resolve the problem is acceptable.

Moderate
The vulnerability requires a patch to mitigate and poses a significant risk. For instance, an upgrade may be required.

Difficult
Mitigating the vulnerability requires an obscure patch, requires source-code editing, or is likely to result in an increased risk of service disruption. This type of problem is impractical to solve for mission-critical systems without careful scheduling.

Infeasible
A fix is infeasible when the vulnerability is due to a design-level flaw. This type of vulnerability cannot be mitigated through patching or reconfiguring the vulnerable software. It is possible that the only way of addressing the issue is to stop using the vulnerable service.

Trivial Exploitation
The vulnerability can be exploited quickly and with a low risk of detection.

How to improve these...
The answer is to start creating hazard/survival distributions for the various attacks and threats. Using readily available data, complex but accurate mathematical forecasting and modeling can be conducted even now.
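As a minimal sketch of that approach (the times-to-exploit below are invented; real figures would come from honeypot or disclosure-to-exploit records), a Kaplan-Meier estimate of how long a vulnerability "survives" unexploited might look like this:

# Hypothetical Kaplan-Meier estimate of vulnerability "survival" (time until
# first observed exploitation). The observations are invented for illustration.
# Each record: (days until exploited, or until observation ended, exploited?)
observations = [
    (2, True), (5, True), (5, True), (9, False),   # False = still unexploited (censored)
    (12, True), (20, False), (23, True), (30, False),
]

def kaplan_meier(obs):
    """Return (time, survival probability) pairs at each exploitation time."""
    survival, curve = 1.0, []
    at_risk = len(obs)
    for t in sorted({time for time, _ in obs}):
        events = sum(1 for time, exploited in obs if time == t and exploited)
        if events:
            survival *= 1 - events / at_risk
            curve.append((t, survival))
        at_risk -= sum(1 for time, _ in obs if time == t)
    return curve

for day, prob in kaplan_meier(observations):
    print(f"day {day:>2}: P(still unexploited) = {prob:.2f}")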

Monday, 3 November 2008

New Security and Forensic Concerns

Or Concerns of the Future...

The idea of the memristor has been around for a long time, but devices are only now starting to be built. HP's recent breakthroughs in this long-touted technology will radically change the face of computing in the years to come, allowing Moore's law to continue and accelerating the advance of storage and memory capacity.

Memristors combine several advantages of memory and disk-based storage into a single unit. Basically, think of combining a flash hard drive and DRAM into one package.

Great, new tech, but how does this really impact forensics and security?

The answer is mind-blowing when you think about it. Not only will the fundamentals of computational theory change when long-term and short-term memory start to combine, but memory will also become static (non-volatile).

What occurs when you pull the power cord on your computer now?

  • Now think: what if the computer state remains the same (like a super-hibernate)?
Think of memory forensics - this will be the norm as ALL storage will be memory.

We still have a few years - HP plans to offer these commercially by 2012 and some believe that these devices will replace the existing paradigms between 2014 and 2016. This may be a while, but things move quickly. Blink and say hello to tomorrow...

Sunday, 2 November 2008

SANS 504 by mentoring

A good friend of mine, Chris Mohan, is mentoring the SEC 504 course from SANS starting in January 2009.

SANS SECURITY 504 is the "Hacker Techniques, Exploits & Incident Handling" Course and it is worth 36 CPE Credits.

This is the course that is associated with the GIAC GCIH certification. As GCIH #6896 and a Gold certification holder, I highly recommend this course and Chris as a mentor. The real benefit of mentor sessions is that you can learn the material in depth over time.

See Chris' page for more details:
http://www.chris-mohan.com/?p=206