Saturday, 5 December 2009

People Vs Worms.

In the previous post, I presented the results of a series of hazard modelling experiments on Windows XP. This post presents the analysis of attacks by class, in particular offering comparisons between systems that were compromised by an automated process (worms etc.) and those where the compromise involved at least some level of human interaction.

Snort was used as the IDS for this exercise. It provided the details of each attack, allowing an analysis of worm (automated) compromises against manual ones (attacker, script kiddie etc.).

Separating the results using the CVE data (as loaded with Snort etc.), we see a visible variation between the attack and survival plots for worms and those for manual attacks.
We see from the plot above that worms act faster against vulnerable systems, while interactive users (attackers) are more capable of compromising better-secured systems.

This is more easily seen on an overlay plot (below).
Displayed above is a plot of survival time for automated processes (green) overlaid with that for manual processes (red). The Loess fit for each is also incorporated into the plot.
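To make this concrete, an overlay of this kind can be reproduced along the following lines. This is a minimal sketch, assuming a hypothetical compromises.csv holding one row per compromise with columns class ('worm' or 'manual', derived from the Snort/CVE data), cis_score and hours; the actual analysis scripts and axis choices in the plot above may differ.

    import pandas as pd
    import matplotlib.pyplot as plt
    from statsmodels.nonparametric.smoothers_lowess import lowess

    records = pd.read_csv("compromises.csv")  # hypothetical file and columns

    fig, ax = plt.subplots()
    for label, colour in (("worm", "green"), ("manual", "red")):
        grp = records[records["class"] == label]
        # Raw points: CIS score against hours survived before compromise.
        ax.scatter(grp["cis_score"], grp["hours"], color=colour, s=10, label=label)
        # A Loess (lowess) smooth through each class, as in the overlay plot.
        smooth = lowess(grp["hours"], grp["cis_score"], frac=0.6)
        ax.plot(smooth[:, 0], smooth[:, 1], color=colour)
    ax.set_xlabel("CIS score")
    ax.set_ylabel("Survival time (hours)")
    ax.legend()
    plt.show()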

What we see from these results is that the more secure a system is (in this case, patched against known vulnerabilities), the more likely it is that any compromise was manually initiated. Likewise, less secure (unpatched and vulnerable) systems are exposed to more automated attacks (e.g. worms).

Friday, 4 December 2009

Measures with meaning

In order to create a set of quantifiable measures of information security conditions, we need a set of fixed and repeatable standards against which we can measure. For this, I have selected the Center for Internet Security (CIS) ratings and the SANS Top 20 vulnerabilities.

The experiment was designed around 48 Windows XP SP2 computers: 16 physical hosts and 32 virtual machines. The tests were run over a period of more than 600 days, from November 2007 to October 2009. When a physical host was compromised, it was taken offline for 10 days, during which it was rebuilt in a slightly different configuration. The 32 virtual hosts were built with differing levels of patching and were reverted to a VM snapshot following a compromise, at which point they would be re-patched and reassessed.

The measures on the hosts were calculated using the CIS Windows XP Professional metrics. 16 of the hosts had a measured number of critical vulnerabilities left unpatched (ranging from 1 to 10 unpatched vulnerabilities per host). The particular vulnerabilities were randomly selected on each host from the SANS Top 20.
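As an illustration of the randomisation, a minimal sketch follows; the identifiers below are placeholders standing in for the real SANS Top 20 entries, and the exact selection procedure is an assumption.

    import random

    sans_top20 = [f"SANS-{i:02d}" for i in range(1, 21)]  # placeholder identifiers

    def host_vulnerabilities() -> list[str]:
        """Draw 1 to 10 distinct entries to leave unpatched on a host."""
        return random.sample(sans_top20, random.randint(1, 10))

    configs = [host_vulnerabilities() for _ in range(16)]  # one draw per host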

The 32 virtual hosts were configured on a single high-end Red Hat server running Snort. No filtering was conducted, but all attacks were logged. The survival time for a host is defined as the time from when the host was placed live on the network until a local compromise occurred. Snort was used to record when the compromise occurred.
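The survival-time calculation itself is straightforward. A minimal sketch, assuming the go-live time of each host and the timestamp of the first Snort alert flagging a local compromise are available:

    from datetime import datetime

    def survival_days(went_live: datetime, first_compromise: datetime) -> float:
        """Days a host survived from going live until local compromise."""
        return (first_compromise - went_live).total_seconds() / 86400.0

    # Example: a host placed live on 1 Jan 2008 and compromised 4 days later.
    print(survival_days(datetime(2008, 1, 1), datetime(2008, 1, 5)))  # 4.0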

The 16 physical hosts were connected to a Cisco switch sitting behind a Red Hat host running Snort and acting as a forwarding router.

Each host, in both the physical and the virtual configuration, was placed on a '/29' network and assigned an internal IP in the 10.x.y.z range, with the Red Hat host taking the lower IP address and the host under test the upper. Static NAT was used to map a real IP address to each host in the range 203.X.Y.194 to 203.X.Y.242 with a netmask of 255.255.255.192. The full addresses are not reported at this time in order to minimise any impact on ongoing experiments.
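The addressing scheme can be sketched with Python's ipaddress module. The 10.0.0.0/24 range below is a placeholder, since the real internal and external addresses are withheld:

    import ipaddress

    internal = ipaddress.ip_network("10.0.0.0/24")  # placeholder range

    for subnet in internal.subnets(new_prefix=29):
        usable = list(subnet.hosts())               # 6 usable addresses per /29
        router, test_host = usable[0], usable[-1]   # lower to Red Hat, upper to XP
        print(f"{subnet}: router={router} host={test_host}")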

The iptables configuration on the two Red Hat systems allowed any IPv4 traffic from the Internet and blocked all IPv6 traffic. Internet hosts were allowed to connect to any system on any port. The only restriction was designed to block traffic between the Windows XP hosts and any other host on the same network. This allowed a host to be compromised from the Internet, but a compromised host could not see another host on the same network. The Windows XP firewall was disabled for all CIS scores less than 90 and for some hosts with scores greater than 90 (although it is difficult to create a host with a score greater than 90 with the firewall disabled).
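A sketch of that policy, expressed as iptables rule strings; the subnet is a placeholder and the real rule set was not published with this post:

    # Placeholder for the internal 10.x.y.z test range.
    XP_SUBNET = "10.0.0.0/24"

    rules = [
        # Block host-to-host traffic inside the test network, both directions.
        f"iptables -A FORWARD -s {XP_SUBNET} -d {XP_SUBNET} -j DROP",
        # Allow anything else (IPv4 from the Internet) through to the test hosts.
        "iptables -A FORWARD -j ACCEPT",
        # Drop all IPv6.
        "ip6tables -P FORWARD DROP",
        "ip6tables -P INPUT DROP",
    ]
    for rule in rules:
        print(rule)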

This was done to create a level of independence, with attackers having to compromise each system in the same way and not being able to "hop" across systems (as occurs in real compromises). The goal of this experiment was to record initial compromises and not the subsequent process (that being the goal of a separate and ongoing experiment).

The times and measures have all been recorded and analysed. No web browsing or other internal activity was conducted from the systems under test.
The scatterplot above plots the measured score, using the CIS scoring system, against the time it took to compromise each host. We see that there was a significant benefit in achieving a score of 80+. Any score of less than 40 was compromised relatively quickly. A score of 46 was compromised within 24 hours. All scores of 60+ remained uncompromised for at least a week. One host with a score of 59 on the CIS scale remained uncompromised for 98 days.
The greater the number of vulnerabilities a system has, the faster it is compromised. No system with 6 or more unpatched, network-accessible vulnerabilities remained uncompromised for more than 15 days. A compromise occurred in as little as 4 days on systems with 2 vulnerabilities.

Similar results were recorded for the hosts in the VM group (blue) and the physical group (red) in the scatterplot above. A Loess best fit has been applied to this scatterplot, marking the expected survival time by CIS score. As the score increases, the variance also increases, but this can be seen as a function of the increasing survival times.
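One way to quantify these effects (not necessarily the method used here) is a Cox proportional hazards fit of survival time against the CIS score and the number of unpatched vulnerabilities. A hedged sketch using the lifelines package, with hypothetical file and column names:

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical columns: days survived, 0/1 compromised flag, covariates.
    df = pd.read_csv("hosts.csv")[["days", "compromised", "cis_score", "n_vulns"]]

    cph = CoxPHFitter()
    cph.fit(df, duration_col="days", event_col="compromised")
    cph.print_summary()  # expect hazard ratio > 1 for n_vulns, < 1 for cis_score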

The next post will report the analysis of worm (automated) compromises against manual (attacker, script kiddie etc).

Tuesday, 1 December 2009

Type I error and monitoring intrusions

The following post contains a few notes on experiments into incident analysis and the rate of error in determination.

An experiment was conducted in which known incidents were replayed. Existing PCAP capture traces from client sites with known attack and incident patterns were loaded into an analysis system for evaluation. The OSSIM and BASE frontends to Snort were deployed for this exercise.

SQL scripts were altered to introduce a random lag into the responses, and tcpdump was used to replay the PCAP traces as if they occurred 'live'. The analyst had to decide whether each incident was worth escalating or should be noted and bypassed. The results of this process are reported below through a display of Type I errors.
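The lag injection can be sketched as a simple wrapper. This is an illustration at the application layer rather than the altered SQL scripts themselves, and the parameter values are assumptions:

    import random
    import time

    def delayed_response(fetch, max_lag_seconds: float = 10.0):
        """Run fetch(), then sleep a random 0..max_lag interval before returning."""
        result = fetch()
        lag = random.uniform(0.0, max_lag_seconds)
        time.sleep(lag)
        return result, lag  # the lag is logged against the analyst's decision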
It is easy to see that as the response time of the system increases, so does the analyst's error rate. Basically, the lag in returning information to the analyst has a direct causal effect: the longer the lag between requesting a page and the page being returned, the greater the error rate in classifying events.

To this we can add a Loess-calculated plot of the expected error against time.
In this plot we can clearly see that the slope increases sharply after around 4 seconds. As such, it is critical to ensure that responses to queries are returned in under 3 to 4 seconds.
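A sketch of how that knee can be located from the raw decisions, using a lowess smooth and its slope; the column names are hypothetical and the threshold rule is deliberately rough:

    import numpy as np
    import pandas as pd
    from statsmodels.nonparametric.smoothers_lowess import lowess

    obs = pd.read_csv("analyst_decisions.csv")  # hypothetical: lag_s, type1_error (0/1)
    smooth = lowess(obs["type1_error"], obs["lag_s"], frac=0.5)
    x, y = smooth[:, 0], smooth[:, 1]
    slope = np.gradient(y, x)                   # smoothed d(error rate)/d(lag)
    knee = x[np.argmax(slope > slope.mean())]   # first point of above-average slope
    print(f"error rate climbs sharply near {knee:.1f} s")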