This is the original unpublished research paper I completed in 2004 that led to a couple of published papers in audit and security journals, and also to a SANS project. I hope that I have become a little more diplomatic in my writing in the intervening years (Nah...).
Here we show that “Ethical Attacks” often do not provide the benefits they purport to offer. In fact, it will be shown that this type of service may be detrimental to the overall security of an organisation.
It has been extensively argued that blind or black box testing can act as a substitute for more in-depth internal tests by finding the flaws and allowing them to be fixed before they are exploited. This article will show that not only is the premise that external tests are more likely to uncover vulnerabilities inherently flawed, but also that this style of testing may actually result in an organisation being more vulnerable to attack.
“Ethical Attacks” or as more commonly described “(white hat) hacker attacks” have become widely utilised tools in the organisational goal of risk mitigation. The legislative and commercial drivers are a pervasive force behind this push.
This misconceived premise results in the mistrust of the very people entrusted to assess risk, detect vulnerabilities and report on threats to an organisation. Effectively this places the auditors in a position of censure and metaphorically “ties their hands behind their backs”.
Externally sourced auditors are charged at an agreed rate for the time expended. Both internal and external testing work to a fixed cost.
Further, audit staff are limited in number compared to the attackers waiting to gain entry through the back door. It is a simple fact that the pervasiveness of the Internet has opened organisations to a previously unprecedented level of attack and risk. Where vulnerabilities could remain open for years in the past without undue risk, systems are unlikely to last a week unpatched today.
The foundation of the argument that an auditor has the same resources as an attacker must be judged false. There are numerous attackers, all “seeking the keys to the kingdom”, for each defender. There are also the commercial aspects of security control testing, and there are the realities of commerce to be faced.
It may be easier to give the customer what they perceive they want rather than to sell the benefits of what they need, but as security professionals, it is our role to ensure that we do what is right and not what is just easier.
What passes as an Audit
An “ethical attack” or “penetration testing” is a service designed to find and exploit (albeit legitimately) the vulnerabilities in a system rather than weaknesses in its controls. Conversely, an audit is a test of those controls in a scientific manner. An audit must by its nature be designed to be replicable and systematic through the collection and evaluation of empirical evidence.
This may result in cases where “penetration testing will succeed at detecting a vulnerability even though controls are functioning as they should be. Similarly, it is quite common for penetration testing to fail to detect a vulnerability even though controls are not operating at all as they should be” [i].
On the contrary, an attacker is often willing to leave a process running long after the budget of the auditor has been exhausted. A vulnerability that is obscure and difficult to find within the timeframe of an “external attack” is just as likely (if not more so) to be the one that compromises the integrity of your system as the one discovered early in the testing.
Though it is often cast in this manner, an external test is in no way an audit.
There are several methods used in conducting external tests:
- White box testing is a test where all of the data on a system is available to the auditor;
- Grey box tests deliver a sample of the systems to the auditor but not all relevant information;
- Black box tests are conducted “blind” with no prior knowledge of the systems to be tested.
To complete a “white box” test, the auditor needs to have evaluated all (or as close to all as is practical) of the controls and processes used on a system. These controls are tested to ensure that they are functionally correct and, if possible, that no undisclosed vulnerabilities exist. It is possible for disclosed vulnerabilities to exist on the system if they are documented as exceptions and the organisation understands and accepts the risk associated with not mitigating them.
The prevalence of tools-based tests generally limits the findings to well-known vulnerabilities and common misconfigurations, and is unlikely to uncover many serious system flaws within the timeframe of the checking process.
Black box testing (commonly also known as “hacker testing”) is conducted with little or no initial knowledge of the system. In this type of test the party testing the system is expected to determine not only the vulnerabilities which may exist, but also the systems that they have to check! This methodology relies heavily on tools based testing – far more so than Grey box tests.
One of the key failures of black box testing is the lack of a correctly determined fault model. The fault model is a list of things that may go wrong. For example a valid fault model for an IIS web server could include attacks against the underlying Microsoft Operating system, but would likely exclude Apache Web server vulnerabilities.
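The fault model described above can be thought of as a filter over the set of candidate checks. A minimal sketch, assuming invented check names and platform tags (none of these identifiers come from the paper):

```python
# A fault model as a filter over candidate vulnerability checks.
# Check names and platform tags are invented for illustration.

CHECKS = [
    ("iis_unicode_traversal",   {"iis", "windows"}),
    ("ms_rpc_overflow",         {"windows"}),
    ("apache_chunked_overflow", {"apache", "linux"}),
]

def relevant_checks(fault_model):
    """Keep only checks whose tags intersect the declared fault model."""
    return [name for name, tags in CHECKS if tags & fault_model]

# Fault model for an IIS server on Windows: Apache checks are excluded.
print(relevant_checks({"iis", "windows"}))
```

A black box tester with no fault model has no basis for this filtering and must run (and bill for) every check, relevant or not.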
“It is often stated as an axiom that protection can only be done right if it's built in from the beginning. Protection as an afterthought tends to be very expensive, time-consuming, and ineffective”[ii]
What is an Audit? (Or what should an audit be)
An IT audit is a test of the controls in place on a system. An audit should always find more exposures than an “ethical attack” due to the depth it should cover. The key to any evaluation of an audit is the previous phrase: “the depth it should cover”. Again, budgetary and skills constraints affect the audit process.
Often it is argued that a good checklist developed by a competent reviewer will make up for the lack of skills held by the work-floor audit member, but this person is less likely to know when they are not being entirely informed by the organisation they are meant to audit. Many “techies” will find great sport in feeding misinformation to an unskilled auditor leading to a compromise of the audit process. This of course has its roots in the near universal mistrust of the auditor in many sections of the community.
It needs to be stressed that the real reason for an audit is not the allocation of blame, but as a requirement in a process of continual improvement. One of the major failings in an audit is the propensity for organisations to seek to hide information from the auditor. This is true of many types of audit, not just IT.
From this table, it is possible to deduce that a report of findings issued from the penetration test would be taken to be significant when presented to an organisation’s management. Without taking reference to either the audit or the control results as to the total number of vulnerabilities on a system, the penetration test would appear to provide valuable information to an organisation.
However, when viewed against the total number of vulnerabilities which may be exploited on the system, the penetration test methodology fails to report a significant result. Of primary concern, the penetration test reported only 13.3% of the total number of high-level vulnerabilities which may be exploited externally on the test systems. Compared to the system audit, which reported 96.7% of the externally exploitable high-level vulnerabilities on the system, the penetration test methodology has been unsuccessful.
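The detection rates above are simple ratios of vulnerabilities found to vulnerabilities present. The counts below are illustrative only, chosen to reproduce the reported percentages; they are not the experiment's raw data.

```python
# Detection rate as found/total. The counts are hypothetical figures
# consistent with the 13.3% and 96.7% rates quoted in the text.

def detection_rate(found, total):
    """Percentage of the known vulnerability population that was reported."""
    return round(100 * found / total, 1)

total_high = 30  # assumed count of externally exploitable high-level vulns
print(detection_rate(4, total_high))   # pentest-style result: 13.3
print(detection_rate(29, total_high))  # audit-style result:   96.7
```

The point of the comparison is that a penetration test report looks substantial in isolation; only division by the true total exposes how little of the population it covered.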
External penetration testing is less effective than an IT audit
To demonstrate that an external penetration test is less effective than an IT audit, it is essential to show that both the number of high-level vulnerabilities detected and the total number of vulnerabilities discovered by the penetration test are significantly less than those discovered during an audit.
As may be seen in Figure 1 - Graph of Vulnerabilities found by Test type and Figure 2 - Graph of Vulnerabilities found by exploit type, both the total number of vulnerabilities discovered and the number of high-level vulnerabilities are appreciably lower in the penetration test results than in the audit results.
The primary indicators of the success of the penetration test would be both the detection of high-level vulnerabilities and the detection of a large number of vulnerabilities overall.
It is clear from Figure 3 - Graph of Vulnerabilities that the penetration test methodology reported a smaller number of exploitable external vulnerabilities, both as a whole and when comparing only the high-level vulnerability results.
It is not all Bad News
The key is sufficient planning. When an audit has been developed sufficiently, it becomes both a tool to ensure the smooth operation of an organisation and a method to understand the infrastructure more completely. Done correctly, an audit may be a tool that does more than point out vulnerabilities exploitable by external “hackers”. It may be used within an organisation to simultaneously gain an understanding of the current infrastructure and its associated risks, and to produce a roadmap towards where an organisation needs to be.
A complete audit will give more results and, more importantly, is more accurate than any external testing. The excess data needs to be viewed critically at this point, as not all findings will be ranked at the same level of importance. This is where external testing can be helpful.
After the completion of the audit and verification of the results, an external (preferably white box) test may be conducted to help prioritise the vulnerable parts of a system. This is the primary area where external testing has merit.
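The prioritisation step just described can be sketched as re-ranking the audit's (assumed complete) findings by whether an external test could actually reach them. All finding names and severities below are hypothetical.

```python
# Sketch: audit findings re-ranked by external reachability, then
# severity. All data here is invented for illustration.

audit_findings = {
    "weak_admin_password":  "high",
    "unpatched_web_server": "high",
    "stale_test_account":   "medium",
}
# Subset of findings an external white box test could exploit.
externally_reachable = {"unpatched_web_server"}

def prioritised(findings, reachable):
    """Externally reachable issues first, then descending severity."""
    severity_order = {"high": 0, "medium": 1, "low": 2}
    return sorted(findings,
                  key=lambda f: (f not in reachable,
                                 severity_order[findings[f]]))

print(prioritised(audit_findings, externally_reachable))
```

The design point: the external test contributes only the `reachable` set; the complete list of findings still comes from the audit, which is why the external test supplements rather than replaces it.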
“Blind testing” by smashing away randomly does not help this process. The more details an auditor has, the better they may do their role and the lower the risk.
Just as Edsger W. Dijkstra in his book “A Discipline of Programming” denigrates the concept of "debugging" as being necessitated by sloppy thinking, so too may we relegate external vulnerability tests to the toolbox of the ineffectual security professional.
In his lecture "The Humble Programmer", Edsger W. Dijkstra argued:
"Today a usual technique is to make a program and then to test it. But: program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence. The only effective way to raise the confidence level of a program significantly is to give proof for its correctness. But one should not first make the program and then prove its correctness, because then the requirement of providing the proof would only increase the poor programmer's burden. On the contrary: the programmer should let correctness proof and program grow hand in hand..."
Just as in programme development, where the best way of avoiding bugs is to formally structure development, systems design and audit need to be structured into the development phase rather than testing for vulnerabilities later.
It is necessary that the computer industry learns from the past. Recall Dijkstra's assertion that "the competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague...”[iv]. Similarly, security professionals, including testers and auditors, need to be aware of their limitations. Clever tricks and skills in the conduct of popular “hacker styled” testing are not effective.
As the market potential has grown, unscrupulous vendors have been quoted overemphasising dangers to expand customer base and in some cases selling products that may actually introduce more vulnerabilities than they protect against.
External testing is an immense industry. This needs to change. It is about time we started securing systems and not just reaping money in from them using ineffectual testing methodologies.
An audit is not designed to distribute blame. It is necessary that as many vulnerabilities affecting a system as possible are diagnosed and reported. The evidence clearly supports the assertion that external penetration testing is an ineffective method of assessing system vulnerabilities.
In some instances, it will not be possible or feasible to implement mitigating controls for all (even high-level) vulnerabilities. It is crucial however that all vulnerabilities are known and reported in order that compensating controls may be implemented.
The results of the experiment categorically show the ineffectiveness of vulnerability testing by "ethical attacks". This ineffectiveness in turn undermines the controls and countermeasures implemented in response to the findings.
This type of testing leaves an organisation's systems exposed and thus vulnerable to attack. The results of this experiment strongly support not using "ethical attacks" as a vulnerability reporting methodology.
The deployment of a secure system should be one of the goals in developing networks and information systems in the same way that meeting system performance objectives or business goals is essential in meeting an organisation’s functional goals.
I would like to thank Sonny Susilo for his help with this experiment and BDO for their support. In particular, I would like to thank Allan Granger from BDO for his unwavering belief in this research.
Web Sites and References
S.C.O.R.E. – a standard for information security testing http://www.sans.org/score/
The Auditor security collection is a Live-System based on KNOPPIX http://remote-exploit.org/
Nessus is an Open Source Security Testing toolset http://www.nessus.org/
In support of the assertions made within this paper, experimental research was conducted. The paper from this research has been completed and is available to support these assertions. First, the system tested is detailed as per the results of an audit. Next, a scan of the system is completed as black, grey and white box external tests.
The results of these tests below support the assertions made in this paper. The configuration of the testing tool has been tailored based on the knowledge of the systems as supplied.
[ii] Fred Cohen
[iii] Fred Cohen, http://www.sdmagazine.com/documents/s=818/sdm9809c/
[iv] Edsger W Dijkstra, EWD 340: The humble programmer published in Commun. ACM 15 (1972), 10: 859–866.