Saturday, 8 October 2011

We test insecurity but do not measure security

Right now, we test insecurity and believe that this makes us secure.

Even the methods are wrong. One of the fundamentals of science is that we cannot prove a negative. Some dispute this, but they misunderstand the concept of proof. What we actually do is provide evidence to support a hypothesis: we select a likely postulate based on what the evidence at hand seems to tell us.

Now, what we cannot do is assert that we have seen all possible failures and hence that no failures exist. Nor can we assert that we have seen all the vulnerabilities we can ever expect to encounter.

He who knows only his own side of the case, knows little of that. His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side; if he does not so much as know what they are, he has no ground for preferring either opinion. [1]

This is cogent when we consider how we look at security testing. Do not get me wrong: penetration testing has a place. When conducted by a skilled tester (and it is far more an art than a science), penetration testing can have positive effects. It can be used to demonstrate the holes we have in a system and to make management and others take an issue seriously.

What an ethical attack or penetration test cannot do is tell us that we are secure.

The best we can hope for is that we have:

  • A skilled tester on a good day [3],
  • That we were fortunate enough to have the test find the main vulnerabilities within the scope and time constraints [2],
  • That we happen to be lucky enough to actually find the flaws [4], and
  • That the flaw was open at the time of testing.
These, of course, are only the tip of the iceberg, but basically, what a penetration test tells us is that we have no glaringly open holes within the scope of the report (we hope).

That does not mean we are secure.

In an upcoming paper [5] to be presented at the 2011 International Conference on Business Intelligence and Financial Engineering in Hong Kong in December, we report the results of common system audits.

Not that I see this winning me any popularity with auditors and testers (nor do I think I will ever be working for an audit firm again following the release of the paper), but we show that many systems said to be secure as a result of passing a compliance check are not actually secure.

Basically, there are few incentives other than reputation to hold a tester to account, and the field is filled with many whose skills are inadequate. The reason, we believe, is that there is little downside: even a poorly skilled tester can maintain a business and continue to win work in this field.

It is an all too common state of affairs to see the software vendors blamed for the lack of security in systems, but it is rare to see the auditors and testers called to account. We propose a notion of negligence and tort-based responsibility for the inattentive auditor. The auditor would be liable for errors and failures, with a comparative liability scheme proposed to enforce this: a client's failure to implement controls in a timely manner, or its concealment of information from the auditor, would mitigate the auditor's liability.

This would require a radical rethinking of the ways that we currently implement and monitor information security and risk. In place of testing common checklist items, such as a password change policy, and merely determining the existence of controls [1], a regime of validating the effectiveness of controls and calculating the survivability of the system is proposed.
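As a rough illustration of the difference between checking that a control exists and checking that it works (footnote [1] below gives anti-virus licensing as the classic existence check), here is a minimal Python sketch. The scanner binary name, the wait time, and the assumption that an on-access scanner will remove an EICAR test file are my own illustrative choices, not a prescribed test procedure; how a given product reacts depends entirely on the scanner and its configuration.

# Sketch: an "existence" check versus a crude "effectiveness" check for anti-virus.
# The EICAR string is the standard, harmless anti-virus test pattern.
import os
import shutil
import tempfile
import time

EICAR = (r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
         r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*")

def av_exists(binary="clamscan"):
    # Existence check: is some scanner binary on the PATH? (the binary name is an assumption)
    return shutil.which(binary) is not None

def av_reacts_to_eicar(wait_seconds=10):
    # Crude effectiveness check: does the on-access scanner remove an EICAR test file?
    path = os.path.join(tempfile.gettempdir(), "eicar_test.txt")
    with open(path, "w") as handle:
        handle.write(EICAR)
    time.sleep(wait_seconds)      # give the on-access scanner time to act
    removed = not os.path.exists(path)
    if not removed:
        os.remove(path)           # clean up if nothing intervened
    return removed

if __name__ == "__main__":
    print("Scanner binary present:", av_exists())
    print("On-access scanner reacted to EICAR:", av_reacts_to_eicar())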

What we tested
In a review of 1,878 audit and risk reports conducted on Australian firms by the top 8 international audit and accounting firms, 29.8% of tests evaluated the effectiveness of the control process. Of these 560 reports, 78% of the controls tested were confirmed through the assurance of the organization under audit. The systems were validated to any level in only 6.5% of reports. Of these, the process rarely tested for effectiveness, but instead tested that the controls met the documented process. Audit practice in US- and UK-based audit firms does not differ significantly.
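To make the arithmetic behind these proportions explicit, here is a small Python sketch. The percentages come from the figures above; the absolute counts are my own rounding and are only indicative, and treating the 78% figure at the report level is a simplification of the control-level figure reported.

# Back-of-the-envelope check of the audit-report figures quoted above.
total_reports = 1878

# 29.8% of reports evaluated the effectiveness of the control process.
effectiveness_tested = round(total_reports * 0.298)    # roughly 560
print("Reports evaluating control effectiveness:", effectiveness_tested)

# Within those reports, 78% of the controls tested were "confirmed" only
# through the audited organisation's own assurance.
print("Share confirmed by auditee assurance alone:", 0.78)

# Systems were validated to any level in only 6.5% of all reports.
validated_any_level = round(total_reports * 0.065)     # roughly 122
print("Reports with any level of system validation:", validated_any_level)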

Installation guidelines provided by the Centre for Internet Security (CISecurity) [2] openly provide system benchmarks and scoring tools that contain the “consensus minimum due care security configuration recommendations” for the most widely deployed operating systems and applications in use. The baseline templates will not themselves stop a determined attacker, but can be used to demonstrate minimum due care and diligence. Only 32 of the 542 organizations analysed in this paper deployed this form of implementation standard.
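As an illustration of the kind of automated baseline check the CIS benchmarks encourage, here is a minimal sketch. The configuration file path and the two SSH settings are my own illustrative choices rather than items quoted from any particular benchmark, though both are commonly recommended hardening settings.

# Illustrative baseline check in the spirit of a CIS-style benchmark item.
from pathlib import Path

EXPECTED = {
    "PermitRootLogin": "no",           # disallow direct root logins over SSH
    "PasswordAuthentication": "no",    # require key-based authentication
}

def check_sshd_config(path="/etc/ssh/sshd_config"):
    # Return (setting, expected, found) tuples that fail the baseline.
    settings = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            settings[parts[0]] = parts[1].strip()
    failures = []
    for key, expected in EXPECTED.items():
        found = settings.get(key, "<not set>")
        if found.lower() != expected:
            failures.append((key, expected, found))
    return failures

if __name__ == "__main__":
    for key, expected, found in check_sshd_config():
        print("FAIL:", key, "expected", expected, "found", found)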
[Figure: Patching, just enough to be compliant, too little to be secure.]

The patch levels of many systems are displayed in the figure above. The complete data will be released in the paper [5].

What we do see, however, is that many systems are not maintained. Core systems including DNS, DHCP, routers and switches are often overlooked. In particular, core switches were found to be rarely maintained in any but a few organisations, and even in penetration tests these are commonly overlooked (and it was truly rare to see them checked in an audit).
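One small example of the kind of check that tends to be skipped: asking a name server what software version it advertises, which is often a quick indicator of how current the patching is. This sketch uses the third-party dnspython package; the server address is a documentation-range placeholder, and many well-run servers deliberately refuse or falsify this query, so an empty answer is not by itself a failure.

# Sketch: query a DNS server's advertised version (version.bind, CHAOS class).
import dns.message
import dns.query
import dns.rdataclass
import dns.rdatatype

def bind_version(server_ip, timeout=3):
    # Ask the server for its version.bind CHAOS TXT record, if it will answer.
    query = dns.message.make_query("version.bind", dns.rdatatype.TXT,
                                   dns.rdataclass.CH)
    response = dns.query.udp(query, server_ip, timeout=timeout)
    return [rdata.to_text() for rrset in response.answer for rdata in rrset]

if __name__ == "__main__":
    print(bind_version("192.0.2.53"))   # replace with the server under review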

As Aristotle (350 B.C.E.) noted:
“The same is true of crimes so great and terrible that no man living could be suspected of them: here too no precautions are taken. For all men guard against ordinary offences, just as they guard against ordinary diseases; but no one takes precautions against a disease that nobody has ever had.”

Incomplete information is not to be confused with imperfect information, in which players do not perfectly observe the actions of other players. The purpose of audit is to minimize the probability of incomplete information being used by management. For this to occur, information needs to be grounded in fact, and not be a function of simplicity or of what other parties do.

Most security compromises are a result of inadequate or poorly applied controls. They are rarely the “disease that nobody has ever had.”
 
Businesses need to demand more thorough audits and results that go beyond simply meeting a compliance checklist. These must cover patching at all levels of software (both system and applications) as well as the hardware it runs on. The failure of audits to "think outside the box", acting only as a watchdog, could ultimately be perceived as negligence on the part of all parties.

[1] Such control checks as anti-virus software licenses being up to date and a firewall being installed are common checklist items on most audits. Validation that the anti-virus software is actually functional, or that the firewall policy is effective, is rarely conducted.
[2] CIS benchmark and scoring tools are available from http://www.cisecurity.org/

References:
[1] John Stuart Mill, On Liberty
[2] Wright, C. (2006) “Ethical Attacks miss the point!”, System Control Journal, ISACA
[3] Wright, C. “Where Vulnerability Testing fails”, System Control Journal, ISACA (extended SANS Reading Room paper linked)
[4] Wright, C. (2005) “Beyond Vulnerability Scans — Security Considerations for Auditors”, ITAudit, Vol. 8, 15 Sept 2005, The IIA, USA
[5] Wright, C. “Who pays for a security violation? An assessment into the cost of lax security, negligence and risk, a glance into the looking glass.”

About the Author:
Craig Wright is the VP of GICSR in Australia. He holds the GSE, GSE-Malware and GSE-Compliance certifications from GIAC. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and e-commerce law and a Master's degree in mathematical statistics from Newcastle, and is working on his fourth IT-focused Master's degree (Masters in System Development) at Charles Sturt University, where he lectures in a Master's degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

2 comments:

Vinoth said...

Absolutely right, auditors must look beyond checklists before they provide a report of assurance to the client.

Dr Craig S Wright GSE said...

What we need to do is create our own processes to help ensure we cover all areas at a minimum, but not be constrained or limited by them. A checklist should be an aid to ensure we cover all systems, not a limiter.