Saturday, 15 October 2011

Security News and Views Podcast

The weekly security podcast, “Security News and Views”, is up (MP3 and PDF). This week, we have many more SCADA issues and we continue to reward poor security practices.

http://craigswright.podbean.com/2011/10/15/security-news-and-views/

Changes in the cloud - Webinar Archive up


With IPv6 coming, there are many changes that people have not thought of. We look at the changing IT environment and how new technologies will make the cloud mainstream. We look at the nature of IPv6 and see how the distributed architecture and advances in mobile computing will make everything cloud based, even the computer on your desktop; that is, if you even have a desktop any more.

This session will be presented at a high level and will give a management overview of the things people have not been telling you so far.


Current Recorded Webinar:
Cloud.wmv
99MB
https://www2.gotomeeting.com/register/725285642

Friday, 14 October 2011

Logical Fallacies and the SCADA problem

The arguments for extreme events are interesting, and one has to wonder about the motivation behind them. The argument that a “STUXPOCALYPSE” will not occur and hence we need not worry about the security of critical systems is astounding.

Straw man

The first logical flaw is an argument based on a misrepresentation of an opponent's position.

The argument is not one of apocalypse; it is one of widespread damage. Here, 100 deaths and a few million dollars is considered widespread damage. This is quantitatively different from the end of the world. The point is not whether an attack against SCADA will result in the end of civilisation as we know it. Even World War II did not manage this with all the damage it created, and a SCADA cascade will not do anything so dire.

So one has to wonder what the motivations and imperatives are for those who attempt to downplay the security concerns surrounding unsecured control systems.

Using rhetorical tricks in order to mask the concerns around the security of control systems and to downplay the nature of these threats smells a little fishy.

False dichotomy or the fallacy of bifurcation

This brings us to the next flaw in the arguments: the supposition that two alternative statements are the only feasible options. This is of course not true, and such arguments abound here. There are many more possibilities than those who seem to want to hide the security flaws in SCADA systems will allow.

The attack and compromise of a PLC using fine control is presented as the only issue. Further, the only attack vector promoted is that of sites easily found through a Google dork search.

First, the sites that are both online and discoverable using a simple search engine query are a small minority. For each site that is poorly configured enough to have been indexed through a search engine, many hundreds exist online that have not been indexed.

In fact, none of the systems I have written about in recent weeks is accessible through a simple web search. That does not mean they are not accessible through the Internet.

NAT and similarly simple technologies leave these systems obscured but online. Here we see some advocacy of security through obscurity. This is a poor security control. It may help alleviate simple scanning-worm attacks to some extent, but the reality is that it is only to some extent.

Most of the attacks against Internet connected hosts are not targeted against the sites you can find on a Google dork list. They are more and more targeted against internal systems, where a compromised client system is leveraged to attack the internal systems.

An external attacker with a Flash-based exploit, a re-pinning attack against the client's JRE, or for that matter any number of malware and crimeware based exploits can bypass simple firewall and NAT controls. ATM networks associated with St George that are supposedly offline were impacted through a worm infestation. Rail Services Australia managed to have a scanning worm inside its secure network a few years ago, and just recently we have seen the US Army's drone network compromised by a password sniffing trojan.

Just being behind a firewall or a NAT device does not make you offline. It does stop some of the simple Google dork searches, but these have only ever been the tip of the proverbial iceberg.

Argumentum ad ignorantiam

Next, we have the oft-cited claim that is assumed to be true (or false) simply as a result of having not been proven false (or true). In some cases, these are claims that cannot be proven false (or true in the converse).

I face this one in court from time to time as well, where it can be taken to the extreme. In one instance, the barrister for the party opposing the one I was acting for as an expert witness, having decided that he could not attack the results I had obtained (the opposing expert had stated the same in a published paper that he neglected to mention in court), attacked my beliefs instead. I have a degree in theology (as well as in law, various sciences, mathematics, management and more) and I am a trustee and, from time to time, a lay pastor.

I was told in court that I cannot be a good scientist as I believe in imaginary beings (I believe in God). Basically, we have here an argumentum ad ignorantiam, an argument that cannot be either proven or disproven through science. That does not stop it from being deployed as an argument.

At the same time, we see this time and time again in calls to leave things as they are, to let sleeping dogs lie and to remain with obscurity and our heads in the sand safe in the knowledge that what we cannot see (foresee) will not hurt us.

But for SCADA systems, we have “I do not see how, therefore it cannot be”. In this, we look at the effects of attacking PLCs and the differences in these systems, and simply forget that most of these are controlled from Windows based systems, and that LynxOS, Windows CE and more act as agents.

Again, we assume this needs to be a nation state effort such as Stuxnet and forget that Stuxnet was a system designed for fine control and not simply chaos. Chaos is far easier to achieve than fine control. It takes a lot of effort, skilled people and technical knowledge to create a system that can be automated and left to run remotely.

Breaking a system… that is far simpler.

Red herring

One of my old favourites, so often used, is the attempt to distract one's readership (or listeners, if live) by going off topic. In this, the author introduces a separate argument that he believes will be simpler to address, and runs from the topic at hand.

There is a qualitative difference between cyber-terror and kinetic terror events.

Yet we see responses such as “For that matter, one could just get some C-4 and get a job at the facility long enough to plant a bomb”. Well yes, we could, and having completed a degree in organic chemistry specialising in fuel sciences (over a decade ago now), I also know just how likely you are to remove several fingers in the attempt to make it.

Yes, it is possible (although not as simple as the movies would make out) to obtain C4, Semtex and other forms of explosives containing RDX (cyclotrimethylene trinitramine) and PETN. But nothing is said of how these are peppered with 2,3-dimethyl-2,3-dinitrobutane (DMDNB), which serves both to trace the source and as a detective control.

Unfortunately for Bruce Willis, it is not actually as simple as it seems to sneak large quantities of C4 into Federal buildings unannounced anymore.

Fertiliser based explosives are easier, but even then you can expect to be investigated from time to time, and there is a level of risk with any kinetic engagement these days. This is why, for all the people out there wanting to blow things up in the US, it remains a rare event. It is not easy, and not all terrorists want to blow themselves up in order to achieve an objective.

This is why cyber-terror is qualitatively different.

You can access an online system from anywhere in the world. The independent hackers (cough, FSB sponsored) in Russia who attacked Estonia and Georgia never suffered any repercussions. In fact, it is not as simple as people think to organise a large scale kinetic attack; it requires a high degree of co-ordination and effort.

On the other hand, hackers have managed to obtain access to critical systems by accident. Here, we are not even thinking of the efforts of a former and disgruntled employee in attacking a water treatment plant (of course also getting caught, as he was stupid).

Then, even the Large Hadron Collider and US Drone control stations have been compromised without any real repercussions for the lead perpetrators.

That is what is really different here. To blow up a facility, you need to spend a lot of time, effort and money learning systems, building reputation and more, where you most likely have only one attempt (and which, as history shows us, fails more times than it succeeds, even if we remember the successes and forget the failed attempts).

To engage in a cyber-terror exercise on a vulnerable system requires skills that also allow an attacker to engage in cybercrime and hence fund activities (and lifestyle) whilst remaining relatively anonymous. More, you can be seated comfortably anywhere in the world and, as one detractor showed, you can even simply do a Google dork search for these systems and choose what you feel like opposing AFTER you have selected a target to attack.

Ad hominem

Staying with the red herring, we have a very special form of this, the ad hominem attack, where we attack the person so as to avoid facing the actual argument.

Here, we see comments such as “Please go back to writing entry level forensics books”. Not that writing guides for people starting in a field should be seen as a mark against one, and it ignores that doing so does not mean we do not also do high end academic research. But that would not suit the argument and would not allow the attack to seem as belittling.

This also comes in the form of an appeal to ridicule, where statements such as “For the apocalypse of stupid that will be happening thanks to the likes of CNN and the book of Langer and Wright” are used as an argument and the attempt is made to present one's opponent's argument as ridiculous. It is not actually a valid argument; it is just a form of petty attack.

We see this in attempts to ridicule such as:

“When he opened the seventh seal, there was silence in heaven as the malware began changing PLC code”

From the book of Langer & Wright: Revelation, Chapter 1, Verse 1

I guess this manages to bring us back to the straw man that has been supposed. In arguing widespread damage, it seems that this must be a Revelation level event or nothing we should be concerned with? I wonder whatever happened to the middle ground.

Appeal to motive is next, and here we have a situation where the premise is dismissed by calling into question the motives of its proposer. The basis is to say that this is all about money or similar. There are a number of flaws with this argument, not least of which is that I donate most of my SCADA time, and in making more work in this area I simply make life more difficult for myself. Basically, I do this as it helps the people I care for. Then again, motive was never a valid argument in any event.

I am still awaiting many of the other ad hominem attacks, such as:

  • Poisoning the well: Here adverse information is stated in order to discredit one's opponent. It can be true or not; it does not, of course, relate to the argument at hand. I did state one example above: saying that I believe in God (as a bad thing) as a reason why I cannot engage in scientific discourse (I also believe in evolution).
  • Appeal to spite: This is a specific type of appeal to emotion. In this fallacy, the argument is made by exploiting the listener's (or reader's) bitterness or spite towards the other (opposing) party and/or that party's beliefs, position etc.

Argumentum ad nauseam

This is an argument such as “We have discussed the security issues around SCADA for years, and nobody cares to discuss it anymore”.

Well, SOME people do not want to discuss this anymore. Then, nothing is making them do so. In fact, in actually engaging in the argument, they disprove this argument in their own actions.

Onus probandi

This is the logical fallacy based on a premise that the other party need not prove their claim, but that we must prove it false. Not as a hypothesis or any other such thing, but just as a matter of fact.

They cannot of course and hence we see this again and again.

Argumentum ad antiquitam

Here again we come to a conclusion that has its sole support in history. That is, it must be true as it has long been held to be true.

The argument goes along the lines of: we have not seen many SCADA attacks, thus there cannot be any SCADA attacks.

Well, the fact that we have not seen an event does not make it improbable. In fact, we have the issue here that the class of events in the 90’s was distinct from those in this decade. We are more connected and more systems are vulnerable.

Fallacy of the heap

How about we improperly reject the claim that SCADA systems are at risk simply due to imprecision? That is, as we cannot state which systems will be attacked and cannot state exactly when this may occur, we conclude that it can never occur.

Ummm… It seems that there is a consistent flaw in all this.

I can add many more fallacies…

Ignoratio elenchi

This is the constant use of irrelevant conclusions that miss the point. In some cases, the argument is valid in itself. However, it does not actually address the issue in question: SCADA systems are running insecurely, and the compromise of these systems can lead to a loss of life.

One such example would be the compromise of rail signalling systems. This could lead to a peak-hour collision of two oncoming commuter trains.

  • Is this the end of society as we know it? No.
  • Is this a tragedy? My God yes!

That is the point. Extending the loss of life to an argument where it is only valid if the entirety of society collapses is ludicrous at best.

Kettle logic

Here we see the use of multiple inconsistent arguments to defend a position.

EMP’s Man Made & Solar… Now There’s Your Apocalypse

Well… How about FUD?

Let us ignore the fact that making any real device that has a large scale effect is both difficult and expensive (and range limited) and jump to something that is truly FUD.

Economics 101

We have systems that are not difficult to secure. We say they are, but the reality is that people are the impediment, not the technology. In some cases, securing these systems will create a positive ROI from day one.

More, we have a situation where small investments can forestall large losses.

The argument is not that civilisation will end, but that small incremental improvements, some of which do not actually cost money or even time, can make us much safer.

Economics is all about incentives. It is about creating systems where people and groups do the right thing. Right now, we are creating externalities and not making those who have failed systems responsible for their failures.

The reason for this is that it costs money to implement a secure online system. If you can get away with not securing a system AND not have to face the consequences of a failure (when, and not if, it comes), you have an economic advantage over another party who secures to a level that any reasonable group would expect.

I for one have to wonder at the vitriol that some individuals hold for society if they can treat the loss of life and property as inconsequential simply because it has not resulted in the complete collapse of society.

Incentives

Right now we incentivise poor security practices. Those firms and organisations involved with SCADA systems who actually care to secure their systems are penalised. When we create negative incentives by bailing SCADA operators out of the trouble they have caused in running insecure systems, yet fail to offer any positive incentives to those groups who actually act in a manner consistent with giving a damn, we create less secure systems.

So, SCADA systems are online. We seem to have agreement that you can even find these (and this is the tip of the iceberg again) with a simple search. These are systems that have large scale effects.

Yes, it may be true that damaging a nuclear reactor in a manner that results in a meltdown is really beyond anything less than a nation state, but so what?

Loss of power to a city for a few days will result in lost lives (and I happen to care about the extremely young, old and infirm and others that seem to be overlooked in the opposing argument).

Again, WHY are some people trying to defend poor practice and NOT take SCADA operators who are ILLEGALLY running systems online to task?

Why do some people want to continue to incentivise poor security?

Where does this leave us?

World War II was a global and catastrophic event, but the earth still stands. So, do I think the Earth and civilisation will come to an end due to SCADA flaws (or FUD such as EMP/HEMP devices)?

No!

What is at stake is the loss of life and property that will result from compromised SCADA systems. Not just PLCs, as the opponents of this position like to presuppose, but Windows XP and other systems that act as controllers. A trojan on a Windows host allows an attacker to control the PLC without writing specialised malware such as Stuxnet.

You think this does not occur… Well, there you are wrong. The dumping of sewage in Queensland (here in AU) cost millions to clean, it cost businesses revenue, it cost jobs, and it also meant that many people in the area were unable to enjoy their properties in safety.

Well, I am the Australian in this “debate” so I am wondering why it is the other side who is making the “don’t worry she’ll be right mate” assertions?

 

About the Author:

Craig Wright is the VP of GICSR in Australia. He holds the GSE, GSE-Malware and GSE-Compliance certifications from GIAC. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and ecommerce law and a Masters degree in mathematical statistics from Newcastle, and is working on his fourth IT-focused Masters degree (Masters in System Development) at Charles Sturt University, where he lectures subjects in a Masters degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Cross-Posted.

Thursday, 13 October 2011

On altruism

There is an old saying, “don’t look a gift horse in the mouth”.

To those people who ask valid questions, offer constructive criticism (even if unfavourable) and more, I thank you sincerely. To the others, I have a rant to expound.

In writing, researching and publishing, I have seen that this is something we need to learn as a profession in information security. Do not get me wrong, there are many professionals out there who actually take note of what they receive and are thankful for it.

That stated, there is a vocal minority in our field who need to learn this lesson and do us all a grand disservice in their petty bitching.

I have published a number of papers in the last few weeks, and I do little to hide my email address, so, as would be expected, I have received comments. The majority of these have been favourable or at least constructive. But around 10-15% of the vocal people in the industry could learn a little about what they obtain for free.

It is not just me, I see this all the time.

I see people complaining that Facebook, a free service, has changed their look and owes them something. Grow up.

In my case, the “children” have come back with the following comments concerning a paper and research I did with a colleague:

  • You only modelled system behaviour.  Without looking at the browser it does not mean much.

Well, actually it does. Science has rules for experiments. You do not get good results that can be used to show a causal effect unless you create experiments that are designed for this. This means we have to control for all of the variables as much as is possible, excepting those you are seeking to test.

  •  You have not reported on X (replace X with a number of things and outcomes). In collecting this data you should have also been able to report on types of attacks and more. 

Yes, you are correct, there is a lot of work that can be done on a set of pcaps containing data about attacks. I plan to do this in time and I will also be offering some material for students to do research on. That stated, there are only so many hours in a day.

  •  You could have covered more and made this valuable if you extended the research into X.

OK, my bitch time. The experiment in this paper was not conducted under a grant. It was funded through a company I used to own. I could have used the money to go on a vacation, buy a better car and many other things. I used it for the purpose of my research.

In fact, I used to own two sports cars and a boat. I sold all of these in order to do some of these experiments. That was MY choice, I wanted the answers and I do not regret it one iota.

That stated, if you want to have me do more, fund me. If not, don't bitch about whether I have covered your pet project in my research. Remember, this was MY research. I may be attached to a university, but this does not mean that I do not use my own funds when I choose to.

For all I hear people complain about them, I will thank Microsoft. The Microsoft Academic Alliance has allowed me to legally install and license hundreds of hosts in the experiments I have been doing.

Without this program, I would not have been able to have completed the tests.

  • You did not test Linux/Mac/Android….

Again, did you pay for the research?

I have limited time and limited funds. I work 80-plus hours a week and I donate around 60 of them. Just to maintain my credentials, I have 25 exams a year right now. If you want more covered, you either fund me or my research (and this is a point for some people: my research) will focus where I want to have it focused.

I do commercial research and more importantly, I work at a University where we will have lots of eager post graduate students wanting to do applied research. You are not paying us, but in funding research you get to ask a question and frame it as you want and seek the answer in a format you want. If you want to have a specific topic investigated, pay for it to be researched.

I do have papers on other topics, one such example being linked here.

I do many simple tests and experiments such as:

And again: yes, I censor comments. I am the only person who gets to swear on my blog. It is, after all, MY blog and if you do not like that, too bad.

Finally.

No, my CV is NOT up to date either. As I am not actually looking, I have not made an effort to maintain it.

To those people who offer support and even constructive criticism, I thank you sincerely.

Why test household appliances?

Now, to start I will admit I have been called insane and far worse for my hobby. What is my hobby if you do not know? I break into online household appliances.

Yes, it is strange, but there are worse things to do.

Interestingly, in the average western household, there are many things that already have Internet connectivity. The following are a few things I have in my home already that are connected to the Internet:

  • Panasonic BluRay player (this is actually REALLY annoying as the firmware ALWAYS alerts that it requires an update when in the middle of a movie)
  • Panasonic Flatscreen TV.
  • Kenwood Stereo
  • HP Printer (actually I killed this and it is not working)
  • Vacuum cleaner (wireless and self charging)
  • Camera
  • Picture frame
  • Microwave (I really have not discovered why this is connected, but it gets firmware updates).
  • Electronic Piano (you can save your music, load effects and more)

In addition, I am trying to have some IPv6 enabled wireless light globes sent from the US. All that is just the tip of the iceberg. Fridges, power meters, washing machines and more are already connected.

The music and display devices are interesting. They all support media streaming on my Windows home network. So, a home SAN holds the media, and others can listen to the same media stored on the same centralised home storage devices. Better, with an IPv6 tunnel (and a REALLY GOOD Internet connection, when available), that media can be reached remotely.

How does this relate to security you ask?

Well, simple: any device is an avenue for an attack.

I stated that my TV and stereo are connected to the home network. They have credentials on the Windows “Home Group” (and I have not managed to have the stereo work with Linux, although the TV does). This means that the TV is an avenue for an attack.

Many of these devices run a cut down Linux kernel.

In doing this, I have managed to get myself in trouble. I had (I say had, as I broke it and the company voided the warranty) a Jura coffee maker. The vulnerability alert was not the issue; what occurred was that I had not known that Jura was a client of BDO (a former employer). A shame really, as BDO would not let me have the data on the coffee maker hack when I left. I guess I scared an accounting firm too much…

The craziest device was the Oral B wireless toothbrush. The reason for this is its “Separate Wireless SmartGuide: Helps promote optimised brushing performance”. I guess I am old fashioned. I just brush as I brush and no toothbrush is going to tell me otherwise.

Again… how is this related to security?

Yes, I will get to the point.

All of these devices have either an embedded Linux kernel or run Windows CE. Panasonic and Sony use embedded Linux. The Embedded Linux Wiki has a list of software emulators which can be used to develop exploits without always killing devices (as I have done many times in the past).

Embedded Linux is Linux. You can do MANY things on a cut down Linux host. In fact, my TV is more powerful than the Sun 3 series server I managed nearly two decades ago that ran the warehousing and logistics functions for a national distribution company. Some of the things to point out that embedded Linux allows include:

  • BusyBox has a range of tools for embedded Linux all ready and waiting to be installed.
  • NetCat. Yes, NC has been ported to run on your Sony TV.
  • Squid. You can run a proxy server.
  • SSHd (if you really need to although NetCat is sufficient and easier for the attacker)

Right now, I hear people panicking as their phones can be at risk. What about all the other avenues of attack?

In the future, we will have TVs as attack platforms. IPv6 is difficult to port scan – you have to find the devices first, and there are too many addresses to scan using nmap.
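
Some rough arithmetic in R shows why. This is a sketch only; the probe rate is an assumed figure for illustration:

# Why brute-force scanning of an IPv6 subnet is impractical.
# The probe rate below is an assumption for illustration only.
hosts_per_subnet <- 2^64                  # addresses in a single /64 subnet
probes_per_sec   <- 1e6                   # assume a very fast scanner: 1 million probes/second
seconds_per_year <- 60 * 60 * 24 * 365
hosts_per_subnet / probes_per_sec / seconds_per_year
# ~585,000 years to sweep just one subnet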

The answer: attack the DHCPv6 and multicast mechanisms. If you can discover an IPv6 enabled appliance, this becomes far simpler. More, with “Home networks” and even the incorporation of devices into corporate workgroups, the device will give you a list of systems to attack and scan.

IPv6 changes the game in many ways. It makes scanning for hosts a thing of the past. That stated, the future holds new and novel attacks that we need to plan for now. One of these is attacking embedded Linux and Windows CE based devices.

So why do I attack appliances?

This is not new, but we are starting to see devices with Internet connectivity by default.

The reason is that appliances are the way we will attack networks in the future. We can make extremely secure IPv6 workgroups using tools such as the “secure server” settings in Windows Group Policy, but all things are only as good as the weakest link and, right now, we are creating devices that will be those weak links.

So, when you start to see your light bulbs scanning your network… Remember, the future is now.

Planning and architecture matter and we need to consider the devices we are connecting to our networks. That is not JUST the hosts, but EVERYTHING!

About the Author:

Craig Wright is the VP of GICSR in Australia. He holds the GSE, GSE-Malware and GSE-Compliance certifications from GIAC. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and ecommerce law and a Masters degree in mathematical statistics from Newcastle, and is working on his fourth IT-focused Masters degree (Masters in System Development) at Charles Sturt University, where he lectures subjects in a Masters degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Wednesday, 12 October 2011

How IPv6 and the Cloud will help us be more secure.

We are starting to move to IPv6 and the cloud. Right now, the uptake is minimal at best with very few early adopters for all of the hype. That stated, the climate is changing. Soon, IP addresses will be on everything.

Network systems work on an exponential growth curve. Things are exponentially less expensive each year and incrementally more powerful. This will drive applications and uses that people have not even thought of. The following are ideas that have serious money already behind them and are just a few years from deployment:

  • Disposable communication tablets. Basically these could be dropped in places such as Iran and allow for communications no matter what the incumbent government tries to filter. Think $1 devices.
  • Milk, Coke cans and more in supermarkets with IP addresses and RFID. Why? Well, first, as they can integrate this with smart appliances, but more importantly, for merchandising and stock control. Who needs to do a stocktake when the store tells you what it contains?
  • Light bulbs with web and IP addressing. Well actually these are already available.

The thing is, there are many reasons why IP addresses will be used up quickly, and these are but a few. That is one reason why we will move to IPv6. Mobility and security are others.

The catch-cry of the 21st century will be, Anytime, Anywhere.

Done correctly, IPv6 can make for extremely secure networks. Right now, using Group Policy and a number of other tools if you have Linux or Macs, it is already possible to make a secure mobile network. It is more difficult under IPv4 due to the constraints on the protocol and the nature of DHCP (as against DHCPv6).

As much as I like Linux, I will just talk about Windows for this post, as it becomes far too complicated to start going into all the possibilities when Linux, Macs, Windows and other devices are involved. Microsoft have already published a number of good IPv6 implementation guides as well. More, they have detailed processes for implementing IPv6 through Group Policy. In time I will expand this series of posts to incorporate Linux and other devices, but the economy of time constrains us all.

Secure Server

In the domain world, Windows allows for the simple deployment of secure server and client trust models. This can also be achieved using workgroups, but like with Linux, it is more complex.

Using Group Policy, you can set a client to only talk using encrypted sessions and only to trusted servers.

Why restrict clients this way?

In a large organisation, client peer to peer communications can be a means of malware dissemination. There is rarely any real need, that cannot be met by the server, for clients to talk to clients over the network directly. Rather, they should be controlled via a server.

What this can allow is that only authorised client hosts are allowed to communicate with servers.

In an organisation, even mobile users can be forced to communicate to company servers. This is where the cloud becomes important. With disk encryption, IPv6 with IPSec enabled and the right controls, each and every host is firewalled.

[Diagram: a mobile client restricted to communicating only with allowed systems]

A mobile user (A) can be restricted to communicating only with allowed systems, such as a home office and the corporate servers (D).

All attempts to connect to the system with an untrusted host will be dropped as the host attempting this will not (we are assuming that keys have not been compromised here) be allowed to communicate through policy and host firewall rules. As stated, this is simple using Group Policy.

Here, the client host can be restricted to connecting to the organisational proxy server and no more.

Using DaaS (Desktop as a Service) with a mobile tablet makes this even more secure if done correctly. The desktop can be configured to be accessed only from selected tablets with a key, and at worst, the loss of a tablet will provide only a key to the remote desktop, which still needs to be authenticated to.

This restricts local access to the host, as the “desktop” is stored in a data centre. The user cannot use local escalation attacks based on physical access to the system, as they are never actually on the system.

More, if the user loses a tablet or other device, they were never actually connected to the system and its files, and the loss of the tablet will not lead to a loss of data (if configured correctly).

As the desktop is configured to only talk to the tablet and the organisational servers and all communications are encrypted, the location of the user does not actually matter and they can be truly mobile (as IPv6 allows). More, this allows the organisation to control access to the Internet through organisational proxy and email servers.

As strange as it may seem, a well defined and deployed cloud and IPv6 system can actually be far more secure than the traditional crunchy shell firewall model.

In coming posts, I will provide detailed instructions on how this can be achieved.

About the Author:

Craig Wright is the VP of GICSR in Australia. He holds the GSE, GSE-Malware and GSE-Compliance certifications from GIAC. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and ecommerce law and a Masters degree in mathematical statistics from Newcastle, and is working on his fourth IT-focused Masters degree (Masters in System Development) at Charles Sturt University, where he lectures subjects in a Masters degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Tuesday, 11 October 2011

Checklists make good better

We all like to believe that as we become better at a task we no longer need help; that somehow we will remember everything. This is false, and in this post I hope to dispel that belief with a couple of quick results from some research I shall be publishing in the new year.

What was done was simple. We measured people with and without a checklist as they responded to incidents. The same individuals were measured both when they used a checklist and when they did not. These are simply normal people in a number of organisations I have consulted to. Nothing special in itself, but results were measured based on times.

The results themselves will of course vary, as people have better and worse days. That stated, we can hypothesise that there would be no statistically significant difference in the average time taken from the start of an incident or event to the determination that an event had or had not occurred. This is the value we measured, defined as the time t in minutes.

We could even state that if t was larger for those with a checklist, the effect was negative and a checklist made things worse.

So, to define our variables and hypothesis.

We measured the following variables:

  • tij: the jth measurement, in minutes, for individual i to respond and determine whether an event was an incident or not.

  • tij(check): the subset of readings where individual i used a checklist, measured in minutes.

  • tij(free): the subset of readings where individual i did not use a checklist.

Now, with these variables, we can calculate the following:

  • ti(ave): the average response time in minutes for individual i.

  • ti(check): the average time for individual i to respond and determine whether an event is an incident using a checklist.

  • ti(free): the average time for individual i to respond and determine whether an event is an incident without using a checklist.

Now, the test and hypothesis is very simple.

We define Ho as the null hypothesis and Ha as the alternative hypothesis. We state our hypothesis as follows:

Ho:  ti(check) = ti(free)

Ha:  ti(check) ≠ ti(free)

Or, in words: the null hypothesis is that there is no difference in how long it takes an individual, on average, to respond and determine whether an event is an incident when using a checklist; the alternative hypothesis is that the use of a checklist results in a difference in how long the responder takes. That is, the time with a checklist will be significantly different to the time without one.

Although each event will vary in nature and the responder will vary in ability through the day and at different points in their lives, the averages when taken over time should be the same. To ensure this, the responders used their own checklists based on the best practice as they determined and defined it.

The process to randomise whether a checklist was used was simple: a coin toss determined if the responder used the checklist or not. There are limitations to this, but we all have to work within the constraints of the world, and scientific studies on live companies with actual incidents need to be conducted in a manner that allows the organisation to function as it is being experimented on.
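
For readers who want to see the shape of such an analysis, here is a minimal R sketch run on synthetic data. The sample size, distributions and parameters are assumptions for illustration only; they are not the study data:

# Sketch only: simulate the coin-toss design on synthetic response times.
set.seed(42)
n <- 2000                                  # assumed number of measured responses
checklist <- rbinom(n, 1, 0.5) == 1        # the coin toss for each response
# Assume (for illustration) that checklist use trims the long tail of times.
t_minutes <- ifelse(checklist,
                    rgamma(n, shape = 4, rate = 4 / 14.0),  # mean ~14.0 minutes
                    rgamma(n, shape = 2, rate = 2 / 14.4))  # mean ~14.4, heavier tail
ticheck <- t_minutes[checklist]
tifree  <- t_minutes[!checklist]
boxplot(list(check = ticheck, free = tifree), ylab = "minutes")
t.test(tifree, ticheck)                    # Welch two-sample t-test, as below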

In the boxplot below we have displayed the results.

[Boxplot: response times with and without a checklist]

Just looking at the two datasets, we see that there is a difference in the standard deviations, with a larger range of values for the responses recorded without a checklist than for those recorded when a checklist was used. If we look at the statistics in R (our statistical package), we see a mean (average) value of 14.3602 minutes for responses without the use of a checklist and 14.00188 when a checklist is used.

> summary(ticheck)

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.

  0.000   9.878  13.680  14.000  18.000  41.260

> summary(tifree)

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.

  0.000   8.246  14.060  14.360  20.280  64.600

> 

The mean values are only 21 seconds different on average over a mean of around 14 minutes. This visual analysis does not give the real result. By conducting a Student’s t-test on the two datasets, we can see if a difference really exists or not. This is simple to do in R and the results are displayed below.

> t.test(tifree, ticheck)

 

        Welch Two Sample t-test

 

data:  tifree and ticheck

t = 3.3964, df = 20638.23, p-value = 0.000684

alternative hypothesis: true difference in means is not equal to 0

95 percent confidence interval:

 0.1515310 0.5651013

sample estimates:

mean of x mean of y

 14.36020  14.00188

> 

What this all means is that at the alpha = 5% level we have a p-value of 0.000684 and we are confident that there is a statistically significant difference in the means.

Although there is little difference in the mean value, there are some outliers where something has gone wrong. We see from the boxplot that there are occasions without a checklist where errors do occur and these cost time.

In responding to an incident, having a checklist helps even experienced professional incident responders. You may have worked many years and know your work inside out, but there are always things that you can overlook in a rush and in the moment.

So, the moral here is simple, create a checklist. A good incident responder is not afraid to use a checklist and follow a process.

Science does not mean we have to have huge budgets and it also does not need to be difficult. Simple experiments can be delivered by most people. In the DIT (Doctor of IT) program we have launched at CSU, we hope to have many experiments and in time, turn computer science back into a science from the art form it has become.

In the full paper, we will also be looking at accuracy and other measures, but this will have to wait until the paper is released.

Monday, 10 October 2011

Entrepreneurship and Innovation in Audit

Say's law of economics shows us that gains in productivity offset any economic equilibrium, leaving the general state of the economy one of flux or change. In this, the undertakings that survive are those that embrace change. This requires entrepreneurial thought and constant innovation.

In contradiction to the common belief that entrepreneurs necessarily start new businesses, Say's definition of an entrepreneur was one who shifts the means of production from less productive to more productive enterprises. In this, the entrepreneur is anyone who increases an undertaking's productivity.

Change does not happen as quickly as people believe, even in this time of rapid prototyping. Through the nature of compound interest, small incremental changes result in large subsequent results. Currently, and for a number of years, technology research and productivity innovations have delivered gains of between 3 and 6% each year. This may seem small, but a 5% yearly rate compounded over the last 10 years amounts to an incremental 50%+ increase in productivity from just a decade ago.
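
A quick check of that compounding claim in R:

# Cumulative effect of 5% yearly productivity growth over a decade
(1 + 0.05)^10 - 1    # ~0.63, i.e. a better than 50% cumulative gain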

But we cling to the flotsam of old industry and practice and lower the level of growth and productivity as we strive to maintain the status quo.

From my observations, many entrenched industries seem to be increasing productivity at a rate of between 1 and 3% per annum (if even that). At this rate, not only do they fail to maintain equilibrium in the long run, but within a decade they will likely lose up to 50% of their business to the new and developing forms of collaborative and truly global enterprises.

Even in accounting, KPMG and several groups within PWC are actively researching “the future of the financial audit”, and this has formed strong ties with systems audit practices. Deloitte's “third generation audit” is focused on a similar line. Directors at Deloitte have been quoted as saying, “Expect Web-based audits. In the future, a company's financial accounts and data will be completely digitized. The Web will act as host. That will allow auditors to sit in one location and access all necessary corporate information and transactions”. While the technology for this exists, and while there are small-scale experiments under way, the large audit firms believe that widespread Web-based audits are only “realistically six, seven years down the road”.

A question to ask is: are we ready for this, and how are we to ensure that we secure the data?

Existing research has resulted in advanced CAAT technologies now known as DATs (digital audit techniques). DATs are consistently detecting over 90% of all financial statement frauds. The big four firms are starting to implement these technologies, which will be in wide-scale commercial usage within the next decade.

DATs have also shown an accuracy of over 96% on analysis of non-fraud financial statements. When teams are developed implementing both traditional audit techniques and the use of advanced technologies and mathematical formulations, the accuracy has exceeded 99.8%.

Current figures put traditional audit techniques at a level of 8% accuracy in the determination of financial statement fraud.

There has been a lot of discussion in the audit industry concerning productivity of late. Most audit firms operate in isolated pockets of technical skills. We (as auditors) hold our skills close to ourselves and do not share them. We do not seek ways to work together.

Not only are DAT based audits more accurate, but they are faster and more productive. This is not incrementally more productive; rather, studies have shown that they are capable of being up to 90% more productive than existing audit techniques.

This allows a firm to concentrate more on adding value to their client, not simply formulating a checklist, but actually determining security holes and the roots of fraud.

To make these types of productivity gains, we don't need to work harder; we need to follow the oft stated idiom that we need to work smarter. We need to look at working with each other and thinking about how we can better implement technology.

The future of security and anti-fraud technologies will align far closer than today and it is possible that the security auditor will also start to be involved with using large datasets in the determination of financial systems fraud. After all, these do align. The combination of business process controls and software controls is one that has already begun and in the next decade, we can expect to see changes in the ways we engage business audits and control reviews.

These techniques are not going to go away. Change is pervasive, either we embrace it in an entrepreneurial manner or it will steam roller us.

About the Author:

Craig Wright is the VP of GICSR in Australia. He holds the GSE, GSE-Malware and GSE-Compliance certifications from GIAC. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and ecommerce law and a Masters degree in mathematical statistics from Newcastle, and is working on his fourth IT-focused Masters degree (Masters in System Development) at Charles Sturt University, where he lectures subjects in a Masters degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Webinars - Lectures

The 4th free lecture in the series on Cyber(crime, terror and espionage) is available. It is free and the full course counts as 24 CPE/CPD hours.

https://www2.gotomeeting.com/register/217048578

Sunday, 9 October 2011

How I got into systems testing

Personally, I was born to be corrupted and forced into the black pit of Pen Testing. I still rail against it from time to time… but it does pay the bills.

I have fought against it for many years. I have completed research that demonstrated that there are better (economically more viable) white and crystal box testing methods. I showed how these could find more information and security flaws than pen testing by a mean factor of three, but few people were interested.

The problem was that pen testing, for all its difficulty, is still far sexier than any crystal box method we can come up with. Perception and the network effect do more than technical superiority. Otherwise we would be using pass phrases of 20-plus characters and not the complex eight character passwords that have stuck as a legacy of UNIX password fields.
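
Some naive arithmetic makes the point. The character-set sizes here are assumptions, and a passphrase built from real words carries less entropy per character than a uniformly random string, but the gap remains large:

# Naive entropy comparison, assuming each character is chosen uniformly at random
8 * log2(94)     # ~52 bits: 8 characters drawn from ~94 printable ASCII symbols
20 * log2(26)    # ~94 bits: 20 lowercase letters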

Then came the mortgage and the need to face reality.

I had to consider things economically.

My grandfather, who passed 18 months ago, was an early cryptographer and computer scientist. This was way back in the never-never time when MacArthur was still a general and Japanese cypher machines were state of the art.

I was introduced to VMS/VAX and a precursor of Unix in 1979 – the same year I managed to get my first email (UUCP based) and a connection using a 300/75 modem from a remote terminal using my grandfather's account.

I learned ASM and C and over time, he would pay me based on the number of good clean lines of code I wrote for him.

In time, I started looking around the systems I was connected to. I first got him into trouble, and then myself, as some of the systems he worked on were US .gov and .mil sites, and as a 9-13 year old I did not rate getting a clearance.

The issue was I ran out of programming tasks and started looking at systems. As a pre-teen, there is only so much trouble one can get into.

I spent the next 5-6 years being berated by my mother, programming, and having my family tell me I was wasting my time on this Arpanet thing and that I needed to consider a real profession such as law… (I have gotten my LLM but never had the inclination to practice).

As time passed, my programming skills became less and less in demand. As I said, I cannot do graphics to save myself. I just do small fast algorithmic libraries and reversing.

I needed a place to go, a job with a future that could have more opportunity than a back room library coder on a Solaris system (where I was in the early 90's). I had been working on and with early ISPs since nearly 20 years ago now, and I had been working with really early firewalls. I started with the Australian Stock Exchange for that reason. I was the only person they could find who could write Gauntlet proxies, as they had a TIS firewall on BSD and had already started wanting to make what was in effect an early web application gateway.

That got me a start deep into security from the general ISP work I had done.

But in time, Checkpoint and simple firewalls started to replace Gauntlet, and people with no programming skills and a Cisco cert started to undercut what I could charge.

So I looked for a way to leverage the knowledge I had. What else could I put years of breaking stuff and library coding to use on?

So pen testing, forensics and generally annoying the buggery out of all those I meet seems to have come as a natural progression.

I program in C and Assembly and cannot do graphical programming to save myself. When it comes to reversing I am fine as long as I am not pissed, but then this is a function of scotch and wine unlike John Strand and sterno…

So what other choice in life did I have?