Friday, 19 June 2009

Ron Lynam

Last night my grandfather died at the age of 93, following a long battle with Parkinson's disease.

He will be sorely missed by family and friends.

My grandfather was an electronics expert and engineer in the early days of the discipline. It was through him that I gained a love and passion for computers and electronics, and it was because of him that I first learnt to program. He taught me assembly and C (which at the time I generally used to reverse engineer games so that my sisters could never win).

It was through him that I obtained my first Unix terminal account, on a dial-up modem (75/300 baud), in 1979. As a consequence, it is to him that I owe my current career.

Pop, I will miss you.

Monday, 15 June 2009

Blackboxes are vulnerable!

To an extent, all software is a black box. As a consequence, the claim quoted in the title is commonly made, but problematic. It may be true (Popper would attest to the inability to determine anything absolutely), but on its own it adds no value.

The complete analysis of software is an intractable problem. Turing and later Dijkstra demonstrated that the state of a system can never be fully known: Turing through the undecidability of the halting problem, and Dijkstra through his observation that testing can show the presence of bugs, but never their absence. To claim otherwise is to make presumptions as to the level of knowledge an open system holds and as to the level of testing.
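Turing's diagonalisation argument can be sketched in a few lines of Python. This is purely illustrative: `halts` is the hypothetical, perfect analyser whose existence the argument refutes.

```python
# Sketch of Turing's argument: assume a perfect analyser exists,
# then build a program that defeats any answer it could give.

def halts(program, data):
    """Hypothetical oracle: returns True iff program(data) halts.
    No such total, correct function can exist."""
    raise NotImplementedError("provably impossible in the general case")

def paradox(program):
    # If the oracle says we halt, loop forever; if it says we
    # loop, halt immediately.
    if halts(program, program):
        while True:
            pass

# paradox(paradox) contradicts whatever halts() would answer, so a
# complete static analysis of arbitrary software cannot exist.
```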

Crystal-box testing is a better option (I have published papers on this in the past), but that option is not always (nor truly) available. What is missing from the debate is the complexity/simplicity issue, and how it mixes with the issues of security.

It is never possible (nor feasible) to know the state of an open system absolutely. At best, you have a lower cost of testing and rectification.

As for DoS: there is always a way to DoS a system. The issue is how much evidence you create and why you do it. Hit any system with a sustained attack from 1,000,000 bots and it goes down. End of story.
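A rough back-of-the-envelope calculation makes the point; the per-bot rate below is an assumed figure for illustration, not a measurement:

```python
# Illustrative arithmetic only: the per-bot rate is an assumption.
bots = 1_000_000        # the botnet size from the example above
kbps_per_bot = 50       # assume each bot sends a mere 50 kbit/s

aggregate_gbps = bots * kbps_per_bot / 1_000_000
print(f"Aggregate flood: {aggregate_gbps:.0f} Gbit/s")  # 50 Gbit/s
```

Few systems or links will absorb that level of sustained traffic, regardless of how simple or complex the target is.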

This is not an argument about complexity impacting security.

What matters in the end is the best way to minimise long-term costs.

Sunday, 14 June 2009

Simplicity and Complexity in Security

There is a small correlation between security and simplicity. That is, complexity for its own sake will not add security, and may remove it. Likewise, simplicity for its own sake will not add security.

For the most part, the functions of simplicity and security are orthogonal. That is, although there is a feedback effect, one aligns with a proverbial X axis and the other with a proverbial Y axis. The real issue also comes down to a definitional framework: I speak of simplicity in a chaos/complexity theory framework.

Simplicity adds to security in human-system interactions. Complex systems are, in general, more prone to catastrophic failure; that is, they are more brittle. To understand this, think of how a brittle material fails. High-carbon steel structures can be remarkably strong, but if they exceed their threshold even once, they shatter. Low-carbon steel will bend at a lower threshold, but can be reformed. If the strength of a brittle material is sufficiently high, it does not matter that it can fail catastrophically, as the conditions for failure will never be met.

As in materials science, some of the most robust systems are hybrid composites, that is, a combination of systems. This is itself a form of complexity, but in the sense that simple and complex systems are enmeshed.

The same applies to information systems. Brittle but strong systems can survive extremely well, as long as they are sufficiently resilient to withstand any attack they will actually face.

The flaw here is in software. Although complete analysis is infeasible, software validation can be of great use. It also poses a cost. A formally verified system (such as an A1-class system under the old Rainbow Series) can be extremely resilient; the cost of such a system is, however, extraordinarily high.

All software is complex by nature, so the issue is not one of pure simplicity. Even in the face of open source code, most software is never evaluated. It requires a level of complexity to account for the failings in software: malcode, bugs, vulnerabilities and the like account for the major flaws in any system design. To account for this correctly requires the introduction of multiple systems. This reduces simplicity through introduced complexity, yet adds to systemic security. One example is the introduction of dual layers of firewall technology from separate vendors, such that neither suffers the same software flaw (and all firewalls have had compromises). Another is the use of multiple anti-virus engines (such as an email gateway running one product and a separate engine on the email server and data store).
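As a minimal sketch of the principle (the engine functions below are hypothetical stand-ins for two separate vendor products, not any real API), layering means a message must clear both independent engines, so a flaw in one engine alone does not let malcode through:

```python
# Defence in depth with two independent (toy) scanning engines.

def engine_a_scan(message: bytes) -> bool:
    """Stand-in for vendor A's gateway engine (toy signature check)."""
    return b"EICAR" not in message

def engine_b_scan(message: bytes) -> bool:
    """Stand-in for vendor B's mail-store engine (toy size heuristic)."""
    return len(message) < 10_000_000

def accept_mail(message: bytes) -> bool:
    # The message is accepted only if both engines pass it.
    return engine_a_scan(message) and engine_b_scan(message)

print(accept_mail(b"a normal message"))  # True
print(accept_mail(b"EICAR test body"))   # False: caught at layer A
```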

In contrast, this increase in security from multiple systems is offset by the addition of further human factors. More systems and more complexity make it more difficult for a single individual to run a security system (e.g. firewalls or AV), as they require knowledge of multiple platforms and thus spend less time on each. This can also lead to the introduction of more people: two specialists can be used (one for each vendor). This allows each individual to specialise in a particular area again, but it also requires interaction with the other person (coordination previously achieved within one individual).

The requirement to act in concert adds stress points to the security solution, making it more brittle. This can be tempered: training and drilling staff leads to a team approach, where the effect is that of a cohesive team rather than a group of individuals. This tempering adds to security. It may (and generally does, if you exclude the cost of failure) increase costs, especially as contingency costs are not attributed to project costs.

Training is itself a source of complexity in its development. The paradox is that this addition of complexity can create a simpler and more robust system.

"There is a delicate balance in making security work."
There is a point solution that achieves a balance, but security is a dynamical (spelt correctly) system: although point equilibria exist, they do not persist in time.

Nodal minima do exist, sometimes for long periods of time, but these require tuning and updates.

" blackboxes are prone to be vulnerable "
Not necessarily. There are many B2 and higher (on the old rainbow table US classification scheme) systems that can operate as a black box. These can be fit for purpose designed systems. For instance, you can even create a secure software based system using vulnerability prone software.
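As an illustrative sketch of that last point (the `vulnerable_parser` function below is hypothetical, standing in for any fragile component you cannot fix), one approach is to place a strict validating guard in front of the vulnerability-prone code, so that only well-formed input ever reaches it:

```python
import re

# Tight whitelist: short ASCII strings of letters, digits and basic
# punctuation. Everything else is rejected before parsing.
ALLOWED = re.compile(rb"^[A-Za-z0-9 ,.]{1,256}$")

def vulnerable_parser(data: bytes) -> str:
    """Stand-in for fragile third-party code we cannot fix."""
    return data.decode("ascii")

def guarded_parse(data: bytes) -> str:
    # Fail closed: the fragile code never sees unvalidated input.
    if not ALLOWED.match(data):
        raise ValueError("input rejected by the guard")
    return vulnerable_parser(data)

print(guarded_parse(b"hello, world"))  # passes the guard
```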

Many systems are black boxes, but this is not the sole cause of security issues. In addition, open source has just as many flaws.

SANS SEC 709

Last year I had SEC 709 on my course list and signed up for this in Las Vegas following my GSE exam. The name of this course is:

Developing Exploits for Penetration Testers and Security Researchers

This is one of the most advanced and highly focused pen testing and vulnerability assessment courses available (if not the most intense). Stephen Sims has excelled himself with this one.

As the page states, this is a must do course for any of the following categories of security professionals:
  • Incident handlers looking to take the next step in understanding exploitation in its most technical form
  • Network and system security professionals looking to understand the methods used to write exploit code and discover vulnerabilities
  • Programmers and code review engineers looking to understand the threat of exploitation and how to write Proof of Concept (POC) code to demonstrate exploitation techniques
  • Certification-holders looking to improve and put their practical knowledge to the test
  • Anyone looking to build credibility and take a technical course on advanced hacking techniques
I recommend this wholeheartedly for anyone wanting to do more than click a button and print a form based report.

My only disappointment is that the course has grown and developed, with a mass of new material added since Las Vegas last year, and that I missed out on all that is now being offered in addition to what I completed.