Saturday, 17 November 2007

Card Merchants and Retailers Miss Security Deadline

Compromises at TJX and high-profile merchants such as Polo Ralph Lauren and Lexis-Nexis have focused the spotlight on card data security. VISA and MasterCard have formulated a compliance strategy, since ratified by the other card vendors, to minimise these risks.

These strategies, which are considered good practice across much of the online industry, will come as a drastic change for many merchants. Though the changes should for the most part be looked upon as a good thing, complying with some of the PCI provisions could be difficult for small and midsize merchants.

Card Companies Impose Identity Theft Countermeasures, and Compliance Dates Are Past
The deadline for compliance with the Payment Card Industry Data Security Standard, or PCI, is looming (and in many cases has already passed), yet many retail merchants are still unaware. Retailers, online merchants, data processors and other businesses that process credit card data have only weeks to become compliant with the standard.

Banks and other credit card issuers will be responsible for ensuring that companies comply with PCI. These providers face fines of up to $500,000 per incident if a data compromise occurs due to a failure to implement the standard.

The original June 30 deadline passed long ago, and the extended deadlines for Tier 2 and Tier 3 merchants are fast approaching. Tier 2 and Tier 3 merchants are those that accept between 20,000 and 6 million card transactions a year.

Though compliance is expected, retailers in the Tier 4 category of the PCI program do not yet face a deadline. The card companies have been working through a risk-based approach, and no deadline has yet been set for the smaller merchants.
What is the PCI Anyway?

The Payment Card Industry Data Security Standard, or PCI, lists 12 items that retailers, online merchants, data processors and other businesses that handle credit card data will have to start meeting by June 1. The PCI Data Security Standard combines components of MasterCard's SDP security compliance program and Visa's Cardholder Information Security Program (CISP).
Specifications of the program require that merchants:

  1. Install and maintain a working network firewall to protect credit card data from other networks, including the Internet.

  2. Keep security patches up to date on all systems involved with credit card data.

  3. Encrypt stored credit card data.

  4. Encrypt data sent across networks using acceptable methods.

  5. Use and regularly update anti-virus software.

  6. Restrict access to data by business "need to know."

  7. Assign a unique User ID to each person with computer access to data to provide accountability.

  8. Do not use vendor-supplied defaults for system accounts and passwords and other security parameters.

  9. Monitor and log access to data by unique User ID.

  10. Test security systems and processes.

  11. Implement and maintain a security policy and processes. This includes assigning responsibility within the organisation.

  12. Restrict physical access to cardholder information.
The PCI program applies not only to online merchants, but also to mail-order/telephone-order (MOTO) merchants, third-party processing agents, "card-not-present" processors, and anyone who stores cardholder data on an electronic system.

Most small merchants will need to conduct an external vulnerability assessment to be compliant.
Why comply with these standards?
VISA argues that the program will give merchants a competitive edge, pointing to consumer studies which show that customers prefer to deal with merchants they feel safe with.

For the smaller merchants, this is basically a risk issue. These retailers need to weigh the cost of implementing control systems against the cost of doing business and, in particular, the cost of not complying.

How does this affect my business?
Many POS systems used by retailers store credit card information for up to a month for backup or settlement reasons. Under the PCI requirements, this information must be encrypted.
Retailers will need to review what data they capture and forward when they scan a credit card in store. Merchants who store card data for later automated processing will need to carefully review those systems and the controls around them.
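As a rough illustration only (the standard does not mandate any particular library or algorithm), here is a minimal sketch in Python of encrypting card data before it is written to a settlement or backup store. The card number and key handling are purely hypothetical; real key management must itself meet the PCI requirements.

from cryptography.fernet import Fernet  # third-party "cryptography" package

# In practice the key would come from a protected key store, never be generated inline.
key = Fernet.generate_key()
cipher = Fernet(key)

card_number = "4111111111111111"  # a well-known test PAN, not a real card number
token = cipher.encrypt(card_number.encode())

# Only the encrypted token is written to the backup or settlement store.
print(token)
print(cipher.decrypt(token).decode())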

For most small retailers, a quarterly external vulnerability assessment is a basic requirement. Given the level of threats on the Internet these days, this can only be a good thing.
How can they make me comply?

The card companies are primarily pushing PCI through the acquirers, such as the banks. As the principal underwriters of the merchants, the banks and other acquirers are responsible for the fines and do not want to accept that liability. Many acquirers are making PCI compliance part of their merchant agreements.

How risk is assessed
For small merchants, the following table will give some idea of the risk and compliance requirements:

* DSS is the Data Security Standard

The scan and questionnaire requirements do not apply to retail merchants with terminal applications that are not connected to a network or the Internet and that do not accept, process, store, transmit or view credit card data via a network or the Internet. POS devices which are networked, or which store information for backup or historical purposes, are not exempt.

Merchants exempt from the scan and questionnaire requirements are still required to comply with PCI Security Standards regarding management and storage of credit card data.

The following table lists the requirements:

*Merchants that use virtual terminals qualify as level 4 merchants.

VISA US has stated that fines can also be issued "if a member knows or suspects a security breach with a merchant or service provider" and doesn't "take immediate action to investigate the incident and limit the exposure of cardholder data".

Friday, 16 November 2007

An Introduction to Text Data Mining

Text data mining, or just text mining, involves the discovery of novel and previously unknown information by using computer systems to analyse and extract data from a variety of text sources. Text mining allows the researcher to link extracted information in order to create or test hypotheses. Text mining differs from the broader field of data mining, or knowledge discovery in databases (Fayyad & Uthurusamy, 1999), in that its data sources are textual collections and documents. It is interested in the derivation of patterns that may be found in unstructured textual data rather than in formalised database records.

There are many similarities between data mining and text mining (Hastie, Tibshirani & Friedman, 2001), and in fact text mining has developed through much of the seminal work on its counterpart. In particular, it maintains a strong reliance on pre-processing routines, pattern discovery algorithms and presentation-layer elements. Visualisation tools and data mining algorithms are commonly used in text mining, with many software packages integrating both data and text functions.

One of the primary differences between the generalised field of data mining and text mining comes from the presumption that the data sets used in data mining exercises will be stored in a structured format. Pre-processing operations in text mining are therefore generally focused on transforming unstructured textual data into a format that can be more readily interrogated. Additionally, text mining relies heavily on the field of computational linguistics (Fayyad & Stolorz, 1996).

There is a strong relationship between information retrieval, text mining and Web data mining. The special properties of text that derive from grammatical syntax, together with the growing repositories of textual data (such as the Internet/Web), have driven interest in this emerging field. In particular, advances in computational linguistics have further fuelled text mining, leading to the development of new techniques and algorithms (Hastie, Tibshirani & Friedman, 2001).

Text Data Mining vs. Information Retrieval / Information Access
Although it is only one of many factors, a driving force behind the growth of text mining has been the Web (Hastie, Tibshirani & Friedman, 2001). The growth of Internet commerce has created large repositories of documents, customer information, records and other information. On top of this, advances in scientific research, academic publications and professional journals provide increasing amounts of unstructured content. With millions of new abstracts being published every year, knowledge discovery is becoming increasingly reliant on text mining operations.

Relating Text Mining and Computational Linguistics
Text mining extrapolates data mining to text collections through a series of processes that are analogous to those used in data mining numerical data. In particular, the field of corpus-based computational linguistics has numerous overlaps (Hearst, 1997). Computational linguistics uses empirical methods to compute a wide range of statistics over large, often disparate, collections of documents. This approach was developed in order to discover data patterns that could provide new or novel results.

These patterns may be then further used in the creation of algorithms that are designed to provide solutions to ongoing problems within natural language processing (Armstrong, 1994). Some of the main issues in this field include part-of-speech tagging, word sense disambiguation, and bilingual dictionary creation.

Church & Liberman (1991) proposed that there is great interest in the field of computational linguistics in word patterns and distributions. In particular, they note that word combinations such as “prices, prescription, and patent” may be expected to group with the medicinal sense of “drug”. It is further noted that “abuse, paraphernalia, and illicit” correlate with the use of the word “drug” in the sense of an illicit substance.
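As a toy illustration of this kind of co-occurrence statistic (the two-sentence corpus and the window size below are invented for the example), a few lines of Python are enough to count which context words cluster around a target term:

from collections import Counter

corpus = [
    "the patent on the prescription drug expired and prices fell",
    "police seized illicit drug paraphernalia during the raid",
]

target, window = "drug", 3
counts = Counter()
for sentence in corpus:
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if tok == target:
            # count the words within a small window either side of the target
            counts.update(tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window])

print(counts.most_common(5))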

Category Metadata
Text categorization is the process of condensing the particular content of a document into a set of pre-defined labels. It has been asserted (Fayyad & Uthurusamy, 1999) that text categorization should be considered text data mining. Fayyad & Uthurusamy (1999) class the classification of astronomical phenomena as data mining, although this is predominantly the analysis of textual data.

Hearst (1997) however believes that this process “does not lead to discovery of new information,…” but “rather, it produces a compact summary of something that is already known”. This process is thus in his view not generally a component of text data mining.

Hearst (1997) does however note that “there are two recent areas of inquiry that make use of text categorization and do seem to fit within the conceptual framework of discovery of trends and patterns within textual data for more general purpose usage”.

He notes these to be:

  1. A body of work associated with Reuters newswire that utilises text category labels to find “unexpected patterns among text articles” where “the main approach is to compare distributions of category assignments within subsets of the document collection”.
  2. The DARPA Topic Detection and Tracking initiative which includes the task called On-line New Event Detection “where the input is a stream of news stories in chronological order, and whose output is a yes/no decision for each story, made at the time the story arrives, indicating whether the story is the first reference to a newly occurring event”.
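To make the categorization step itself concrete, here is a minimal sketch using a bag-of-words Naive Bayes classifier. It assumes scikit-learn is available, and the documents and labels are invented for the example:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = [
    "central bank raises interest rates",
    "shares fall on profit warning",
    "new vaccine trial shows promise",
    "hospital admissions rise during flu season",
]
labels = ["finance", "finance", "health", "health"]

# bag-of-words features feeding a Naive Bayes classifier
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(docs, labels)

print(classifier.predict(["bank shares rally after rate cut"]))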
Exploratory Data Analysis
Tukey (1977) suggested that one way to view text data mining is as a process of exploratory data analysis that leads to the unearthing of previously unidentified information. It can also be used to provide solutions to problems for which a solution is not currently available.

It may also be held that the typical exercise of reading textbooks, journal articles and other papers assists the discovery process by uncovering innovative information, this being an essential component of research. The goal of text mining, however, is to utilise text for discovery in a more substantial way.

Unstructured data
Text is generally considered to be unstructured (Cherkassky, 1998). However, nearly all documents demonstrate a rich amount of semantic and syntactic structure that may be used as a framework for structuring the data. Typographical elements such as punctuation, capitalisation, white space and carriage returns, for instance, can provide a rich source of information to the text miner (Berry & Linoff, 1997).

The use of these elements can aid the researcher in determining paragraphs, titles, dates and so on, which in turn may be used to impose structure on the data. This of course returns to the field of computational linguistics in an attempt to give meaning to groups of words, phrases and layout.
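A small Python sketch shows how far simple typographical cues alone can take us. The sample text, and the assumption that a short opening line with no full stop is a title, are purely illustrative:

import re

raw = """Quarterly Report

Sales grew strongly in the last period. The board approved a new budget.

Risks remain in the supply chain."""

# blank lines delimit paragraphs; a first line with no final full stop is treated as a title
paragraphs = [p.strip() for p in raw.split("\n\n") if p.strip()]
title = paragraphs[0] if paragraphs and not paragraphs[0].endswith(".") else None
sentences = [s for p in paragraphs[1:] for s in re.split(r"(?<=[.!?])\s+", p)]

print("Title:", title)
print("Sentences:", sentences)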

Characters, Words, Terms and Concepts
At the most basic level, text mining systems take raw documents as input and create output in the form of patterns, trends and other useful formats. The result is that text mining often becomes an iterative process: a loop of queries, searches and refinements that leads to further queries, searches and refinements (Fieldman & Sanger, 2007). With each iterative phase, the output should move closer to the desired result.

In Text Mining, the general model of classic data mining is roughly followed (Fieldman & Sanger, 2007):

1. Pre-processing tasks,
a. Document Fetching/ Crawling Techniques,
b. Categorisation,
c. Feature/Term Extraction

2. Core mining operations,
a. Distributions,
b. Frequent and Near Frequent Sets,
c. Associations,
d. Isolating Interesting Patterns,
e. Analysing document collections over time.

3. Presentation and browsing functionality, and
a. Pattern Identification,
b. Trend Analysis,
c. Browsing Functionality

  • Simple Filters,
  • Query Interpreter,
  • Search Interpreter,
  • Visualization Tools,
  • GUI,
  • Graphing.

4. Refinement.
a. Suppression,
b. Ordering,
c. Pruning,
d. Generalisation,
e. Clustering.

Pre-processing includes the routines, processes and methods required to prepare data for a text mining system's core knowledge discovery operations. It will generally take the original data and apply extraction methods to characterise a new set of documents represented by concepts.

Core mining operations include pattern discovery, trend analysis and incremental knowledge discovery algorithms, and they form the backbone of the text mining process. Together, pre-processing and core mining are the most critical areas for any text mining system. If these stages are not correctly implemented, the data that is produced and visualised will have little value (Fieldman & Sanger, 2007). In fact, the production of incorrect data could even have negative consequences.

When analysing the data, common patterns include distributions, concept sets and associations, and the analysis may include comparisons. The goal of this process is, figuratively, to uncover "nuggets" of previously undiscovered relationships.
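As a toy example of the distribution and frequent-set operations (the concept sets below are invented), consider:

from collections import Counter
from itertools import combinations

# each document reduced to the set of concepts extracted from it
doc_concepts = [
    {"merger", "bank", "regulation"},
    {"bank", "regulation", "fine"},
    {"merger", "bank"},
]

concept_counts = Counter(c for doc in doc_concepts for c in doc)
pair_counts = Counter(p for doc in doc_concepts for p in combinations(sorted(doc), 2))

print(concept_counts.most_common())                   # concept distribution
print([p for p, n in pair_counts.items() if n >= 2])  # "frequent" concept pairs, threshold 2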

Presentation-layer components include the GUI and pattern browsing functionality and may include access to character and language editors and optimisers. This stage includes the creation of concept clusters and also the formulation of annotated profiles for specific concepts or patterns.
Refinement (also called post-processing) techniques include methods that filter redundant information and cluster closely related data. This stage may include suppression, ordering, pruning, generalisation and clustering approaches aimed at discovery optimisation.

Summary and Future
Although text is difficult to process, it can be extremely rewarding. Without even looking to the future, vast repositories of valuable information can already be found. The difficulty is in finding the proverbial needle in a haystack.

Computational linguistic tools are currently available, but they have a long way to go, and sophisticated language analysis needs to be developed further. The accumulation of statistical techniques that compute meaning and apply it to sections of text does look promising. However, a great amount of research needs to be completed before the true value of text mining comes to the forefront (Hastie, Tibshirani & Friedman, 2001).

This leaves us with a future that is still some way off but which continues to entice us with its potential. The growing volumes of textual documents, research, news and records are already beyond the capability of any individual to search. If we are to continue to move forward at the rate of technological advancement we have been enjoying, accessing this information is crucial. Text mining may provide the solution.

References

See Comment No. 1

Where Vulnerability Testing fails

This is the original unpublished research paper I completed in 2004 that led to a couple of published papers in audit and security journals and also to a SANS project. I hope that I have become a little more diplomatic in my writing in the years since (Nah...).

Abstract
Here we show that “ethical attacks” often do not provide the benefits they purport to hold. In fact, it will be shown that this type of service may be detrimental to the overall security of an organisation.

It has been extensively argued that blind or black box testing can act as a substitute for more in-depth internal tests by finding the flaws and allowing them to be fixed before they are exploited. This article will show not only that the premise that external tests are more likely to determine vulnerabilities is inherently flawed, but also that this style of testing may actually result in an organisation being more vulnerable to attack.


Introduction
“Ethical attacks”, or as they are more commonly described, “(white hat) hacker attacks”, have become widely utilised tools in the organisational goal of risk mitigation. Legislative and commercial drivers are a pervasive force behind this push.
Many organisations do not perceive the gap in the service offerings they are presented with. It is often not understood that “external testing” will not, and by its very nature cannot, account for all vulnerabilities, let alone risks.
For this reason, this article will address and compare the types and styles of security testing available and critique their shortfalls.

To do this we shall first look at what an audit or review is, what people are seeking from an audit, and the results and findings, whether perceived or not. It is necessary to explore the perceptions and outcomes of an audit and review against the commercial realities of this style of service from both the provider's and the recipient's perspective.

Next, it is essential to detail the actualities of a black box style test. The unfounded concern that auditors are adversarial towards an organisation, derived from the ill-founded concept of the auditor as the “policeman”, has done more to damage organisations than those they seek to defend themselves against.

This misconceived premise results in mistrust of the very people entrusted to assess risk, detect vulnerabilities and report on threats to an organisation. Effectively this places the auditors in a position of censure and metaphorically “ties their hands behind their backs”.
Often this argument has been justified by the principle that the auditor has the same resources as the attacker. For simple commercial reasons this is never the case. All audit work is done to a budget, whether internally or externally sourced. When internal audit tests a control, costs are assigned to internal budgets based on time and the effectiveness of the results.
Externally sourced auditors are charged at an agreed rate for the time expended. Both internal and external testing work to a fixed cost.
An external attacker (often wrongly referred to as a “hacker”), on the other hand, has no such constraints. They are not faced with budgetary shortfalls or time constraints. It is often the case that the more skilled and pervasive attacker will spend months (or longer) planning and researching an attack before embarking on its execution.

Further, audit staff are limited in number compared to the attackers waiting to gain entry through the back door. It is a simple fact that the pervasiveness of the Internet has opened organisations to a previously unprecedented level of attack and risk. Where vulnerabilities could remain open for years in the past without undue risk, an un-patched system is unlikely to last a week today.

The foundation of the argument that an auditor has the same resources as an attacker must therefore be determined to be false. There are numerous attackers, all “seeking the keys to the kingdom”, for every defender. There are the commercial aspects of security control testing, and there are the realities of commerce to be faced.

It may be easier to give the customer what they perceive they want rather than to sell the benefits of what they need, but as security professionals, it is our role to ensure that we do what is right and not what is just easier.

What passes as an Audit
An “ethical attack” or “penetration test” is a service designed to find and exploit (albeit legitimately) the vulnerabilities in a system rather than the weaknesses in its controls. Conversely, an audit is a test of those controls in a scientific manner. An audit must by its nature be designed to be replicable and systematic, through the collection and evaluation of empirical evidence.

The goal of an “ethical attack” is to determine and report the largest volume of vulnerabilities that may be detected. Conversely, the goal of an audit is to corroborate or rebut the premise that system controls are functionally correct through the collection of observed proofs.
This may result in cases where “penetration testing will succeed at detecting a vulnerability even though controls are functioning as they should be. Similarly, it is quite common for penetration testing to fail to detect a vulnerability even though controls are not operating at all as they should be” [i].
When testing a system, the common flaws will generally be found fairly quickly. As the engagement goes on, fewer (and generally more obscure and difficult to determine) vulnerabilities will be discovered, in a roughly logarithmic manner. Most “ethical attacks” fail to achieve results comparable to an attacker's for this reason: the “ethical attacker” has a timeframe and budgetary limits on what they can test.

On the contrary, an attacker is often willing to leave a process running long after the budget of the auditor has been exhausted. A vulnerability that is obscure and difficult to determine within the timeframe of an “external attack” is just as likely (if not more so) to be the one that compromises the integrity of your system as the one discovered early in the testing.
Though it is often cast in this manner, an external test is in no way an audit.

What is External Testing Anyway?
There are several methods used in conducting external tests:
  • White box testing is a test where all of the data on a system is available to the auditor;

  • Grey box tests deliver a sample of the systems to the auditor but not all relevant information;

  • Black box tests are conducted “blind” with no prior knowledge of the systems to be tested.
White box tests are comparatively rare these days, as they require the auditor to retain a high level of skill in the systems being tested and a complex knowledge of the ways that the systems interact. White box testing requires in-depth testing of all the controls protecting a system.
To complete a “white box” test, the auditor needs to have evaluated all (or as close to all as is practical) of the controls and processes used on a system. These controls are tested to ensure that they are functionally correct and, where possible, that no undisclosed vulnerabilities exist. It is possible for disclosed vulnerabilities to exist on the system if they are documented as exceptions and the organisation understands and accepts the risk associated with not mitigating them.
One stage in the evaluation of a known vulnerability in a “white box” test is to ensure that it is fully disclosed and understood. One example is ensuring that a vulnerable service whose risk has been mitigated through the use of firewalling is not accessible from un-trusted networks.
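As a minimal sketch of that verification step (the host and port are hypothetical documentation values, and the check must be run from the un-trusted side of the firewall):

import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 192.0.2.10:1433 stands in for the filtered, known-vulnerable service
if is_reachable("192.0.2.10", 1433):
    print("FAIL: the service is exposed to the un-trusted network")
else:
    print("PASS: the firewall blocks access as documented")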

Grey box testing is the most commonly employed external testing methodology. An example would be an external test of a web server where the tester was informed of a firewall system but not of the honeypots, NIDS (Network Intrusion Detection Systems) and HIDS (Host-Based Intrusion Detection Systems) that were deployed.

Grey box tests are more likely to occur than “white box” tests due to budgetary or contractual constraints. It is not often that an organisation is willing to check all the controls it has in place to a high degree; rather, it trusts selected products and concentrates its efforts on the controls it has the least confidence in.

This testing methodology is used because it requires a lower level of skill from the tester and is generally less expensive. Grey box tests usually rely on testing network controls or user-level controls to a far greater degree than a “white box” test. Both black and grey box tests have a strong reliance on tools, reducing the time and knowledge requirements for the tester.
The prevalence of tool-based tests generally limits the findings to well-known vulnerabilities and common mis-configurations and is unlikely to uncover many serious system flaws within the timeframe of the checking process.

Black box testing (also commonly known as “hacker testing”) is conducted with little or no initial knowledge of the system. In this type of test, the party testing the system is expected to determine not only the vulnerabilities that may exist, but also the systems they have to check! This methodology relies heavily on tool-based testing, far more so than grey box tests.
One of the key failures of black box testing is the lack of a correctly determined fault model. The fault model is a list of things that may go wrong. For example, a valid fault model for an IIS web server could include attacks against the underlying Microsoft operating system, but would likely exclude Apache web server vulnerabilities.

In black box tests “you rarely get high coverage of any nontrivial function, but you can test for things like input overflows (for example, by sending enormous input strings) and off-by-one errors (by testing on each side of array size boundaries when you know them, for example), and so forth” [i].
After all is said and done, most vendors do not even do the above tests. Rather, they rely on a toolset of programs such as NMAP or Nessus to do the work for them.
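For what it is worth, even the “enormous input” test described above is only a few lines of code. The following sketch is illustrative only: the target host and port are hypothetical, and such probes should only ever be run against systems you are authorised to test.

import socket

def probe(host: str, port: int, payload: bytes, timeout: float = 5.0) -> str:
    """Send a payload and report the first response, or the error raised."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(payload + b"\r\n")
            return repr(s.recv(256))
    except OSError as err:
        return f"error: {err}"

# send progressively larger inputs and watch for crashes, hangs or odd responses
for size in (64, 1024, 65536):
    print(size, probe("192.0.2.20", 8080, b"A" * size))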
“It is often stated as an axiom that protection can only be done right if it's built in from the beginning. Protection as an afterthought tends to be very expensive, time-consuming, and ineffective”[ii]
Commonly, after all this (even using white box testing techniques), the tester will not always find all the intentionally placed vulnerabilities in a system.

What is an Audit? (Or what should an audit be)
An IT audit is a test of the controls in place on a system. An audit should always find more exposures than an “ethical attack” due to the depth it should cover. The key to any evaluation of an audit is the preceding phrase: “the depth it should cover”. Again, budgetary and skill constraints affect the audit process.

The level of skill and knowledge of audit staff on particular systems will vary. The ensuing audit program that is developed will thus also vary, based on the technical capabilities of the auditor on the systems they are evaluating. Further, the level of knowledge held by the auditor or the staff collecting the information needed to complete the audit will also affect the result.

One of the key issues in ensuring the completeness of an audit is that the audit staff are adequately trained both in audit skills and in the systems they have to audit. It is all too common to have auditors involved in router and network evaluations who have never been trained in, and have no practical skills with, networking or network devices.

Often it is argued that a good checklist developed by a competent reviewer will make up for the lack of skills of the work-floor audit member, but this person is less likely to know when they are not being entirely informed by the organisation they are meant to audit. Many “techies” will find great sport in feeding misinformation to an unskilled auditor, leading to a compromise of the audit process. This of course has its roots in the near universal mistrust of the auditor in many sections of the community.

It needs to be stressed that the real reason for an audit is not the allocation of blame, but its place in a process of continual improvement. One of the major failings in an audit is the propensity of organisations to seek to hide information from the auditor. This is true of many types of audit, not just IT.

For both of the preceding reasons it is important to ensure that all audit staff have sufficient technical knowledge and skills to both ensure that they have completed the audit correctly and to be able to determine when information is withheld.

From this table, it is possible to deduce that a report of findings issued from the penetration test would be taken to be significant when presented to an organisation’s management. Without taking reference to either the audit or the control results as to the total number of vulnerabilities on a system, the penetration test would appear to provide valuable information to an organisation.

However, when viewed against the total number of vulnerabilities that may be exploited on the system, the penetration test methodology fails to report a significant result. Of primary concern, the penetration test reported only 13.3% of the total number of high-level vulnerabilities that may be exploited externally on the test systems. Compared with the system audit, which reported 96.7% of the externally exploitable high-level vulnerabilities on the system, the penetration test methodology has been unsuccessful.

External penetration testing is less effective than an IT audit
To demonstrate that an external penetration test is less effective than an audit, it is essential to show that both the number of high-level vulnerabilities detected and the total number of vulnerabilities discovered by the penetration test are significantly less than those discovered during an audit.

[Figure 1 - Graph of Vulnerabilities found by Test type]
As may be seen in Figure 1 (Graph of Vulnerabilities found by Test type) and Figure 2 (Graph of Vulnerabilities found by exploit type), both the total number of vulnerabilities discovered and the number of high-level vulnerabilities are appreciably lower in the penetration test results than in the audit results.

[Figure 2 - Graph of Vulnerabilities found by exploit type]

The primary indicators of the success of the penetration test would be both the detection of high-level vulnerabilities and the detection of a large number of vulnerabilities overall.

It is clear from Figure 3 (Graph of Vulnerabilities) that the penetration test methodology reported a smaller number of exploitable external vulnerabilities, both as a whole and when comparing only the high-level vulnerability results.

[Figure 3 - Graph of Vulnerabilities]

It is not all Bad News
The key is sufficient planning. When an audit has been developed sufficiently, it becomes both a tool to ensure the smooth operation of an organisation and a method to understand the infrastructure more completely. Done correctly, an audit may be a tool that does more than just point out vulnerabilities to external “hackers”. It may be used within an organisation to simultaneously gain an understanding of the current infrastructure and associated risks and to produce a roadmap towards where the organisation needs to be.

A complete audit will give more results and, more importantly, is more accurate than any external testing. The excess data needs to be viewed critically at this point, as not all findings will be ranked at the same level of importance. This is where external testing can be helpful.
After the completion of the audit and verification of the results, an external (preferably white box) test may be conducted to help prioritise the vulnerable parts of a system. This is the primary area where external testing has merit.

“Blind testing” by smashing away randomly does not help this process. The more detail an auditor has, the better they may perform their role and the lower the risk.

Summary

Just as Edsger W. Dijkstra, in A Discipline of Programming, denigrates the concept of "debugging" as being necessitated by sloppy thinking, so too may we relegate external vulnerability tests to the toolbox of the ineffectual security professional.

In his lecture "The Humble Programmer", Edsger W. Dijkstra argues:
"Today a usual technique is to make a program and then to test it. But: program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence. The only effective way to raise the confidence level of a program significantly is to give proof for its correctness. But one should not first make the program and then prove its correctness, because then the requirement of providing the proof would only increase the poor programmers’ burden. On the contrary: the programmer should let correctness proof and program to go hand in hand..."

Just as in program development, where the best way of avoiding bugs is to formally structure development, systems design and audit need to be structured into the development phase rather than testing for vulnerabilities later.

It is necessary that the computer industry learns from the past. Similar to Dijkstra's assertion that "the competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague..." [iv], security professionals, including testers and auditors, need to be aware of their limitations. Clever tricks and skills in the creation of popular "hacker styled" testing are not effective.

As the market potential has grown, unscrupulous vendors have been quoted overemphasising dangers to expand their customer base and, in some cases, have sold products that may actually introduce more vulnerabilities than they protect against.

External testing is an immense industry. This needs to change. It is about time we started securing systems and not just reaping money in from them using ineffectual testing methodologies.

Conclusion

An audit is not designed to apportion blame. It is necessary that as many vulnerabilities affecting a system as possible are diagnosed and reported. The evidence clearly supports the assertion that external penetration testing is an ineffective method of assessing system vulnerabilities.

In some instances, it will not be possible or feasible to implement mitigating controls for all (even high-level) vulnerabilities. It is crucial however that all vulnerabilities are known and reported in order that compensating controls may be implemented.

The results of the experiment categorically show the ineffectiveness of vulnerability testing by "ethical attacks". This ineffectiveness in turn undermines the implementation of the affected controls and countermeasures.

This type of testing results in an organisation's systems being susceptible and thus vulnerable to attack. The results of this experiment strongly support not using "ethical attacks" as a vulnerability reporting methodology.

The deployment of a secure system should be one of the goals in developing networks and information systems in the same way that meeting system performance objectives or business goals is essential in meeting an organisation’s functional goals.

Acknowledgments

I would like to thank Sonny Susilo for his help with this experiment and BDO for their support. In particular I would like to thank Allan Granger from BDO for his unwavering belief in this research.

References
Web Sites and Reference
S.C.O.R.E. – a standard for information security testing http://www.sans.org/score/
The Auditor security collection is a Live-System based on KNOPPIX http://remote-exploit.org/
Nessus is an Open Source Security Testing toolset http://www.nessus.org/

Appendixes
In support of the assertions made within this paper, experimental research was conducted. The paper from this research has been completed and is available to support these assertions. First, the system tested is detailed as per the results of an audit. Next, a scan of the system is completed as a black, grey and white box external test.

The results of these tests below support the assertions made in this paper. The configuration of the testing tool has been tailored based on the knowledge of the systems as supplied.

Endnotes
[i] http://www.sdmagazine.com/documents/s=818/sdm9809c/
[ii] “Fred Cohen”
[iii] Fred Cohen, http://www.sdmagazine.com/documents/s=818/sdm9809c/
[iv] Edsger W Dijkstra, EWD 340: The humble programmer published in Commun. ACM 15 (1972), 10: 859–866.

Thursday, 15 November 2007

Digital Forensic Book Now Available

Just an announcement that a book is now available and off the press.

“The Official CHFI Study Guide (Exam 312-49)” is off the press and available now.
This is available from Amazon at http://www.amazon.com/Official-CHFI-Study-Guide-312-49/dp/1597491977 as well as your choice of Book sellers.

Book Description
This is the official CHFI study guide for professionals studying for the forensics exams and for professionals needing the skills to identify an intruder's footprints and to properly gather the necessary evidence to prosecute.

The EC-Council offers certification for ethical hacking and computer forensics. Their ethical hacker exam has become very popular as an industry gauge and we expect the forensics exam to follow suit.

The material is presented in a logical learning sequence: each section builds upon previous sections and each chapter on previous chapters. All concepts, simple and complex, are defined and explained when they appear for the first time. The book includes the following special chapter elements:

  • Exam objectives covered in a chapter are clearly explained at the beginning of the chapter;
  • Notes and Alerts highlight the crucial points;
  • An Exam's Eye View section at the end of each chapter emphasises the important points from the exam's perspective;
  • Key Terms present the definitions of key terms used in the chapter;
  • A Review Questions section at the end of each chapter contains questions modelled after the real exam questions, based on the material covered in the chapter. The answers to these questions are presented with explanations in an appendix.

Also included is a full practice exam modelled after the real exam. The answers to the exam questions are presented with full explanations.

* The only study guide for CHFI, provides 100% coverage of all exam objectives.
* Full web-based practice exam with explanations of correct and incorrect answers

An analysis of the Australian Computer Crime and Security Survey 2006.

I was going to wait until there was a new survey, but the Government has decided that it will not fund this report any longer and AUSCERT has cancelled these reports (I wonder why - what is a little bias between friends?). I will have to pick on an FBI survey now.

The Australian Computer Crime and Security Survey 2006 has been taken as authoritative in the IT industry. There are, however, a few flaws in its methodology that bias the results and damage the effectiveness of the report.

The survey has the aim of discovering “what or who are the most potentially dangerous internal and external threats to an Australian organisation?” so that the threats to the National Information Infrastructure (NII) can be assessed. The paper states that it “presents a snapshot of Australian computer crime and security trends now and in the future.”

Contrary to the survey method, this question needs to be specified in terms of sector and industry. It is unlikely that the broad-brush statements which have been made will apply equally or reflect the true threats.

The survey seeks to furnish additional data of use in developing risk management strategies in order to assess the correct level of investment in security. A good understanding of the likelihood and impact of an attacker compromising an undertaking would be an economic boon. This could allow one to evaluate an undertaking's risk through a comparison with the experiences of other undertakings with comparable systems and characteristics. Such comparisons could enable a competitive analysis and an assessment of the principles of due care and diligence in defending the undertaking's assets.

The survey was sent to organisations using reply paid envelopes. These organisations were chosen by ACNielsen from a selection of organisations for which ACNielsen held existing data concerning the IT managers of those undertakings.

In selecting the 2,024 organisations that the survey was sent to, no attempt was made to either randomise their selection or to ensure that the sample was representative of the population as a whole. Rather, members of the Trusted Information Sharing Network (TISN) were targeted and given a preference. TISN members were invited to complete the survey using an online survey web site. Members of the TISN are however not representative of the Australian information security population.

Of the 2,024 reply paid envelopes, only 238 responses (11.75%) were received. Of the more than 1,000 TISN members (the exact number is not disclosed), 151 (or less than 15%) online submissions were received.

The survey was anonymous, and no information regarding the source organisation or the names of the parties was collected. Further, no method was put in place to restrict TISN respondents from submitting multiple responses. In the case of TISN members, many of these organisations have multiple information security personnel, all of whom could have submitted the survey online and all of whom were invited to do so.

The target population and sampled population
The target population consisted of the 2,024 organisations in the existing ACNielsen database with a designated IT manager. Additionally, members of the Trusted Information Sharing Network (TISN) were targeted.

The stated goal of the research was to have a sample population that was representative of the overall population of organisational information technology users (including government, business and industry, and other private sector organisations).

Of the 2,024 reply paid envelopes, only 238 responses (11.75%) were received. Of the more than 1,000 TISN members (the exact number is not disclosed), 151 (or less than 15%) online submissions were received.

The survey's authors reported an overall 17% response rate. In this they did not take the number of TISN members into account: they took the 389 responses and divided them by the 2,024 surveys sent by reply paid envelope to come up with the 17% figure. However, when the approximate number of TISN members is included, the total number of organisations invited to participate exceeds 3,024. This is in reality a 12.8% or lower response rate.
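The arithmetic is easy to check. The sketch below takes the reported figures at face value and treats the TISN membership as exactly 1,000 (it is only disclosed as "over 1,000"):

mail_sent, mail_received = 2024, 238
tisn_invited, tisn_received = 1000, 151   # "over 1,000" members; exact figure not disclosed

total_received = mail_received + tisn_received   # 389 responses in total
print(round(100.0 * mail_received / mail_sent, 2))                    # ~11.8% of the mail-out
print(round(100.0 * total_received / (mail_sent + tisn_invited), 1))  # ~12.9% of all invitees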
A copy of the distribution of responses is attached below:




It should be further noted that “notable changes in survey demographics, size and industry representation” have occurred on a year-to-year basis. Significant variations across the responses have occurred each year from the initial survey in 1996 to the last survey in 2006.

The survey conclusions
"Interestingly, having more security measures did not mean a reduction in attacks. In fact there was a significantly positive correlation between the number of security measures employed and the number of Denial of Service (DoS) attacks."

The survey found that around 20% of organisations have experienced electronic attacks in the last 12 months. This is stated to be far lower than the previous two surveys (with 35% and 49% in 2005 and 2004 respectively).

It was asserted that 83% of organisations that were attacked electronically were attacked externally. Conversely, only 29% were reported to have been attacked internally.

It was noted that there was an overall reduction in the level of attacks. The survey counted infections from viruses and worms as attacks; 45% of respondents who had been attacked stated that they were attacked in this manner.

It was also asserted that for 19% of respondents who reported computer crime to Australian law enforcement agencies, the report resulted in charges being laid. This statistic does not correspond in any justifiable manner to the number of “cyber crime” convictions. It is thus difficult to believe these findings.

Are the conclusions justified?
Nothing is noted in either survey to account for the confidence of the tests/survey or the calculated Type I error. According to the responses, approximately 1 in 3 Australian organisations is critical to national infrastructure; if this were the case, given the number of systems failures that occur on a daily basis, Australia should be in a constant state of panic.
The justifications of the survey cannot be considered valid. The available public references, however, cite this report as authoritative.

Antivirus, antispyware, firewalls and antispam are principally designed to defend against external threats. In considering internal threats it is necessary to take into account file access controls, role-based access, segregation of duties (SOD), logging, configuration of the log files, internal use of IDS/IPS, change management, change monitoring of critical systems, administration of user access with regard to new users, role changes and terminations, reconciliation of access, roles and SOD, background checks[1], and other such controls.
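Many of these internal controls reduce to simple reconciliations. As a toy example (the account and HR lists are invented), reconciling active accounts against terminations is only a few lines:

active_accounts = {"asmith", "bjones", "cwu", "dlee"}   # accounts still enabled on the system
terminated_staff = {"cwu", "evans"}                     # HR termination list

orphaned = active_accounts & terminated_staff
print("Accounts to disable:", sorted(orphaned))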

The survey has taken no account of randomness. The selection method is not randomised, and there is no validation to ensure that the TISN members who did submit the survey did not do so multiple times. In fact, several TISN members have multiple security personnel, all of whom were invited to participate but who would have been counted as a single organisation.

If the survey had been randomised, there would have been an equal chance of all members of the population being selected. However, where the population is predefined, as in this survey, and the sample is drawn non-randomly, the sample must be described as biased.

There is no reason to conclude that the respondents are a representative sample of information security practitioners, or that the undertakings they are attached to are representative of the population of organisations within Australia.

Sources of bias within the survey
The overall response rate for this survey was reported at approximately 17% (389 of 2,024). However, the true rate was 12.8% or lower. Of the 2,024 reply paid envelopes, only 238 responses (11.75%) were received. Of the more than 1,000 TISN members (the exact number is not disclosed), 151 (or less than 15%) online submissions were received, and no account was made to ensure that multiple people from the same organisation did not respond.

Nothing was noted as to whether the 12.8% who did answer differed in any other way from the 87% who did not respond.

In this instance, there is certainly a clear bias towards TISN members. There is no indication that TISN members, who are likely to be running critical infrastructure networks, are in any way representative of the information security initiatives associated with the population of Australian organisations as a whole.

The failure to either isolate, separately report or otherwise take into account the unmonitored responses from TISN members must make them a potentially confounding variable. Further, there is nothing to suggest that “people who respond the surveys” in this case are representative of the larger population of all people involved with information security from all companies and other organisations, as was desired.

For instance, what if the individuals who were willing to answer the survey are also those who have better security in place, and thus better detection mechanisms and logging, than those who do not respond? In this case the respondents could be confounding the reported quality of security and the number of cyber crime attacks. The biased sample could be misleading us into overestimating or underestimating the number, or impact, of cyber crimes as they affect organisations within Australia. We could also be overestimating the level of information security controls in place within the desired population.

Next, the comparisons of the surveys across different years are based on changing demographics, as reported. These changing demographics do not reflect changes in the population as a whole.
Different individuals and different methodologies have been used in the surveys from prior years. This may introduce further confounding variables, as responses may change over time, with the nature of the people surveyed, or with the survey techniques.

Next, the surveys were carried out with different questions and were run by different research groups in each of the years. The difference in the questions asked over the years is a possible confounding variable in itself. Nothing has been done to take these confounding changes in methodology into account in the survey.

When the survey response rate is undersized, there is a possibility that the sample turns out to be biased. The authors of the survey needed to lessen this chance by ensuring that the sample incorporated an appropriate representation of size, industry sector and location, and by weighting the data accordingly.

Interviews and other social-sciences research methodologies can suffer from a systematic tendency for respondents to shape their answers to please the interviewer or to express opinions that may be closer to the norm in whatever group they see themselves belonging to. Thus if it is well known that every organization ought to have a business continuity plan, some respondents may misrepresent the state of their business continuity planning to look better than they really are.

In addition, survey instruments may distort responses by phrasing questions in a biased way; for example, the question “Does your business have a completed business continuity plan?” may have a more accurate response rate than the question, “Does your business comply with industry standards for having a completed business continuity plan?” The latter question is not neutral and is likely to increase the proportion of “yes” answers.

The sequence of answers may bias responses; exposure to the first possible answers can inadvertently establish a baseline for the respondent. For example, a question about the magnitude of virus infections might ask “In the last 12 months, has your organization experienced total losses from virus infections of (a) $1M or greater; (b) less than $1M but greater than or equal to $100,000; (c) less than $100,000; (d) none at all?” To test for bias, the designer can create versions of the instrument in which the same information is obtained using the opposite sequence of answers: “In the last 12 months, has your organization experienced total losses from virus infections of (a) none at all; (b) less than $100,000; (c) less than $1M but greater than or equal to $100,000; (d) $1M or greater?”

The sequence of questions can bias responses; having provided a particular response to a question, the respondent will tend to make answers to subsequent questions about the same topic conform to the first answer in the series. To test for this kind of bias, the designer can create versions of the instrument with questions in different sequences.

Another instrument validation technique inserts questions with no valid answers or with meaningless jargon to see if respondents are thinking critically about each question or merely providing any answer that pops into their heads. For example, one might insert the nonsensical question, “Does your company use steady-state quantum interference methodologies for intrusion detection?” into a questionnaire about security and invalidate the results of respondents who answer “Yes” to this and other diagnostic questions.

Ultimately, as rare as it may be in the real world, independent substantiation of responses offers strong evidence as to the truth of a respondent's answers.

[1] COBIT V4.0 (ISACA) provides a detailed list of internal information systems controls.

Wednesday, 14 November 2007

Why do wireless attacks occur?

Many people think that it is easy to track down and catch an attacker who has made illicit access to your wireless network. In this post I will explain why it is not so easy.

First, it is not necessary to broadcast in order to monitor wireless traffic; the attacker can remain passive. In a passive attack, the attacker could be sitting on a hilltop 200km away from your site. They do not need to send any packets back to you, just sniff the secrets going by. In this case there is next to no hope of catching the attacker, or for that matter even knowing that you are being attacked in the first place. This is where the value of a good security infrastructure comes into play.

What most people think of, and where we have at least a small chance, is the active attacker. This is the attacker who is actively interacting with your network and systems and not just waiting for traffic to randomly float past. One thing to remember: passive attackers can become active attackers without notice.

So let’s address this in detail.

To do so, let's look at the threats first. We have:

  • Friendly – unprotected wireless networks deployed in ignorance.
  • Malicious – either a malicious rogue attacker or a planted rogue network or AP.
  • Unintended – equipment deployed without authorisation and likely incorrectly configured (this group commonly includes infrastructure rogues).
The friendly and unintended threats are easy to find. They will be either an AP or a wireless card in the local proximity, and these are easy to trace. As such we can ignore them for the purpose of this post.

There are a variety of means to discover rogues on the wireless network. These include:

  1. Wired-side AP fingerprinting

  2. Wired side MAC prefix analysis

  3. Wireless-side warwalking

  4. Wireless-side client monitoring

  5. Wireless-side WLAN IDS
If your intention is to test your own malfunctioning or mis-configured equipment on your network, then there is no crime. If you know it is not your device and you attack it in full knowledge, then a crime is the result. For example, you can run a Nessus AP fingerprint scan on your own (or what you believe is your own) equipment with impunity (assuming permissions and rights).

In the case of an attacker external to the network, we can ignore options 1 and 2. If the rogue device (an AP, for instance) is on the wired-side network, scanning is legally acceptable, as the scanning of your own equipment is a legitimate option. This still does not grant the right to actively attack the device on discovering it is a rogue. This is a matter of intention.

As for wireless-side analysis: this is easy to do, but it is time consuming, error prone (there is a risk of false negatives and a good chance of false positives) and is likely to miss or incorrectly correlate moving targets.

Kismet will allow you to save filters based on the BSSIDs and MAC addresses discovered. Kismet can then be configured to ignore all authorised networks. This allows the creation of a baseline, and the baseline allows alerting on exceptions, that is, unauthorised APs. (In rfmon mode, Kismet is virtually undetectable by conventional methods.)
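The baseline itself can be as simple as a list of authorised BSSIDs compared against whatever the scanner observes. A minimal sketch (with invented MAC addresses) might look like:

authorised_bssids = {"00:11:22:33:44:55", "00:11:22:33:44:66"}   # the approved APs

observed = [                      # as exported from the wireless scanner
    ("00:11:22:33:44:55", "CORP-WLAN"),
    ("de:ad:be:ef:00:01", "Free_WiFi"),
]

known = {b.lower() for b in authorised_bssids}
for bssid, ssid in observed:
    if bssid.lower() not in known:
        print(f"ALERT: unauthorised AP {bssid} broadcasting '{ssid}'")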

AiroPeek NX is a commercial option for those companies that prefer not to use open source software. Either method is time consuming and amounts to a “point in time” audit. Warwalking cannot be set to wait and report on exceptions.

AirWave RAPIDS is a commercial option that conducts both wired-side and wireless-side monitoring and assessment. It monitors and reports on wireless activity and flags (and alerts on) new networks as potential rogue APs. It is an expensive option, with a licence required for every client, and there are trade-offs: either the monitoring will be poor or the hosts' wireless performance will be impacted.

There are also wireless-side WLAN IDS deployments; Aruba is an example. Again, these are costly and require that a sensor be deployed at every facility using wireless (and, if you really want to be safe, at those that do not as well).

None of this helps us find the rogue – we only find out that one may exist.

So how, you finally ask, can we actually locate the rogue?

First, there is a manual analysis process using the signal-to-noise ratio (SNR). SNR is maximised when the devices are associated. The idea is to map the SNR and locate the antenna (note: the antenna, not the rogue itself). These techniques rely heavily on guesswork; Kismet and a GPS will help.

Directional analysis makes this a little easier. It requires a directional antenna and RSSI (Received Signal Strength Indication – the signal and noise levels associated with a wireless device). Channel hopping should be disabled while doing this, and it is essentially a matter of trial and error.

Rapfinder (open source) is a tool that aids in this process. AirMagnet is a commercial handheld tool designed to home in on the source of a radio signal (as you get closer, the clicks increase in frequency, like a Geiger counter).

Next we get to triangulation. Even this is not 100% accurate, due to RF interference, signal loss and radio signal distribution patterns (which vary with the physical surroundings). Aruba AirMonitor with three sensors will find local APs with a fair degree of accuracy.
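
To give a feel for what such sensors do under the hood, here is a toy Python sketch of RSSI-based trilateration. It assumes a simple log-distance path loss model with made-up constants (reference power and path loss exponent) and three sensors at known positions; real products are considerably more sophisticated.

# Toy sketch of RSSI-based trilateration under a log-distance path loss
# model. The reference power and path loss exponent are illustrative
# assumptions, not calibrated values.
def rssi_to_distance(rssi_dbm, ref_power_dbm=-40.0, path_loss_exp=3.0):
    """Estimate distance in metres: d = 10 ** ((P_ref - RSSI) / (10 * n))."""
    return 10 ** ((ref_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(sensors):
    """sensors: three ((x, y), rssi_dbm) tuples at known positions.
    Returns an (x, y) estimate by linearising the three circle equations."""
    (x1, y1), r1 = sensors[0][0], rssi_to_distance(sensors[0][1])
    (x2, y2), r2 = sensors[1][0], rssi_to_distance(sensors[1][1])
    (x3, y3), r3 = sensors[2][0], rssi_to_distance(sensors[2][1])
    # Subtracting the circle equations pairwise gives two linear equations in x and y.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x2), 2 * (y3 - y2)
    b1 = r1 ** 2 - r2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    b2 = r2 ** 2 - r3 ** 2 + x3 ** 2 - x2 ** 2 + y3 ** 2 - y2 ** 2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Example: three monitors at known positions (metres) and measured RSSI (dBm).
print(trilaterate([((0, 0), -60), ((30, 0), -65), ((0, 30), -70)]))

Even in this idealised form, small RSSI errors translate into large position errors, which is why the manual "Geiger counter" approach often wins in practice.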

However, this takes us to the point: an attacker is not always going to be located nearby. The range with a good high-gain Yagi antenna is a radius of over 10 km. That is an area of over 300 square kilometres, or roughly 30,000 households, businesses and so on. So have fun searching.

It is not an easy task to track a wireless attacker. People sometimes get lucky, but the above is how it really works.

Tuesday, 13 November 2007

Does PCI-DSS affect your organisation?

The Payment Card Industry Data Security Standard (PCI-DSS), also known as the “digital dozen”, is a standard developed to ensure acceptable levels of security are maintained over information transmitted and stored by any organisation that processes credit or debit card information. If your organisation is a merchant, third party service provider or a financial institution and you store, process or transmit any credit card/debit card information, then you are required to comply with the PCI-DSS standard.

In order to reduce the number of incidents of credit card fraud, the Payment Card Industry has developed a standard (which is a requirement under the merchant contract) that places the accountability for securing credit card information with the merchant that handles it. These organisations are contractually bound to comply with the standard.

Consequences of Non-Compliance
In March 2007, TJX Companies Inc., a corporation that owns a large chain of department stores in the US and other countries, learned this the hard way. Hackers were able to breach the network at TJX and compromise at least 45.7 million credit and debit card numbers. The company now faces more than a dozen class action lawsuits.

Why PCI has teeth in Australia
There has been some confusion as to whether the PCI-DSS standard can be enforced in Australia. In 2006 there were five known breaches, but no fines were issued, reportedly because the workers involved were “innocently ignorant.” However, with the release of PCI-DSS version 1.1, VISA has confirmed that PCI-DSS will be mandated in Australia.

If your company accepts or stores credit card information, it is required to comply with the PCI-DSS based on the criteria described above. Your financial auditors should enquire as to your PCI-DSS compliance status and controls. Be forewarned: providing false information to the auditors carries even more severe risks, as s 1309(1) of the Corporations Act (“False Information”) makes it a criminal offence to issue a false report to directors and auditors, and this includes reports to IT auditors or reports issued as part of the financial audit.


Monday, 12 November 2007

ICMP Safe? Not since Loki.

In the past, ICMP was thought to be no more than an annoyance. It could be used for DoS and network reconnaissance, but little more. Or so people thought...

Then came Loki.
Loki was initially presented in August 1996 in a Phrack publication. It was the first widely available implementation of a covert shell over ICMP. The idea was to exploit the data field of ICMP type 0 (Echo Reply) and ICMP type 8 (Echo Request) packets in order to implement a synchronous command shell, as a proof of concept of an embedded covert channel.

Loki acts as one would generally expect of a client/server application: an attacker compromises a host and installs a Loki server, which then responds to traffic sent by a Loki client.

Loki is not in wide use any more, and where it is in use, the payloads are likely being encrypted; I say this because Loki traffic is not being widely detected by IDS/IPS devices. Even when the payload is encrypted, however, there are ways to distinguish these types of covert shells.

First, there should be a disparity between the number of Echo Reply and Echo Request packets detected: depending on the traffic flow, there will be more of one than the other. Second, ICMP Echo Request and Echo Reply payload sizes should match, because a valid reply simply echoes back the payload it received; with a covert shell they generally differ. Loki and other covert ICMP channels show wide variation in payload sizes, just as other command shell traffic does.

It should also be remembered that a clever attacker could pad the packets with NOPs (no-operation bytes) so that all traffic remains the same size.
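
As a rough illustration of the payload-size heuristic, here is a small Python sketch using scapy that pairs Echo Requests with their Echo Replies and flags pairs whose payload sizes differ. It is only a sketch: the pairing logic is simplistic and, as noted above, padded traffic would slip straight past it.

# Hypothetical sketch: flag ICMP echo request/reply pairs whose payload
# sizes differ - one (evadable) indicator of a Loki-style covert channel.
from collections import defaultdict
from scapy.all import sniff, IP, ICMP, Raw

seen = defaultdict(dict)  # (host pair, icmp id, icmp seq) -> {icmp type: payload size}

def inspect(pkt):
    if IP not in pkt or ICMP not in pkt:
        return
    icmp = pkt[ICMP]
    if icmp.type not in (0, 8):                        # echo reply / echo request only
        return
    size = len(pkt[Raw].load) if Raw in pkt else 0
    hosts = tuple(sorted((pkt[IP].src, pkt[IP].dst)))  # same key in both directions
    key = (hosts, icmp.id, icmp.seq)
    seen[key][icmp.type] = size
    if 0 in seen[key] and 8 in seen[key] and seen[key][0] != seen[key][8]:
        print("Possible covert ICMP channel:", hosts,
              "request", seen[key][8], "bytes vs reply", seen[key][0], "bytes")

sniff(filter="icmp", prn=inspect, store=0)

Counting requests against replies per host pair over time would cover the first heuristic as well.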

Loki has been credited as the foundation of a fundamental component of TFN ("Tribal Flood Network"), a distributed denial of service (DDoS) tool. TFN utilised encrypted ICMP type 0 packets as its control channel. With a "butt-plug" module for Back Orifice 2000 (BO2K) offering remote control embedded within an ICMP-based conduit, one can see that the concept of using ICMP covert channels has developed into a standard and common idea within the blackhat community.


There are newer ICMP-based covert shells, but Loki was the start.

The question remains: why let ICMP through the firewall unchecked?

Exploiting Loose Source Routing

"Source routing is an IP option which allows the originator of a packet to specify what path that packet will take, and what path return packets sent back to the originator will take. Source routing is useful when the default route that a connection will take fails or is suboptimal for some reason, or for network diagnostic purposes."

For more information on source routing, see RFC791.


If we take it that the normal traffic flow from the attacker to the server goes via "router a", "router b", "router c", a firewall and finally to the victim, we have the standard scenario for routing traffic over the Internet.

By exploiting source routing, it should be plain that an attack appearing to come from a trusted host is more likely to succeed. Many people think that you need to actually compromise the trusted host to achieve this; with source routing, that is not the case.

The route could instead be made to go via "router a", "router b", the trusted host, the firewall and finally to the victim, using the source IP of the trusted host.


If, for instance, the external trusted host is allowed through the firewall ruleset based on its source IP address, the attacker can bounce off this host to gain access to the internal network. The attack works because the trusted host retransmits the packet using its own IP address as the source address.

But it gets worse. Traffic can be source routed directly to many low-end firewalls, which then forward it to the internal network using their internal IP address as the new packet source.


Many of the low-end "NAT-based firewalls" on the market are not true firewalls. Rather than setting up ACLs to filter traffic, they rely on a combination of NAT and private IP addressing to protect the internal network. Many of these boxes will respond to loose source routing; that is, they will forward source routed packets onto their internal network. The most insidious part is that they will use the firewall's internal IP address as the packet's source address, which may even enable the traffic to bypass many host-based firewall rules.

Having to guess the victim's internal network may slow the attacker down a little, but the fact that the majority of these boxes default to an internal network of 192.168.0.x or 192.168.1.x makes it easier. Further, as I have mentioned in a previous post, it is generally simple to get the router to respond with its internal interface, which of course is a dead giveaway of the internal network.

Many tools (even nc – Netcat) support a source route option. This allows the attacker to select the path taken to the host and also the return path. In setting up the attack, the attacker source routes via the trusted host, which will be the last system outside the target's router or firewall.

Due to source routing, the reply packets follow the reverse of the source route used to reach the trusted host and so return to the attacker – even when a "non-routed" IP address is in use.
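
To make the mechanics concrete, here is a minimal Python/scapy sketch of crafting a packet that carries a Loose Source Route (LSRR) option through a nominated hop. The addresses are placeholders, and whether any given intermediate device actually honours the option is another question entirely; this is illustration, not a working exploit.

# Hypothetical sketch: a TCP SYN carrying an IP Loose Source Route option.
# All addresses below are placeholders for illustration only.
from scapy.all import IP, IPOption_LSRR, TCP, sr1

trusted_host = "203.0.113.10"   # host the target's firewall trusts (placeholder)
victim       = "192.168.1.20"   # internal target behind a NAT "firewall" (placeholder)

pkt = IP(dst=victim, options=[IPOption_LSRR(routers=[trusted_host])]) / \
      TCP(dport=80, flags="S")

# Replies follow the recorded route in reverse, so they can find their way
# back even though the destination address is not publicly routable.
reply = sr1(pkt, timeout=5)
if reply is not None:
    print(reply.summary())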

Source routing allows packets to follow a set path. It does not rely on the standard routing decisions and is thus dangerous. Source routing is still used in a number of multicast protocols, and many are loath to disable it.

There are two primary types of source routing: Loose Source Routing and Strict Source Routing. ISS has a good paper on this topic; have a read of it.

Sunday, 11 November 2007

And back to Lisarow till another weekend at the farm...

And again we are back from the farm for another week in the city.

After my hernia a few months ago, the garden fell into a state of true entropy. Kelvin and his laws, huh? Those darn laws of thermodynamics apply to everyone: if you want order, you have to constantly input energy, in this case mine.

I have finally caught up, no thanks to a small arthropod, Ixodes holocyclus, the paralysis tick. I managed to collect one at the farm a few weeks back. Ticks are common here (mainly cattle ticks), but this is the first paralysis tick I have had. No ataxia, but it left me sore and with a lump that lingered for weeks.

The fun of the country. At least I have not cut anything off in years (and the finger was reattached perfectly and I still play piano just as badly as ever).

I have replanted the garden. Weeding was a bugger to say the least. Now it is compost.

I missed a summer harvest due to a late start, so we have to buy veges like everyone else for a little while. We still have a good variety of citrus, and the stone fruit are coming along well.

I have a good amount of salad vegetables, onions, peas and beans, as well as the perennial plots and herbs (we are never short of herbs). The chillies, elderberry and atemoya (custard apple) are going well and we expect a good crop. The atemoya is the tree with the leaves and fruit coming in for the summer (in the centre of the photo).

In case you have not guessed, I had the yard landscaped. Unlike most people, I turned it into an urban vegetable plot, with about 40 square metres devoted to vegetables and herbs.

And on the 7th day...

Well, it is Sunday, the day of the week on which I refrain from security-related posts and move to a very important aspect of life: the farm that my wife and I own and manage. Information security is still one of the day's tasks, just not in this post. I have a few writing tasks on the books I am authoring and some question-writing assignments for a number of certifications. I am also doing some academic research into using CART (a digital analysis and data mining technique) in digital forensics to probabilistically predict the sender of a spoofed email.

Today I am here at the farm. I am not just an academic; I like to learn about everything, so I also do trade studies. I am in the final stages of completing my automotive trade certificate – that is, as an automotive mechanic.

This weekend I am working on my "Clutches and Transmissions" assignment. So, who knows why, in an automotive transmission, torque is at its maximum when the impeller is rotating much faster than the turbine, as can occur when the engine is accelerating quickly? Something for you to research if you really want.

The world is full of grandeur, if we take the time to see it.
One of the benefits of the farm is that it helps maintain the sense of reality that is so often lost. We forget that, with all the technology we make, we are still a mote in the universe. We forget how small we are in the scheme of things, stuck in our narrow view of the world. We need to look from a greater perspective.

It is hard to remember the important things, and at times we have to take a break to see what we have done, where we are going and how it all fits together. This is part of why I "do" security work: it is about making sure the world is just that little bit better.

To this end I am rebuilding a couple more computers: adding RAM, checking the hardware and so on. By the time they are done, they are up to about the standard of a typical low-to-medium end new machine. Burnside distributes these to needy families. This weekend's batch includes the 250th computer I have donated (about 30% bought new when I have no time, the rest rebuilt, saving landfill; some have gone to SVCS in the past as well).

All this gives me time to get away and reflect on what matters in life. It reminds us how small we are and how big the world is, and it allows me to focus and see my small place in humanity.

Please do not focus on what I do here; that is not the point. Helen and the good staff at Burnside, Hastings are the heroes of the piece.

I spend a little money and give up a little time. Helen and her staff give aid to people in need every day. She and people like her are the real champions of this post.

So, as the sun sets slowly in the west, I will leave you with today's final image and a concluding remark: what have you done this week to try to repay the gift of just being alive?