Saturday, 20 August 2011

Add-ons to help make your browser more secure.

A multitude of security extensions exist that enhance the native capabilities of both IE8/9 and Firefox (and Chrome and Safari). These range from open-source projects to commercial offerings suitable for enterprise deployment. Some of the best-known browser security extensions include:

  • Exploit Prevention Labs' LinkScanner
  • Google Toolbar
  • McAfee's SiteAdvisor
  • Netcraft's Anti-Phishing Toolbar
Exploit Prevention Labs offers a commercial version of its product, LinkScanner Pro, which “inspects each web page when you visit for exploits, hacked pages and malicious lures.” This extension also modifies the results returned from most major search engines so that a colour-coded icon appears alongside each entry returned from the engine. This rating system is designed to aid users in avoiding sites that pose an active threat.

LinkScanner’s proxy technology will sandbox the URL visited by the user (in real time) on their own system if the site has not been previously analysed. This will return an analysis of any hidden exploit code and the particular exploit if it is known.

The Google Toolbar and Netcraft's Anti-Phishing Toolbar (see Figure 1) each provide a set of controls for both IE and Firefox that are designed to minimise the likelihood of a successful phishing attack. By presenting the user with a rating for the site and detailed information about the site's domain and content, the user is far less likely to be fooled into entering their information into a rogue system.

Figure 1: The Netcraft Anti-Phishing Toolbar[1]
McAfee's SiteAdvisor is offered both as a free (though functionally limited) and as a commercial product. McAfee conducts extensive tests of websites, with the results being displayed to the user through the extension. In an enterprise environment, it can be deployed to block user access to suspicious or dangerous websites.

The FireCAT project[2] for Firefox is designed as a “mindmap collection of the most efficient and useful FireFox extensions oriented application security auditing and assessment.” With the forthcoming version 2.0 of this project, FireCAT will incorporate advanced “management of plug-ins, instant download from security-database, ability to add new extension, extension version checker, FireFox 3.X compatible extensions.”

Figure 2: The FireCAT Extension Map for Firefox

The security controls and extensions developed both commercially and as open-source releases for Firefox are multitudinous. Extensions such as NoScript (see Figure 3) and Firebug[3] can be used to turn the browser into a malware analysis platform. These extensions (and others of the same class) allow a sophisticated user to finely control the actions of scripts and active code in their browser.
Figure 3: About NoScript


NoScript blocks the execution of all active web content (JavaScript, Java, Flash, Silverlight, and other plugins) by default. The user has to explicitly allow and whitelist sites before NoScript will let their content run.

NoScript can force the browser to always use HTTPS when establishing connections to some sensitive sites, in order to prevent man-in-the-middle attacks.

It also has anti-DNS-rebinding controls and anti-XSS and anti-clickjacking protections built in.

To learn more about NoScript, visit the NoScript project pages.

[1] The image was sourced from this site.
[2] See the FireCAT project site for the image in Figure 2 and for the FireCAT project itself.
[3] See the Firebug project site.

Microsoft decides that local attacks do not matter…

A new security vulnerability in Microsoft's implementation of IPv6 has been disclosed. There is a good report on it here.
Right now, I have only seen local DoS attacks from this, and no public exploit code, but that is still far from ideal.

The vulnerability lies in Windows 7's handling of IPv6 and has been acknowledged as an issue by Microsoft. However, they have no plans to actually fix it.

Microsoft have stated that, as any exploitation requires local network access, it is less critical.
The Windows 7 remote procedure call (RPC) function has a flaw in how it handles malformed DHCPv6 requests. Basically, old issues are coming back to haunt us as we start to move from IPv4 to IPv6. I am wondering right now how the Ping of Death (PoD) attack will progress in IPv6…

This type of thinking is truly short-sighted!
We really need to stop thinking this is ONLY a local network exploitation. There are several reasons for this:

  1. Internal attackers also exist.
  2. Cloud-based systems place hosts directly on the Internet.
  3. Home users are often exposed.
  4. Attack escalation.
Of particular concern is attack escalation. Once an attacker has breached a perimeter, they are going to attack internal systems and increase the reach and expanse of their compromise.

Vulnerabilities such as this one allow attackers to expand the scope of any breach they succeed in achieving. When we allow this type of vulnerability to remain, we all lose.

The attitude of vendors needs to change and it is only through consumer pressure that they will change.

What can we do?
There remain some workarounds even if Microsoft have dropped the ball and decided that they do not care for the security of their users (SHAME). A couple of simple controls are listed below:
  1. If you are not using it, disable it.
  2. Firewall the hosts. This also means local systems inside the network.
It is simple to disable IPv6 and, if you are not using this protocol, it is a wise move. To do this, simply bring up your adaptor properties:

In the image above, the highlighted region of the network controls is where we can change the protocols. Click the "Properties" button. You need local administrator privileges to make this change.

Then un-tick the option for IPv6. This is the line saying “Internet Protocol Version 6”.
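If you would rather script this than click through the GUI, the documented registry value from Microsoft KB929852 achieves the same result. This is a sketch to be run from an elevated command prompt, and a reboot is required before it takes effect:

```shell
:: Disable all IPv6 components except the IPv6 loopback interface
:: (DisabledComponents = 0xFF, per Microsoft KB929852).
:: Requires local administrator rights and a reboot.
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters ^
    /v DisabledComponents /t REG_DWORD /d 0xFF /f
```

Being a registry change, this can also be pushed out centrally, which suits the Group Policy approach mentioned below for domain environments.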

For those few people already using IPv6, you need to start thinking about and installing host-based firewall controls that stop access to the stack from untrusted systems. RPC is not a friendly protocol that you want all users on the Internet connecting to, in any event.
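As one sketch of such a host-based control, the built-in Windows Firewall can restrict who may reach the RPC endpoint mapper. The subnet below is an illustrative assumption; replace it with your own trusted range:

```shell
:: Allow the RPC endpoint mapper (TCP 135) only from the local
:: trusted subnet, and block it from everywhere else.
:: Run from an elevated command prompt.
netsh advfirewall firewall add rule name="RPC - trusted subnet only" ^
    dir=in action=allow protocol=TCP localport=135 remoteip=192.168.1.0/24
netsh advfirewall firewall add rule name="RPC - block others" ^
    dir=in action=block protocol=TCP localport=135
```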

If you are a business domain, then you have the option of pushing these controls to users through Group Policy.

Shame, Microsoft, Shame. I had thought that this attitude was leaving the culture of Microsoft. Again, Shame.
I encourage people to write (email) Microsoft's security team and to tell them just how displeased you are with this lack of concern for your systems' security and safety.

Friday, 19 August 2011

The Security Certification Jungle

Holding (amongst others) the following industry certifications: GSE, CISSP (ISSAP & ISSMP), CISA, CISM, CCE, GNSA, G7799, GWAS, GCFA, GLEG, GSEC, GREM, GPCI, MCSE and GSPA, plus a few Cisco ones and many more, and being a book author, I like to think that I have some idea concerning IT security certifications.

In a world of unconstrained growth in certifications and a lucrative business model for those who do it right, it is no surprise that Information Technology certificates have become as common as sand on the beach. Yet the real secret to this industry, as with any other built on a foundation of trust, is the choice of which certification one selects. The reason for this is that the right mix of certifications can be an immense boost for your career.

I have been tasked with writing an article on this topic as I am uniquely positioned to do so. I have a personal goal to complete all of the GIAC certifications this year and have certifications from most of the major vendors in IT. My insanity is collecting knowledge, and in this quest I can say that GIAC is the leading vendor-neutral digital forensic and security certification. GIAC has changed much over the years, and in my 15 years of collecting SANS training and the associated GIAC certifications I have experienced and seen these changes.

The vast majority of certification bodies and vendors fail to adequately guard and maintain the value of the certifications they offer; SANS and GIAC are not in this league. As a member of the ethics council for GIAC, a voluntary position, and through a long association with the people in SANS, I have seen the efforts that go into upholding the integrity of the tests. Many vendor certifications have become paper-tiger certificates, with brain-dump sites giving access to actual exam questions.

The position of these vendors is that this is discouraged, but where GIAC is concerned, the level of effort that goes into protecting the integrity of the exams is truly amazing. I have seen fake brain dumps for GIAC material, but any person using these will fail; they rarely even mention the material that is actually in the exams. The result is a certification that aligns with the knowledge in the SANS courseware and which offers the most in-depth and comprehensive security training and certification, bar none. GIAC demonstrates a commitment to its certifications through marketing, publicity and an adherence to the integrity of the program that means these will hold value not just now, but for the long term.

GIAC offers a means for people to corroborate their skills and knowledge by becoming certified with a qualification that actually demonstrates their knowledge. This covers a comprehensive range of areas related to information security with a vendor-neutral approach, offering over 30 individual certifications from entry level to expert. Even when a certification is focused on a particular vendor (such as the GCWN), they approach the subject in a "warts and all" manner that offers insight into the issues faced in a real production environment.

For this reason, GIAC certifications have immense value. The majority of companies I have been involved with (and, over the last couple of decades in audit and incident response, this is a large number) rate highly those IT professionals who proactively seek the opportunity to expand their skills and expertise. I can say that one of the finest methods available for these professionals to demonstrate their commitment to their career is the completion of industry-recognised certifications that demonstrate a technical capability.

The format of the GIAC exams is again in flux with the introduction of a new test methodology. While this is a change, it is one for the better. It is a move towards testing proficiency in assigned tasks and away from rote learning. GIAC was good here before, but this new format aligns the skills being certified with the needs of the enterprise in a way that has not been achieved before in anything less than an expert-level hands-on certification (such as Cisco's CCIE or the GSE).

Like all aspects of life, certifications come under the economic law of diminishing returns. That is, the more one achieves, the less value one gains from achieving more. When you get a GSE certification, you have something that can be used to demonstrate hands-on ability, as you have been tested in a long, gruelling process to confirm you can do what you profess. This has value to employers, who are willing to pay for skills that can directly benefit them.

At present, the level of awareness of the GIAC GSE exam is limited, but those who have knowledge of this certification hold it in high esteem and are willing to pay a premium for professionals with it. Of course, there is not a great level of economic value in attempting to gain all the certifications as I am doing, but one needs to have a goal and something to achieve.

The great benefit of the GIAC certifications comes from a combination of a technical focus that is second to none coupled with a series of focused certification paths. To gain the maximum value from this, you should choose a sub-field such as audit, security management or forensics within the wider field of information security and focus on it. That is, aim to be the best security auditor, pen tester, forensic analyst, or expert in whatever other field you want to pursue. By focusing on a selected roadmap, you have the complete set of skills needed in one place to achieve that through GIAC. Other vendor certifications will help as well, but for a technical focus, you cannot look past GIAC.

If there is an area in which GIAC could do better, it is in the promotion of its higher-level certifications. GIAC has two levels of accreditation for most of the certifications it offers: Silver (requiring an exam) and Gold (which also requires a peer-reviewed paper). In addition, the three Platinum-level certifications are the ideal means of demonstrating a true depth of information security knowledge, yet they are hardly known outside the information security community and are rarely considered as a goal in themselves.

The requirement of a peer-reviewed paper to achieve a Gold-level certification may seem unnecessary to many technical people, but it both adds weight to the student's ability in the field and demonstrates that they have the capability to communicate their findings. The point here is that you can be the best pen tester in the world, but if you cannot report your findings to management in a coherent manner, you may as well not be doing the job. Gold-level certifications also count as college credit. A Master's program offered by the SANS Institute is based on the completion of several Gold-level certifications from GIAC. Stressing the path to postgraduate qualifications would seem a logical end that needs more attention.
I completely agree with the quote: "The Master degree programs provides a comprehensive array of courses that allows students to gain technical mastery of technologies and processes that set apart the leading security practitioners in the field."

As far as a certification path goes, a series of certifications that are both in demand and which demonstrate technical ability is great. When you also consider that these can be used as credit towards a Masters degree, you have a set of certifications that go a long way to starting or enhancing your career.
In my career, both my staff and I are heavily involved with forensic and incident response work. GIAC adds enormous value here. It is common to have opposing "experts" offering reports in court. When you see some of the common mistakes that these uncertified people make again and again, you start to see the reason for learning a common methodology. On top of that, GIAC certifications need to be renewed, and the material is continually updated and aligned with the SANS courseware (which is updated with current technical trends and knowledge).

In my career, having had to present in court, the value of GIAC's quality control and depth is beyond question. Having a vendor certification (such as EnCE) in forensics does not hurt, but these do not teach the fundamentals; they focus on using the particular product. Personally, I would rather hire a forensic analyst with a GIAC forensic certificate and no experience on a product (let alone a vendor certification). It is easy to teach a person with the fundamentals the product, but it is not always easy to teach a person with product knowledge the why of what they need to do. The same applies to all aspects of security: knowing why, and having a wide range of in-depth technical skills, wins hands down over any particular vendor certification.

John Bambenek, one of the GIAC exam developers and a GIAC certificate holder, has noted:
"By shifting from recall questions to analysis/application questions, it requires the students to apply what they have learned instead of merely memorize the text."
The changes to the GIAC exam format mentioned earlier in this article will only further enhance the value of the certification to employers. Of course, holding a certification that is in demand is a means to becoming a sought-after professional. As also noted, the GIAC Gold format allows you to have a published paper to your name. This is both a means of marketing yourself, with well-read papers being a way of getting your name out to others, and of demonstrating your ability to communicate your findings and knowledge.

Chalmer Lowe, also GIAC certified and another exam developer, stated:
"Most of us don't run into problems where we work that are simply "recall" problems...most problems we run into involve some level of analysis and then some level of knowledge application. The format for the new SANS questions models what we experience in real life...a problem exists, the student needs to analyze the facts, determine the possible solutions and then apply the correct solution. The ability to answer these questions correctly in the test will be a good indicator of the student's ability to perform similarly in their work environment."

The new GIAC exam format will increase the chances of being able to identify those professionals with a high level of understanding and not just a good recall. Application of knowledge and skills is crucial to employers and the leadership taken by GIAC will make them the preeminent certification for years to come.

As Chalmer further held, "when a student sits down for a GIAC exam, they are expected to be able to analyze problems and apply their knowledge, not simply recall random facts".

Achieving GIAC certification provides an abundant array of benefits, including improved career prospects and superior earning power. Specialisation in information technology will only continue in this coming decade, and GIAC certification is a way for professionals with advanced skills and up-to-date knowledge to show that they have what it takes to succeed. Not only do these certifications provide a sense of personal accomplishment, but GIAC's range of certifications is also an ideal means of increasing your opportunities for career advancement.

Having technical skills and being able to demonstrate them will get you so far, but if you really want to be head-hunted, you need to demonstrate your ability. Doing a GIAC Gold paper, or even going for the GSE and the GIAC-based Master's degree, is a great way to fast-track your career.

Password-Cracking Tools

Today, I will start what will be an ongoing thread on password auditing.

Though stronger authentication methods such as tokens, smart cards and biometrics are now in use, most organisations still rely on passwords as an authentication method. Thus, one of the critical aspects to look at when conducting a security audit is the organisation's password policy and its use of strong passwords. Ironically, weak passwords still rank among the top vulnerabilities.

Things to check are:

  •  Password management – the password policy, such as password length and complexity, maximum password age, and password history
  •  Account lockout policy – the number of failed log-in attempts allowed before the account is locked out, and the lockout duration
  •  Blank passwords – these should not be allowed
  •  Non-expiring passwords
  •  Forced log-off after a period of inactivity
To audit passwords, we can use tools such as DumpSec and Hyena, or view the Security Template, Local Security Policy or Group Policy settings that apply to the machine. However, no matter how strong the policy is, it will not be effective if there is one account in the system that has a weak password. In addition to the above-mentioned tools, we need to use password-cracking tools to verify that weak passwords do not exist in the system. Using password-cracking tools, we can assess the strength of the organisation's passwords (for example, passwords that are vulnerable to dictionary-based attacks), particularly the administrator's username and password for the network, and verify congruence with the security policy. Some of these password-cracking tools are RainbowCrack, Cain & Abel, Brutus, and John the Ripper, to name a few. We can use these tools with consent from the clients, or the clients can use them in their internal monitoring of their password policy management.

Hackers usually employ two types of attacks to crack passwords and thereby gain unauthorised access to IT resources: brute force and table precomputation. In a brute-force attack, an attacker tries all possible keys to encrypt a known plaintext for which he has the corresponding ciphertext. In table precomputation, precomputed encryptions of a chosen plaintext are stored in advance in a file or table.

RainbowCrack uses table precomputation. It precomputes chains of password–LanManager hash pairs and stores them in files called "rainbow tables". Any time the password behind a LanManager hash is required, RainbowCrack just searches the precomputed tables and finds the password in seconds. A rainbow table can be generated or even purchased. Its size can be a gigabyte or more; the more complex the password to crack, the larger the rainbow table that should be used.

RainbowCrack was developed by Zhu Shuanglei, and implements an improved time-memory trade-off (doing the lengthy computation in advance and storing the result) cryptanalysis attack that originated with Philippe Oechslin's work, also implemented in Ophcrack.

How to use RainbowCrack?
Note: RainbowCrack may be detected by antivirus programs as spyware, preventing you from using it. Also, we can only use RainbowCrack after we have obtained the file containing the password dump of the target system that we want to test. We can get the password dump using the pwdump or fgdump utility tools.

1. First, we need to download the latest RainbowCrack and then, for our demo purposes, save the zip file in the C:\Audit Tools folder. Below is a screen shot showing the contents of the zip file that we downloaded:

We can see three .exe files, namely rcrack, rtgen, and rtsort. rcrack is RainbowCrack itself. rtgen is used to generate our own rainbow tables. rtsort should be run after rtgen, as rcrack does not use unsorted rainbow tables. For the command syntax of these .exe files, the .htm files rcrackdemo and rcracktutorial are a great help, and we can always use "-?" on the command line to ask how a command should be typed.

The .txt files in the folder are samples of the password dumps that we can use to practise using RainbowCrack.

The only .rt file in the folder is the rainbow table that we started to generate using rtgen.exe.

2. Generating our own rainbow table using rtgen.exe:

As already mentioned, we can either purchase or generate our own rainbow tables. Using rtgen.exe, we will generate our own rainbow table with the simplest configuration there could be, i.e. for cracking passwords whose character set is alpha. The configuration we are referring to, as well as the commands to generate the tables, are:

So we use the table precomputation commands to generate our own .rt files.
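Reconstructed from the .rt file names that appear below (table indices 0–4, chain length 2100, 8,000,000 chains per table, file suffix "bla"), the rtgen invocations would look something like this sketch; the plaintext length range of 1 to 7 is an assumption typical for LM alpha tables:

```shell
:: rtgen <hash> <charset> <min_len> <max_len> <table_index>
::       <chain_len> <chain_count> <file_suffix>
rtgen lm alpha 1 7 0 2100 8000000 bla
rtgen lm alpha 1 7 1 2100 8000000 bla
rtgen lm alpha 1 7 2 2100 8000000 bla
rtgen lm alpha 1 7 3 2100 8000000 bla
rtgen lm alpha 1 7 4 2100 8000000 bla
```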

And with so much time spent (that is why rainbow tables can be purchased!), we were able to generate the five rainbow table files that we can use to crack the simplest password configuration, alpha (ABCDEFGHIJKLMNOPQRSTUVWXYZ); each file is 125 MB.

3. Sorting rainbow tables using rtsort.exe

To speed up the search of the rainbow tables, we need to sort them in advance. Also, rcrack only accepts sorted rainbow tables. The commands used to sort the five .rt files are:

rtsort lm_alpha_0_2100x8000000_bla.rt
rtsort lm_alpha_1_2100x8000000_bla.rt
rtsort lm_alpha_2_2100x8000000_bla.rt
rtsort lm_alpha_3_2100x8000000_bla.rt
rtsort lm_alpha_4_2100x8000000_bla.rt

4. Cracking the LM (LanManager) hash in the sample “random_alpha.txt” file using rcrack and the sorted rainbow tables:

On the command line, change to the RainbowCrack folder (otherwise the *.rt wildcard would not work). After this is done, we can type the command to run RainbowCrack. The command is: rcrack [rainbow table filenames, or use *.rt] -f [pwdump or fgdump file]. For our demo purposes, the command for cracking the sample "random_alpha.txt" file is shown in the screen shot below. In almost 11 seconds (total cryptanalysis time), RainbowCrack was able to crack 10 passwords that use just the alpha characters.
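In plain text, the command as described above, assuming the sorted tables sit in the current folder, is simply:

```shell
:: From inside the RainbowCrack folder, crack every hash in the
:: pwdump-format file against all sorted tables in the directory.
rcrack *.rt -f random_alpha.txt
```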

Some Limitations of RainbowCrack:
  1. RainbowCrack does not support rainbow table files equal to or larger than 2 GB, as a 32-bit value is used to store the file size. In fact, the rtgen utility will never allow you to generate a file with 134,217,728 or more rainbow chains; rtsort and rcrack simply do not support large files.
  2. Salt is used to randomise stored password hashes. With a different salt value, the same password yields a different hash value. The time-memory trade-off technique used by RainbowCrack is not practical when applied to this kind of hash.

Thursday, 18 August 2011

More on Secure coding

I am continuing from a previous post on teaching secure coding, as I am getting ready to teach introductory C programming soon.

I am picking on some of Chapter 2 of the same book, which is:
Teach Yourself C in 21 days by SAMS.

AGAIN, WHY must we teach bad code practices?

I will look at exercise 2.5 today. This is another example of the types of education and training we are providing in code development.

As I stated last time… It is through bad text-books that we have bad code and clueless developers!

In the book, right from the start, they make liberal use of printf(). In this chapter they also introduce gets(). Exercise 2.5 on page 40 of the book is listed below:

1: /* EX2-5.c */
2: #include <stdio.h>
3: #include <string.h>
4: int main()
5: {
6:     char buffer[256];
7:     printf( "Enter your name and press enter\n" );
8:     gets( buffer );
10:     printf( "\nYour name has %d characters and spaces!",
11:         strlen( buffer ));
12:     return 0;
13: }
14: /* End of Program */
Again, it works, so what is the problem you ask?

We see below a strange result from this. Here, we have a buffer overflow.

Line 6 of the code above has a buffer set at 256 characters. We entered a string longer than that and we have again crashed the program. This is the cause of buffer overflows: it comes from not considering the fundamentals right from the start.

Yes, I have entered a ridiculously long name. This is an issue, as we have not thought of the exceptions to our use case. What about the abuse case? Attackers do not act as we want them to; that is why they are acting maliciously. We cannot leave errors and expect people to act nicely; we need to start thinking about what can go wrong and stop it from the beginning!

What we need to do is set the size of the buffer from the start. That is, change gets() to fgets(), as noted below:
gets( buffer );
fgets( buffer, 256, stdin );
This is a simple change to line 8 as we have listed below:
1: /* EX2-5.c */
2: #include <stdio.h>
3: #include <string.h>
4: int main()
5: {
6:     char buffer[256];
7:     printf( "Enter your name and press enter\n" );
8:     fgets( buffer, 256, stdin );
10:     printf( "\nYour name has %d characters and spaces!",
11:         strlen( buffer ));
12:     return 0;
13: }
14: /* End of Program */
Here, a simple change at line 8 has limited the input we will accept to a size that fits the buffer we have configured at line 6.

I have not fixed all of the issues here. Some of the problems that I noted in the last post remain, but the buffer overflow from the gets() function has at least been fixed.
As I stated, NEVER use gets() – always use fgets(). This is not a complex change and takes just a few seconds. This simple change to how we teach developers makes such a vast difference!

I will reiterate the lesson from the other day:
Basically, STOP using strcpy(), strcat(), sprintf(), index(), or strchr() on buffers that might not contain a trailing NUL byte, or that could ever contain embedded NUL bytes (binary data). 

Instead… Use:
  • strncpy() or memcpy() or memmove() in place of strcpy()
  • memchr() in the place of index() or strchr()
  • snprintf() and never think of using sprintf()
  • fgets() and never ever consider using gets()
  • Just forget that bzero() and bcopy() ever existed
See the C FAQ.

For further information on the sprintf() problem and snprintf() solution see Section 12.21 of the FAQ.

“You speak of communism not socialism”

It was noted in the post above that I speak of communism when I note the effects of socialism. This is false. I speak of socialism generally. People love to forget the deaths caused by this insidious form of corruption, but they are the legacy of the greed that is socialism: the corruption and theft of freedom.

India was never communist, but it managed to starve tens of millions of people to death.

Henry Hazlitt wrote of the issues in "Socialism and Famine" in Newsweek, August 31, 1964. We seem to have forgotten all this over time. India blamed "speculators" and "hoarders" and announced the imposition of strict controls on the purchase, sale, storage, and transportation of grains. This only made it worse.
Instead of allowing markets to bring grain in from overseas, the socialist regime in India restricted markets and called for people to do more.

The thing is, you cannot just tell people to work more and expect it; we need incentives and rewards.
The socialist government in India placed price ceilings on rice and instigated price-control measures on matches, oil, kerosene, sugar, and vegetable oils. The end result was that businesses making these products no longer saw a profit, and the market shrank.

Instead of an abundance of food and goods, there was a dearth with black markets springing up for those with money.

Forced industrialisation, monetary control and inflation, with a government that printed money to pay for its insane social policies, created a famine in India of immense proportions. The starvation, the riots and the policies in general were socialist, and they directly caused the deaths of tens of millions of people.

Since 1954, the US had shipped millions of tonnes of food aid to India. This was given to the socialist Indian government, as they did not trust private businesses or agencies to distribute it. It was finally distributed not to those in need, but on political grounds. It took so long to make these decisions that rats had consumed more than half of this food at the docks and storage facilities.

In the 1980s we again have an excellent example of socialism at work, in Ethiopia. This completely man-made famine in "the breadbasket of Africa" was a consequence of socialism, confiscations and nationalisations. People seem to forget, time and again, that the abolition of incentive, the punishment of productivity and the subsidising of irresponsibility result in people not taking risks and not producing.

The implementation of socialist policy caused the starvation in Northern Africa, not a lack of food.

As Ludwig von Mises stated in "Socialism: An Economic and Sociological Analysis" (Yale University Press, 1951):
"Socialism is not the pioneer of a better and finer world, but the spoiler of what thousands of years of civilisation have created. It does not build; it destroys. For destruction is the essence of it. It produces nothing, it only consumes what the social order based on private ownership of the means of production has created . . ."
So Thomas, I do mean socialism: it kills. In all its forms, from the basic left to communism and the forms that lie between, socialism has needlessly killed hundreds of millions of people.

Wednesday, 17 August 2011

More on Databases

Data access auditing is a surveillance control. By monitoring access to all sensitive information contained within the database, suspicious activity can be brought to the auditor's attention. Databases commonly structure data as tables containing columns (think of a spreadsheet, only more complex). Data access auditing should address six questions:

1. Who accessed the data?
2. When was the data accessed?
3. How was the data accessed? (That is, what computer program or client software was used?)
4. Where was the data accessed from? (That is, the location on the network or Internet.)
5. Which SQL query was used to access the data?
6. Was the attempt to access data successful? (And if yes, how much data was retrieved?)
The evidence available to the auditor is provided in three places:

  • Within the client system (this may be infeasible – such as in web based commerce systems),
  • Within the database (including the logs produced by the database that are sent to a remote system), or
  • Between the client and the database (such as firewall logs, IDS/IPS devices and host based events and logs). 

Auditing within the client entails using the evidence available on the client itself. Client systems can hold a wealth of database access tools and the logs that these create. These logs may contain lists of end-user activity that a user has performed on the database. In respect of web based systems, the web server itself may be treated as a client of sorts.

To obtain an adequate audit trail from client systems alone, all data access must have occurred using client tools under the control of the organization conducting the audit. Where data access can occur by other means, it is rare that sufficient evidence will be available. On its own this is the worst option available to the auditor, but it can provide additional evidence in support of the other methods, and it is chiefly used during a forensic investigation.

Auditing within the database is often problematic due to:
  • The limited audit functionality of many database management systems (DBMS),
  • Inconsistent DBMS configurations and types being deployed throughout an organization, and
  • Performance losses incurred by enabling the audit mechanisms.
Auditing within the database is without doubt better than auditing within the client; however, the best approach is a combination of auditing the client, the network and the database.

Auditing between the client and the database entails monitoring the communication between the client and the database. This involves capturing and interpreting the traffic between the client and the database. Software is available for this and it may be used to provide data access auditing. The biggest issues with this type of data access auditing are:

  • Encryption between the client and the database server,
  • Privacy considerations and rights to view data, and
  • Correlating large volumes of data that also need to be parsed and processed to be useful. 

SQL Injection
SQL injection is covered in more detail in the chapter on web exploits. SQL Injection has three primary goals:
1. Accessing information,
2. Destroying data, and
3. Modifying data.

The goal of the attacker, and the likelihood of each outcome, will vary depending on the organization running the database. The most common form of SQL injection is the addition of the clause "OR 1=1" to an input field; appended to the end of a query's WHERE condition, it can make the whole condition evaluate as true.

For example, given a query such as:
SELECT * FROM users WHERE username = 'administrator' AND password = 'password'
an attacker who enters password' OR ''=' in the password field changes the SQL statement to:
SELECT * FROM users WHERE username = 'administrator' AND password = 'password' OR ''=''
The final condition is always true, potentially allowing the attacker to bypass the database authentication.

The tools used to audit databases range from CASE (Computer Aided Software Engineering) tools through to the more familiar network and system test tools covered throughout the book. In addition to the database itself, it is important to test:
1. File system controls and permissions,
2. Service initialization files, and
3. The connection to the database (such as access rights and encryption).

Specialized Audit software
Three popular database auditing solutions include:
  •  DB Audit (SoftTree Technologies),
  •  Audit DB (Lumigent Technologies), and
  •  DbProtect (Application Security).
DB Audit is easy to tailor and does not require the installation of any additional software or services on the database server or network. It supports Oracle, Microsoft SQL Server, Sybase ASE, Sybase ASA and IBM DB2. It is implemented on the database back-end to reduce the risk of unrecorded back-door access.

Lumigent Audit DB provides comprehensive monitoring and auditing of data access and modifications. It provides an audit trail of who has accessed or modified what data, and supports best auditing practices including segregation of duties. Audit DB supports IBM DB2, Microsoft SQL Server, Oracle and Sybase databases.

DbProtect by Application Security uses a network-based vulnerability assessment scanner to test database applications. It also provides structured risk mitigation and real-time intrusion monitoring, coupled with centralized management and reporting. DbProtect provides security and auditing capabilities for complex, diverse enterprise database environments.
CASE (Computer Aided Software Engineering) Tools

CASE tools can be a great aid to auditing database systems. CASE (Computer Aided Software Engineering) tools not only help in the development of software and database structures but can be used to reverse engineer existing databases and check them against a predefined schema. There are a variety of both open source and commercial CASE tools. In this chapter we’ll be looking at Xcase.

Many commercial databases run into gigabytes or terabytes in size. Standard command-line SQL coding is unlikely to find all of the intricate relationships between tables, stored procedures and other database functions. A CASE tool, on the other hand, can reverse engineer an existing database to produce diagrams that represent it. These can be compared with existing schema diagrams to ensure that the database matches the architecture it was originally built from, and they allow the auditor to quickly zoom in on selected areas.

Visual objects, colors and better diagrams may all be introduced to further enhance the auditor’s capacity to analyze the structure. Reverse engineering a database will enable the auditor to find out the various structures that have been created within the database. Some of these include:
  • The indexes,
  • Fields,
  • Relationships,
  • Sub-categories,
  • Views,
  • Connections,
  • Primary keys and alternate keys,
  • Triggers,
  • Constraints,
  • Procedures and functions,
  • Rules,
  • Table space and storage details associated with the database, and
  • The sequences used and the entities within the database.
Each of the tables will also display detailed information concerning the structure of each of the fields that may be viewed at a single glance. In large databases a graphical view is probably the only method that will adequately determine if relationships between different tables and functions within a database actually meet the requirements. It may be possible in smaller databases to determine the referential integrity constraints between different fields, but in a larger database containing thousands of tables there is no way to do this in a simple manner using manual techniques.
Fig 1 Display database schema.
When conducting an audit of a database for compliance purposes, it is not just attack classes such as cross-site scripting and SQL injection that need to be considered. The relationships between entities, and the rights and privileges associated with the various tables and roles, also need review. CASE tools allow us to visualize the most important security features associated with a database. These are:

1. Schemas, which restrict the views of the database for users;
2. Domains, assertions, checks and other integrity controls defined as database objects, which may be enforced by the DBMS during queries and updates;
3. Authorization rules, which identify the users and roles associated with the database and may be used to restrict the actions a user can take against database features such as tables or individual fields;
4. Authentication schemes, which can be used to identify users attempting to gain access to the database or to individual features within it;
5. User-defined procedures, which may define constraints or limitations on the use of the database;
6. Encryption processes. Many compliance regimes call for the encryption of selected data, and most modern databases include encryption processes that can be used to protect it; and
7. Other features such as backup, checkpoint capabilities and journaling, which support recovery processes for the database. These controls aid availability and integrity, two of the three legs of security.
CASE tools also contain other functions that are useful when auditing a database. One function that is extremely useful is model comparison.

Fig 2 Reverse Engineer existing databases into presentation quality diagrams in minutes.

Case tools allow the auditor to:
· Present clear data models at various levels of detail using visual objects, colors and embedded diagrams to organize database schemas,
· Synchronize models with the database,
· Compare a baseline model to the actual database (or to another model),

Case tools can generate code automatically and also store this for review and baselining. This includes:
· DDL Code to build and change the database structure
· Triggers and Stored Procedures to safeguard data integrity
· Views and Queries to extract data

The auditor can also document the database design using multiple reporting options. This allows for the printing of diagrams and reports and the addition of comments to the reports and user defined attributes to the model.

Data management features allow the auditor to validate the data in the database being reviewed against the business rules and constraints defined in the model and to generate detailed integrity reports. This can be extended further to access and edit the data relationally, using automatic parent/child browsers and lookups, and then to locate faulty data subsets using automatically generated SQL statements. These reports expose the sources of errors and help in database maintenance, making the audit all the more valuable.

Model comparison involves comparing the model of the database with the actual database on the system. This can be used to ensure change control or to ensure that no unauthorized changes have been made for other purposes. To do this, a baseline of the database structure will be taken at some point in time. At a later time the database could be reverse engineered to create another model and these two models could be compared. Any differences, variations or discrepancies between these would represent a change. Any changes should be authorized changes and if not, should be investigated. Many of the tools also have functions that provide detailed reports of all discrepancies.

Many modern databases run into the terabytes and contain tens of thousands of tables. A baseline and an automated report of any differences, variations or discrepancies make the job of auditing change on these databases much simpler. Triggers and stored procedures can be stored within the CASE tool itself and used to safeguard data integrity. Selected areas within the database, such as honeytoken-styled fields or views, can be checked against a hash at different times to ensure that no one has altered them. Likewise, static reference tables should not change: tables of hashes may be maintained and validated against the offline model that already stores these hash values, with any variation reported in the discrepancy report.

Next, the capability to create a complex ERD (Entity Relationship Diagram) in itself adds value to the audit. Many organizations do not have detailed documentation of their database structure; databases grow organically over time, and many of the original designers have left the organization. In such cases it is not uncommon for an organization to have little idea of the tables held in its own database.

Another benefit of CASE tools is their ability to migrate data. CASE tools can create detailed SQL statements and, through reverse engineering, replicate data structures to a separate database. The auditor can then interrogate the copied tables without fear of damaging the production data. In particular, the migrated tables need not contain the actual data, so the auditor gains no access to sensitive information yet still learns the defences and protections associated with the database. Complex interrogations that might damage a large live system can be run safely against the copy. This lets the auditor validate the data in the database against the business rules and constraints defined in the models, generate detailed integrity reports, and locate faulty data subsets through automatically generated SQL statements.

Tuesday, 16 August 2011

Problems with teaching secure coding

I am getting ready to teach introductory C programming soon. This has revived an old gripe shared by many who promote secure coding: why must we teach bad coding practices?

There are many examples of this in textbooks and other industry guides. The one I will pick on here is:
Teach Yourself C in 21 days by SAMS.

This book has a few errors and is in need of an update (MS-DOS is no longer in common use), but it is still being sold and even used in a few classrooms.

The first real exercise starts with an error on line 36, but this is not my issue, typos happen.

The issue is not simply typos, by a long way. It comes down to what we teach first. Instead of starting with good practices, these books install poor and insecure coding habits in those who will become our future programmers.

It is through bad text-books that we have bad code and clueless developers!

In the book, right from the start they make liberal use of printf(). This starts with Exercise 1.2 on page 22 of this book:

1: #include <stdio.h>
3: int radius, area;
5: int main()
6: {
7: printf( "Enter radius (i.e. 10): " );
8: scanf( "%d", &radius );
9: area = (int) (3.14159 * radius * radius);
10: printf( "\n\nArea = %d\n", area );
11: return 0;
12: }
Well it works, so what is the problem you ask?

We see a strange result from this: a large input value returns a negative area, because the calculation overflows the range of a signed int.

I have also read that “The only difference between sprintf() and printf() is that sprintf() writes data into a character array, while printf() writes data to stdout, the standard output device.”

Do we really need to teach this? Can we not start by teaching developers what the security concerns are from day 1?

Next, I have seen sprintf() offered as a means to fix problems with printf().

The syntax of sprintf() is:
int sprintf(char *string, const char *format [, item [, item] ...]);
We can see that a printf() call can be rewritten as the following:
    char buf[20];
    sprintf(buf, "%d", num);
    puts(buf);
This code block is nearly the same as:
printf( "%d", num);
(puts() also appends a trailing newline.)
Now, let us put this into our original program as some other texts have stated is a more secure method.

1: #include <stdio.h>
2: int radius, area;
3: int main(void)
4: {
5: char buf[20]; /* Set a buffer max length*/
6: printf ( "Enter radius (i.e. 10): " );
7: scanf( "%d", &radius );
8: area = (int) (3.14159 * radius * radius);
9: /* the old printf line is to be replaced...
10: printf ( "\n\nArea = %d\n", area );
11: */
12: sprintf(buf, "\n\nArea = %d\n", area);
13: puts(buf);

14: return 0;
15: }
Here, line 10 from the segment above has been expanded into lines 5, 12 and 13.

I did not fix the input line; I wanted to highlight a single problem (and there are several even in this small code segment).

Again, if the input is out of bounds (and we have not checked this in any way), our program suffers from a buffer overflow condition. It is vulnerable to at least a DoS, if not an actual exploit.

We see this code failing in the figure above. So, the solution can be worse than the original.

The first function, printf(), writes to STDOUT. The one we just used, sprintf(), writes to a buffer first. In both of these cases, we are not teaching good practices.

What then?
We have another function. snprintf() will write at most size-1 of the characters printed into the output string.

With snprintf(), if the return value is greater than or equal to the size argument, the string was too short and some of the printed characters were discarded.

int snprintf(char *str, size_t size, const char *format, ...);

The real difference is that snprintf() doesn't suffer from the same buffer overflow problems as we had in the functions printf() and even sprintf().

snprintf() is a length limited version of sprintf().

Using this, our code now becomes (in Visual Studio we would use sprintf_s() instead… What can I say, Microsoft):
1: #include <stdio.h>
2: int radius, area;
3: int main(void)
4: {
5: char buf[20]; /* Set a buffer max length*/
6: printf ( "Enter radius (i.e. 10): " );
7: scanf( "%d", &radius );
8: area = (int) (3.14159 * radius * radius);
9: /* the old printf line is to be replaced...
10: printf ( "\n\nArea = %d\n", area ); */
11: /* Pass the real size of buf to limit what snprintf() may write */
12: snprintf(buf, sizeof(buf), "\n\nArea = %d\n", area);
13: puts(buf);
14: return 0;
15: }
Here we have added a buffer size for snprintf() to honour. One caution: the size argument must be the real size of buf (sizeof(buf)), not an arbitrary large number, or the protection is lost. We are still far from complete; I will fix this up in coming posts and get us out of the mess the textbooks have put us in.

The special format specifier "%n" causes printf() to write to memory ("%n" is the only format specifier that behaves this way, though printf() is not the only function we should be wary of). Malicious attacks using this flaw can write to chosen locations in your process's address space.

In summary
  1. Never ever use sprintf(); only use snprintf().
  2. Never call any XXXprintf() functions with a single argument.
Rather than using:
printf("hello world");
write:
puts("hello world");
It may be true that there is little you can do to attack printf("hello world");, but the fact remains that this is the practice we start to get developers into.

They then become used to this, and when they need to call some XXXprintf() function with a string input, they will do something such as:
printf(some_string);
The contents of “some_string” are untrusted. One day, an attacker will find a way to exploit this.
We should start teaching people good practice from day 1!

Basically, STOP using strcpy(), strcat(), sprintf(), index(), or strchr() on buffers that might not contain a trailing NUL byte, or that could ever contain embedded NUL bytes (binary data).  
Instead… Use:
  • strncpy() or memcpy() or memmove() in place of strcpy()
  • memchr() in the place of index() or strchr()
  • snprintf() and never think of using sprintf()
  • fgets() and never ever consider using gets()
  • Just forget that bzero() and bcopy() ever existed
See the C FAQ.

For further information on the sprintf() problem and snprintf() solution see Section 12.21 of the FAQ.

To be continued ...


Monday, 15 August 2011

The Finer Points of Find

The *NIX “find” command is probably one of the system security tester’s best friends on any *NIX system. This command allows the system security tester to process a set of files and/or directories in a file subtree. In particular, the command has the capability to search based on the following parameters:

  •  where to search (which pathname and the subtree)
  •  what category of file to search for (use “-type” to select directories, data files, links)
  •  how to process the files (use “-exec” to run a process against a selected file)
  •  the name of the file(s) (the “-name” parameter)
  •  perform logical operations on selections (the “-o” and “-a” parameters) 
One of the key problems associated with the “find” command is that it can be difficult to use. Many experienced professionals with years of hands-on *NIX experience still find this command tricky. Adding to the confusion are the differences between *NIX operating systems. The find command provides a complex subtree traversal capability, including the ability to skip excluded directory branches and to select files and directories with regular expressions. The specific types of file system to be searched with this command may also be selected.

The find utility is designed to search for files using directory information. This is, in effect, also the purpose of the “ls” command, but find goes much further, and this is where the difficulty comes in. Find is not a typical *NIX command with a large number of parameters; it is rather a miniature language in its own right.

The first option in find consists of setting the starting point, or subtrees, under which the find process will search. Unlike many commands, find allows multiple starting points to be set, reading each initial path before the first “-” character. That is, one command may search multiple directories in a single pass. The paper “Advanced techniques for using the *NIX find command” by B. Zimmerly provides an ideal introduction to the more advanced features of this command, and it is highly recommended that any system security tester become familiar with it. This section of the chapter is based on much of his work.

The complete language of find is extremely detailed, consisting of numerous separate predicates and options. GNU find is a superset of the POSIX version and contains an even more detailed language structure. The difference will mostly matter within complex scripts, as it is highly unlikely that this level of complexity would be used effectively interactively:
  •  -name True if the pattern matches the current file name. Simple shell-style patterns (globs) may be used. A backslash (\) is used as an escape character within the pattern, and the pattern should be escaped or quoted. If you need to match parts of the path, GNU find provides the predicate -wholename.
  •  -atime, -ctime, -mtime Search on a file's last access time, last status change and last modification time respectively, measured in days. These values are either positive or negative integers (for example, -mtime -7 means "modified within the last seven days").
  •  -fstype type True if the filesystem to which the file belongs is of the given type. For example, on Solaris mounted local filesystems have type ufs (Solaris 10 added zfs); on AIX the local filesystem is jfs or jfs2 (journaled file system). To traverse NFS filesystems you can use nfs (network file system). To avoid traversing network and special filesystems you should use the predicate -local and, in certain circumstances, -mount.
  •  “-local” This option is true where the file system type is not a remote file system type.
  •  “-mount” This option restricts the search to the file system containing the directory specified. The option does not list mount points to other file systems.
  • “-newer/-anewer/-cnewer baseline” The time of modification, access time or creation time are compared with the same timestamp in the file used as a baseline.
  •  “-perm permissions” Locates files with certain permission settings. This is an important command to use when searching for world-writable files or SUID files.
  •  “-regex regex” The GNU version of find allows for file name matches using regular expressions. This is a match on the whole pathname not a filename. The "-iregex" option provides the means to ignore case.
  •  “-user” This option locates files with the specified ownership. The option “-nouser” locates files without a known owner: where there is no matching user in “/etc/passwd”, this search option matches on the file's numeric user ID (UID). Files are often left in this state when extracted from a tar archive.
  •  “-group” This option locates files that are owned by the specified group. The option “-nogroup” finds files whose numeric group ID (GID) matches no known group on the system.
  •  “-xattr” This is a logical function that returns true if the file has extended attributes.
  •  “-xdev” Same as the parameter “-mount”. This option prevents the find command from traversing a file system different from the one containing the specified path.
  •  “-size” This parameter is used to search for files with a specified size. The “-size” attribute allows the creation of a search that can specify how large (or small) the files should be to match. You can specify your size in kilobytes and optionally also use + or - to specify size greater than or less than specified argument. For instance: 
  1. find /usr/home -name "*.txt" -size 4096k
  2. find /export/home -name "*.html" -size +100k
  3. find /usr/home -name "*.gif" -size -100k
  •  “-ls” list current file in “ls –dlis” format on standard output.
  •  “-type” Locates a certain type of file. The most typical options for -type are:
  1. d A Directory
  2. f A File
  3. l A Link 
Logical Operations
Searches using “find“ may be created using multiple logical conditions connected using the logical operations (“AND”, “OR” etc). By default options are concatenated using AND. In order to have multiple search options connected using a logical “OR” the code is generally contained in brackets to ensure proper order of evaluation.

For instance \( -perm -2000 -o -perm -4000 \)
The symbol “!” is used to negate a condition (it means logical NOT). “NOT” should be specified with a backslash before the exclamation point ( \! ).

For instance find . \! -name "*.tgz" -exec gzip {} \;
The “\( expression \)” format is used in cases where there is a complex condition.

For instance find / -type f \( -perm -2000 -o -perm -4000 \) -exec /mnt/cdrom/bin/ls -al {} \;
Output Options
The find command can also perform a number of actions on the files or directories that are returned. Some possibilities are detailed below:
  • “-print” The “print” option displays the names of the files on standard output. The output can also be piped to a script for post-processing. This is the default action.
  • “-exec” The “exec” option executes the specified command. This option is most appropriate for executing moderately simple commands. 
Find can execute one or more commands for each file it has returned using the “-exec” parameter. Unfortunately, one cannot simply enter the command: the placeholder {} stands for the current file name, and the command must be terminated with an escaped semicolon (\; or ';').

For instance:
  •  find . -type d -exec ls -lad {} \;
  •  find . -type f -exec chmod 750 {} ';'
  •  find . -name "*rc.conf" -exec chmod o+r '{}' \;
  •  find . -name core -ctime +7 -exec /bin/rm -f {} \;
  •  find /tmp -exec grep "search_string" '{}' /dev/null \; -print
An alternative to the “-exec” parameter is to pipe the output into the “xargs” command. This section has only just touched on find and it is recommended that the system security tester investigate this command further.

A commonly overlooked aspect of the “find” command is in locating files that have been modified recently. The command:

find / -mtime -7 -print
displays, recursively from the ‘/’ directory, the files modified within the last seven days. The command:

find / -atime -7 -print
does the same for last access time. File times change whenever a system is accessed and files are run: each change to a file updates its modified time, and each time a file is executed or read its last accessed time is updated.

These (the last modified and accessed times) can be updated using the touch command.

A Summary of the find command 

Effective use of the find command can make any security assessment much simpler. Some key points to consider when searching for files are detailed below:

  • Consider where to search and what subtrees will be used in the command, remembering that multiple paths may be selected
o find /tmp /usr /bin /sbin /opt -name sar
  • The find command allows for the ability to match a variety of criteria
  1.  -name search using the name of the file(s); this can be a simple shell-style pattern
  2.  -type what type of file to search for ( d -- directories, f -- files, l -- links)
  3.  -fstype type allows for the capability to search a specific filesystem type
  4.  -mtime x File was modified "x" days ago
  5.  -atime x File was accessed "x" days ago
  6.  -ctime x File's status was changed "x" days ago (often misread as creation time)
  7.  -size x File is "x" 512-byte blocks big
  8.  -user user The file's owner is "user"
  9.  -group group The file's group owner is "group"
  10.  -perm p The file's access mode is "p" (as either an integer or a symbolic expression)
  • Think about what you will actually use the command for, and consider the options available to either display the output or send it to other commands for further processing
  1. -print display pathname (default)
  2. -exec allows for the capability to process listed files ( {} expands to current found file )
  • Combine matching criteria (predicates) into complex expressions using the logical operators -o (or) and -a (and, the default binding).
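The points above can be sketched in a short example (the filenames are invented; the escaped parentheses control how predicates group):

```shell
# Build a small scratch tree so the example runs anywhere.
scratch=$(mktemp -d)
touch "$scratch/app.sh" "$scratch/report.pl" "$scratch/notes.txt"

# Predicates written side by side are ANDed (-a is the default binding):
# a regular file AND named *.txt.
find "$scratch" -type f -a -name '*.txt' -print

# -o is a logical OR. The escaped parentheses group the OR before the
# implicit AND with -type f: "regular file AND (*.sh OR *.pl)".
find "$scratch" -type f \( -name '*.sh' -o -name '*.pl' \) -print
```

The parentheses must be escaped (or quoted) so the shell passes them through to find rather than interpreting them as a subshell.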

Legal disclaimers on email

The hows and whys of email disclaimers.

In a recent post, I noted the following as one of the items to implement in securing mail relays.

Add a legal disclaimer to all e-mails. All e-mails, both incoming and outgoing, should have a disclaimer. This is a simple thing to add to an e-mail that will save a lot of grief down the track. It may not stop something bad from happening, but at least it limits the liability of the organisation to a small extent.

I was asked to debate this point and I shall endeavour to do so in this post.

There are a number of situations where a disclaimer aids in protecting an organisation. One of these is allowing an organisation to retain some control over documents that have left the organisation and to protect trade secrets. There is also, to some extent, a defence from the point of view of copyright infringement.

Copyright infringement issues
Mann and Belzley’s[1] position, that the least-cost intermediary should be held liable, is likely to be upheld under existing UK, US and Australian law. The positions taken by the courts in Telstra v Apra[2] and Moorhouse v UNSW[3] define the conditions necessary to establish public dissemination and infringement through a sanctioned arrangement. The public dissemination of music clips sent to another user or group of users through email could be seen as analogous to the copying of a manuscript, with the organisation’s disclaimer held to be an inadequate control if it is all that has been done. It is clear that the provision of technical controls, monitoring and the issuing of notices by the organisation would also be needed for the disclaimer to be effective, and for the organisation to be seen to have made an attempt at controlling copyright infringement rather than merely enforcing infringements against individuals within the organisation.

Several cases have occurred in the US involving ISPs or other service providers that hosted copyright material made available to those accessing the site. Distribution by email can be seen as analogous to some of these. A significant decision was made in Religious Technology Center v Netcom On-line Communication Services, Inc[4]. The case involved the posting of information online which was disseminated across the Internet. The postings were cached by the hosting provider for several days, and automatically stored by Netcom’s system for 11 days. The court held in summary judgment that Netcom was not a direct infringer[5]: the mere fact that Netcom’s system automatically made transitory copies of the works did not constitute copying by Netcom. The court further rejected arguments that Netcom was vicariously liable. The Electronic Commerce (EC Directive) Regulations 2002[6] mean that an equivalent outcome would be expected in the UK[7].
The US Congress has responded with a number of statutes that are, by and large, intended to protect the intermediary from the threat of liability.[8] The Digital Millennium Copyright Act (DMCA)[9] addresses the possibility of copyright liability. The DMCA is drafted such that it exempts intermediaries from liability for copyright infringement so long as they adhere to the measures delineated in the statute. These, in the main, compel them to remove infringing material on receipt of an appropriate notification from the copyright holder. The email disclaimer can constitute an appropriate notification.

Here, the organisation can be seen as an intermediary as long as they are taking steps to control the dissemination of copyright materials inside and through the organisation.

Trademark Infringement
A trademark infringement refers to the unauthorised use of a protected trademark or service mark, or of something very similar to a protected mark. The success of any legal action to stop (or enjoin) the infringement is directly related to whether the defendant's use of the mark causes a likelihood of confusion in the average consumer. If a court determines that a reasonable average consumer would be confused, the owner of the original mark can prevent the other party from using the infringing mark and may even collect damages. A party holding the legal rights to a particular trademark can sue other parties for trademark infringement based on the standard of “likelihood of confusion”[10].

There are a number of ways that trademark infringements could occur on the Internet. An ICP could add metatags to increase traffic (either with or without the client’s explicit permission) and, equally, a client of an ISP could embed infringing material in its web pages. An ISP caching this information may inadvertently continue to serve this material even after a take-down order has been applied to the original offender.

Disclaimers can add to the level of notification and to the protection of trademarked intellectual property. Again, a disclaimer in itself does little to stop infringement, but it adds to the evidence that will be available in court for the defence of the mark.

Defamation
The first claims of defamation in the UK using e-mail as a means of distribution occurred in the mid-1990s. In one, the plaintiff alleged that the defendant had published a message using a computer system asserting that the plaintiff had been sacked for incompetence. The case did not include the service provider as a defendant. In another, more widely publicised case[11], a police officer who complained to his local branch of a national supermarket chain about an allegedly bad joint of meat was dismayed to discover that the store had distributed an e-mail to other branches of the chain with the subject line “Refund fraud -- urgent, urgent urgent”. He settled with the chain for a substantial sum in damages and an apology in open court from the supermarket management.

This issue has also arisen in the US. Litigation was brought against CompuServe[12], an intermediary, as a result of assertions made in an electronic newsletter[13]. CompuServe successfully argued that its responsibility was comparable to that of a library or a bookseller. In Stratton Oakmont, Inc. v Prodigy Services Co.[14], the plaintiff asserted that a communication posted by an unidentified third party on Prodigy’s “Money Talk” bulletin board was libellous and damaged the plaintiff’s IPO, resulting in a substantial loss.

Prodigy filed a motion for summary judgment, asserting that the decision in CompuServe[15] applied, making it a mere distributor of the communication and hence not liable for its substance. The court determined that Prodigy was a publisher, as it exercised editorial control over the contents of the “Money Talk” site. As its editors used screening software to eliminate offensive and obscene postings and a moderator to manage the site, it could be held accountable for the posting of a defamatory statement. Prodigy settled but subsequently, and unsuccessfully, attempted to vacate the judgment. The Communications Decency Act (CDA)[16] was later enacted in the US to provide a defence to intermediaries that screen or block offensive material originated by another. The CDA provides, inter alia, that the intermediary may not be treated as the publisher of any matter provided by another. Further, an intermediary shall not be held liable for any action taken in “good faith” to limit the spread of “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable” material[17].

Users treat the Internet as if it were a telephone service with no enduring record. E-mails frequently contain imprudent declarations and japes, yet these communications provide evidential confirmation that is absent from a telephone exchange. Deleted e-mail can persist in a variety of locations and forms, including back-up tape or disk and on the ISP's systems, and may have been forwarded to any number of other people. Any of these copies are subject to disclosure in litigation[18].

Western Provident v Norwich Union[19] concerned libel by e-mail. Communications exchanged within Norwich Union by its staff made libellous statements concerning Western Provident’s financial strength. The case settled at a cost of £450,000 in damages and costs. For electronic distributions, the moderators of bulletin boards and Internet service providers are implicated only if they exercise editorial control or otherwise have direct knowledge of a libellous communication. In Godfrey v Demon Internet[20], Godfrey informed the ISP of the existence of a libellous communication on a site managed by Demon. Demon did not act to remove the communication for the two weeks that it remained available on the site. The court asserted that as soon as Demon was alerted to the communication it ought to have acted. It was held that:

“The transmission of a defamatory posting from the storage of a news server constituted a publication of that posting to any subscriber who accessed the newsgroup containing that posting. Such a situation was analogous to that of a bookseller who sold a book defamatory of a plaintiff, to that of a circulating library which provided books to subscribers and to that of distributors. Thus in the instant case D Ltd was not merely the owner of an electronic device through which postings had been transmitted, but rather had published the posting whenever one of its subscribers accessed the newsgroup and saw that posting.”[21]
Shevill v Presse Alliance[22] established that in the European Union, where an international libel is committed, an action for libel may be initiated against the publisher. This may be commenced either in the country where the publisher is based or in any other country where the publication was disseminated and where the plaintiff's reputation suffered damage. There is little reason to doubt that principles applicable to libel through the press will apply equally to computer libel.

Australian defamation laws are complicated by their state-based nature: they differ across jurisdictions in both content and available defences. The various state laws include offence provisions for both civil and criminal defamation. Civil liability arises from publications that are likely to harm a person's reputation, and the penalties are monetary. Criminal liability arises from publications that concern society, including those with a propensity to imperil the public peace, and penalties in the majority of jurisdictions include incarceration. Significant distinctions exist between civil and criminal defamation law in relation to both liability and defences.

The Western Australian Supreme Court decided in Rindos v Hardwick[23] that statements distributed on a discussion list can be defamatory and ground an action. The court considered it inappropriate to apply different rules to the Internet than to other means of communication, and acknowledged the instigator's accountability for defamatory statements broadcast across a discussion group[24]. The question of the liability of other participants on the list was not considered at trial.

It is considered unlikely that an ISP would scrutinise all material presented across its network[25], and doing so may not be economically feasible[26]. Mann & Belzley address this through “targeting specific types of misconduct with tailored legal regimes”[27]. These regimes would leave the ISP responsible for the defamatory publications of its users where it has failed to take reasonable action to mitigate these infringements. The existing law in Australia leaves any party considered to be a “publisher” liable[28]. Cases do exist[29] where ISPs have removed content proactively.

The common law defence of innocent dissemination exists in Australia. Thompson v Australian Capital Television[30] demonstrated this when Channel 7 asserted that its transmission of a “live” show retransmitted from Channel 9 NSW placed it, in effect, as a subordinate publisher that disseminated the material of the real publisher without any material awareness of, or influence over, the content of the show. It argued that this was analogous to the position of a printer or newspaper vendor.

The High Court held that the defence of innocent dissemination is available to television broadcasts as well as printed works. In this instance it was held that the facts demonstrated that Channel 7 maintained the capacity to direct and oversee the material it simulcast. The show was broadcast as a live program by Channel 7's choice, made in full knowledge that diffusion of the show would be near-instantaneous. They were further conscious of the nature of the show, a “live-to-air current affairs programme”[31], and understood that such a program carried an elevated risk of transmitting defamatory material. On the facts, it was decided that Channel 7 was not a subordinate publisher on this occasion.

The Federal Broadcasting Services Act 1992[32] affords a legislative defence to an ISP or Internet Content Host (ICH) that transmits or hosts Internet based content in Australia if they can demonstrate that they were reasonably unaware of the defamatory publication. s.91(1) of Schedule 5 to the Broadcasting Services Act[33] grants that a law of a State or Territory, or a rule of common law or equity, has no effect to the extent to which the ISP “was not aware of the nature of the internet content”.

The BSA[34] defines "internet content" to exclude "ordinary electronic mail", that is, a communication conveyed using a broadcasting service where the communication is not "kept on a data storage device". Consequently, the s.91 defence will not be available in cases concerning such material. In such cases, an ISP or ICH may still attempt to rely on the defence of innocent dissemination, though the applicability of that common law defence remains to be determined by the Australian courts.[35] As a consequence, any reliance on these provisions by an ISP or ICH carries a measure of risk.

Harassment may occur through all forms of media, and the Internet is no exception. Junk mail, sexually offensive e-mails and threats delivered through online means (including both e-mail and instant messaging) are all forms of harassment. The inappropriate accessing of sexually explicit, racist or otherwise offensive material in the workplace is another form of harassment, as is the sending of unwelcome messages that may contain offensive material to a co-worker.

E-mail Crimes and Violations
In reality, e-mail crime is not new; rather, the Internet has enabled many old crimes to be reborn. Morally repugnant acts such as the distribution of child pornography have become far more widespread and simpler to commit due to the ease and reach of e-mail. Traditional crimes such as threats and harassment, blackmail, fraud and criminal defamation have not changed in essence, but the ease of e-mail has made them more prevalent.

Distributing a Virus or other Malware
The Internet allows an individual to either inadvertently or purposely disseminate malware (such as a virus) to other systems globally; the potential impact could encompass the “infection” or compromise of millions of hosts. This has already occurred. A “harmless experiment” by Cornell University student Robert Morris involved the release onto the Internet of a type of malware called a “worm” that compromised over 6,000 computers and required millions of dollars' worth of time to eradicate. As several “non-public computers” run by the US Government were damaged[36], Morris was prosecuted under the US Computer Fraud and Abuse Act (CFAA). He was convicted notwithstanding his declaration that he had no malicious intent to cause damage.

It is probable that a service provider or content hosting entity will face a degree of liability dependent on intention. Where malware is intentionally released, as in the Morris case, there is no uncertainty as to whether its creation and insertion were deliberate. Morris stated that he did not intend harm, but the fact remains that he intentionally created and released the worm. In the United States, both Federal and State legislation has been introduced to deal with the intentional creation and release of malware.

In the UK, the introduction of malware is covered by section 3 of the Computer Misuse Act[37]. The Act states that a crime is committed if a person “does any act which causes an unauthorised modification of the contents of any computer” and the perpetrator intends to “cause a modification of the contents of any computer” which may “impair the operation of any computer”, “prevent or hinder access to any program or data held in any computer” or “impair the operation of any such program or the reliability of any such data”. The deliberate introduction of any malware will meet these requirements, consuming memory and processing from the system and feasibly damaging it. A successful prosecution must also demonstrate the “requisite knowledge”: knowledge that any modification the perpetrator intends to cause is unauthorised. Given the volume of press coverage concerning the damage that can be caused by malware and the requirements for authorisation, it is highly unlikely that an accused party could successfully argue ignorance as to authorisation.

Malware is generally distributed unintentionally subsequent to its initial creation, so an ICP or an ISP would not be found criminally liable under either the Computer Fraud and Abuse Act or the Computer Misuse Act for most cases of dissemination. For the majority of content providers on the Internet, there is no contractual agreement with users, who browse most sites without any prospect of consideration. The consequence is that the only civil action likely to succeed for the majority of Internet users would be a claim in negligence. Such a claim would have to overcome a number of difficulties even against the primary party who posted the malware, let alone against the ISP.

It would be necessary to demonstrate that the ISP is under a duty of care. The level of care that the provider would be expected to adhere to would depend on a number of factors, would be a matter for the courts to decide, and could vary with the commerciality of the provider and the services provided. The standard of due care could lie anywhere between a superficial inspection and a requirement that all software be validated using up-to-date anti-virus software at regular intervals, with the court deciding on the facts of the first case that comes before it. The duty of care is likely to be held most stringently where there is a requirement for the site to maintain a minimum standard of care, such as a payment provider that processes credit cards. Such a provider is contractually required to adhere to the PCI-DSS as maintained by the major credit card companies[38] and would consequently face a greater hurdle in demonstrating that it was not negligent in failing to maintain an active anti-virus programme.

Loss of an entirely economic nature cannot be recovered through an action for negligence under UK law; some kind of “physical” damage must have occurred. The CIH (or Chernobyl) virus was known to overwrite hard-drive sectors and the BIOS, which could in some cases render the host's motherboard corrupt and unusable. In that instance the resultant damage is clearly physical; however, as with the majority of Internet worms[39], most impact is economic in effect. Further, it remains undecided whether damage to software or records, and even the cost of subsequent recovery, would be deemed purely economic loss by the courts.

It may be possible to bring a claim under the Consumer Protection Act[40] in the UK and the directives enforced within the EU[41]. The advantage of this approach is that the act does not base liability on fault; it relies on causation rather than negligence as the principal measure of liability. The act instead imposes liability on the “producer” of a “product”. A “producer” under the act includes an importer, but this definition would only be likely to extend to the person responsible for the contaminated software, such as the producer or programmer. It also remains arguable whether software transmitted electronically constitutes a “product” as defined under the act.

Prevention is the key
The vast majority of illicit activity and fraud committed across the Internet could be averted, or at least curtailed, if destination ISPs and payment intermediaries implemented effective processes for monitoring and controlling access to, and use of, their networks. Denning (1999) notes that “even if an offensive operation is not prevented, monitoring might detect it while it is in progress, allowing the possibility of aborting it before any serious damage is done and enabling a timely response”[42].

As noted above, there is a wide variety of commonly accepted practices, standards and means of ensuring that systems are secured. Many of the current economic arguments used by Internet intermediaries are short-sighted, to say the least. The growing awareness of remedies that may be obtained through litigation, coupled with greater calls for corporate responsibility[43], has placed an ever-growing burden on organisations that fail to implement a culture of strong corporate governance. In the short term the economic cost of implementing sound monitoring and security controls may seem high, but compared to the increasing volume of litigation that is starting to involve Internet intermediaries, the option of not securing a system and not implementing monitoring begins to pale.

Basically, disclaimers only support other controls. They do not add value in themselves, but they reinforce the value and effect of controls already implemented. The actions noted in the disclaimer need to be followed up, and their execution needs to be monitored and recorded: if you state that you are implementing a control, you need to affirm the control and maintain evidence of it for the notification to be effective.

Disclaimers are not a control in themselves, but they add weight to and enhance other controls that have been deployed. There will always be times when anti-virus fails, or staff send documents they have no right to send. If the organisation maintains and is vigilant with its other controls, a disclaimer adds weight to the defence of an action in tort for negligence, and can also be used to deflect liability from the organisation as a whole back to the infringer, where it should lie.

In summary…
Disclaimers do have value, but only in selected instances. They can enhance the value of existing controls, but they can equally count against an organisation where no controls exist.

[1] Mann, R. & Belzley, S. (2005) “The Promise of Internet Intermediary Liability”, 47 William and Mary Law Review 1 (accessed 27 July 2007).
[2] Spar, D. (2001) at 11-12
[3] 47 U.S.C. § 230(c)(1) (2004) (This sections details the requirements of the CDA that do not apply to ISPs).
[4] 907 F. Supp. 1361 (N.D. Cal. 1995)
[5] See also MAI Systems Corp. v Peak Computer Co., 991 F.2d 511 (9th Cir. 1993), in which it was held that the creation of ephemeral copies in RAM by a third-party service provider which did not have a licence to use the plaintiff's software was copyright infringement.
[6] Statutory Instrument 2002 No. 2013
[7] The act states that an ISP must act “expeditiously to remove or to disable access to the information he has stored upon obtaining actual knowledge of the fact that the information at the initial source of the transmission has been removed from the network”. The lack of response from Netcom would abolish the protections granted under this act leaving an ISP liable to the same finding.
[8].With some minor exceptions, other countries have also seen broad liability exemptions for internet intermediaries as the appropriate response to judicial findings of liability. The United Kingdom Parliament took no action after the Queen’s Bench in Godfrey v. Demon Internet Ltd, QBD, [2001] QB 201, held an Internet service provider liable as the publisher at common law of defamatory remarks posted by a user to a bulletin board. In the U.S., §230 of the CDA would prevent such a finding of liability. Similarly, courts in France have held ISPs liable for copyright infringement committed by their subscribers. See Cons. P. v. Monsieur G., TGI Paris, Gaz. Pal. 2000, no. 21, at 42–43 (holding an ISP liable for copyright infringement for hosting what was clearly an infringing website).
In 2000, however, the European Parliament passed Directive 2000/31/EC, which in many ways mimics the DMCA in providing immunity to ISPs when they are acting merely as conduits for the transfer of copyrighted materials and when copyright infringement is due to transient storage. Id. Art. 12, 13. Further, the Directive forbids member states from imposing general duties to monitor on ISPs. Id. Art. 15. This Directive is thus in opposition to the British and French approaches and requires those countries to respond statutorily in much the same fashion as Congress responded to Stratton Oakmont and Religious Technology Center. Of course courts are always free to interpret the Directive or national legislation under the Directive as not applying to the case at hand. See, e.g., Perathoner v. Pomier, TGI Paris, May 23, 2001 (interpreting away the directive and national legislation in an ISP liability case).
Canada has passed legislation giving ISPs immunity similar to the DMCA. See Copyright Act, R.S.C., ch. C-42, §2.4(1)(b) (stating “a person whose only act in respect of the communication of a work or other subject-matter to the public consists of providing the means of telecommunication necessary for another person to so communicate the work or other subject-matter does not communicate that work or other subject-matter to the public”). The Canadian Supreme Court interpreted this provision of the Copyright Act to exempt an ISP from liability when it acted merely as a “conduit.” Soc’y of Composers, Authors and Music Publishers of Can. v. Canadian Assoc. of Internet Providers, [2004] S.C.C. 45, 240 D.L.R. (4th) 193, ¶92. The court in that case also interpreted the statute to require something akin to the takedown provision of the DMCA. See id. at ¶110.
[9].Pub. L. No. 105- 304, 112 Stat. 2860 (1998) (codified in scattered sections of 17 U.S.C.).
[10]In the US, the Trademark Act of 1946, statutes § 1114 and § 1125 are specific to trademark infringement.
[11] As reported in the UK Telegraph by Kathy Marks on the 20th Apr 95. The policeman is quoted: "...If this had got out unchecked it could have done me serious professional harm. I am in a position of extreme trust and there has got to be no doubt...that I am 100 percent trustworthy".
[12] Cubby v CompuServe, 776 F.Supp.135 (S.D.N.Y. 1991). Another case, this time involving AOL was that of Kenneth Zeran v America On-line Incorporated heard by the United States Court of Appeals for the 4th Circuit (No. 97-1523 which was decided in November 1997). This was a case against AOL for unreasonably delaying in removing defamatory messages. The Court in 1st Instance and the Court of Appeal found for AOL.
[13] Compuserve offered an electronic news service named “Rumorville”. This was prepared and published by a third party and distributed over the CompuServe network.
[14] (NY Sup Ct May 24,1995)
[15] Ibid
[16] Communications Decency Act
[17] The provision was first drafted to include such postings even where the material is protected under the US Constitution; it has subsequently been amended.
[18] The EU Electronic Commerce Directive (No. 2000/31/EC) has now specifically limited the liability of an ISP to where it has been informed of a defamatory posting and has failed to remove it promptly as was the situation in Demon Internet. Lawrence Godfrey v Demon Internet Limited (unreported Queens Bench Division - 26th March, 1999)
[19] Western Provident v. Norwich Union (The Times Law Report, 1997).
[20] Godfrey v Demon Internet Ltd, QBD, [1999] 4 All ER 342, [2000] 3 WLR 1020; [2001] QB 201; Byrne v Deane [1937] 2 All ER 204 was stated to apply.
[21] Godfrey v Demon Internet Limited [1999] 4 All.E.R.342
[22] C.68/93
[23] Rindos v Hardwick No. 940164, March 25, 1994 (Supreme Court of Western Australia) (unreported); see also Gareth Sansom, Illegal and Offensive Content on the Information Highway (Ottawa: Industry Canada, 1995).
[24] Ibid, it was the decision of the court that no difference in the context of the Internet News groups and bulletin boards should be held to exist when compared to conventional media. Thus, any action against a publisher is valid in the context of the Internet to the same extent as it would be should the defamatory remark been published in say a newspaper.
[25] RECORDING INDUSTRY ASSOCIATION OF AMERICA, INC., (RIAA) v. Verizon Internet Services, 351 F.3d 1229 (DC Cir. 2003); See also Godfrey v Demon Internet
[26] Further, in the US, the Digital Millennium Copyright Act’s (DMCA’s) “good faith” requirement may not require “due diligence” or affirmative consideration of whether the activity is protected under the fair-use doctrine. In contrast, FRCP 11 requires the “best of the signer’s knowledge, information and belief formed after reasonable inquiry, it is well grounded in fact and is warranted by existing law…”. Additionally, under the DMCA, penalties attach only if the copyright owner “knowingly, materially” misrepresents an infringement, so the copyright owner is motivated not to investigate a claim carefully before seeking to enforce a DMCA right.
[27] Brown & Lehman (1995) (the paper considers the arguments for creating an exception to the general rule of vicarious liability in copyright infringement for ISPs, and those that reject this approach).
[28] Thompson v Australian Capital Television, (1996) 71 ALJR 131
[29] See also “Google pulls anti-scientology links”, Matt Loney & Evan Hansen, CNET News, March 21, 2002; “Google Yanks Anti-Church Site”, Declan McCullagh, Wired News, March 21, 2002; “Church v. Google: How the Church of Scientology is forcing Google to censor its critics”, John Hiler, Microcontent News, March 21, 2002; “Lawyers Keep Barney Pure”, Declan McCullagh, Wired News, July 4, 2001.
[30] See Reidenberg, J (2004) “States and Internet Enforcement”, 1 UNIV. OTTAWA L. & TECH. J. 1
[31] Ibid.
[33] s.91(1) of Schedule 5 to the Broadcasting Services Act states:
(i) subjects, or would have the effect (whether direct or indirect) of subjecting, an internet content host/internet service provider to liability (whether criminal or civil) in respect of hosting/carrying particular internet content in a case where the host/provider was not aware of the nature of the internet content; or
(ii) requires, or would have the effect (whether direct or indirect) of requiring, an internet content host/internet service provider to monitor, make inquiries about, or keep records of, internet content hosted/carried by the host/provider.
[34] The Broadcasting Services Act specifically excludes e-mail, certain video and radio streaming and voice telephony, and discourages ISPs and ICHs from monitoring content by the nature of the defence. See also Eisenberg J, 'Safely out of site: the impact of the new online content legislation on defamation law' (2000) 23 UNSW Law Journal; Collins M, 'Liability of internet intermediaries in Australian defamation law' (2000) Media & Arts Law Review 209.
[35] See also EFA, Defamation Laws & the Internet
[36] Computer Fraud and Abuse Act (CFAA), 18 U.S.C. 1030; There is an obligation for prosecution under the CFAA that a non-public computer is damaged where the term “damage” means any impairment to the integrity or availability of data, a program, a system, or information.
[37] Computer Misuse Act 1990 (c. 18), 1990 CHAPTER 18
[38] The PCI-DSS at section 5 requires that “Anti-virus software must be used on all systems commonly affected by viruses to protect systems from malicious software.”
[39] Scandariato, R.; Knight, J.C. (2004) “The design and evaluation of a defense system for Internet worms” Proceedings of the 23rd IEEE International Symposium on Reliable Distributed Systems, 2004. Volume, Issue, 18-20 Oct. 2004 Page(s): 164 - 173
[40] The Consumer Protection Act 1987 (Product Liability) (Modification) Order 2000 (Statutory Instrument 2000 No. 2771)
[41] See also, Electronic Commerce (EC Directive) Regulations 2002, SI 2000/2013 and the provisions of the Product Liability Directive (85/374/EEC)
[42] Dorothy E. Denning, Information Warfare and Security, ACM Press, New York, 1999
[43] See for instance Hazen (1977); Gagnon, Macklin & Simons (2003) and Slawotsky (2005)