Friday, 2 September 2011

The Economy is screwed?

The economy is far from screwed. I hate when people say this. I worked in Mumbai for a time. There I saw the world’s largest slum. I saw people leaving to work and support families they see once a year at best. I saw the garbage heaps in Manila. I saw the Congo and FARC-controlled places in Venezuela.

I saw Somalia, and even Archangel in what was touted as an industrial powerhouse of a failed socialist country.

These are failed economies.

In these places, people would work long hours doing stressful, repetitive jobs and go home to a shack, sending money to their families while earning less than the poorest welfare recipient in the US receives. They can work 100-hour weeks for just a few dollars a day.

So to all those people arguing how screwed things are right now, I have to ask…

How is your lot so bad?
Things change. Some businesses go under as they are inefficient. This is a good thing. It means that they are being replaced by more productive ones. In the long run, we all benefit from this. The trouble is when we try to bail out failing industries (such as the Australian bailout of Kodak's film business when digital cameras started to replace film). When this happens, we all suffer. It seems good, as we help a few people, but we forget the unseen.

We forget that the money to bail out these industries comes from somewhere: the more efficient industries and companies that would otherwise have employed more people.

So, we take parasitically. That is basically the case, as government only redistributes and never creates. The issue is that governments are also inefficient in this process. They cannot be 100% efficient in redistributing wealth, so they in fact destroy wealth.

We need to stop thinking that people in the west are poor and stop the welfare trap we have created that makes more and more people dependent on it each year.

Mostly, we need to stop any bailouts – ever.

Failing companies do not just disappear, they are liquidated. The parts of them that have value are sold to others who can utilise them more effectively. That is, to companies that can create more wealth.

The economy is just an abstract.
We need to stop looking at the pipes (GDP) and look at the bucket (Wealth).

It does not matter one bit how much water flows endlessly around a pipe; what matters is how much water there is in the system. This is the confusion we make: the Keynesian fallacy that a fast flow of capital, any capital, is what matters, rather than the total amount of capital.

What matters is the size of the pie, that is, increasing wealth, and not more sleight-of-hand tricks such as our government uses right now to make it seem more effective.

What matters is wealth. 

The US, Australia and most of the West have this. We all want everything now. This is what economics is all about: limited resources. There is only so much time and so many goods, and many people who want them. We cannot have all we want NOW, and if we try, we will one day find it is time to pay the piper.

Try delayed gratification. Be happy with all we have! If you are not, work to earn it and be happy in the effort.

If you want more, wait, save and buy when you can afford it.

Saving is not lost or hoarded money. Saving increases the pool of funds for lending to business and makes capital investment MORE and not less attractive.

Remember, credit means that you have a little now, but in the long term, you have FAR less.

TTCP and later

NetCat is a great and simple tool with many uses, but it has a number of limitations in being such a simple and generalised tool.

A tool that allows for some more specialised uses of sockets and connection testing is TTCP or “Test TCP”.

Later versions and ports of this program, such as the Windows port NTttcp that I shall be posting on today, allow for TCP and UDP as well as IPv6 socket connections.

Like NetCat, TTCP allows you to send network traffic to and from a host. It does not have all of the functionality of NetCat, but equally, NetCat does not have the reporting and benchmarking of TTCP.

The first stage of using NTttcp is to install it on both of the systems you are testing. It is available from Microsoft.

Fig 1: Installing NTttcp

We see the install process in Figures 1, 2 and 3.
Fig 2: Accept the terms.

Fig 3: Select where the program will be installed

Fig 4: Confirm the install

Finally, confirm the install if the options are all OK and it will complete the installation.

Fig 5: Awaiting installation

Fig 6: And we are done…

Once installed…
Microsoft’s documentation states the following:
NTttcp is a multithreaded, asynchronous application that sends and receives data between two or more endpoints and reports the network performance for the duration of the transfer. It is essentially a Winsock-based port of the ttcp tool that measures networking performance in terms of bytes transferred per second and CPU cycles per byte. Because it can be difficult to diagnose a system’s overall performance without dividing the system into smaller subsystems, NTttcp allows users to narrow the focus of their testing and investigation to just the networking subsystem.
NTttcp measures a system’s networking performance for both Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) traffic. The application can be configured in many ways, including:
  • Setting software affinity for threads to a specified processor index.
  • Specifying asynchronous or synchronous data transfers.
  • Specifying data verification at the application level for a predetermined pattern in the application buffers.
  • Sending and receiving traffic from multiple Internet Protocol (IP) addresses with a single command.
  • Supporting IPv6 performance testing.
  • Supporting UDP performance testing.
  • Supporting time-driven testing.
The complete details are listed in the Microsoft document, but we can see an example of starting the listener process on one host in Fig 7.

Fig 7: Starting the Receiver in wait mode

And the transmit process in Fig 8.

Fig 8: Starting the sender process (and the options above)

The default install directory (or for that matter any place you installed NTttcp) will have a detailed document on its use called TCP_Tool.docx.

C:\Program Files (x86)\Microsoft Corporation\NT Testing TCP Tool
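As a sketch of the command syntax (treat the flags as assumptions; the binary names and options vary between releases, and the TCP_Tool.docx document above is authoritative for your version), the receiver is started first and the sender is then pointed at it, with <receiver-ip> as a placeholder:

```
ntttcpr -m 1,0,<receiver-ip> -a
ntttcps -m 1,0,<receiver-ip> -a
```

Here -m maps one thread to CPU 0 and the receiver's IP address, and -a selects asynchronous I/O.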

After the process has run for some time, you will be able to see data on the connection (Fig 9).

Fig 9: And the results

This is useful for mapping ports, collecting bandwidth statistics, doing performance tests under load and DDoS conditions, as well as validating firewall changes.

Thursday, 1 September 2011

Connecting to HTTP

The Web is driven by HTTP or the HyperText Transfer Protocol. Like many protocols that have been around for some time, it is deceptively simple.

There are a few “methods” used with HTTP. These include the ones we will primarily be concerned with:
  • GET
  • HEAD
  • POST
GET is the main method you will utilise in pulling or requesting a web page.

A conditional GET only grabs the page if it has been updated since a certain point.
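For example, a conditional request (the date here is a placeholder) adds an If-Modified-Since header; the server returns the page only if it has changed since that date, and a 304 Not Modified status otherwise:

```
GET / HTTP/1.0
If-Modified-Since: Sat, 29 Oct 1994 19:43:31 GMT
```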

The HEAD method requests a page, but only the headers for the page. In testing site security, it can be useful to just grab the status line and headers without the message body.

A POST method is used to send data to the server without having it stored in the URL line (as GET will do).
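A minimal POST looks like the following sketch (the path and field names are invented); the body is carried after the headers, and its length must be declared in Content-Length:

```
POST /login HTTP/1.0
Content-Type: application/x-www-form-urlencoded
Content-Length: 28

username=alice&password=test
```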

Web without a client
For testing purposes, it can be of use to connect to a web server using either telnet or NC (netcat).
The simplest means of doing this is completely manual, as such:

% telnet 80
  Connected to
  GET / HTTP/1.0           [RETURN]
We see from the commands above (and displayed in Fig 1) just how simple it can be to connect to an HTTP server.
Figure 1: Connecting to a web server using Telnet

This works the same way for a HEAD request (displaying information about the server). We see this below:
% telnet 80
  Connected to WWW.INTEGYRS.COM.
  HEAD / HTTP/1.0           [RETURN]
And again in Fig 2.
Figure 2: Connecting to a web server using Telnet to get the HEADer

In this example, we are using HTTP/1.0 and not HTTP/1.1, as we have not started using Host headers or anything more complicated. If you want to really get into the details, we need to use HTTP/1.1 and also send more information. To do this, we want to make a file with the details of our request to ensure we do not make a typo.

Figure 3 is a rather simple request. We could also include a good deal more detail.

Figure 3: Making a file of what we send to the Web Server

Some of the details we could request in an HTTP/1.1 request include:
  • Byte range – allows for a request of part of a document:
Range: bytes=500-999
  • Hostname identification – most servers use name-based virtual hosting, and every HTTP/1.1 request must specify the hostname.
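As a sketch of building such a request file from the shell (www.example.com is a placeholder host), note that HTTP lines end in CRLF, which printf makes explicit:

```shell
# Build an HTTP/1.1 request file with explicit CRLF line endings.
printf 'GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n' > GET_HTML
```

The file can then be fed to netcat in the same way as shown with the GET_HTML file below.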

In Figure 4, I have used vi in Linux to create a list of requests for the web server.

Figure 4: vi to create a GET file

We can then use netcat to display the page as we see below and in Figure 5:
nc -v 80 < GET_HTML
And the page is displayed.

Figure 5: NetCat grabs the HTML

As a silly scripted example of saving the files, you could make a file with a set of names and a bash script, or even make a directory with a list of empty files named after the web servers you want to check. I am sure that with a few minutes' thought you can come up with a better way to do this.
Figure 6: NetCat Scan Script

In Figure 6 we have created a script that scans a list of domain names given as input (no checking, no validation and not secure… do not do this at home) and saves the default page to a file.
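A sketch of such a loop follows (with the same caveats: no checking, no validation; the nc timeout value is an assumption). Here the script is only written out and syntax-checked, so no hosts are contacted:

```shell
# Write the scan loop to a file; it expects one domain name per line on stdin.
cat > nc_scan.sh <<'EOF'
#!/bin/bash
while read -r host; do
  # Grab the default page and save it under the host's name.
  printf 'GET / HTTP/1.0\r\n\r\n' | nc -w 3 "$host" 80 > "${host}.html"
done
EOF
bash -n nc_scan.sh   # syntax check only; nothing is scanned here
```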
I have not added a break to the loop, so we will step through using ^C to see what it does.

Spend a little time and you can make a good test script. I will cover doing this correctly in a subsequent post.

Figure 7: A simple script 

If you want to learn more, have a look at the protocol specifications in the various formats:

Wednesday, 31 August 2011

Emailing using Telnet

Email clients are not always available and you may need to test a system. Doing this is simple, as SMTP is a text-based protocol.

Some systems are more complex with password requirements, but these are still simple, and a quick read of the RFC will reveal all the commands.
You do… → Server responds as follows…
  • Telnet to the hostname on port 25 → 220 (then identifies itself, possibly with several lines of 220 + text)
  • HELO your_domain_name (or whatever) → 250 (followed by a human-readable message)
  • MAIL FROM: (your email address) → 250 … is syntactically correct (or similar)
  • RCPT TO: (the email address you want to send to) → 250 … is syntactically correct
  • DATA → Tells you to send data, ended by CRLF period CRLF
  • Type your message, then CRLF period CRLF (ie, type a period on a line by itself then hit ENTER) → 250
  • QUIT → Signoff message
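Put together, a hypothetical session looks like this (the hostnames and addresses are invented, and the server replies shown are typical rather than verbatim):

```
220 mail.example.com ESMTP ready
HELO client.example.com
250 mail.example.com Hello
MAIL FROM:<alice@example.com>
250 OK
RCPT TO:<bob@example.com>
250 OK
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: test

A one-line test message.
250 OK: queued
QUIT
221 Bye
```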

There are several resources on the web concerning email; the best one to start with for the basics is:

Tuesday, 30 August 2011

Virtualisation and Forensics

The following are a few points on the effects of virtualisation on digital forensics.

  • Memory state is retained by the virtualised system
  • Memory forensics is currently a technically difficult field with few qualified people
  • VMs make capture simple – both of disk and memory
  • VMs have a snapshot capability, which is handy for incident response and forensic capture
The reasons for these points come from the fact that memory (especially on Microsoft systems) may contain details of deleted files and transactions for a long time. For example, email deleted on a server may be retained in memory for weeks even though the sender believes that it was deleted and wiped.

The capture capability of the snapshot functions on VMs means that a single file can be captured with all memory and state information.

Personally, I use an open source tool called “Liveview” to view captured images. A simple “dd” bit image can be loaded into Liveview to replay the image as if I were on the host. Liveview links to VMware to play the captured images.

With the snapshot and replay functions of VMware coupled with Liveview, I can load a copy of a forensically captured image and test it “offline”. This allows me to use tools that may alter the image without fear of contaminating the evidential value of the image – as I am only using a copy.

Live View
Liveview allows the configuration of the system time to start the image and I can thus experiment without corrupting evidence.

When I have found the evidence of what has occurred, I can replay the actions that I have taken using VMware's replay function. This allows for the presentation of the evidence in a non-technical manner that the jury may comprehend.

In the case of organisations that are already using VMs, this process is simplified. The vast majority of the capture process is effectively done for me. The issue is that the host may also have much more data than the company running the VM wanted to retain.

Details on using Liveview can be found here.

This tool is great for analysing compromised or malware infected systems. Done well, you can run up a “live” version of the host and monitor the traffic to and from it allowing you to see which processes have been compromised quickly.

VM’s and eDiscovery
eDiscovery and document retention come into the discussion at this point. There are requirements to hold documents when a case has started or if one is likely. As memory and state hold information, and coupled with some of the decisions in the US that may be influential here in Australia (though not authoritative), it is likely that this data could be called under subpoena or captured in an Anton Piller (civil search) order.

In this, files that a company had believed destroyed could actually be recovered.

Worse, documents outside of the request listed in the order could be inadvertently provided, given the difficulties of separating material held in state data.

Monday, 29 August 2011

Cyber (Crime / Espionage / Terror) - Lecture 2

Cyber (Crime / Espionage / Terror)

Join us for a Webinar on September 9

Space is limited.
Reserve your Webinar seat now at:

Lecture 2 in a series of 24.

We have just seen the largest cyber espionage incident in recorded history and it is only set to get bigger. The rise of cyber-based groups engaging in hacktivism is creating chaos, but it is only the start as these groups begin to do more damage. Al-Qaeda and other pure terror groups have been on the back foot, unable to leverage the social aspects of Web 2.0, but will this change as groups such as Anon and LulzSec define a distributed model for social malfeasance?

Add to this criminal controlled botnets of millions of zombie hosts and the decade is set to be the decade of the hack!

In this lecture, we focus on Cyber Crime. This will be the first of 4 lectures detailing the rise and development of cyber crime and its links to traditional criminal enterprises (including the drug trade, prostitution and smuggling).

Presented by Dr Craig Wright of Charles Sturt University [1] and the Global Institute for Cyber Security + Research [2].

1. Http://
2. Http://
 Cyber (Crime / Espionage / Terror)
 Friday, September 9, 2011
 7:00 PM - 8:00 PM AEST
After registering you will receive a confirmation email containing information about joining the Webinar.
System Requirements
PC-based attendees
Required: Windows® 7, Vista, XP or 2003 Server

Macintosh®-based attendees
Required: Mac OS® X 10.5 or newer

Password Cracking - John the Ripper

John the Ripper (JTR) is one of the most popular password auditing tools. JTR uses dictionary attacks where it tries all the words listed in a file (a “wordlist”) to find a match. It also uses the brute force method, where it tries all possible combinations of letters, numbers, and special characters. The primary use of JTR is to detect weak passwords.

John the Ripper can run on Windows, DOS, BeOS, OpenVMS, Unix, and most Unix-like systems.

To run the dictionary attack mode, a wordlist must be provided to JTR. Wordlists are generally text files containing a single word per line which password cracking tools use to perform dictionary-based attacks. A wordlist, password.lst, comes with the default installation of John the Ripper. JTR goes through each word in the wordlist sequentially. Additional techniques (called “word mangling”) can also be performed, such as substitution of letters to numbers, suffixing numbers at the end, etc.

Wordlists can be built, purchased, or downloaded from different sources for free. Generally, the more unique words the list contains, the better. It is therefore recommended that a larger wordlist than the default list be obtained.

Password Lists
John the Ripper attempts to obtain the original password from a list of “hashed” passwords. Hashing is the process wherein a “string” of characters (in this case a password) is taken as an input and run through a “hashing function” that produces a “hash value”.

In Windows machines, account information is stored in the SAM (Security Account Manager) database. The SAM is a binary (non-plain-text) file that can be found in the folder %systemroot%\System32\Config.

This file contains the username and password of all local accounts that reside on the computer. To secure the SAM’s contents, the stored passwords are kept as LM/NTLM hashes rather than plain text. These hashes are non-reversible, which means they cannot be deciphered to produce their original values. However, the algorithm for NTLM hashing is well known, and so the hashes can be recreated. A password cracking tool will simply apply the same hashing algorithm to words from a wordlist and compare the results with the hashed values stored in the SAM database until it finds a match. This is basically how JTR works.
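The hash-and-compare loop at the heart of this can be illustrated with standard shell tools. MD5 stands in here purely for illustration (LM/NTLM are different algorithms), and the target is the well-known MD5 hash of "password":

```shell
# Target hash to crack (MD5 of "password", standing in for a dumped LM/NTLM hash).
target='5f4dcc3b5aa765d61d8327deb882cf99'
# Hash each candidate word and compare it against the target.
while read -r word; do
  h=$(printf '%s' "$word" | md5sum | cut -d' ' -f1)
  if [ "$h" = "$target" ]; then
    echo "cracked: $word" | tee cracked.txt
    break
  fi
done <<'EOF'
letmein
secret
password
EOF
```

This prints "cracked: password" once the third candidate matches; real crackers differ mainly in speed, mangling rules, and the hash algorithms supported.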

In order for JTR to run, it needs a copy or a “dump” of the SAM database. There are a lot of tools that do just that, such as pwdump, fgdump, or cachedump. These tools are run at the command line and the result can be saved to a text file. Please note that these tools require administrator rights on the computer.
In the example below, we use fgdump as our tool to get the password dump.

Fig 1

Fgdump gives us a clear text output in the form host.pwdump, which in our example is (password dump of the local machine). Using notepad, we can see all the users and their hashed passwords:
Fig 2

In UNIX and Unix-like systems, user accounts are stored in a clear-text file usually found at /etc/passwd. This file lists the username and the “hash” of the passwords. If shadowing is implemented (as is mostly the case), the password hashes are held in /etc/shadow, and the two files can be combined into a text file using the following syntax:
unshadow /etc/passwd /etc/shadow > passwordfile.txt

Root access is required for the command to work.

Using JTR in Windows 

How to configure
Before we run John the Ripper, we should first edit the configuration file to tell JTR the filename and location of the wordlist. In the directory where John the Ripper is installed, open the file "john.ini" in Notepad. At the line that says, "Wordfile = ~/password.lst”, change password.lst to the filename of the wordlist (wordlist.txt) as shown in the example below.
Fig 3

How to run
When the password dump ( and the wordlist (wordlist.txt) have been acquired, and the configuration file (john.ini) has been changed accordingly, we are now ready to run John the Ripper. Issuing the command

john-386

shows us the list of options
Fig 4
At the command prompt, simply running john-386 against the password dump will make John the Ripper go through its default order of cracking modes until it finds the passwords. JTR has many modes and options that can increase the speed and chances of success. However, for most situations, the default settings will suffice.
Fig 5

JTR will first perform the “single crack” mode, which tests for weak passwords. Then, it will perform a dictionary attack using the wordlist (wordlist.txt) described earlier, and lastly, it will perform an “incremental crack” which is simply a brute-force attack.

John the Ripper will display its output on screen as it goes through the entire password list. Pressing Ctrl-Break will interrupt the processing
Fig 6

and then save it to a restore point appropriately called “restore”.

Issuing the following command will resume the processing:
john-386 -restore
Fig 7

In our example, the passwords of the first two accounts, Guest and TStark, were cracked in under a minute. This shows that very weak passwords, such as those of short length with just a simple mixture of alphabetic characters and numbers, are very easily cracked using either the single crack or dictionary attack. User JDorian uses a 10-character password in a foreign language; however, it contains only alphabetic characters and thus was easily cracked using word-mangling techniques. The next screenshot shows JTR after it has done its processing. All passwords were cracked in just twenty minutes.
Fig 8

The following command shows how many passwords have been cracked and how many are left:
john-386 -show
Fig 9

Using JTR in UNIX
How to configure
As in Windows, the way John the Ripper performs can be customised by editing its configuration file in Unix. The configuration file is usually named john.conf and saved in the directory where JTR is installed.
In the conf file, global options, wordlist, rules, and parameters can be defined or set.
If we have downloaded a wordlist named wordlist.txt, we can then define in the conf file
Wordlist = wordlist.txt
The default wordlist is password.lst.

How to run
We can run JTR in a shell using options defined in the configuration file with the following:
# john passwordfile.txt 

where passwordfile.txt refers to the dump of the user accounts file obtained earlier.

JTR will perform single, wordlist (using wordlist.txt), and then incremental modes of attack. A particular mode can be specified by using the additional options --single, --wordlist=wordlist.txt, or --incremental.

The results will be displayed on screen and also saved into a file called john.pot (another filename can be used as defined in the configuration file). This file is not stored in clear text and requires JTR for it to be read. Issuing the following command shows the result of the most recent JTR process:

# john --show passwordfile.txt

Sunday, 28 August 2011

Password Cracking - Brutus

Brutus is a free, online password cracker. It can use either a dictionary attack using a word list or the “brute force” method of finding passwords against HTTP sites (as in a password protected web page), a POP3 (mail) server, an FTP server, SMB, or a Telnet-enabled machine (as in a router console).

Although it has a reputation for being largely used for malicious purposes, Brutus can be a valuable authentication-testing tool for administrators and auditors to check for weak passwords.

Before we use Brutus, we need to gather vital information that can help us streamline our options, with the hope of achieving better performance.

As an example, we are given an FTP site with an IP address of First we need to verify whether the site requires authentication or it permits anonymous logon. One of the simplest ways to do this is logging on to the FTP server using the command line (Figure 1).

Figure 1

Upon FTP-ing the site, we are prompted with a Username request. If we use “anonymous” with a blank password, assuming that anonymous logon is permitted, we will be granted access to the FTP site (Figure 2).
Figure 2

However, from Figure 2 we learn that anonymous logon is not permitted and that a username and a password are required to access the FTP site.

Next we find out what port the FTP host is using. We can use nmap in the command line.
Figure 3

We can see from the nmap scan that port 21 is open for FTP services. Port 21 is the default for FTP.
With the information we have gathered, we are now ready to run Brutus.

Using Brutus
Figure 4 shows the default screen of Brutus immediately after launching:
Figure 4

At the Target field, we can enter an IP address or URL of a website or an FTP server. We will use the IP address of the FTP server in our example. In the Target field, we will put (Figure 5).
Figure 5

Next we change the Type by clicking on the dropdown menu and then selecting FTP (Figure 6).
Figure 6

Notice that in the Connection Options, the Port Number has changed from 80 (the default for HTTP) to 21, which is the default for FTP connections. We have already verified from our nmap scan that port 21 is indeed being used by the FTP host.

The HTTP Options section has also changed into FTP Options. To force the attack, tick the Try to stay connected for option and select Unlimited attempts.

In the Authentication Options, let’s assume there is an “Administrator” account that can be used to access the FTP site. We will put this single username Administrator in the UserID field (Figure 7).
Figure 7

The Pass Mode field gives us an option of what type of attack to use. The default Word List attack uses a list of passwords from a text file which Brutus goes through sequentially. The Combo List includes the username with the password in the word list. The Brute Force attack is the most exhaustive type, trying every possible combination, permutation and substitution from a given alphanumeric-and-character set.
For our example, let’s use Word List. Brutus comes with a default wordlist of passwords (words.txt) but it is recommended to use a larger list. We have obtained one such list, wordlist.txt, and we can tell Brutus to use it by putting its filename in the Pass File field (Figure 8).

Figure 8

We can now launch the attack by clicking on the Start button at the upper right side of the Brutus interface.
Figure 9

From the above figure we can see that the wordlist contains 306707 passwords. Brutus goes through each password until it finds one that, when paired with the username Administrator, allows access to the FTP site, which in this case was successful for the password found in line 111280. The Positive Authentication Results pane displays the outcome of the scan, with details such as the target host, the host type, username and the cracked password. To verify if the password obtained is correct, we again try to log on using the command line (Figure 10).

Figure 10

Success! Using the administrator username and the password obtained by Brutus we have successfully logged into the FTP server.

It is interesting to note that Brutus took only a little under 3 minutes to crack such a simple password. Again, it is recommended that a larger wordlist be used to increase the probability of success especially against more complex passwords.

Brutus also has a built-in Wordlist Generation feature. This feature is available by clicking on Wordlist Generation under the Tools menu on the main screen. The following actions are available from the Action dropdown list:
  • Convert List (LF > CRLF) - In some wordlists the line break is indicated by a single LF (Line Feed) character, such as in Unix file types. The LF > CRLF feature converts these types of lists into a DOS (and Windows) recognisable format by replacing the LF character with CRLF (Carriage Return/Line Feed).
  • Only Word Length – This simply reads the input file and copies to an output file all the words that match the specified word length parameters.
  • Remove Duplicates - This removes all duplicate entries from the word list.
  • Permutations – Words from the input file are read one at a time and then run through a set of permutations such as uppercase-lowercase mixes, substitution of common alphabetic characters with numbers and vice versa (“leet speak”), etc, with each permutation copied into an output file. This may result in a single word having 50+ variants. This is a good way to build a larger wordlist from a smaller and simpler set of dictionary words, but may take some time to finish.
  • Create New List – The same as above but instead of reading the words from an input file, the “seed words” are provided by the user. The seed words are then run through the user-defined permutations and then copied to a new word list.
  • Create New List for User – the same as above but creates a combo-list where both the username and the password are specified on each line. The username and any seed words you specify will be used to create the list.
  • Create New List for Users – the same as above but instead of specifying a single username, the input file will be a standard user list file.
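The first three actions can be approximated with standard Unix tools, shown here on a tiny made-up list:

```shell
# A small sample wordlist (one word per line, Unix LF endings).
printf 'alpha\nbeta\nalpha\nepsilon\n' > words.txt

# Convert List (LF -> CRLF): append a carriage return to each line (GNU sed).
sed 's/$/\r/' words.txt > words_dos.txt

# Only Word Length: keep only words of 5 to 7 characters.
grep -E '^.{5,7}$' words.txt > words_len.txt

# Remove Duplicates: sort the list and drop repeated entries.
sort -u words.txt > words_dedup.txt
```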

Internet Piracy, Contraband, Counterfeit Products, Plagiarism and Copyright and the “Security Professional”

It may often occur that works offered over the Internet, either by a service provider or its subscribers, are included within the copyright owned by a third party who has not sanctioned the work's distribution. In some instances, a service provider may be liable for a copyright infringement using its service and systems. Access to copyrighted material without a licence is illegal in itself. It is analogous to receiving stolen property. The damage done through plagiarism and the deception it entails harms not just those involved, but also the entire information security community when it is one of our own.

Plagiarism can be no different to receiving stolen intellectual property.
What has changed is the ease and distances associated with the distribution of copied materials. The global Internet allows people to copy and distribute copyrighted works almost instantaneously anywhere in the world, be this via one-to-one distribution or a shared P2P network. Intermediaries are involved as both the storage sites and the conduit.

Plagiarism varies in its extent. It goes from simply rephrasing the ideas of another without referencing your sources right through to the literal block copy of paragraphs of text and the theft of entire passages.

This literal copying is a form of fraud and theft. In some cases, the aim is not an accidental unacknowledged phrase but deception. The author wants to use the works of another as their own. In this “uniquely secretive form of theft”, the author is asserting a level of skill, knowledge and expertise that they do not exhibit on their own. They are using the work and study of another to cover for their own lack of ability.

Simon Caterson wrote [1] that “Plagiarists can only get away with stealing words while their victims remain in ignorance. As Christopher Ricks points out, it is the intention to conceal that essentially distinguishes plagiarism from legitimate forms of literary appropriation, such as allusion: "the alluder hopes that the reader will recognise something, the plagiariser that the reader will not".”
Some (and this has been attributed to many individuals) state that “to steal ideas from one person is plagiarism. To steal from many is research”.

This makes light of the damage that the fraud and deception of plagiarism causes, but more importantly, it detracts from real research. A good researcher uses the ideas of others, but also attributes the sources.

Further, plagiarism does not just hurt a nebulous idea of society and the copyright holder, it leads to liability for the hosting party in some instances. As a breach of copyright laws, the ICP [Internet Content Provider] or ISP can be found liable if they fail to act. This even extends to online journals and blogs.

For a party to be charged with a civil copyright infringement or media piracy in the US, the claimant needs to prove each of the following:

  • show ownership of the copyright work, and
  • demonstrate that the other party "violated at least one exclusive right granted to copyright holders under 17 U.S.C. § 106".
What an intermediary needs to know is that simply making files available for download is equivalent to distribution. This was determined in the US case, Elektra v. Perez (Elektra v. Perez, D. Or. 6:05-cv-00931-AA). Intermediaries that provide storage and distribution services need to factor this into the contracts that they offer and the procedures they use in order to ensure that they are not hosting illegal content.

The problems in Information Security
Plagiarism by “security professionals” (and I use this term loosely in the wider sense, as fraud is not professional) is of particular concern. It is one thing to forget to attribute an idea in a report that the author has actually written; it is another altogether to pass the writings of another person off as your own.

The issue is that some people in the industry leverage the works of others, coupled with external promotion, to seem more than they are. We all suffer for this, and in a field as critical as security, the costs can be disproportionate to the damage a single individual would seem capable of creating.

This topic is not new. Other writers have taken Gregory D. Evans, “author” of "World’s No. 1 Hacker" book to task for stealing vast blocks of other people’s work. Yet these people remain. Despite their frauds in passing off a level of expertise they do not actually possess, people trust these security doppelgangers.

Here in Australia, we have such a case as well. I wrote on this topic three years ago now. That did not stop this individual from promoting herself as more than she really is, to the point where she has been awarded ICT Professional of the Year in Australia.

In one example of her writings, Ms Rattray took text from Erik Guldentops’ “Harnessing IT for Secure, Profitable Use” and block copied it into an article she professed to have written. This article was published in Insecure; an article by Jo Stewart-Rattray began on page 73 of issue 14. I notified the publishers, who had that article pulled as Ms Rattray had plagiarised it. The original copy is still available thanks to the nature of the web.

Ms Rattray’s feeble excuse for fraudulently presenting the writings as her own was that she had planned to add a reference later. Really? Adding a reference when more than half the article has been stolen and fraudulently promoted as her own? For that matter, would not adding a reference have been better done before publication? If you have been published for three months and have made no attempt to update the document, does that not suggest that you intended all along to pass it off as your own?

There are copyright issues with this level of plagiarism, but the true problem is the betrayal of trust.
People such as Ms Rattray and Gregory D. Evans promote themselves as experts. People trust them in what they say and implement solutions and controls based on a level of knowledge that these individuals do not actually have.

In the end, we all suffer when frauds are allowed to flourish. This fraud is a sign of dishonesty.

In these cases, we have to ask: do we really want to trust a person who would steal the works of another and pass them off as their own? They are dishonest; how can we place our trust in them?
Worse, Ms Rattray is a director of ISACA. Allowing her unethical behaviour to stand tarnishes the reputations of all members of ISACA.

The legal issues with respect to copyright and piracy
In the UK, copyright law is governed through the Copyright, Designs and Patents Act 1988 (the “1988 Act”) and the ensuing decisions of the courts. The Australian position[2] mirrors that of the UK, where protection of a work is free and automatic upon its creation, and differs from the position in the US, where a work has to be registered to be actionable. While some divergences may be found, Australian copyright law largely replicates the frameworks in place within the US and UK. The copyright term in Australia is shorter than in these jurisdictions, being the creator’s life plus 50 years, whereas the UK term for literary works is 70 years from the end of the calendar year in which the last remaining author of the work dies. As co-signatories to the Berne Convention, most foreign copyright holders are also sheltered in both the UK and Australia.

The 1988 Act catalogues the copyright holder’s exclusive rights as the rights to copy, to issue copies of the work to the public, to perform, show or play the work in public, and to make adaptations. An ephemeral reproduction that is created within a host or router is a reproduction for the purposes of copyright law. However, there appears to be no special right to broadcast a work over a network; a right is granted in Section 16(1)(d) to broadcast the work or include it in a cable programme service. The notion of “broadcast” is restricted to wireless telegraphy receivable by the general public, and interactive services are explicitly excluded from the designation of “cable programme service” (s.7(2)(a)). A proviso making a person an infringer in the event of remote copying has been defined to encompass occasions where a person transmits the work over a telecommunications system[3] knowing, or reasonably believing, that reception of the transmission will result in infringing copies being created.

The law contains provisions imposing criminal penalties and civil remedies for making, importing or commercially trading in items or services designed to thwart technological copyright protection instruments, and sanctions against tampering with electronic rights management information and against distributing or commercially dealing with material whose rights management information has been tampered with.[4]
There are several legislative limitations on the scope of exclusive rights under UK law[5]. Liability is also possible for secondary infringement, including importing and distributing an infringing copy prepared by a third party. The scope of the exclusive rights of the copyright owner is extensive enough to include an ISP or ICH that uses, or consciously allows another to use, its system to store and disseminate unauthorized copies of copyright works. This situation would create the risk of civil action. A contravention could constitute a criminal offence if a commercial motivation for the copyright infringement could be demonstrated.

The Australian High Court decision in Telstra Corporation Ltd v Australasian Performing Rights Association Limited[6] imposed primary liability for copyright infringement on Telstra in respect of music broadcast over a telephone “hold” system. A large part of the decision concentrated on the definition of the diffusion right in Australia.[7] It follows from this decision that if an ISP broadcasts copyright works in the general course of disseminating other materials through the Internet, that diffusion is a “transmission to subscribers to a diffusion service” as defined by the Australian Copyright Act. It consequently emerges that an ISP may be directly liable under Australian common law for an infringement of copyright caused by that transmission, that is, for the infringements of its customers.[8]
A determination as to whether a message using telecommunications is “to the public”[9] will likely hinge on whether the message is made “openly, without concealment” [33] to a sufficiently large number of recipients. No case has attempted to quantify a specific cut-off point.

In Moorhouse v. University of New South Wales,[10] a writer initiated a “test case” asserting copyright infringement against the University of New South Wales. The University had provided a photocopier to allow the photocopying of works held by the university’s library. A chapter of the plaintiff’s manuscript was copied by means of the photocopier. The library had taken only rudimentary measures to control unauthorized copying: no monitoring of the use of the photocopier was made, and the sign located on the photocopier was unclear and was determined by the Court not to be “adequate”[11]. The Australian High Court held that, whilst the University had not directly infringed the plaintiff’s copyright, the University had sanctioned infringements of copyright in that the library had provided a boundless incitement for its patrons to duplicate material in the library.[12] Intermediaries are frequently in the same position as the University: they provide rudimentary monitoring of client infringements at best. In July 1997, the Australian Attorney-General published a discussion paper[13] that proposed a new broad-based, technology-neutral diffusion right as well as a right of making available to the public. This creates a position where direct infringement by users of a peer-to-peer (P2P) file-sharing network would be covered in Australian law in a manner comparable to the US position in both Napster and Grokster[14].

Mann and Belzley’s position, which holds the least-cost intermediary liable, is likely to be upheld under existing UK, US and Australian law. The positions held by the courts in Telstra v Apra and Moorhouse v UNSW[15] define the conditions necessary to establish public dissemination and infringement through a sanctioned arrangement. The public dissemination of music clips on a website could be seen as analogous to the copying of a manuscript, with the ISP's disclaimer being held as an inadequate control. It is clear that the provision of technical controls, monitoring and the issuing of take-down notices by the ISP would be far more effective at controlling copyright infringement than enforcing infringements against individuals.

Several cases have occurred in the US involving ISPs or other service providers that hosted copyright material made available to those accessing the site. A significant decision was Religious Technology Center v Netcom On-line Communication Services, Inc[16]. The case involved the posting of information online which was disseminated across the Internet. The postings were cached by the hosting provider for several days, and automatically stored by Netcom’s system for 11 days. The court held on summary judgment that Netcom was not a direct infringer[17]: the mere fact that Netcom’s system automatically made transitory copies of the works did not constitute copying by Netcom. The court furthermore rejected arguments that Netcom was vicariously liable. The Electronic Commerce (EC Directive) Regulations 2002[18] warrant that an equivalent outcome would be expected in the UK[19].
The US Congress has responded with a number of statutes that are, by and large, intended to protect the intermediary from the threat of liability.[20] The Digital Millennium Copyright Act (DMCA)[21] addresses the possibility of copyright liability. The DMCA is drafted such that it exempts intermediaries from liability for copyright infringement while they adhere to the measures delineated in the statute. These in the main compel them to remove infringing material on receipt of an appropriate notification from the copyright holder. These protections apply only in the US. With the globalization of service offerings and the introduction of cloud computing, extra-jurisdictional issues still arise. This makes it all the more critical that intermediaries ensure they have created contracts that can be enforced and that they maintain a suitable monitoring regime.

The “fair dealing” exceptions provided in the copyright laws of the UK are a great deal more restrictive than the “fair use” exceptions of the US. If the Netcom[22] trial had been held in the UK, it would have had to deal with the explicit requirements of Section 17 of the UK’s 1988 Act, which defines copying in a manner that includes storage by electronic means. The Act also includes provisions that cover the creation of transient or incidental copies. These provisions make it probable that the result in the UK would have differed from that in the US, at least in the first instance. The inclusion of storage differentiates ISPs and ICPs from telephone providers, aligning them more closely with publishers. An ISP or ICP could attempt to argue a similarity to a librarian rather than a publisher, but the statutory provisions providing certain exemptions from liability for libraries under the 1988 Act and accompanying regulations are unlikely to apply to an ISP, as the ability of a librarian to make copies is controlled under strict conditions. It is doubtful that these conditions could be met by either an ISP or an ICP.

An ISP or ICP would rarely have complete (or even near-complete) knowledge of the content held on their systems. In contrast, even the largest of libraries has a complete catalogue of the materials on its shelves. Both the common law of the UK and that of Australia divide defamation by publication into three classes. First is the publisher, who is strictly liable for publishing defamatory material: as the distributor of the material, they are presumed to know its content and are not at liberty to use the defense of innocent dissemination. Next are the subordinate publishers, also known as secondary distributors, who are liable for publishing defamatory material to a limited extent; the defense of innocent dissemination can be used if the party can demonstrate that they had no knowledge of the material's content. Lastly, there is the class of those who are not publishers and are not liable for publication.

If an ICP [Internet Content Provider] or ISP is to claim protection as a publisher, it is illogical to expect the last class of defense to apply to them. In the first class, they are liable. This leaves only the option of claiming innocent dissemination as a secondary distributor. If it can be demonstrated that the ISP or ICP monitors the content they maintain in any way, or that the content was brought to the attention of the ICP, this defense will fail. There are both similarities and differences between UK common law and the US defamation code. The US also creates three classes: primary publishers, secondary publishers (also called distributors) and parties who are not publishers. Primary publishers closely resemble the UK common law class of publisher and do not receive protection through limited-liability provisions in the Federal code. Secondary publishers do have some limitations as to the liability they can face. Few cases have considered the liability of ICPs; those that have done so have so far placed the ICP in the same position as authors of printed material. This approach does create interesting possibilities, as can be seen from Macquarie Bank Ltd v Berg[23]. This case involved an ex parte application for an injunction to restrain the publication of material; the intent was to stop publication via a Web site hosted in the US. The result was that New South Wales Supreme Court Justice Simpson declared:

“An injunction to restrain defamation in NSW is designed to ensure compliance with the laws of NSW, and to protect the rights of plaintiffs, as those rights are defined by the law of NSW. Such an injunction is not designed to superimpose the law of NSW relating to defamation on every other state, territory and country of the world. Yet that would be the effect of an order restraining publication on the Internet”
Modern peer-to-peer networks have separated the network from the software through a decentralized indexing process[24] in an attempt to defend themselves from exposure to vicarious liability as in Napster.[25] The methods suggested by Kraakman’s analysis of asset insufficiency [14] have led ICPs and ISPs to become judgment proof, thus restraining the effectiveness of sanctions even against the intermediaries. It seems natural to expect that, as the technology develops, it will in practice become so decentralized as to obviate the existence of any intermediary gatekeeper that could be used to shut down the networks [37].[26]
The success of modern peer-to-peer networks has resulted in the content industry targeting the individual copyright infringers who use peer-to-peer networks to disseminate or download copyrighted material.[27] Existing peer-to-peer networks and software permit the capture of sufficient information concerning individuals who attach to the network to identify the degree of infringement and, possibly, who is responsible [13]. Recent advances in the P2P networking protocols have allowed users to screen their identity, removing the ability of copyright holders to bring their claims to court [1]. As copyright infringement evolves, it will become increasingly improbable that prosecuting individual users will provide a solution[28].

This type of action is currently being fought in the EU, with the Danish ISP Tele2 planning to fight a court order requiring it to block access to the BitTorrent website known as Pirate Bay. The ISP has cut off access to the site for its customers, but other ISPs in Denmark are yet to receive letters requesting that they also prevent their users from accessing the website. The International Federation of the Phonographic Industry (IFPI) has stated that it plans to dispatch the letters this week (Feb 2008)[29].

Jurisdictional issues will play a large role in the determination of a case. The location of the plaintiff, as well as the increasingly global nature of Internet commerce, introduces a level of uncertainty for the ISP and ICP as well as for the author of information. It is insufficient for the ICP to consider only the jurisdiction of the locality in which it is incorporated; rather, it is necessary to also consider the possible range of jurisdictions from which clients of the ICP may operate. Some jurisdictions, such as Australia, seek to limit the reach of their influence. Others, such as Florida in the USA, have taken the opposite approach: Florida’s ‘long arm’ statute permits jurisdiction over those “engaged in substantial and not isolated activity” within the state. When comparing the approaches of the Florida and NSW state courts, we see a radically different approach to determining jurisdiction.

[1] “A plagiarism on them all”, November 20, 2004.
[2] The Australian Act is modeled on the 1956 UK Act.
[3] This does not include broadcasting or cable
[4] See also the websites of the UK Intellectual Property Office, the Australian Copyright Council Online Information Centre and the US Copyright Office.
[5] See Queen’s Bench in Godfrey v. Demon Internet Ltd, QBD, [2001] QB 201. The United Kingdom Parliament took no action to exempt Internet Intermediaries from liability after the court held that an internet service provider liable as the publisher at common law of defamatory remarks posted by a user to a bulletin board.
[6] Telstra Corporation Limited v Australasian Performing Rights Association Limited (1997) 38 IPR 294. The Majority of the High Court (with Justices Toohey and McHugh dissenting) upheld the Full Court that music on hold transmitted to users of wired telephones represents a transmission to subscribers over a diffusion service. The Court further unanimously held that music on hold transmitted to users of mobile telephones involves a broadcast of the music.
[7] Section 26 of the Copyright Act 1968 (Cth, Australia), the Australian Copyright Act.
[8] This decision has created apprehension amongst authors. E.g. Simon Gilchrist “Telstra v Apra –Implications for the Internet” [1998] CTLR 16 & MacMillian, Blakeney “The Internet and Communications Carriers’ Copyright Liability” [1998] EIPR 52.
[9] Ibid; see also Goldman v The Queen (1979), 108 D.L.R. (3d) 17 (S.C.C.), at p. 30. It would therefore appear that it is the intention of the sender of the message which is determinative of the private or public nature of the message.
[10] [1976] R.P.C. 151.
[11] This is similar to the findings in RCA Corp. v. John Fairfax & Sons Ltd [1982] R.P.C. 91 at 100 in which the court stated that “[A] person may be said to authorize another to commit an infringement if he or she has some form of control over the other at the time of infringement or, if there is no such control, if a person is responsible for placing in the hands of another materials which by their nature are almost inevitably to be used for the purpose of infringement.”
[12] [1976] R.P.C. 151 “[A] person who has under his control the means by which an infringement of copyright may be committed - such as a photocopying machine - and who makes it available to other persons knowing, or having reason to suspect, that it is likely to be used for the purpose of committing an infringement, and omitting to take reasonable steps to limit use to legitimate purposes, would authorize any infringement that resulted from its use”.
[13] See Attorney-General’s Discussion Paper, “Copyright and the Digital Agenda”, July 1997 at 71. The goal of this paper was to indicate the method by which Australia could implement the international copyright standards agreed at the December 1996 WIPO meeting.
[14] A&M Records Inc v Napster, Inc 114 F Supp 2d 896 (ND Cal 2000) & A&M Records Inc v Napster, Inc 239 F 3d 1004 (9th Cir 2001); Metro-Goldwyn-Mayer Studios Inc v Grokster Ltd Nos CV-01-08541-SVW, CV-01-09923-SVW (CD Cal, 25 April 2003) ('Grokster'); Grokster Nos CV-01-08541-SVW, CV-01-09923-SVW (CD Cal, 25 April 2003), 21-2.
[15] 47 U.S.C. § 230(c)(1) (2004) (This section details the requirements of the CDA that do not apply to ISPs).
[16] 907 F. Supp. 1361 (N.D. Cal. 1995)
[17] See also MAI Systems Corp. v Peak Computer, Inc., 991 F.2d 511 (9th Cir. 1993), in which it was held that the creation of ephemeral copies in RAM by a third-party service provider which did not have a license to use the plaintiff’s software was copyright infringement.
[18] Statutory Instrument 2002 No. 2013
[19] The act states that an ISP must act “expeditiously to remove or to disable access to the information he has stored upon obtaining actual knowledge of the fact that the information at the initial source of the transmission has been removed from the network”. The lack of response from Netcom would forfeit the protections granted under this act, leaving an ISP liable to the same finding.
[20] With some minor exceptions, other countries have also seen broad liability exemptions for internet intermediaries as the appropriate response to judicial findings of liability. The United Kingdom Parliament took no action after the Queen’s Bench in Godfrey v. Demon Internet Ltd, QBD, [2001] QB 201, held an Internet service provider liable as the publisher at common law of defamatory remarks posted by a user to a bulletin board. In the U.S., §230 of the CDA would prevent such a finding of liability. Similarly, courts in France have held ISPs liable for copyright infringement committed by their subscribers. See Cons. P. v. Monsieur G., TGI Paris, Gaz. Pal. 2000, no. 21, at 42–43 (holding an ISP liable for copyright infringement for hosting what was clearly an infringing website).
In 2000, however, the European Parliament passed Directive 2000/31/EC, which in many ways mimics the DMCA in providing immunity to ISPs when they are acting merely as conduits for the transfer of copyrighted materials and when copyright infringement is due to transient storage. Id. Art. 12, 13. Further, the Directive forbids member states from imposing general duties to monitor on ISPs. Id. Art. 15. This Directive is thus in opposition to the British and French approaches and requires those countries to respond statutorily in much the same fashion as Congress responded to Stratton Oakmont and Religious Technology Center. Of course, courts are always free to interpret the Directive or national legislation under the Directive as not applying to the case at hand. See, e.g., Perathoner v. Pomier, TGI Paris, May 23, 2001 (interpreting away the directive and national legislation in an ISP liability case).
Canada has passed legislation giving ISPs immunity similar to the DMCA. See Copyright Act, R.S.C., ch. C-42, §2.4(1)(b) (stating “a person whose only act in respect of the communication of a work or other subject-matter to the public consists of providing the means of telecommunication necessary for another person to so communicate the work or other subject-matter does not communicate that work or other subject-matter to the public”). The Canadian Supreme Court interpreted this provision of the Copyright Act to exempt an ISP from liability when it acted merely as a “conduit.” Soc’y of Composers, Authors and Music Publishers of Can. v. Canadian Assoc. of Internet Providers, [2004] S.C.C. 45, 240 D.L.R. (4th) 193, 92. The court in that case also interpreted the statute to require something akin to the takedown provision of the DMCA. See id. at 110.
[21] Pub. L. No. 105-304, 112 Stat. 2860 (1998) (codified in scattered sections of 17 U.S.C.).
[22] 907 F. Supp. 1361 (N.D. Cal. 1995)
[23] [1999] A Def R 53, 035.
[24] Metro-Goldwyn-Mayer Studios, Inc. v. Grokster, Ltd., 380 F.3d 1154 (9th Cir.) (refusing to find liability for Grokster even though it aided end-users in copyright infringement; the case is fundamentally different from Napster), cert. granted, 125 S. Ct. 686 (2004).
[26] This text explains that peer-to-peer networks have removed the intermediary that copyright enforcement requires.
[27] See Amy Harmon, Subpoenas Sent to File Sharers Prompt Anger and Remorse, N.Y. Times, July 28, 2003, at C1. See also Brian Hindo & Ira Sager, Music Pirates: Still on Board, Bus. Wk., Jan. 26, 2004, at 13. See J. Cam Barker, Grossly Excessive Penalties in the Battle Against Illegal File-Sharing: The Troubling Effects of Aggregating Minimum Statutory Damages for Copyright Infringement, 83 Texas L. Rev. 525 (2004).
[28] Perversely, what has probably in fact reduced the frequency of copyright infringement is more crime: using P2P systems subjects a computer to the threat of viruses spread inside the files obtained. Wendy M. Grossman, Speed Traps, Inquirer (U.K.), Jan. 14, 2005 (last visited Jan. 15, 2005). A further dissuasion has been the systematic effort by the recording industry to saturate P2P systems with dummy files that make getting the music a user actually wants quite difficult. See Malaika Costello-Dougherty, Tech Wars: P-to-P Friends, Foes Struggle, PC World, Mar. 13, 2003, at __ (last visited Jan. 15, 2005) (documenting the practice and attributing it to a company called Overpeer, which is apparently an industry anti-piracy company).
[29] See, and