Saturday, 10 May 2008

What can programmers do to avoid buffer overflows

"A buffer is a contiguous allocated chunk of memory, such as an array or a pointer in C. In C and C++, there are no automatic bounds checking on the buffer, which means a user can write past a buffer" (Grover, 2003). Manipulation of the buffer before it is read or executed may cause an exploitation attempt to fail (Lilly, 2002).

The primary defence against common buffer overflow problems is to impede significant system attacks before they occur. Although there is no panacea for all possible attacks, the methods below make it more difficult to create or exploit a buffer overflow and thereby compromise the stack.

Both Viega & Messier (2003) and Grover (2003) note a number of protective actions that may reduce the likelihood and impact of a buffer overflow:

  1. Write secure code: Buffer overflows are the consequence of writing more data into a buffer than it can hold. "C library functions such as strcpy(), strcat(), sprintf() and vsprintf() operate on null terminated strings and perform no bounds checking. gets() is another function that reads user input (into a buffer) from stdin until a terminating newline or EOF is found. The scanf() family of functions also may result in buffer overflows"[1]. One of the most effective means of mitigating this risk is to block overflows at the source by preferring length-checked alternatives such as fgets() and snprintf() (see the C sketch at the end of this post).

  2. Stack execute invalidation: Because malicious code arrives as an input argument to the program, it resides in the stack and not in the code segment. The attack can be mitigated by not allowing the stack to execute any instructions: any code that attempts to execute other code residing in the stack will cause a segmentation violation. This is not a simple solution, however.

  3. Compiler tools: Modern compilers and linkers alter the method by which a programme is compiled. This may allow bounds checking to go into compiled code automatically, without changing the source code (Viega & Messier, 2003). These compilers generate code with built-in safeguards that try to prevent the use of illegal addresses, and any code that tries to access an illegal address is not allowed to execute. One such tool is StackGuard, which detects and prevents "stack smashing" attacks by shielding the return address on the stack and stopping it from being altered: it places a canary word next to the return address whenever a function is called. If the canary word has been altered when the function returns, an attempt has been made to overflow the buffer, and StackGuard responds by emitting an alert and halting the program.

  4. Dynamic run-time checks: The application is allowed only restricted access, which aids in the prevention of attacks. This technique chiefly relies on "safety code" being preloaded before an application is executed (Grover, 2003). The preloaded component may present a protected version of the standard insecure functions, or it may ensure that return addresses are not allowed to be overwritten (Viega & Messier, 2003).
[1] Grover, 2003
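
To make point 1 above concrete, the following is a minimal C sketch contrasting an unchecked copy with a bounds-checked one. The buffer size and the input handling are illustrative assumptions, not drawn from the cited sources:

    /* sketch: unsafe vs. bounds-checked string handling in C */
    #include <stdio.h>
    #include <string.h>

    #define BUFSIZE 16                 /* illustrative buffer size */

    /* UNSAFE: strcpy() performs no bounds checking, so any input longer
     * than BUFSIZE - 1 characters writes past the end of buf. */
    void copy_unsafe(const char *input)
    {
        char buf[BUFSIZE];
        strcpy(buf, input);
        printf("unsafe copy: %s\n", buf);
    }

    /* SAFER: snprintf() writes at most sizeof(buf) bytes and always
     * null-terminates, truncating over-long input instead of overflowing. */
    void copy_bounded(const char *input)
    {
        char buf[BUFSIZE];
        snprintf(buf, sizeof buf, "%s", input);
        printf("bounded copy: %s\n", buf);
    }

    int main(void)
    {
        char line[256];
        /* fgets() is the bounds-checked replacement for gets() */
        if (fgets(line, sizeof line, stdin) != NULL) {
            line[strcspn(line, "\n")] = '\0';  /* strip the trailing newline */
            copy_bounded(line);   /* copy_unsafe(line) could smash the stack */
        }
        return 0;
    }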

Friday, 9 May 2008

Risk Party X

First-party risks are simply those which primarily concern the organisation, whereas third-party risks concern parties external to the organisation. Volonino & Robinson (2004, p. 48) differentiate the two in that first-party risks impact the organisation itself, whereas third-party risks create liability through legal redress such as a lawsuit.

Any risk which impacts the organisation's bottom line, damages its reputation or otherwise devalues the organisation is a first-party risk. A third-party risk is one which involves others external to the organisation, such as the organisation's partners, competitors or customers.

Examples of first-party risk include any which impact the organisation's bottom line directly, such as electronic fraud or online theft. One instance is the compromise of Citibank by Russian attackers in the early 1990s, where USD 10 million[1] was stolen through an unauthorised electronic transfer. There was no direct impact on the customers of Citibank, and the reserve funds of the bank did not fall below the required level. As such, there was no third-party impact or loss.

The next example (or, more correctly, set of examples) of a first-party risk involves the many parties who had their web sites defaced and subsequently listed on the AntiOnline defacement mirror. Though there was a large amount of public embarrassment for many of these sites, these incidents did not involve any realisable or actionable third-party costs.

Concerning third-party risk, one of the earliest and worst computer incidents did not involve hackers at all; it was a software controls and design failure. The Therac-25 system was created by a single programmer who revised code from the earlier Therac-6 systems (Leveson & Turner, 1993). It was a PDP-11 based system which controlled a CS-3604 x-ray source. Over a 19-month period between 1985 and 1987, six people were irradiated with massive doses of x-rays, and in each case severe physical damage or death resulted. The risk arose from a control failure which allowed a single programmer to write, test and review a single set of code. This was one of the worst third-party risks: not only were three people seriously maimed, but three more died as a direct consequence of a control failure.

There are multiple examples of third-party risk. The Privacy Rights Clearinghouse (PRC) maintains a register detailing the number of personal records "involved in security breaches", which stands at close to 100 million breached records thus far[2]. The PRC has documented and accounted for security breaches ever since the ChoicePoint episode[3] was publicly disclosed in February 2005. This demonstrates the level of control failure which currently surrounds us all.

[1] FraudWatch - Chip&Pin, a new tenner (USD10) http://www.financialcryptography.com/mt/archives/000673.html
[2] http://www.technewsworld.com/story/53222.html
[3] http://www.newsobserver.com/104/story/493117.html

Thursday, 8 May 2008

Taming the Wild Wild Web

In many ways, although this is slowly changing, the Internet and Web have many parallels to the ideal of a frontier. The Wild Wild Web (west) of the Internet (Behan, 1995) is slowly fading as new laws and methods of enforcement are brought to bear.

In this frontier world of the Internet, the mythology of the antihero has played a large part in the cultural development surrounding the Internet. In this analogous context, people such as Simon Vallor play the role of the western hero. Like Butch and Sundance in the US, or the Kellys in Australia, the role of the outlaw holds a particularly strong psychological enticement for those who feel disenfranchised (Zur, 1991).

Through the creation of computer code to wreak digital havoc, the antihero makes his or her stand against society by thrusting themselves into the limelight. Like the outlaws of old, their reputation requires that they be caught. By making an example of them in the public press, and by lending a mythological level of intrigue and technological magic to the simple acts they commit, the common press promulgates this analogy (Bowser, 2004).

To burst this bubble, we need to demystify the antihero. We need to show them for what they are. People like Kevin Mitnick, for instance, have grown in infamy through their exploits (Littman, 1997). However, all they have done is break the law. Mr Mitnick was a simple confidence trickster, skilled in the art of deception. Why do we reward this?

Destruction is easy; creation is difficult and requires skill. By allowing the hacker antihero mythos to survive we allow this disenfranchisement of our rights and society's rules to occur.


References and further reading:

Behan, Catherine (1995) “Taming the wild, wild Web” [April 25, 1996] The University of Chicago Chronicle, University of Chicago Vol. 15, No. 16
Bell, D. Elliott & LaPadula, Leonard J. (1973). "Secure Computer Systems: Mathematical Foundations". MITRE Corporation.
Bell, D. Elliott and LaPadula, Leonard J. (1976). "Secure Computer Systems: Unified Exposition and MULTICS Interpretation". MITRE Corporation.
Bell, David (December, 2005). "Looking Back at the Bell-La Padula Model". Proc. 21st Annual Computer Security Applications Conference.
Bishop, Matt (2003). “Computer Security: Art and Science”. Boston: Addison Wesley.
Biba, K. J. (1977) “Integrity Considerations for Secure Computer Systems, Technical Report” MTR-3153, MITRE Corporation, Bedford, Massachusetts, April 1977.
Bosworth, Seymour & Kabay, M. E. (Ed.) (2002) “Computer security Handbook” Fourth Edition, John Wiley & Sons Inc. USA
Bowser, Diane J. (2004) “Being-in-the-Web: A Philosophical Investigation of Digital Existence in the Virtual Age.” PhD Dissertation proposal, Duquesne University
Casella, George & Berger, Roger L (2002) “Statistical Inference” Duxbury Advanced Series
CSI/FBI (2006) “Computer Crime and. Security Survey” http://www.gocsi.com/
DTI (2006) “A Director’s Guide, Information Security” Dept. of Trade and Industry UK
ISO 17799:1/17799:2 Standards Australia
Leveson, Nancy & Turner, Clark S. (1993) “An Investigation of the Therac-25 Accidents” IEEE Computer, Vol. 26, No. 7, July 1993, pp. 18-41
Littman, Jonathan, (1997) “The Watchman: The Twisted Life and Crimes of Serial Hacker Kevin Poulsen” Little, Brown and Company; 1st edition
McLean, John. (1994). "Security Models". Encyclopedia of Software Engineering 2: 1136–1145. New York: John Wiley & Sons, Inc.
NIST (800-12) “An Introduction to Computer Security: The NIST Handbook” (Special Publication 800-12)
NIST (800-27) “Computer Security” (Special Publication 800-27)
NIST (800-30) “Risk Management Guide for Information Technology Systems” (Special Publication 800-30), 2002
NIST (800-41) “Guidelines on Firewalls and Firewall Policy” (Special Publication 800-41)
NIST (800-42) “Guideline on Network Security Testing” NIST Special Publication 800-42
Panko, Raymond R. (2004) “Corporate Computer and Network Security” Pearson Prentice Hall, NJ
Rice, John A. (1999) “Mathematical Statistics and Data Analysis” Duxbury Press
Shimomura, Tsutomu & Markoff, John (1996) “Takedown: The Pursuit and Capture of Kevin Mitnick, America's Most Wanted Computer Outlaw-By the Man Who Did It”, Warner Books Inc
Stein, L. D. (1998) “Web Security”, Addison-Wesley
Volonino, Linda & Robinson, Stephen R. (2004) “Principles and practice of Information Security”, Pearson Prentice Hall, NJ
Wells, Joseph T, (2004) “Corporate Fraud Handbook” ACFE, John Wiley & Sons
Zur, O. (1991). The love of hating: The psychology of Enmity. History of European Ideas, 13(4), 345-369

Wednesday, 7 May 2008

QA and Patching

System administrators are often reluctant to apply patches when they are first released. One of the major reasons for this hesitation stems directly from the large number of reissued patches. Due to past (and occasionally still present) failures to adequately test software, system patches have been known to introduce more problems than they fix.

It is often also the case that a vendor's fix for one product can adversely impact other products. As a result, an administrator has to go through test or development stages, followed by a quality assurance (QA) phase, prior to introducing the patch into a production environment.

Failure to adhere to this process from the system development life cycle (SDLC) creates additional risk and unpredictability in software and solutions. As the number of systems and sites that follow an SDLC-based regime is minimal, most systems administrators fear and avoid patching systems.

Even in an environment where the system administrator wants to implement a test and QA environment, management is often loath to provide the necessary funding.

Tuesday, 6 May 2008

The use of IRQ and I/O Ports in Networking

I/O Addresses
The computer's memory map uses certain memory addresses for selected tasks and hardware allocation. These I/O addresses are conventionally written in hexadecimal format (Sweet, 2005).
The routines for accessing I/O ports are maintained within /usr/include/asm/io.h. When programming any Unix or Unix-like O/S (such as Linux), a byte is read from a port by calling inb(port), which returns the byte read; a byte is written by calling outb(value, port), where both the value and the I/O port must be specified. Thus, to input a word (2 bytes) from ports X and X+1, one byte is read from each port and combined into the word using the call inw(X). Output of a word through the same two ports is achieved using outw(value, X).[1]
The manual pages for ioperm(2), iopl(2), and the other macros give further details of their use.
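
As a hedged sketch of these calls on x86 Linux (the base address 0x378 and the register layout are assumptions taken from the parallel-port discussion below; the program must run as root):

    /* sketch: byte I/O on an assumed parallel port base using inb()/outb() */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/io.h>                /* ioperm(), inb(), outb() with glibc */

    #define BASE 0x378                 /* assumed base address (/dev/lp1) */

    int main(void)
    {
        /* request user-space access to the three parallel-port registers */
        if (ioperm(BASE, 3, 1) != 0) {
            perror("ioperm");          /* fails unless running as root */
            return EXIT_FAILURE;
        }
        outb(0xAA, BASE);              /* write one byte to the data register */
        unsigned char status = inb(BASE + 1);   /* read the status register */
        printf("status register: 0x%02X\n", status);
        ioperm(BASE, 3, 0);            /* release access to the ports */
        return EXIT_SUCCESS;
    }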

IRQ (Interrupt Request) Lines
An IRQ is an asynchronous signal sent to the microprocessor to advertise the completion of a task. IRQs are the direct access path from hardware and other peripherals to the main computer (CPU) and allow devices linked to the computer to signal the CPU to request CPU time. Three of the 16 IRQs are dedicated to the main system board as the system timer, keyboard, and memory parity error signal (Tackett, 1998; Minnich, 2000).
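
On a Linux host, the kernel's current IRQ assignments can be listed by reading /proc/interrupts; a minimal sketch:

    /* sketch: print the kernel's IRQ table from /proc/interrupts */
    #include <stdio.h>

    int main(void)
    {
        FILE *fp = fopen("/proc/interrupts", "r");
        if (fp == NULL) {
            perror("fopen");
            return 1;
        }
        char line[512];
        /* each row shows the IRQ number, per-CPU counts and the device */
        while (fgets(line, sizeof line, fp) != NULL)
            fputs(line, stdout);
        fclose(fp);
        return 0;
    }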

The parallel port
The parallel port's base address is its I/O address. This is generally (although not always) 0x3bc for /dev/lp0, 0x378 for /dev/lp1, and 0x278 for /dev/lp2. Extended bidirectional mode, the ECP/EPP modes and the IEEE 1284 standard in general treat this somewhat differently, leaving programmers to write a kernel driver if they need to support ECP/EPP printer modes (Frisch, 1995).

Serial port
RS-232 is the most common serial port protocol. The standard Linux serial drivers are adequate for nearly all applications. The IRQ and I/O addresses are used by Linux in kernel mode to communicate with the serial ports. As the serial port protocol and drivers are exposed by Linux to user mode, it is unlikely that direct calls to the hardware would need to be made: a system call is made when access to an I/O device or a file (such as read or write) is required (Doell, 1994).
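
To illustrate the user-mode path, the sketch below opens a serial device and issues ordinary system calls; the device name /dev/ttyS0 and the 9600 8N1 settings are assumptions for illustration:

    /* sketch: open the first serial port and write through the Linux driver */
    #include <stdio.h>
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);   /* assumed device */
        if (fd < 0) {
            perror("open /dev/ttyS0");
            return 1;
        }
        struct termios tio;
        tcgetattr(fd, &tio);                 /* start from the current settings */
        cfsetispeed(&tio, B9600);            /* 9600 baud in both directions */
        cfsetospeed(&tio, B9600);
        tio.c_cflag &= ~(PARENB | CSTOPB | CSIZE);
        tio.c_cflag |= CS8 | CLOCAL | CREAD; /* 8N1, ignore modem control lines */
        tcsetattr(fd, TCSANOW, &tio);
        /* the write() system call hands the bytes to the kernel-mode driver,
         * which services the port's IRQ and I/O addresses on our behalf */
        write(fd, "hello\r\n", 7);
        close(fd);
        return 0;
    }
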
How this relates to networking
Network I/O is unpredictable (Loukides, 1992); traffic flows are rarely synchronous or consistent, requiring the system to sort multiple I/O requests efficiently. IRQs are used by the system to allocate CPU time to the various input streams, while I/O ports are used to transfer the bytes and assemble them into words.
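
One user-mode expression of this unpredictability is I/O multiplexing. The sketch below waits until a descriptor becomes readable; stdin stands in for a network socket, and the five-second timeout is an arbitrary assumption:

    /* sketch: waiting on unpredictable input with select() */
    #include <stdio.h>
    #include <sys/select.h>
    #include <unistd.h>

    int main(void)
    {
        fd_set readfds;
        struct timeval tv = { 5, 0 };        /* wait at most five seconds */

        FD_ZERO(&readfds);
        FD_SET(STDIN_FILENO, &readfds);      /* stdin stands in for a socket */

        /* block until data arrives or the timeout expires; the kernel's
         * interrupt handling wakes the process when a device signals */
        int ready = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &tv);
        if (ready < 0)
            perror("select");
        else if (ready == 0)
            printf("no input within the timeout\n");
        else if (FD_ISSET(STDIN_FILENO, &readfds))
            printf("descriptor is readable\n");
        return 0;
    }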

[1] Fraser et al (2004)

Monday, 5 May 2008

The Impact of US law in .AU

The Gramm-Leach-Bliley Act[1], the Sarbanes-Oxley Act[2], and the USA PATRIOT Act[3] all have an effect on security administration within Australia for a number of reasons, not the least of which is that many multinational firms with both Australian and US foundations are subject to a number of US jurisdictional controls.

The Gramm-Leach-Bliley Act has effect due to its impact on the international financial community. Not only are banks and financial institutions based in the US impacted, but those institutions which wish to deal with the US are also caught in this net. Due to the size of the US economy, this legislation has, at least to a limited extent, impacted the security administration of all Australian financial institutions.

Sections 302 and 404 of the Sarbanes-Oxley Act (not to mention Sections 802, 1102 and others) have an impact which covers multinational firms. This Act affects the financial regulation not only of US companies but of any that raise funds through the US, for example through a US-based institutional bond raising or issue.

The USA PATRIOT Act again has some influence, but to a limited extent. As many telecommunications companies, health care companies and defence contractors (to name just a few examples) deal extensively with the US, they are impacted by this legislation.

Lastly, through international trade agreements and government alliances, the advance of these legislative instruments has a political effect within Australia. The promotion of these Acts has resulted in similar changes to Australian legislation: changes to the Evidence Act, antiterrorism laws and accounting changes such as the AASB and "force of law" auditing standards have resulted from direct international influence.

Although these rules have created compliance concerns, in many instances they have done little to promote increased computer security. Their focus on selected areas such as financial data and privacy has created or widened gaps within the other areas of an organisation's control structure. In some cases the ideal of advancing computer security has been achieved. For the most part, however, the addition of a compliance regime has created an industry of tick-box compliance that is more concerned with the letter of the law than with its intent.

[1] Gramm-Leach-Bliley Act 15 USC, http://www.ftc.gov/privacy/glbact/glbsub1.htm
[2] Sarbanes-Oxley Act 2002 http://www.legalarchiver.org/soa.htm
[3] The Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act of 2001 (Public Law 107-56), commonly known as the USA PATRIOT Act or simply the Patriot Act, http://thomas.loc.gov/cgi-bin/bdquery/z?d107:HR03162:%5D