Saturday, 6 December 2008

Reversing a file with DD

Back to my posts concerning 'dd': in particular, how to reverse a file with dd.

This is actually fairly simple: a small shell script that loops over the length of the file is all that is required. You can either use a larger block size (where the individual blocks will be moved into reverse order), or set the block size to 1 to completely reverse the file. The flag "bs=1" is used to copy the entire file in reverse, byte by byte (dd's bs=1 is one byte, not one bit).

If the size of the file and its name are known beforehand, the script is particularly simple (note that this script uses the 'count' command, which is not found on all systems):

j=[file_size]
F=[file to copy]
for i in `count 1 $j`; do
dd conv=noerror bs=1 count=1 skip=`expr $j - $i` if=$F >> $F.out
done
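Where the 'count' command is missing, GNU seq does the same job. A self-contained sketch of the same loop (the demo file name and contents are just illustrative):

```shell
# Make a small demo file, then reverse it byte by byte using seq instead of 'count'
printf '1234' > demo.bin

F=demo.bin
j=`wc -c < $F`          # work out the size rather than hard-coding it
: > $F.out              # start with an empty output file
for i in `seq 1 $j`; do
  # read one byte at a time, working backwards from the end of the file
  dd conv=noerror bs=1 count=1 skip=`expr $j - $i` if=$F 2>/dev/null >> $F.out
done

cat $F.out              # 4321
```

Note that skip counts in units of bs, which is why bs=1 is needed for a full byte-wise reversal.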

In the event that you do not know the size of the file, the following script can be used (if you want to incorporate this into a script that changes multiple files at once, you will need to feed more information into it, including a file descriptor). This script is a little messy (I have not made any effort to tidy it up), but it does the trick.

#! /bin/bash
# This is a small utility script that will reverse the file that a user inputs
# It is not coded securely and presumes the paths of a number of commands - change
# these to run it in a real environment. It is mainly a proof-of-concept anti-forensic tool.
# By reversing a file, this script makes it undetectable as its original file type by
# commercial file checkers. Run it in reverse to get the original back.
# Author: Craig S Wright

#Set the file to reverse
echo "Enter the name (and path if necessary) of the file you want to reverse:"; read FILE

# Work out the file size
SIZE_OF_FILE=`/bin/ls -l $FILE | awk '{print $5}'`

#The script - not pretty - but the idea was all I was aiming at

i=1
K=`expr $SIZE_OF_FILE - $i`
/bin/dd conv=noerror bs=1 skip=$K if=$FILE count=1 > $FILE.out
i=`expr $i + 1`

J_Plus=`expr $SIZE_OF_FILE + 1`

while [ "$i" != "$J_Plus" ]
do
K=`expr $SIZE_OF_FILE - $i`
/bin/dd conv=noerror bs=1 skip=$K if=$FILE count=1 >> $FILE.out
i=`expr $i + 1`
done

To go a little further and add some options, I have included the following example. I have NOT added input checking or other NECESSARY security controls. This is quick and nasty only. Please fix the paths and add input checking if you want to run it.

The full script is as follows:

#! /bin/bash
# Set the file to reverse - I DO NOT check if the file actually exists - you should!

echo "Enter the name (and path if necessary) of the file you want to reverse:"; read FILE

# Default file output = $FILE.out
FILE_OUT=$FILE.out

# To read the output file name instead, uncomment the next line - I DO NOT check if the file actually exists - you should!
# echo "Enter the name (and path if necessary) of the file you want the output saved as (must be different to the input):"; read FILE_OUT

#Set the Block Size. This will default to bs=1 (one byte) for dd
echo "Enter the Block Size (the default = 1 byte):"; read BS_SIZE
BS_SIZE=${BS_SIZE:-1}

# Work out the file size
SIZE_OF_FILE=`/bin/ls -l $FILE | awk '{print $5}'`

#The script - not pretty - but the idea was all I was aiming at

i=1
K=`expr $SIZE_OF_FILE - $i`
/bin/dd conv=noerror bs=$BS_SIZE skip=$K if=$FILE count=1 > $FILE_OUT
i=`expr $i + 1`

J_Plus=`expr $SIZE_OF_FILE + 1`

while [ "$i" != "$J_Plus" ]
do
K=`expr $SIZE_OF_FILE - $i`
/bin/dd conv=noerror bs=$BS_SIZE skip=$K if=$FILE count=1 >> $FILE_OUT
i=`expr $i + 1`
done

# The end...

To use the previous script, enter (I have not tested the other block size options):
$ ./

Enter the name of the file you want to reverse and the block size (best left at 1 byte). This will return the byte-wise reversed file. If you want to verify it, run it twice and use "diff" to validate that the same file is returned - reversing the reverse gets the original back.
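The round trip described above can be sketched end to end; the file names, contents and the reverse() helper here are just illustrative, standing in for the interactive script:

```shell
# Create a small test file (contents are just an example)
printf 'ABCDEF' > original.txt

# Reverse a file byte by byte with dd
reverse() {
  in=$1; out=$2
  size=`wc -c < $in`
  i=1
  : > $out                       # truncate the output file first
  while [ $i -le $size ]; do
    # copy one byte at a time, starting from the last byte of the input
    dd if=$in bs=1 count=1 skip=`expr $size - $i` 2>/dev/null >> $out
    i=`expr $i + 1`
  done
}

reverse original.txt reversed.txt    # reverse once
reverse reversed.txt restored.txt    # reverse the reverse

cat reversed.txt                     # FEDCBA
diff original.txt restored.txt && echo "Round trip OK"
```

diff exiting silently (and the "Round trip OK" line) confirms that two passes return the original file.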

This works on text and binary files, and with a little tweaking you can reverse headers but leave the body the same, reverse the body after skipping the file header, and many more options.

I am yet to find a forensic tool that will find reversed text...

Thursday, 4 December 2008

SEC709 - Developing Exploits for Penetration Testers and Security Researchers

I was signed up for Stephen Sims' SEC-709 course at NS2008.

The material was great and truly in depth. I have only one issue, which I see as an opportunity for SANS.

At NS2008 it was a 2-day course; at SANS 2009 it has been extended to 4 days (as can be seen from the description on the link).

I applaud Stephen Sims for the course and enjoyed the material greatly, but what I would ask is whether SANS could find some way to update the material for those who have taken a course within a reasonable timeframe (say 6 months) of it being updated like this. Make it an additional cost or service - subscription courseware/books - and I would pay for the added benefit of having these shipped.

In my case, I received my 2-day materials and, as stated, think they are great, but now I see an EXTRA couple of days.

I personally would be more than happy to pay for the extra books and shipping if this were available. Small updates to the courseware occur all the time, and I do not care to have new books every 6-8 weeks, but major changes are another thing.

So the question I would pose to SANS is: is there a way to update students with newer materials if they have taken a course and it is changed significantly in less than 6 months?

Tuesday, 2 December 2008

What am I up to?

At the moment I am working on a number of things. I am working on a few GIAC Certifications.

I have signed up for the GSSP-C and GCMP exams. On top of this I am renewing my GCFA certification (it is coming up on 4 years soon).

These are:

  • GSSP-C - GIAC Secure Software Programmer - C
  • GCMP - GIAC Certified Project Manager Certification
  • GCFA - GIAC Certified Forensics Analyst
I have completed a PMP course and done a postgraduate-level project management course, so I do not think I shall need more than a little refresher for the GCMP. The GCFA is a part of my daily forensic work, so no issues there - besides, I just had to do all of this for the GSE-Malware exam a few weeks ago, so it is still fresh.

The GSSP-C is a little more work. I have been reviewing "The CERT C Secure Coding Standard" by Robert Seacord (Addison-Wesley, October 2008). This is an excellent book, if a little dry. It is not the easiest read and packs a lot into its covers.

I am starting another masters degree (as I have posted) soon. I would like to sit the C secure coding exam before I start a course that is primarily focused on C#. I also plan to sit the GSSP-J secure Java coding exam in 2009.

On top of this, I have my GCUX Gold paper to complete before the end of the year. It is on the way, but messy at the moment. I have the 100 Unix commands, but there is a great deal more I would like to add.

On top of this, I present my (and my co-authors'*) paper on disk wipes and recovery at ICISS08 in a couple of weeks.

Well, back to the millstone.

* The paper is "Overwriting Hard Drive Data: The Great Wiping Controversy" by Craig Wright, Dave Kleiman and Shyaam Sundhar R.S.

WOW people see public information!

Today there is an uproar in the Australian "Financial Review", in an article titled "Click goes your identity as thieves perfect online scam". This is clearly FUD - ignorance at its most blockheaded.

This is based on a fake LinkedIn invite from "Mr James Packer". It was clearly fake, as the invite used PBL as the company when Packer had already left.

My view: it was either a test or a recruiter. Either way, I do not refuse many invites. The information is entirely public as far as I am concerned. Between my blogs and other sources, it is there for the taking. So why the uproar? Ignorance. There is a big difference between private and public information. That which is private does not go near the cloud. That which is public is open.

As for the reams of commercially sensitive information - it is there already. There are many "LION" open linkers who have tens of thousands of connections. So where is the difference?

The treasure trove of information is in effect an online CV. As a consultant I hand mine around with abandon, so I still fail to see the point. Nothing on the site furnishes you with a credit card. There is too little information to forge a loan.

Besides, searching will gather just about all you could hope to find anyway. A combination of Google and LinkedIn gets you all the same information - but without alerting the other person.

Get a clue and stop the FUD.

Monday, 1 December 2008

Application "HotSpots"

A great tool in the binutils package is gprof (the GNU Profiler). gprof is a profiling program that collects and arranges statistics on a program.

The GNU Profiler is a tool that can be used to perform an analysis of a program's execution. In secure code analysis and audit, malware analysis, and other code analysis work, it may be used to determine the program's "hotspots". These are the functions that require more processing time or run time than the rest of the application.

Hotspots are commonly the functions that are the most mathematically or I/O intensive. Where a program uses a crypto function, for example, the processing time will increase.

Timing studies may also be used in the analysis of a cryptographic key.

gprof is not a debugger
You need a working program in order to profile it. Malware analysis requires that you first reverse the code and then reassemble or recompile it with profiling instrumentation enabled (for example, gcc's -pg flag). This is not an easy task and I am not going to cover it in this post.

To run gprof use the following syntax:

gprof options [executable-file [profile-data-files...]] [> outfile]
There are a number of ways to profile a program. These are listed below:

Time Profilers:
  • These provide statistics as to where a program has spent its time
  • This may also be used to determine which functions called which other functions while it was executing
  • The profiler will count how many times a program function was called and which function called it.
Space Profiler:
  • This is also known as “heap profiling” or “memory profiling”
  • Space profiling is useful to help you reduce the amount of memory your program uses
  • Space profiling stops the execution and examines the stack whenever a page of memory is allocated
  • This allows it to collect data concerning the function that has been requested.
Once the data has been collected by a profiler, an interpreter needs to be used to convert the data into a usable format. There are both text and graphical display options with gprof.

The output from gprof may be displayed in three report formats:
  1. A flat profile listing total execution times and call counts for all functions,
  2. A list of the functions called sorted using the time associated with each function (and the children of that function),
  3. A list of the cycles. This displays the members of the cycles and the associated call counts.
The output is sent to standard out by default, but may be redirected to a file etc.

Sunday, 30 November 2008

Masters of System Development

I completed my Masters of Information Systems Security at Charles Sturt University last week. This was my third Masters degree with CSU (which is a fraction of the total). I am starting my fourth in January 2009 and have been running through CBTs on C# this weekend.

I am starting the Masters in Systems Development next. I have Java, PHP, Perl, Assembly, C and C++ language experience, but have not done a great deal with C#. I have the SANS GNET certification and some knowledge of .NET, so this is a beginning.

Since I have already taken many of the compulsory subjects, I have an unusual collection of subjects for the degree. These are:

University Subjects

Industry Subjects
  • ITI530 Application Development Fundamentals
  • ITE501 Windows Forms Application Development
  • ITE502 ASP.NET Application Development
  • ITE503 ADO.NET Application Development
  • ITE504 Windows Communication Foundation Application Development
  • ITE505 Enterprise Applications Development
My main focus will be the material in ITC584, which is in machine learning.

ITC584 Machine Learning

Machines "learn" by retaining facts with meaningful links between them. This subject explores current approaches to machines acquiring, storing, linking and using facts.

I have approached this from the statistical side of the equation; now I wish to learn it from the data and computing side. Though both are related, it is of interest to see the different approaches to the same issues and problems.