Saturday, 12 July 2008

More on DNS

Shame I have not finished the DNS paper - I will endeavor to get this done before August.

Having tested 5,000,000 IP addresses earlier this year and found 264,125 DNS servers, over 200,000 of which had not been correctly patched, I do not see the randomness issue as the biggest concern.

Yes, many of the most vulnerable systems I recorded are small sites and home users, but they are still an issue. Worse, I found over 25,000 systems in this set that can be remotely compromised. If an attacker wants to build a large botnet, an adaptive DNS scanner will work wonders. It would need to account for many attacks, as I found over 50 versions of BIND running (including some version 4s), but it would be doable.

Of course they could already be a part of a botnet...

So the issue is not so much new versions of old attacks, but the fact that most people do little anyway.

What I did get from the scan:

DNS          [2]       [3]       [4]
ISC BIND     36.77%    23.85%    68.55%
Microsoft    78.31%     0.00%    16.56%
Tiny DNS     38.91%     0.00%     2.22%

DNS          [5]       [6]
ISC BIND     79.55%    21.87%
Microsoft    84.15%    15.50%

What I found was that approximately 75% of servers are vulnerable at some level. Worse, over 16% of DNS servers are still vulnerable to a root compromise.

The big issue was that huge numbers of DNS servers are not patched against root-level compromise. If this issue gets people patching - great - but I doubt it. Much work on my paper is still needed. However, the issue remains patching.

Thursday, 10 July 2008

DNS Issues

My response to the Shocker DNS spoofing vuln .
This was one of the many issues I noted in a 2000 report on DNS. ICANN stated my "study was flawed" and got the lawyers involved.

I did a follow-up of this in 2005/2006 and submitted a paper to the IEEE, which rejected it. It was "sensationalist" and "overly theoretical", and, best of all, one reviewer stated "This could never be exploited on a real system" and that I "obviously have no idea of how DNS works".

I am publishing an updated paper for a SANS GCIA Gold attempt that is coming out later in the year on this and a number of other DNS attacks.

On top of this I expanded the testing. In 2005 I tested 2,500,000 servers. Earlier this year I ran a test of 5,000,000 systems. This will be in the SANS paper.

The paper will address:

1. Whether the levels of security (based on patching practices) have improved since 2000 and 2005;
2. How the TLD[1] and Australian servers compare to the general population of DNS servers worldwide;
3. How secure the Internet is, based on the overall level of DNS security.

In fact, this vulnerability looks suspiciously like the one from way back in 1997. There was a theoretical paper that I used in my 2000 report; from what it described and what I have read of the current issue, that paper describes it to a T.

The paper is still available at:

Based on the query ID and source port, and the man-in-the-middle angle, we still need the details, but it seems awfully similar...

[1] TLD, Top Level Domains

Wednesday, 9 July 2008

Child Pornography and Obscenity

Any work that depicts the sexual behaviour of children is classified as child pornography. The anonymity and ease of transfer provided by the Internet have created an international problem with child pornography[1]. The increasing pervasiveness of chat rooms, instant messaging (IM) and Web forums[2] has increased the potential for sexual abuse of children. The use of chat rooms by paedophiles to sexually abuse children by starting relationships with them online is widespread. This normally involves befriending the child, building a stable rapport and then steadily exposing the child to pornography by means of images or videos containing sexually overt material.

Additionally, the Internet has increased how readily available pornography is to children. Offline, children's access to pornographic magazines and adult films can be guarded, making it difficult for them to obtain illicit materials. As many parents are less computer-literate than their children, it is often difficult for them to stop their children from downloading pornography over the Internet. Further, freely available pornographic publications in open areas such as newsagents are controlled through legislation and are only allowed to contain "soft" pornography.

There are few restraints on publishing pornography on the Internet. In fact, "hard-core" pornography is legal within many countries. For example, Denmark[3] has legalised every category of pornography (except child pornography), allowing it to be produced, sold, displayed in cinemas to persons who are 16 years or older, and published on the Internet. This includes extreme violence and bestiality. The availability of pornography from these jurisdictions aids in its distribution between school children. An immense amount of obscene matter concerning children is also available. R v Smith and R v Jayson[4] were heard jointly in the Court of Appeal. The Court addressed what constitutes "making a photograph or pseudo-photograph" for the purposes of s.1(1)(a) of the Protection of Children Act 1978. In Jayson it was held that the act of voluntarily downloading an indecent image from the Internet to a computer screen constitutes "making." Similarly, in Smith it was held that opening an e-mail attachment enclosing an indecent picture could comprise "making." The necessary mens rea in each case is that the act of "making" be a conscious operation, with awareness that the picture was, or was likely to be, "an indecent photograph or pseudo-photograph of a child". It was demonstrated that it is not necessary to prove an intention to store the image in order to fulfil the prerequisite of mens rea.

The Obscene Publications Act 1959 UK[5] [the "1959 Act"] relates to media with the potential "to deprave and corrupt persons who are likely, having regard to all relevant circumstances, to read, see or hear the matter contained or embodied in it"[6]. The volume of case law[7] defining obscenity has created a range of classifiers that, taken as a whole, indicate a propensity to deprave and corrupt the kinds of individuals who have witnessed the material. Because they are more easily influenced, children face greater peril. Print-based "hard-core" pornography can be limited, whereas digital pornographic images on the Internet are readily available and require additional measures to restrict access. The Criminal Justice and Public Order Act 1994[8] [the "1994 Act"] was enacted to include obscene images stored or broadcast as electronic data.

The 1959 Act defines the publication[9], or possession with the intention of publication for gain, of an obscene item to be a criminal act. The additions introduced by the 1994 Act mean that an ISP or ICP could face prosecution for the publication of obscene material introduced through an intermediary without consent, as the 1959 Act does not require that the defendant had the intent to deprave or corrupt. If the ISP can argue that no examination of the offending media took place and that no reasonable cause to suspect an obscenity existed, it has a defence to the charge. However, a notification and a subsequent failure to act within a reasonable time would remove this protection. The widely held knowledge of the types of material being disseminated across the Internet would make the introduction of monitoring software prudent.

More crucially, the Protection of Children Act 1978[10] (as revised by the 1994 Act) makes it a crime "to take, or permit to be taken or to make, any indecent photograph or pseudo-photograph of a child", "to distribute or show such indecent photographs or pseudo-photographs" or to have possession of "such indecent photographs or pseudo-photographs". The revisions of the 1994 Act extended the definitions to include any "data stored on computer disk or by other electronic means which is capable of conversion into a photograph" with the introduction of the expression "pseudo-photograph". The Act also extends the definition of child to include any image where the principal sense derived from the image would lead one to believe that the picture is of a child, whether or not the person (or representation[11]) in the image was actually a child. The nature of the images must be "indecent"[12] to fall within the provisions of the 1978 Act. The danger for an ISP or ICP is that mere possession is all that is required to be prosecuted under this Act, leaving it possible for both the content owner and the service provider to be jointly charged. Child pornography is also covered by the Criminal Justice Act 1988[13]. The possession of an indecent photo of a child is an offence under that Act, which was also amended by the 1994 Act.

Under the Telecommunications Act 1984[14] it is an offence to transmit from the UK any communication of a grossly offensive, indecent, obscene or menacing character by means of a telephone. As the definition of communication includes data transmissions sent by modem, Internet transmissions are also included. An Internet service provider would not normally be affected by this Act, as it is aimed at the instigator of the message containing the illicit material. However, the increasing use of VoIP[15] and the associated capability to record and replay communications could place a service provider at risk if they came to know of an illicit transmission and did not act to mitigate it.

The Indecent Displays Act[16] created the offence of publicly displaying indecent material. The individual who creates an indecent display, as well as anyone who causes or permits such a display, can be found guilty of an offence. Display is defined under the Act as being visible from any public place, including free Internet transmission. Section 1(3) states that requiring payment to access the material means that such a site is not on public display; thus a pay-per-view pornographic website is not covered by the Act. The Act applies to both individuals and organisations.

The Sexual Offence (Conspiracy and Incitement) Act[17] made it an offence to conspire or incite others in the UK to perform sexual offences outside of the UK. Under this Act, the foreign poster of an Internet communication comprising an incitement under the act could be prosecuted in the UK. A service provider or other organisation with knowledge of such a transmission who subsequently fails to act could face both criminal and civil action.

The US Congress tried to address the problem of the ease of access to this type of material by children through the Telecommunications Act of 1996. Title V of the act (commonly known as the Communications Decency Act, CDA) included provisions intended to regulate the dissemination on the Internet of material deemed inappropriate for minors. Shortly afterwards, however, the Supreme Court struck down sections 223(a) and (d) in Reno v. American Civil Liberties Union[18]. The result of these and subsequent cases is that there is no clear "community standard" defining obscenity. Child pornography, however, has been clearly held not to be expression protected by the First Amendment. The Internet has provided offenders with greater access to obscene materials and even aids paedophiles in the solicitation of children.

The issue of free speech protections in the US does not preclude being prosecuted in a jurisdiction with extremely stringent standards (such as China) for matter that would not be deemed offensive in its homeland. This would be of greatest concern to the most significant service providers that have multinational operations and thus may face International actions[19].

An alternative option to limit child pornography over the Internet is to target payment intermediaries. These organisations allow it to remain profitable to sell child pornography across the Internet. Even though a great quantity of pornography is distributed through non-commercial transactions[20], commercial sites are a key supplier of child pornography over the Internet, and these commercial sources could be curtailed by targeting payment intermediaries. As commercial pornographic distributors commonly require credit card processing, and this information must be held in a database before access to the service is granted, the credit card both ensures payment for the service and authenticates the client's age. This approach thwarts many of the issues a site could be exposed to if it permitted minors to access pornographic material.[21] Thus access to credit card processing is vital to the operation of a commercial website offering pornography[22].

[1] The exploitation from child pornography can result in far-reaching negative effects and suffering. Those involved in the child pornography trade often entice troubled or disabled children with pledges of pecuniary or other payments. Children who are victims of sexual exploitation may undergo lifelong depression, emotional dysfunction, fear and anxiety.
[2] such as Facebook and chat rooms.
[3] Quimbo, Rodolfo Noel S (2003) “Legal Regulatory Issues in the Information Economy”, e-ASEAN Task Force, UNDP-APDIP (MAY 2003); See also, JT03220432 (2007) “Mobile Commerce” DIRECTORATE FOR SCIENCE, TECHNOLOGY AND INDUSTRY COMMITTEE ON CONSUMER POLICY DSTI/CP(2006)7/FINAL, 16-Jan-2007
[4] 2002 EWCA Crim 683 (No. 2001/00251/YI)
[5] Obscene Publications Act 1959, UK; see also Obscene Publications Act 1964, UK
[6] Ibid, S 1.1.
[7] Case law on obscenity predates the Internet and may be extrapolated from the large amount of case law concerning mail order pornographic material, video tapes and printed media.
[8] Criminal Justice and Public Order Act (UK) 1994 CHAPTER 33
[9] Publication includes of any variety of sale, distribution or performance.
[10] The Protection of Children Act 1978 (UK).
[11]The Act includes computer-generated and manipulated images and if these are significantly similar to the image of a child such that they are likely to be taken to be a child shall be treated as such.
[12] Indecent is different from obscene. Indecency occurs at a reduced level of offensiveness than obscenity. In particular where children are involved a lower standard of offensiveness will be required.
[13] The Criminal Justice Act 1988 (UK).
[14] The Telecommunications Act 1984 (UK).
[15] Voice over IP.
[16] The Indecent Displays (Control) Act 1981. The aim of the Act is to make fresh provision with respect to the public display of indecent matter; to this end, a number of existing statutes dealing with indecent public display are replaced by a new offence, in section 1 of the Act, of publicly displaying indecent matter.
[17] Sexual Offences (Conspiracy and Incitement) Act 1996 (UK). See also Sexual Offences (Conspiracy and Incitement) Act 1996, Sex Offenders Act 1997, Criminal Justice (Terrorism and Conspiracy) Act 1998, Sexual Offences Act 1956.
[18] 521 U.S. 844 (1997).
[19] Yahoo in 2000 lost a case brought by the French Government seeking a ruling to prevent people in France gaining access to websites offering Nazi memorabilia. Yahoo France does not carry the auctions but French internet users can access the company's US site at the click of a mouse. Judge Jean-Jacques Gomez confirmed a ruling that he first issued on May 22 ordering Yahoo to prevent people in France from accessing English-language sites that auction Nazi books, daggers, SS badges and uniforms.
[20] Williams, Katherine S. (2003); File-Sharing Programs: Child Pornography is Readily Accessible over Peer-to-Peer Networks, Testimony Before the Comm. on Gov. Reform, House of Reps. (Statement of Linda D. Koontz, Mar. 13, 2003), at 5 (stating that Usenet groups and peer-to-peer networks are the principal channels of distribution of child pornography).
[21]Pornography websites were channelled into the use of credit cards to verify age in part by the affirmative defence offered by §231 of the Communications Decency Act. 47 U.S.C. §231(c)(1)(A) (“It is an affirmative defence to prosecution under this section that the defendant, in good faith, has restricted access by minors to material that is harmful to minors by requiring use of a credit card, debit account . . . .”).
[22] See id. at 5–6 (concerning a child pornography ring that included websites operating from Russia and Indonesia (content malfeasors located outside US jurisdiction) and a Texas-based firm that supplied the credit card billing and access service for the sites).

Monday, 7 July 2008

Statistical Methods to Determine the Authenticity of Data

I am presenting at CACS (Sydney 2008). This is the ISACA conference. My Presentation (which I am completing to submit tonight) is for session AUD 122.
It is on "Statistical Methods to Determine the Authenticity of Data".

The presentation will address statistical methods including:

  • PCA (Principal component analysis) through to RF (Random Forests),
  • Classification And Regression Trees (CART) and Decision Trees in forensics,
  • Multivariate adaptive regression splines (MARS) in quantitative structure-retention as applied to email header information,
  • Classification and regression tree analysis for email header descriptor selection,
  • The evaluation of Two-Step Multivariate Adaptive Regression Splines for email analysis.
Next the presentation will address text mining techniques that may be used to determine the correlation between events from an anthology of prior events to determine authenticity.
This increases the ability to detect events of interest and limits the error rate.

The development of quantitative methods of analysis to detect tampering with logs offers great promise for the future of security and digital forensics. New methods of quantifying the statistical correlation between events and a log anthology from the subject, using PCA (Principal component analysis), CART decision trees, and MARS predictive modelling to assign the probabilistic likelihood of associating a log event, are expanding the forensic arsenal.

MARS and Regression Tree Analysis may be used together to achieve the best prediction success. The CART model can be difficult to use for cartographic purposes due to its high model complexity, but it also adds to the predictive capability in cases where a large test set (or email anthology) is available.

This creates a methodology that increases accuracy and makes fraud detection easier.

All this and more.

Sunday, 6 July 2008

Issues on licensing in the US

The amendments include the exclusions. To this end, I am going to copy a little piece of another conversation (just my side) into this post as an introduction to legal theory for those who did not study law.

There are a number of methods used by the justices for the interpretation of legislation. Of the available schools, the majority of the current Supreme Court falls into the legal formalism camp. It is their view that there is no need to inspect legislative history or intent; this perspective, they state, is often unreliable and does little to confirm the intended plain meaning of the law. This makes it unlikely, in their view, that the use of interpreted meaning to resolve ambiguity is going to be upheld. They espouse the view that legislative history is not the law and that intent fails under the weight of the letter.

The current state of both the Supreme and Texan courts is that the majority of judges are Textualists and strict constructionists.
“Textualist judges have contended, with much practical impact, that courts should not treat committee reports or sponsors' statements as authoritative evidence of legislative intent. These judges base their resistance to that interpretive practice on two major premises: first, that a 535-member legislature has no "genuine" collective intent concerning the proper resolution of statutory ambiguity (and that, even if it did, there would be no reliable basis for equating the views of a committee or sponsor with the "intent" of Congress as a whole); second, that giving weight to legislative history offends the constitutionally mandated process of bicameralism and presentment”.
[John F. Manning, Textualism as a Nondelegation Doctrine, 97 Colum. L. Rev. 673, 1997]

This is best seen in a comment by Supreme Court Justice Antonin Scalia, who stated that "[i]t is the law that governs, not the intent of the lawgiver." In fact, he later stated:
“The meaning of terms on the statute books ought to be determined, not on the basis of which meaning can be shown to have been understood by a larger handful of the Members of Congress; but rather on the basis of which meaning is (1) most in accord with context and ordinary usage, and thus most likely to have been understood by the whole Congress which voted on the words of the statute (not to mention the citizens subject to it), and (2) most compatible with the surrounding body of law into which the provision must be integrated - a compatibility which, by a benign fiction, we assume Congress always has in mind. I would not permit any of the historical and legislative material discussed by the Court, or all of it combined, to lead me to a result different from the one that these factors suggest.”

Much of this stems from post-WW2 thought. Those such as Professor Cass Sunstein (in "Must Formalism Be Defended Empirically") have argued along the lines of:
“In the Nazi period, German judges rejected formalism. They did not rely on the ordinary or original meaning of legal texts. On the contrary, they thought that statutes should be construed in accordance with the spirit of the age, defined by reference to the Nazi regime. They thought that courts could carry out their task "only if they do not remain glued to the letter of the law, but rather penetrate its inner core in their interpretations and do their part to see that the aims of the lawmaker are realized." ... After the war, the Allied forces faced a range of choices about how to reform the German legal system. One of their first steps was to insist on a formalistic, "plain meaning" approach to law.”

This line of thought has become the predominant line of reasoning in the US.

In addition, the "newer" law in this case explicitly excludes the old one from its domain. That is, the law (1702) states that it does not apply in the case of the others (e.g. 1001). This is explicitly excluded by the terms of 1702 (the PI law).

I have to go into lawyer mode here. The condition is Expressio unius est exclusio alterius. This means that the express mention of one thing excludes all others. In the case of the professional code 1702, the explicit exclusions preclude their inclusion.

The opinion from 2004 remains valid. The issue, as occurs frequently is twofold.

  1. Public servants are commonly no more familiar with the law than any other person, and the opinion of parties who work in a department is not law.
  2. Many people will see this as moot in any event as they do not hold either a PI or other accepted professional license.

For the latter reason, many dismiss the argument as unimportant. This is far from the truth, as there are important precedents that have already been decided in court for the other professions. To narrow the issue into a PI licensing fight is detrimental to the overall goals that people such as Jerry are attempting to achieve.

Jerry’s paper does not account for the exclusions in the code. It should, for it is easier to get the outcome that is being sought (an independent body that is licensed) using these exclusions.

At present, code 1001 in Texas covers forensic engineering. Forensic engineering incorporates the analysis of a computer system for court.

Following the loss and sanction in 1979 for anti-trust (Sherman Act) violations, the engineering board has held a more open determination of aligned professional activities than is going to be achieved through the incorporation of PI acts.

What I have been attempting to convey is that there is going to be a requirement for a license. The other aspect is that a sub-body run by the engineering board that does not require status as a PE but rather acts as a paraprofessional engineer for the conduct of digital forensics is a better option.

Jerry’s take on the accountancy exclusion, "while performing services regulated under Chapter 901", fails to consider that forensic services are included in TX Occupations Code 901. The distinction is that there needs to be a formal engagement letter that adheres to the stipulations of the rules defined by the AICPA. In fact, the AICPA has a defined Forensic & Litigation Services Committee and sub-group.

Working for a member firm that has affiliate offices in Texas, I can categorically state that the accounting profession is acting within the strictures of the law while still conducting digital forensic engagements in Texas. This is without a PI license. This is without being hounded by the Texan Attorney General’s department.

In fact, the Association of Certified Fraud Examiners in Austin, Texas issued position papers detailing the use of “Computer Forensics Procedures and Tools for Fraud Examiners” in 2000. The exclusions under the Texas code have not altered the position of the AICPA or the CFE. In fact, it is a long-held position of the AICPA that examinations into fraud include that the “trail of evidence in an investigation may start with computer login records from a mainframe, server, firewall, or PC, which can substantiate the date and time a user entered or exited a computer system”.

Other than myself (who is known for being outspoken and at times would be better off learning to shut up and not help others in need where there is no personal self interest), you will note a distinct absence of comment from those who are either PEs or CPAs or employed in this manner. There is a reason for this. Both these groups are excluded from the PI bill and are staying out of the media on the issue. They have nothing to win (they are already allowed to practice) and only something to lose by helping.

In fact, there is US federal law that empowers CPAs and PEs with the right and also obligation to investigate fraud. The Securities and Exchange Commission has requirements that preclude PIs who are not CPAs and also does not require the CPA to be a PI.

As for the opinion I quoted dated in 2004, the current position of the National Association of Forensic Engineers states:
“Forensic engineering is the application of the art and science of engineering in the jurisprudence system, requiring the services of legally qualified professional engineers. Forensic engineering may include the investigation of the physical causes of accidents and other sources of claims and litigation, preparation of engineering reports, testimony at hearings and trials in administrative or judicial proceedings, and the rendition of advisory opinions to assist the resolution of disputes affecting life or property.”

For how the engineering board in Texas will apply section 1001 of the code, see:

From this, you can note the board of engineering is far more forgiving than a body of PIs. They are more open to positive actions than the other boards.

As for the comment I have received that there will never be a unified standard for professions across the states in the US, I have to disagree. Both the AICPA and the National Association of Forensic Engineers have a unified exam, and the boards of each of the states apply the rules of the federalised body. So this has already occurred. The precedent is set; it is up to those in digital forensics to follow suit and get our act together.

The issue of exclusion and control came up in the US Supreme Court in cases such as National Soc'y of Prof. Engineers v. United States, 435 U.S. 679 (1978). There the Court held that the Sherman Act "does not support a defense based on the assumption that competition itself is unreasonable" (pp. 435 U.S. 686-696). Because the exceptions leave room for competition, an action to take the Texan law to the Supreme Court on the basis that it suppresses competition will fail. The states have a right to license professional activities within their borders, and this right has been upheld in the US Supreme Court. The exceptions that allow other professions to act mean that the PI law is not, under these interpretations, unlawful and does not violate the Sherman Act.

So, is the plan to get nowhere and argue the same points? To tilt at windmills like Don Quixote (which helps those who advocate PI board control) in fighting for a removal of the PI codes?

Or rather is the effort better directed at formulating an acceptable alternative that is aligned to the needs of digital forensic practitioners?

Waste time fighting my views if you like; it costs me nothing and I have nothing to lose on the point. However, I do have something to add which, perchance, also gives those who are disavowing my help most fervently a win. Tearing down my arguments gains nothing but a loss, and it does not stop me from practicing if I go on secondment with my firm (a firm of CPAs) in Houston. However, if you do listen, I am attempting to offer an alternative approach that may result in the consequences that are desired.

Considerations about exploit tricks

To fix code flaws, you can attempt to fix the stack for a start. You are correct that fixing the multitude of programs and programmers is unlikely and that new ones are "born" every day.

We should all know that code "usually" resides in a read-only (R/O) text area at the start of the program memory. In an ideal world our programs would not execute any instructions off the data stack.

Where the hardware supports it, software can be integrated to stop many of these attacks. AMD64 and Pentium 4 (and newer) CPUs have what is called an NX bit, and the Linux kernel has supported the NX bit functionality since 2.6.8. In Solaris and HP-UX there are kernel switches for this behaviour on RISC chips (e.g. noexec_user_stack=1 in /etc/system on Solaris).

OpenBSD has W^X (3.4 and up), and the grsecurity PaX patches include stack protection from the Adamantix Linux project. Red Hat has "Exec Shield" for this.

On the RISC systems (Solaris, HP-UX, etc.), stack protection prevents executing code off stack pages. This still does not stop heap attacks - but those are another issue.

W^X and PaX (with NX) mark all writable pages as non-executable - even the heap and other data areas, not just the stack. The issue is that many high-level language runtimes (e.g. Java, JSP, etc.) execute runtime-generated code out of the heap. Thus these protections can break Java.

So this is a functionality issue for a start. Many systems (e.g. Internet DNS servers) do not need the extended functionality provided by Java and other high-level languages; in such cases there is a good argument for disabling code from running out of the data areas, stack and heap. On the other hand, users want to browse the web and so want this added feature (i.e. no heap protection).

Alternatively there is another option.

There are compiler-based solutions: adding a "canary" between the frame pointer and the return address in order to create code that is resistant to buffer overflows. With this in place, any buffer overflow exploit that overruns the data area and writes towards the return address pointer will also overwrite the canary value (I will ignore format string attacks here as they make it a little too complex).

In the normal course of program execution, the program checks the canary value. If it has been altered (i.e. by a buffer overflow exploit or an error), the program aborts rather than returning to the memory address given by the return address pointer. This adds an overhead of about 10% to the system, but makes many classical buffer overflows unable to be executed. GCC has this option built in (-fstack-protector and -fstack-protector-all), though it is rarely used. I believe that Novell, from openSUSE 10.3, is building this in, though I have not tried to break it myself.

So to end: there are a number of options. Some work very well, but all have a cost. That may be an increased performance hit, or it may be no Java, but it is possible. So for the original question: PaX helps, but it breaks Java and other pretty user toys.