Saturday, 19 December 2009

Quota-based selection: discrimination by any other name.

I will use the US figures, as those in Australia are a little funky to say the least, and the data that have been collected have not been collected well.

That stated, the Center for Women’s Business Research analysed the number of businesses owned by women in the US. The rate of growth of new businesses owned by women was found to be double that of all new US businesses (in the 1997-2002 period), and hence women are gaining ground in entrepreneurship by taking action and starting their own firms.

The US SBA (Small Business Administration) reported that businesses with a substantial female ownership account for 28 percent of all privately-owned businesses. For sole proprietorships owned by women, growth over the period 1990 to 1998 in numbers, gross receipts and net income was high across the board. Over the 1990 to 1998 period, the percentage of sole proprietorships owned by women increased from 33.5% (or 5.6 million businesses) to 36.8% (7.1 million businesses); a 3.3 percentage-point increase. Unfortunately this is not substantial growth, and the growth rate following this period has declined.

One issue that does arise in these figures is that the turnover from these businesses accounted for a mean of only 17.1% of the total revenue of all sole proprietorships. As the businesses owned by women are not producing nearly the revenue of their male-owned counterparts, this demonstrates a distinct division in the types of activity being undertaken.

Two-thirds of women operating a sole proprietorship were married. The SBA offered the guess that “many of the small sole-proprietorships owned by women are run by stay-at-home mothers who run a service-based company part time”. This is JUST a guess mind you, and not a good analysis (an unsupported assumption). There is a low-powered statistic that supports the supposition that women come in and out of business ownership.

27% of women business owners will invest in new technology such as computers and software over the next six months.(OPEN from American Express, November 2006)

56% of women business owners plan to make their business environmentally friendly by recycling waste products.(OPEN from American Express, November 2006)

85% of women surveyed don't believe being a woman is detrimental to their business success, while 32% believe it's beneficial.(Center for Women's Business Research, December 200)

What is interesting is that “48%, nearly half, of all privately-held firms are at least 50% owned by a woman or women” (Center for Women's Business Research, 2005). This however contrasts with the number of women on boards.

What is also interesting is that the female owners of companies are NOT appointing women at the rate that these ownership figures would seem to indicate should occur. In contradiction to the “old boys club” approach by which boards are supposedly appointed (according to what I see touted), shareholders have a significant input. Statistically, if women selected other women as a primary choice based on their shareholdings, at least 45% of board members would be women.

Maybe we can assume from this that those with a shareholding appoint the best candidate and do not make selections based on sex?

If this is not the case, it is women that you are going to need to convince to appoint more women to boards. They are substantial shareholders and company owners after all and do make these decisions.

There are still too few women starting out and growing a business. We need to eliminate the barriers that remain, be it access to finance or to childcare or because of some other form of discrimination. If women started new businesses at the same rate as men, we would have more than 100,000 extra new businesses each year.
Patricia Hewitt, Secretary of State for Trade and Industry

There are answers to these issues: stop the government's restrictive practices. Let people start childcare businesses; stop interfering in banking and finance. Free trade nearly had a chance 100 years ago, yet we are moving more and more into the regressive practices of a socialist state. For this we are ALL paying.

Results and Discussions – Homogeneity Tests

The Monte Carlo simulation described in the previous post showed that (a short R sketch illustrating the first two points follows this list):

  • The Bartlett test is not as robust as the Levene tests against violation of the normality assumption.
  • All four Levene tests are less powerful than the Bartlett test.
  • The ANOVA F test provides generally poor control over both Type I and Type II error rates under a wide range of variance heterogeneity conditions.
  • The Wald test (as proposed by Rayner, 1997) is comparable with the various versions of the Levene tests.
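As promised above, here is a minimal R sketch (not the original simulation code; the replication count and the median-centred Levene variant are my own choices for illustration) that estimates the empirical Type I error of the Bartlett test and a Levene-type test when the data are skewed but the group variances are equal.

set.seed(42)

nsim  <- 2000      # illustrative replication count (the study itself used 100,000)
k     <- 3         # number of groups
n     <- 20        # observations per group
alpha <- 0.05
g     <- factor(rep(seq_len(k), each = n))

# Median-centred Levene-type statistic (one of the four Levene variants considered)
lev_p <- function(y, g) {
  z <- abs(y - ave(y, g, FUN = median))
  anova(lm(z ~ g))[1, "Pr(>F)"]
}

rej <- c(bartlett = 0, levene = 0)
for (i in seq_len(nsim)) {
  y <- rlnorm(k * n, meanlog = 0, sdlog = 1)   # skewed data, equal group variances
  if (bartlett.test(y, g)$p.value < alpha) rej["bartlett"] <- rej["bartlett"] + 1
  if (lev_p(y, g) < alpha)                 rej["levene"]   <- rej["levene"]   + 1
}
rej / nsim   # empirical Type I error rates at a nominal 0.05

With log-normal data, the Bartlett rejection rate typically runs well above the nominal 0.05, while the median-centred Levene test stays much closer to it.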

Variance ratio 1:1

The results where the samples have equal variances are listed below for the Log Normal (0,1), Exponential (1), Gamma (2,1), Chi 2(5) and Beta (6,1.5) distributions. The results are displayed for the Bartlett (B), Wald (W) and Levene tests (Lev-1, Lev-2, Lev-3 and Lev-4). The testing power for each of these distributions is graphed below.

Log Normal (0,1)

[Power plot: Log Normal (0,1), variance ratio 1:1]

The characteristics of the Log Normal (0,1) distribution are a variance of 4.67 and a skewness of 6.19.

Exponential (1)

[Power plot: Exponential (1), variance ratio 1:1]

The characteristics of the Exponential (1) distribution are a variance of 1.000 and a skewness of 2.0.

Gamma (2,1)

[Power plot: Gamma (2,1), variance ratio 1:1]

The characteristics of the Gamma (2,1) distribution are a variance of 2.00 and a skewness of 1.41.

Chi 2(5)

[Power plot: Chi 2(5), variance ratio 1:1]

The characteristics of the Chi 2(5) distribution are a variance of 10.0 and a skewness of 1.27.

Beta (6,1.5)

[Power plot: Beta (6,1.5), variance ratio 1:1]

The characteristics of the Beta (6,1.5) distribution are a variance of 0.019 and a negative skewness of -0.921.
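As a quick check, the variance and skewness figures quoted above can be reproduced by simulation in R (a rough sketch; the sample size of one million is arbitrary and the estimates will match the stated values only to within sampling error):

set.seed(1)
n <- 1e6

# Simple moment-based sample skewness
skewness <- function(x) mean((x - mean(x))^3) / sd(x)^3

dists <- list(
  "Log Normal(0,1)" = rlnorm(n, 0, 1),
  "Exponential(1)"  = rexp(n, 1),
  "Gamma(2,1)"      = rgamma(n, shape = 2, rate = 1),
  "Chi-sq(5)"       = rchisq(n, df = 5),
  "Beta(6,1.5)"     = rbeta(n, 6, 1.5)
)

t(sapply(dists, function(x) c(variance = var(x), skewness = skewness(x))))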

In terms of power, the Bartlett test performed best. Levene-2 was more robust in controlling the Type I error rate, but also displayed the least power among the six (6) tests considered above. The Wald test is comparable with the other versions of the Levene tests (Lev-1, Lev-3 and Lev-4).

Variance ratio 1:4

The behaviour of the tests under heterogeneous variances has a greater relationship to the sample sizes than when the variances are homogeneous. In this instance, the power increases rapidly as the sample size increases for both equal and unequal samples.

Log Normal (0,1)

[Power plot: Log Normal (0,1), variance ratio 1:4]

Exponential (1)

[Power plot: Exponential (1), variance ratio 1:4]

Gamma (2,1)

[Power plot: Gamma (2,1), variance ratio 1:4]

Chi 2(5)

[Power plot: Chi 2(5), variance ratio 1:4]

Beta (6,1.5)

[Power plot: Beta (6,1.5), variance ratio 1:4]

We see from the results above (and displayed in the tables included in the appendix) that the underlying distribution of the sample data plays a role in the selection of the ideal homogeneity test.

Discussion

This paper compared the empirical type I error and power of several commonly used tests of variance homogeneity. These tests assess the level of homogeneity of within-group variances. The tests of homogeneity of variance that have been evaluated in this paper include:

· the ANOVA-F test,

· Bartlett's test,

· the Scheffé-Box log-ANOVA test,

· Box's M test,

· Cochran's C test,

· Levene's tests (Lev-1, Lev-2, Lev-3 and Lev-4), and

· the Wald test.

These tests have been evaluated in both their parametric and permutational forms. This paper has explored the conditions where the ANOVA-F and Levene test p-values are questionable at best, and has evaluated the conditions where heterogeneity of variances really is a problem in these tests. These conditions are further analysed such that the usefulness of the various tests for the homogeneity of variance in detecting heterogeneity can be compared across a variety of data distributions.

A preliminary simulation study confirmed that the ANOVA-F is extremely sensitive to heterogeneity of the variances. This was confirmed in situations where the assumption of normality was otherwise satisfied. The ANOVA-F was sensitive to even low levels of heteroscedasticity, which inflated the Type I error rate. This was particularly pronounced in the case where the variance of a single group was larger than that of the other groups, which approximated each other.

The use of non-normal data with heavy tails is problematic for many of the standard tests. It was demonstrated that the parametric tests are extremely sensitive to heteroscedasticity. The existence of a heavy tail (kurtosis) generally results in a loss of power in the various significance tests for heterogeneity of variance. The simulation was expanded to incorporate more extreme conditions (small sample sizes, non-normal distributions).

Both Cochran's test and the log-ANOVA test can be shown to display undue levels of sensitivity when even a solitary high-variance group exists. It is also shown that these tests have low power when small to moderately sized data samples are tested.

Both Bartlett's and Box's tests performed well where the sample sizes were relatively large. The Bartlett test was not as robust as any of the four Levene tests when the normality assumption was violated (even where large datasets were used). At the same time, Bartlett's test displays a higher level of power than the Levene tests.

From these results, we can construct an algorithm that can aid in the determination of which homogeneity test should be used (see Table 1).

[Table 1: Selecting a homogeneity of variance test]

In Table 2, this process is extended to the selection of testing procedures.

[Table 2: Selection of testing procedures]

ANOVA test results can be demonstrated to be largely independent of sample size (within 5 to 100 observations per group). When the variances are homogeneous, ANOVA yields the correct Type I error rate irrespective of the distribution. For normal data, the Type I error rate is overstated when one of the variances is higher than the others. The problem is worse for non-normal distributions.
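A small R sketch of this effect (illustrative settings only: three normal groups with equal means, with the smallest group carrying the inflated variance) estimates the empirical Type I error rate of the ANOVA F test under heteroscedasticity:

set.seed(7)
nsim <- 2000                    # illustrative replication count
n    <- c(20, 20, 5)            # unbalanced design: smallest group has the largest variance
sdev <- c(1, 1, 4)
g    <- factor(rep(1:3, times = n))

rej <- 0
for (i in seq_len(nsim)) {
  y <- rnorm(sum(n), mean = 0, sd = rep(sdev, times = n))   # equal means, unequal variances
  if (anova(lm(y ~ g))[1, "Pr(>F)"] < 0.05) rej <- rej + 1
}
rej / nsim   # empirical Type I error rate of the ANOVA F test

With the high-variance group also being the smallest, the empirical rejection rate typically comes out well above the nominal 0.05.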

Effect of sample size

When the variances are not equal, the tests are influenced more by the sample size, so care should be taken with unequal samples. The Bartlett test provides the most power. Levene's second test (Lev-2) was more robust in controlling the Type I error rate, but this was countered by its tendency to display the lowest power. The Wald test is comparable with the other versions of the Levene tests.

Conclusion

Heterogeneity of variances is always a problem in ANOVA. The effect of variance heterogeneity on ANOVA ranges from somewhat to extremely exaggerated Type I error rates.

The most effective methods to test the homogeneity of variances are Bartlett's or Box's tests, and these can be used if the samples are fairly large (ni > 20). Cochran's test should be avoided. The log-ANOVA test exhibits low power with small to moderate sample sizes.

Bartlett's and Cochran's tests have an uncontrolled risk of Type I errors when the populations are asymmetric and heavy-tailed. Levene-1 fares well in both robustness and power in a variety of situations (especially when the population mean is unknown). The Type I error rate did, however, become overstated to an unsatisfactory level in cases where the mean was unknown.

The assumptions of the parametric test are a linear relationship in the mean function, normal errors and correct specification of the form of the variance function. When the assumptions of the parametric test are violated, the nonparametric tests can be more powerful.

The Bartlett test is not as robust as Levene tests against the violation of the normality assumption. In general, the Levene tests are less powerful than the Bartlett test. The Wald test is more robust than the Bartlett test against the violation of the normality assumption. This test is poised between the Bartlett test and the four versions of the Levene tests in terms of Type I error rate and power. While Lev-2 (Levene’s 2nd test) was robust in controlling Type I error rate, it displayed a lower power than many other tests.

Bartlett’s test displayed the highest power across the majority of distributions; it rejected the null hypothesis of equality of variances the greatest number of times. Bartlett’s test is, however, also associated with insufficient control of the Type I error rate. Levene’s second test (Lev-2) displays low power but is highly robust in controlling the Type I error rate. The Wald test was demonstrated to be a balance between the Bartlett test and all versions of the Levene tests.

When the datasets are asymmetric and heavy tailed, most tests for the homogeneity of variances perform poorly.

Thursday, 17 December 2009

Size and Power Study

The appropriate statistic to use under the conditions of a violation of assumptions can be selected using the statistic’s robustness and power as selection criteria. Robustness is defined as the capacity of the statistic to control the Type I error rate. As such, a test of variances is robust if it does not detect non-homogeneous variances when the original data are not normally distributed but the variances are homogeneous.

Peechawanich (1992) noted that if the probability of a Type I error occurring exceeds the Cochran limit, the test will not be capable of controlling the error rate. As such, a test can be considered robust where the calculated probability of a Type I error lies within the Cochran limit. The Cochran limits on the discrepancy of the empirical Type I error rate from the nominal significance level (α) are set at the following values:

  • At the 0.01 level of significance: [Cochran limit].
  • At the 0.05 level of significance: [Cochran limit].
  • The real probability of a Type I error occurring is defined as the probability that H0 will be rejected when H0 is actually true.
  • The empirical Type I error rate is the value calculated from the simulations.
  • The nominal level of significance is α; for this exercise, the values α = 0.01 and α = 0.05 have been used.

A test’s power is the probability of rejecting the null hypothesis (H0) when it is false and should correctly be rejected. The power of a test is calculated by subtracting the probability of a Type II error (β)[1] from the maximum power value (1.0). As such, power is defined as:

Power = 1 − β

As such, the power of a test ranges from a value of 0 (no power) to 1.0 (highly powerful).

Power studies rely on four context variables (see the short example after this list):

(1) the expected “size” of the effect (such as the approximate incidence rate for a population of survival times),

(2) the sample size of the data being evaluated,

(3) the statistical confidence level (α) used in the experiment, and

(4) the variety of analysis that is used on the data.
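These four ingredients can be illustrated with base R's power.t.test(), which takes an effect size (delta), a sample size (n), a significance level (sig.level) and the type of analysis, and returns the implied power. The specific numbers below are purely illustrative:

# Power of a two-sample two-sided t-test with 20 observations per group,
# an assumed effect of 0.5 standard deviations, and alpha = 0.05
power.t.test(n = 20, delta = 0.5, sd = 1, sig.level = 0.05,
             type = "two.sample", alternative = "two.sided")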

In this chapter, a size and power study of the aforementioned tests is presented. A test size study has been conducted in order to estimate test sizes when sampling from normal distributions and a variety of non-normal distributions. These distributions increasingly vary in the levels of kurtosis displayed as they progressively move away from the conditions of normality. The distributions are created using a simulation study technique based on the simulation features of R. The majority of these tests have been evaluated at the 5% significance level.

Power studies of tests aid in the determination of the relative effectiveness of the processes in a range of situations. A good deal of material has been published concerning power studies based on simulations and retroactive data analysis (e.g., Goodman & Berlin, 1994; Hayes & Steidl, 1997; Reed & Blaustein, 1995; Thomas, 1997; Zumbo & Hubley, 1998).

Statistical power can be seen as a fishing net: a low-powered test (such as one based on a small sample size) can be likened to a large-mesh net. It will catch only the large values and generally miss most of the examples, leading to acceptance of the null hypothesis when it is actually false. Tests can also be constructed that are too sensitive. Using larger sample sizes may increase the probability that the postulated effect will be detected. In the extreme, very large samples greatly increase the probability of obtaining a dataset containing randomly selected values that are correlated with the population, and hence lead to high power. This increase in power comes at a cost. Many settings do not allow for the economical selection of extremely large datasets, and in destructive testing, any dataset that approaches the population also defeats the purpose of testing. Consequently, the selection of powerful tests that hold at low sample sizes is important. There is a trade-off between sample size and the size of the uncontrolled error. Choosing the test that provides the best statistical power can be essential.

There are four (4) possible results of any test,

(1) We conclude that H0 is true when H0 is true.

(2) We conclude that H0 is false when H0 is true.

(3) We conclude that H0 is true when H0 is false.

(4) We conclude that H0 is false when H0 is false.

Concluding either that H0 is true when H0 is true or that H0 is false when H0 is false can be seen as the desired outcome. Concluding that H0 is false when H0 is true is defined as a type I error (the erroneous rejection of H0). Concluding that H0 is true when H0 is false is defined as a type II error (the erroneous acceptance of H0). Type I and type II errors are undesirable.

The p-value is the risk of making a type I error[2]. The lower the alpha or beta values that are selected, the larger the required sample size.
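A short illustration of that trade-off using base R's power.t.test() (the effect size of 0.5 standard deviations is an illustrative assumption): holding power at 0.90, lowering alpha from 0.05 to 0.01 increases the required group sample size.

power.t.test(delta = 0.5, sd = 1, power = 0.90, sig.level = 0.05)$n   # smaller n required
power.t.test(delta = 0.5, sd = 1, power = 0.90, sig.level = 0.01)$n   # larger n required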

Simulation

A Monte Carlo study was conducted to evaluate many of the common tests of the homogeneity of variances with regard to Type I error stability and power. The Type I error stability study assessed the relationship between the observed rejection rates and the nominal rejection rates (α) when the homogeneity of variance hypothesis (2.1) was true. The power study evaluated the ability of each test to detect differences among sample variances when the homogeneity of variance hypothesis (2.1) was false.

Several R and Matlab functions were written to perform the Monte Carlo study. An example of the programs and functions used to create the simulated datasets and run the tests is in the Appendix. The tests each used 100,000 replications. Random data with the required characteristics were generated within the function and transformed as required, and the various forms of the tests were computed. The random generation algorithms in R and Matlab provided the distributions used in this paper.

For each dataset under test and each test of the homogeneity of variance being examined, the function wrote one line of results to a MySQL database. This database was created with separate tables for each of the test datasets and SQL statements that contain the simulation parameters. A table with the rejection rates of the null hypothesis for each test (for a given number of simulations) was held in the same database. The datasets were tested using the 95% and 99% ('alpha' = 5 and 'alpha' = 1) confidence intervals of this rejection rate. The output was sent to a separate table in the same MySQL database and was used to produce the summary tables presented in the appendix to this paper.
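A hedged sketch of this results pipeline using the DBI and RMySQL packages follows; the database name, table name, credentials and column layout are illustrative assumptions, not the original schema.

library(DBI)
library(RMySQL)

# Illustrative connection details; the original database and credentials are not documented here
con <- dbConnect(MySQL(), dbname = "homogeneity_sim",
                 host = "localhost", user = "sim", password = "sim")

# One line of results per dataset and per homogeneity test (illustrative columns)
result_row <- data.frame(test = "Bartlett", distribution = "LogNormal(0,1)",
                         n = 20, alpha = 0.05, rejected = TRUE)

dbWriteTable(con, "results", result_row, append = TRUE, row.names = FALSE)
dbDisconnect(con)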

Data were generated with a large number of different distributions (these are displayed in the results, Tables 1 to 5). These distributions have been selected to align with those presented in the existing literature (Conover et al., 1981) where multivariate tests have been conducted (ni where i = 3, 4, 5, 6 or 8). These tests used ‘k-sample’ tests for homogeneity where k > 2. Those papers were also constrained in the sizes of the simulations, with some as low as only 1,000 data points being generated.

The standard errors of all entries in Tables 1 to 5 are under 0.015.

The process used in this simulation study is as follows (a condensed R sketch of these steps is given after the list).

1) Select the sample size (ni), the number of simulations (100,000) and the significance level α (0.01 and 0.05).

2) Generate independent random samples from the selected distributions.

3) Compute the test statistic for the simulated data for the various Homogeneity tests (e.g. Bartlett, Wald, Levene [Lev-1 to Lev-4], Cochrane, etc).

4) Repeat steps (2) to (3) 100,000 times, count the results where the computed test statistic is greater than the corresponding critical value, and compute the proportion of rejections over the 100,000 repetitions.

5) The proportion of rejections in step (4) is the estimated test size when the data are simulated under equality of variances, and the estimated power otherwise.

6) The entire process, steps (1) to (5), is repeated for varying (ni) values and divergent levels of heterogeneity.
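The following is a condensed R sketch of steps (1) to (6), not the original code: the replication count is reduced, only the Bartlett test is shown, and the divergence of the variances is controlled by a simple ratio parameter.

# 'ratio' controls the divergence of the variances: ratio = 1 estimates the test
# size, ratio > 1 estimates power. 'rdist' supplies the base distribution.
mc_rejection_rate <- function(n = 20, k = 3, ratio = 1, alpha = 0.05,
                              nsim = 5000, rdist = rnorm) {
  sds <- c(rep(1, k - 1), sqrt(ratio))            # last group has variance 'ratio'
  g   <- factor(rep(seq_len(k), each = n))
  rej <- 0
  for (i in seq_len(nsim)) {
    y <- unlist(lapply(sds, function(s) s * rdist(n)))   # generate and scale each group
    if (bartlett.test(y, g)$p.value < alpha) rej <- rej + 1
  }
  rej / nsim                                       # proportion of rejections
}

mc_rejection_rate(ratio = 1)                 # estimated size, normal data
mc_rejection_rate(ratio = 4)                 # estimated power, 1:4 variance ratio
mc_rejection_rate(ratio = 1, rdist = rlnorm) # estimated size under skewed data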


[1] A Type II error is defined as incorrectly accepting (a failure to reject) the null hypothesis (H0) when the null hypothesis is indeed wrong.

[2] The type I error is also designated as "alpha".

Wednesday, 16 December 2009

Where Coase went wrong

Ron Coase made several errors in his theory of transactional processes in economics. Most critically, he failed to note the value of property.

Property rights include the right to have and use. The value of property includes the right to use and dispose. If this right is removed, then some of the value of the interest in property is also removed.

In Coase's example of a rancher and a farmer, the cattle straying into the farmer's property is calculated as a pure equilibrium on the transaction alone. This ignores the value of the property already vested in the right to exclude. By removing the right of the farmer to exclude the rancher's steers, Coase has developed a means of taking some of the total property value away from the farmer without compensation.

Basically, the diminution in the capital value of the property is not considered. This means that property value is not fixed. The individual cannot plan for a use of the property, and the capital value of that property cannot be guaranteed, as a neighbour with a more economically valuable idea can move next door and usurp one's rights.

For instance, assume that the rancher has moved next door to a person with a home in the bush, a use with no obvious economic value. Under Coase, the rancher's steers would be able to roam onto the homeowner's property, damaging it and destroying the homeowner's rights. The owner may, for instance, want to run a bush rejuvenation project. This may not appear to have value to the rancher, but it has value to the owner and also to other prospective purchasers of the property.

The right of the rancher to have the cattle roam and destroy the neighbour's land removes its use as a rejuvenation project. This devalues the property: the capital value of the land is diminished through the loss of the right to exclude. This occurs because no other party with a desire to own a bush rejuvenation project will want the land. This reduces the pool of possible purchasers and hence lowers the value of the property. The owner loses capital value without any means of addressing this loss.

In all instances, there is a more valuable use of property. The arbitrary assignment of rights devalues ALL property. The end consequence is that all property is uncertain and no property can be seen to have value.

The conversion of a property-right into a liability right does not account for the additional harm done to the property owner.

There are more problems with Coase, but I will address these at a later time.

BF and Modified Brown-Forsythe Tests

The Brown-Forsythe (BF) test approximates an F variable with (K-1) degrees of freedom. It takes the following form:

[Equation: BF test statistic]

In this, the Satterthwaite approximation (Satterthwaite, 1941) is used to obtain the degrees of freedom for F. Here it is defined by

[Equation: Satterthwaite approximation]

in which

[Equation]

The Brown-Forsythe (BF) test is implemented with the numerator degrees of freedom equal to K-1. This was modified by Mehrotra (1997) using a Box (Box, 1954) approximation in order to obtain the values for the degrees of freedom in the numerator (vl) and the denominator (v). The variable vl is defined as follows:

[Equation: numerator degrees of freedom vl]

Likewise, the variable v is defined as:

[Equation: denominator degrees of freedom v]

Here, the modified Brown-Forsythe test takes the following form:

[Equation: modified BF test statistic]

Under the null hypothesis (H0), the modified statistic approximates an F distribution with vl and v degrees of freedom respectively (Mehrotra, 1997).
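As an illustration, here is an R sketch of the (unmodified) Brown-Forsythe statistic with the Satterthwaite denominator degrees of freedom, following one standard textbook formulation; it does not implement Mehrotra's adjusted numerator degrees of freedom and may differ in notation from the equations above.

bf_test <- function(y, g) {
  g   <- factor(g)
  ni  <- tapply(y, g, length)
  mi  <- tapply(y, g, mean)
  vi  <- tapply(y, g, var)
  N   <- sum(ni)
  K   <- nlevels(g)

  num <- sum(ni * (mi - mean(y))^2)          # between-group sum of squares
  wts <- (1 - ni / N) * vi                   # weighted within-group variances
  Fst <- num / sum(wts)                      # BF statistic

  ci  <- wts / sum(wts)                      # Satterthwaite weights
  df2 <- 1 / sum(ci^2 / (ni - 1))            # denominator degrees of freedom
  p   <- pf(Fst, K - 1, df2, lower.tail = FALSE)
  c(F = Fst, df1 = K - 1, df2 = df2, p.value = p)
}

set.seed(3)
y <- c(rnorm(15, 0, 1), rnorm(15, 0.5, 2), rnorm(15, 1, 3))
g <- rep(1:3, each = 15)
bf_test(y, g)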

Balance of payments lie

It has commonly come to be believed that the fallacy known as the “balance of trade” exists and is a real effect. Although propagated, this myth is one that needs to be relegated to the dustbin of history or at the least assigned to the flat earth society.

To explain this misunderstood concept cleanly, it is best to simplify it and not confound the issue with the added miasma of obscure relationships that create a society or nation. To do this, I will start with a household balance of trade: the income and expenses of a household. If we take a common family group of two working adults and two children under working age, we see two sources of income and four direct sources of expense (which we can think of as the "states" in this analogy).

Each adult has an income from an employer and possibly some smaller amount from a hobby or side occupation. Let's assume that partner A works at a department store and has hobby income C, and that partner B works as an engineer. In this instance, there are three sources of favorable trade balances: partner A's employer, partner B's employer, and any person who purchases from partner A's hobby side business.

Each of these is what is commonly (and mistakenly) deemed a "favorable balance of trade". Partner A and Partner B each have a positive balance with their respective employers. However, partner A's hobby income breaks even, which makes it a balanced trade source.

Partner A in our scenario has the responsibility (they are not pooling their income) for food and school fees. Partner B has the responsibility for the rent and entertainment. Each partner supplies their children with a small sum each month. We will assume for simplicity that our family is rather ascetic and wants for little. We will also assume that 50% of the expenses for food go to the same store at which partner A works and that the other 50% go to the grocer.

In this scenario, we have a favourable balance of trade with the department store. Here we see the following equation:

  • (Partner A’s income) – 0.5(Food Expenses) = positive trade balance with department store
We see this positive balance of trade because Partner A is unable to spend the entirety of that income (as it is split among several parties) at the department store where partner A is employed. Partner A receives the amount of (Partner A's income) from the department store, but spends only 0.5(Food Expenses) at that same store.

We next see a negative balance of trade for partner A's dealings with the grocer and partner B's dealings with the landlord. In each instance, we can represent this mathematically as follows:
  • 0.5(Food Expenses) = negative balance of trade with grocer
  • Rent expenses = negative balance of trade with landlord

In each example, we have a negative balance of trade. Let us presume that partner B earns 150% as an engineer of what she could earn as an estate agent working for the landlord. Partner B could sacrifice some of her time employed as an engineer and work for the real estate agent (the landlord). Let us assume that partner B works 40 hours a week and that 40% of her income is assigned to rent.

In this instance, we see that 16 hours of partner B's time is associated with working to pay the rent (40% of 40 hours). An option exists where partner B (we presume she is talented and can be either an estate agent or an engineer as she wishes) can work for her landlord part time by sacrificing some of the time spent working as an engineer.

Working for the landlord, she earns 66.67% of her earnings as an engineer (the inverse of the 150% noted previously). This means that she must of necessity work longer to create the same income. That is, in order to earn and hence pay the same value of rent, she must work longer hours. As the income from the landlord is only 66.67% of her earnings as an engineer, she must work 1.5 times as long for the landlord. This is:
  • (work for landlord) = 1.5 x (work as engineer)
As partner B works 16 hours directly for the payment of rent, we can express the previous equation as follows:
  • (work for landlord) = 1.5 x (work as engineer) = 1.5 x 16 = 24 hours
In this case, partner B is still working 24 hours (40 - 16) as an engineer, but also needs to work an additional 8 hours (the 24 hours at the landlord less the 16 hours she was previously working to cover the rent).

So we have two scenarios already: partner B can engage in the most efficient source of trade (for her, working as an engineer) or she can seek to minimize "balance of trade" deficiencies. By working in her optimal trade position and creating several "unfavorable trade balances", partner B is better off. She needs to work only 40 hours to have the same standard of living as would have required 48 hours with a "more favorable trade balance".
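The arithmetic of the two scenarios, as a small R sketch (the 150% wage ratio, 40-hour week and 40% rent share are the figures assumed in the example above):

hours_week <- 40
rent_share <- 0.40
wage_ratio <- 1.5                                            # engineer wage / estate-agent wage

hours_for_rent_as_engineer <- rent_share * hours_week        # 16 hours
hours_for_rent_at_landlord <- hours_for_rent_as_engineer * wage_ratio  # 24 hours

total_specialised <- hours_week                              # 40 hours, all as an engineer
total_balanced    <- (hours_week - hours_for_rent_as_engineer) +
                     hours_for_rent_at_landlord              # 24 + 24 = 48 hours

c(specialised = total_specialised, balanced_trade = total_balanced)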

This scenario is the same if you substitute companies, states or nations for the partners. We could also argue in our example that the children are an unfavourable trade balance that could be removed by removing the children. This is the nature of the anti-trade and protectionist argument.


This of course begs the question, why do governments perpetrate this lie?

The answer lies in inflation. Most people have little understanding of the economy and few people seem to want to spend any time explaining it in simple terms.

Inflation, the hidden tax.
What is inflation and why does it occur? This is far simpler than many would like to state. There are many complex reasons that can be used to explain inflation, but simply and completely, inflation is the result of an expansion in the money supply. There is only ever one cause of an expansion of the money supply: the creation of more money.

The creation of money is a monopoly of the government. In modern western nations, the reserve banking institutions (such as the US Federal Reserve or the Reserve Bank of Australia) are government institutions. This is the case even where (as in the US) the reserve banking function is “nominally” a private concern. In all instances, policy is either set or at the least influenced by the government.

Even in the event that policy were independent of government (which occurs in no existing context), the additional money manufactured through inflation goes to the government. This is in effect an indirect and hidden tax.

If the inflation rate is set at 5% for a year and the total money supply for the economy is $1,000,000 (low, I know), at the end of the year the total money supply is now $1,050,000. That is, the government has printed an additional $50,000 that did not previously exist. As the gold standard no longer exists, and as it is no more expensive to print a $100 note than a $10 note, there is no cost to the government in doing this.

So at the end of the day, we have an economy with 1.05 times the supply of money, but no additional productivity. Productivity is a function of companies and individuals. Contrary to what governments (right, left or centre) would have you believe, they do nothing to aid productivity other than not hindering it. That is, they can stay out of the way. This is a reduction of a negative effect at best and never a positive effect.

So what does this extra money supply mean?
Without an increase in productivity, additional money has a negative effect. There is the same amount of goods and services in society; there is just more money competing for them.

To demonstrate this, imagine that the government uses the $50,000 it has created to pay its people. There is now more money available to purchase the same amount of goods. As such, if you want to have the same goods, you can pay more for them (as can others). This increase in demand (added money) causes prices to rise. Hence the effect of inflation is an increase in prices. Price rises are the effect of inflation and not, as often stated, the cause. This is a lie perpetrated to keep people in the dark about one of the most insidious taxes.

As there are no new sources of productivity, the overall market will shift to consume the entirety of the new volume of money. That is, a product that formerly cost $1 will now cost $1.05, and a salary of $50,000 will increase to $52,500. The end result may seem to involve no change in values. Or does it?

The reality, and what is frequently ignored, is that there is an addition of money in a fixed location: the created money goes to the government. The created $50,000 is not distributed to all members of the nation; it is assigned to a single party.

The effect of this is that price rises will occur, but value will be lost to the individual. There will be a lag in pay rises and other effects (which the government can later use to justify the next round of inflation).

So what is the result?
The sole result is a hidden tax. In effect, the item that cost $100 prior to the injection of more money by the government now costs $105. As such, each $100 a person earns can now buy a pre-inflationary amount of $95.24. That is, $4.76 has been lost (or at least reallocated to the government). As the government has the newly printed money to distribute, this is a hidden means of taking money; in effect, a tax.

What this all comes to is that 5% inflation is in reality a 4.76% hidden tax.
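The same arithmetic in R:

inflation <- 0.05
purchasing_power <- 100 / (1 + inflation)   # about 95.24 pre-inflation dollars per $100
hidden_tax <- 1 - 1 / (1 + inflation)       # about 0.0476, i.e. roughly a 4.76% hidden tax
c(purchasing_power = purchasing_power, hidden_tax_percent = 100 * hidden_tax)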

Tuesday, 15 December 2009

Obrien Test

O’Brien (1978) proposed the use of an alternative jackknife procedure. This procedure computes a set of pseudo-values, which are defined by:

[Equation: O’Brien pseudo-values] (F 2.22)

The O’Brien test statistic is used to test the null hypothesis (F 2.1) and is based on a one-way ANOVA F-value computed on the pseudo-values. Thus the O’Brien test statistic is given by the formula:

[Equation: O’Brien test statistic] (F 2.23)

The null hypothesis (F 2.1) is rejected where the O’Brien statistic exceeds the corresponding upper percentile of an F-distribution with (K-1) and (n-K) degrees of freedom. Other variations of the Miller jackknife procedure have also been proposed (Sharma, 1991).
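For illustration, an R sketch of an O'Brien-type procedure follows. The pseudo-values use the commonly cited transformation with w = 0.5, which may differ in detail from equation (F 2.22); a one-way ANOVA F test is then applied to the pseudo-values.

obrien_test <- function(y, g, w = 0.5) {
  g  <- factor(g)
  nj <- ave(y, g, FUN = length)    # group sizes, aligned with y
  mj <- ave(y, g, FUN = mean)      # group means
  vj <- ave(y, g, FUN = var)       # group variances
  # Pseudo-values (commonly cited O'Brien transformation with weight w)
  r  <- ((w + nj - 2) * nj * (y - mj)^2 - w * (nj - 1) * vj) /
        ((nj - 1) * (nj - 2))
  anova(lm(r ~ g))                 # small Pr(>F) suggests unequal variances
}

set.seed(11)
y <- c(rnorm(20, sd = 1), rnorm(20, sd = 1), rnorm(20, sd = 2))
g <- rep(1:3, each = 20)
obrien_test(y, g)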

Masters of Forensics

We have over 14 people signed up for the Digital Forensics Masters degree at Charles Sturt University that starts next year (2010). So all is set for trimester one at full throttle.

I have been working on several case studies and drive images for my students (and also for those I will help with doing a GSE here in AU).

So all is go.

Climate and carbon

As a statistician who worked on some of the glacial Varve datasets, I have to state that the way that the results are reported has nothing to do with the calculations we did.

Carbon from human sources is less than 2% of the global emissions.

Most critically, 83% (+/-4% at a 95% CI) of the model is attributable to water (H2O) vapor. At present, we cannot model water vapor. The result is that the models are highly non-robust.

The real issue lies with water. Deforestation is impacting rainfall patterns, but we cannot as yet quantify the effect. The carbon issue is drowning out the issues that actually occur, in a quasi-religious and definitely non-scientific paradigm. The problem is that this is actually damaging things more, as we are failing to focus on the actual issue.

Add to this the filing of several solutions to the entire GW issue, and we start to see that it is really not about the climate at all. IV (Intellectual Ventures) has a number of patents for technological solutions to greenhouse warming. A few other firms have also filed similar solutions. These solutions would require a capital investment of around $250 million to have an adequate effect. Compare this with the $300 million in costs associated with Al Gore in his efforts to gain himself a Nobel prize - for NOT doing anything!

Then again, we can model little of interest at present, so we SHOULD wait. We do have solutions, but we do NOT want to implement them when we have NO IDEA of what is really occurring in the world's weather. For instance, in 1974 Time reported on the coming Ice Age.

If we had responded at that time, we would have covered the poles with coal soot (the leading solution of the age) to increase the earth's temperature.

When are we going to wake up and start treating the issues as more than a religious debate?

Cochran Test

This homogeneity of variance test is computationally less complex than the Bartlett test but is also subject to problems in non-normal conditions (Phil, 1999). The test statistic for the Cochran test is:

[Equation: Cochran's C test statistic] (F 2.21)

In this equation (F 2.21), the variance term is defined as:

[Equation]

Under the null hypothesis (H0), the COC test statistic has an asymptotic distribution with K-1 degrees of freedom and a central limiting variable (Hartung, 2001). It has been shown that the Cochran test is suitable for equally sized samples and has good power in selected cases (Gartside, 1972).
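A rough R sketch of the Cochran statistic (the largest group variance divided by the sum of the group variances) follows. Rather than tabulated critical values, it uses a simulated null reference distribution under normality; this is an illustrative shortcut, not the published procedure, and the example uses equal group sizes.

cochran_C <- function(y, g) {
  v <- tapply(y, g, var)
  max(v) / sum(v)                  # largest group variance over the sum of variances
}

cochran_test <- function(y, g, nsim = 5000) {
  g <- factor(g)
  C <- cochran_C(y, g)
  # Null reference distribution: independent normal samples with a common variance,
  # keeping the same group structure as the observed data
  Cnull <- replicate(nsim, cochran_C(rnorm(length(y)), g))
  c(C = C, p.value = mean(Cnull >= C))
}

set.seed(5)
y <- c(rnorm(15, sd = 1), rnorm(15, sd = 1), rnorm(15, sd = 3))
g <- rep(1:3, each = 15)
cochran_test(y, g)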

Monday, 14 December 2009

A Quota for all...

A quota system is anathema to free enterprise. The perverse belief that quotas can be economically viable for any firm is obscene.

The costs of imposed quota systems to business are in the order of tens of billions of dollars annually (Lubove, 1997). Quotas burden all employers, and this added cost has to be borne by somebody. Though it is immensely difficult to measure the harm that a quota system does overall, we all end up paying more to receive less. As we sink into the ever-devolving quagmire of quotas that already threatens to consume us, it is difficult to reflect on a world without them. It requires an intellectual exercise in deep thought to appreciate that without quotas we would all be much richer.

This is how the egalitarian statists attempt to maintain the status quo. They use the law and regulations not only to keep us in the mire, but to see we sink deeper. The addition of more quotas helps to ensure that we remain stuck.

To the egalitarian statists, businesses are not in business to provide desired goods and services. They are a tool designed to provide jobs for even the least worthy and help socio-engineer the politically correct society. But at what cost? The question to be asked is “who pays”?

Who runs the bureaucratic infrastructure needed to monitor the boardroom? Who tells a business that they have to hire to a quota when there are no suitable applicants? If 20 men and 1 woman are in the running for a board appointment at a company under the quota, does the company have to hire the woman even if she is the least qualified?

So why is there no outcry at this expense?
The mass media rant perversely when an executive receives a $1 million bonus for returning a company to profitability. Yet the same media crowed with vicious joy when Smith Barney was ordered to "invest" in "extensive" diversity training to the excessive amount of $15 million. We look at the companies as the cause of the failures and cry for bailouts, but forget that we engineer these failures when we suck money from the firms into unproductive black holes.

Most companies know this. Their boards are well versed in the costs, but the apprehension of adverse government action and increased regulatory frameworks forces them to cower in silence. They put up a brave front and wear the ever-increasing burden that they must carry.

Again, the true question is just how much imposed affirmative action costs. It does not matter whether it is the government or some other regulatory body that imposes it; what is the cost?

As noted above, Lubove (1997) estimated the annual cost of diversity training at $10 billion through a surcharge on the penalties assessed against alleged discriminators. Even this is far too low. Recent diversity settlements in the US at eight companies totalled over $400 million before the legal fees were added.
Brimelow and Spencer (1993) calculated an annual sum of $300 billion for the quota system in the US as a whole. Each time we accept a quota system, it is another brick in the wall against freedom.

The opportunity cost of quotas impacts all companies. The lost productivity of qualified employees who are passed over for less-qualified applicants hurts us all. The myth that companies across the market hire to set targets of white males is just that, a myth. Companies are in the business of making profit. They hire whoever will help them achieve this, be they male, female, black, white or green. Add to this the loss of time when employees are forced to sit through sensitivity training. The productivity of an employee during the hours they spend learning to be sensitive is zero, yet it must be borne by the company.

The miracle of markets is that they provide an indirect estimate of these costs. To test the claim that quotas enhance profitability and hence create a surplus for society, all we need to do is contrast the performance of those that adopt quotas with those that have not imposed quotas. For example, Texaco in the US has an affirmative action policy that pushes the hiring of black geologists. If the quota system is valid, Texaco should outperform any competitor whose policy results in its engaging the best geologists while not considering race. We could likewise evaluate the performance of companies with diversity training programs against those that do without.

The problem is that these experiments are only achievable in the event that companies are free to decide without regulatory force. Quota advocates are opposed to this level of freedom as they recognise the facts that are revealed in such situations. The result is that they persist in creating regulatory and administrative laws that ensure all companies implement the full assortment of affirmative-action measures. The result: the real cost of quotas continues to be hidden.

To force boards to adhere to the quota, do we add the booby-trap of diversity training? This is the logical next step and one that has been discussed. These training efforts teach the board to prize sexual, ethnic and other cultural differences, while also remaining quiet about those differences.

Does the introduction of a quota increase the number of skilled women?
What really matters, and what will increase the number of women on boards, is an increase in skills. This is a combination of both education and experience. Companies that base their hiring practices on selecting the best people regardless of race, sex and other irrelevant factors have a competitive advantage over those that discriminate. Discrimination is a negative factor whether it is imposed by regulation or ignorance. A quota does nothing to increase the skills of women and actually reduces the quantity of available positions, hence having the perverse effect that less-skilled women are hired to board positions.

There is a direct correlation between skills and board appointments as well. Many people forget that not all MBAs are the same. Some qualifications have a finance component and others do not. While this may not seem to matter to many people, it does have an impact on selection processes, with board appointments in major companies strongly correlating in favour of applicants with finance qualifications. In the past, commerce and finance qualifications were the realm of male applicants. This has changed, and more women have learnt these essential skills. Some universities are seeing more women in commerce, finance and accounting for the first time. The issue is that these are the board appointees of 20 years from now, not those that are applying today.

Silence gives consent in politics. We should not consent to the current waves of restrictive trade or capital control legislation being spewed forth.

For a previously well-publicised quota call, we can look to the ridiculous. A Supreme Court nominee proposed by President Nixon was scorned for being "mediocre", resulting in Senator Roman Hruska (R., Neb.) proposing that "the mediocre folk of America deserved representation" on the highest Court. I ask, what has changed today?

Taking the quota argument to its farthest position and presenting a reductio ad absurdum, Lynch (1989) notes the idiocy of the claim that youths aged 18-25 had been grievously "under-represented" in the past. Should we take this a step further and correct the "heinous and chronic under-representation of five-year-old men and women" (Rothbard, 1974) on company boards as well?

References

  • Brimelow, P. & Spencer, L. (1993). “When quotas replace merit, everybody suffers.” Forbes.
  • Lubove, S. (1997, December). “Damned if you do, damned if you don’t.” Forbes, pp. 126-130.
  • Lynch, F. R. (1989). Invisible Victims: White Males and the Crisis of Affirmative Action. Contributions in Sociology.
  • Rothbard, M. N. (1974). “Egalitarianism as a Revolt Against Nature,” in Egalitarianism as a Revolt Against Nature and Other Essays. Washington, D.C.: Libertarian Review Press, pp. 7-8.

Box Test

The log-ANOVA test was proposed by Box (1953). In this procedure, a number of sub-samples are created out of the observations in the ith sample (where i = 1, ..., K). Each of the sub-samples hence contains an equal number of separate observations, where

[Equation]

and

[Equation] (F 2.18)

where

[Equation]

and the following term is defined as

[Equation]

In the event that all of the sub-sample sizes are equal (or extremely close to equal), the Box statistic can be defined as:

[Equation: Box statistic] (F 2.19)

The Box statistic approximates an F-distribution with (K-1) and the corresponding error degrees of freedom. However, where the sub-sample sizes diverge (that is, they are not equal), a generalisation developed by Scheffé (1959) can be used to obtain the statistic:

[Equation: Scheffé-Box statistic] (F 2.20)

In this equation, the weighting term is defined as

[Equation]

Hence it can also be demonstrated that

[Equation]

and that

[Equation]

This statistic also approximates an F-distribution with (K-1) and the corresponding error degrees of freedom. Other modifications of the Box method, such as the Bargmann modification (Gartside, 1972) of the Box test, have been proposed to remove bias and to improve the approximation to the homogeneity condition of ANOVA.
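As an illustration of the log-ANOVA idea, the R sketch below splits each group into sub-samples of equal size, takes the log of each sub-sample variance and runs a one-way ANOVA on these log-variances across the K groups. The exact sub-sampling rule and the Scheffé correction for unequal sub-sample sizes (equations F 2.18 to F 2.20) are not reproduced here; the sub-sample size of 5 is an illustrative choice.

box_log_anova <- function(y, g, subsize = 5) {
  g <- factor(g)
  logvars <- list(); grp <- list()
  for (lev in levels(g)) {
    yi  <- y[g == lev]
    m   <- floor(length(yi) / subsize)            # number of sub-samples in this group
    idx <- rep(seq_len(m), each = subsize)        # sub-sample membership
    lv  <- tapply(yi[seq_along(idx)], idx, var)   # variance of each sub-sample
    logvars[[lev]] <- log(lv)
    grp[[lev]]     <- rep(lev, m)
  }
  z  <- unlist(logvars)
  gz <- factor(unlist(grp))
  anova(lm(z ~ gz))               # F test on the log sub-sample variances
}

set.seed(9)
y <- c(rnorm(30, sd = 1), rnorm(30, sd = 1), rnorm(30, sd = 2))
g <- rep(1:3, each = 30)
box_log_anova(y, g)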