Wednesday, 15 May 2013

The dumb country

The Australian government hates people like me. Educated ones.

They love to say we are the smart country, but no longer. The Commonwealth government has capped the self-education expenses that can be deducted at $2,000 per annum.

This means that anyone doing an MBA, a PhD or some other form of education whilst working now has to bear the full tax cost of their education. The end result will be less education. We can expect fewer scientists, fewer people doing postgraduate studies and a general decline in the standard of living in the country.

You can argue that there is a cost and that my case is skewed. For instance, I spent and claimed $56,000 in educational expenses last tax year for the two degrees I was undertaking (my second PhD and a Masters). What you would be missing is that I could do this study without the structure: once you hold a doctorate, you have learnt how to research. I simply like having the structure of the University.

Next, I am actually paying the University that money. As the University IS a government institution, the amounts I am paying for a full fee course go straight back to the federal government. In effect, taxing someone like me, who pays full fees for a postgraduate degree while working, is double taxation.
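As a rough illustration of what the cap means in practice (the 45% marginal tax rate below is an assumption made purely for the example, not a figure from this post), consider the extra tax payable once the deduction is capped:

# Back-of-the-envelope sketch of the effect of the $2,000 cap (Python).
# The 45% marginal tax rate is an assumed example, not a figure from this post.
expenses = 56_000        # self-education expenses claimed (figure quoted above)
cap = 2_000              # proposed deduction cap
marginal_rate = 0.45     # assumed marginal tax rate, for illustration only

deduction_lost = max(expenses - cap, 0)
extra_tax = deduction_lost * marginal_rate
print(f"Deduction lost: ${deduction_lost:,.0f}")      # $54,000
print(f"Extra tax under the cap: ${extra_tax:,.0f}")  # $24,300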

The end result, and the unintended consequence, of this will be a lower uptake of education. Then again, less educated people are easier to control…


For more, see:

http://www.smartcompany.com.au/tax/055096-business-protest-capping-self-education-expenses-at-2000.html

Tuesday, 14 May 2013

On Trust and Risk

Security matters not so that we can eliminate all risk, but so that we can have trust. Even if we could eliminate nearly all risk (we can never remove risk entirely), we would have to ask whether it was worth doing so.

Risk IS quantifiable.

This is a statement, like many others, that is true. It is not always true in the ways we assume, but it is true nonetheless.

We can always measure risk. It makes no difference what field you are referring to; risk is a quantifiable metric.

The problem is not whether we can measure risk, but how, and with what results. These results come down to:

  • reliability,
  • precision, and
  • accuracy.

These are not the same, but each has a bearing on how well we report on risk. The first of these, reliability, comes down to whether we get the same results when we repeat an experiment. It refers to the ability to keep precision, accuracy, or both within predictable bounds.

Precision is how consistently we hit the same mark each time we make a risk measurement; in effect, it comes down to the level of variance in our results. We can be imprecise even with the mean value right on the bulls-eye if the individual results have a large spread: the measurements are centred on the expected mean on average, but they vary widely.

[Figure: precision and accuracy illustrated as shots on a target]

Accuracy is how close we come, on average, to the true value we are trying to measure. We can say it is a measure of how close we are to the bulls-eye.

To have a good measure of risk, we need to aim for both precision and accuracy. It is also important that the measurement is one we can reliably repeat, and that others can examine and reproduce.
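A minimal simulation makes the distinction concrete. The figures below are invented purely for illustration: one estimator of an assumed "true" annual loss is accurate but imprecise (centred on the truth with a wide spread), the other is precise but inaccurate (a tight spread centred off the truth).

# Illustrative sketch (Python): accuracy versus precision in risk estimates.
# All figures here are invented for illustration only.
import random

random.seed(1)
true_risk = 100_000  # assumed "true" annual loss expectancy, in dollars

# Estimator A: accurate but imprecise (unbiased, high variance)
a = [random.gauss(true_risk, 40_000) for _ in range(1_000)]

# Estimator B: precise but inaccurate (biased, low variance)
b = [random.gauss(130_000, 5_000) for _ in range(1_000)]

def summarise(name, xs):
    mean = sum(xs) / len(xs)
    std = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    print(f"{name}: mean={mean:,.0f}  bias={mean - true_risk:,.0f}  spread={std:,.0f}")

summarise("A (accurate, imprecise)", a)
summarise("B (precise, inaccurate)", b)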

Qualitative measures of risk

There are always people who will tell you that risk cannot be measured. What they are really saying, in effect, is that risk cannot be measured using a scientific process and is instead an art.

There are reasons that people hold these views. Some have the idea that metrics are not possible and that only skilled people, exercising judgement, can assess risk. The flaw in this argument is that expert judgement is itself a form of metric, and one that can be measured and tested. When we look at how risk assessments turn out over time, we see that the art-based approach does not work well.

In science, we make predictions and the ultimate test of these predictions is the result that the real world delivers over time.

Risk can be measured, and in doing so we hold those making predictions to account. We can start to measure the actual predictions made. Is a system secure? Time does tell, and by checking the “predictions” of risk and security people against what actually happens, we can make measurements.

In making models, we also see how well we model a system, and the feedback from inaccuracy and imprecision allows us to improve over time.
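One simple way to hold such predictions to account is a calibration score. The sketch below applies the Brier score to a handful of made-up forecasts (the probability that a system is breached within a year) against equally made-up outcomes; both are invented purely for illustration.

# Sketch (Python): scoring risk "predictions" against real-world outcomes.
# The forecasts and outcomes below are invented for illustration only.
forecasts = [0.10, 0.80, 0.30, 0.05, 0.60]  # predicted probability of a breach
outcomes  = [0,    1,    0,    0,    1]     # 1 = breach occurred, 0 = it did not

# Brier score: mean squared error of probabilistic forecasts (lower is better).
brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")  # 0.061

# A forecaster who always answers 0.5 scores 0.25; consistently beating that
# over time is one measurable sign that the risk assessments carry information.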

Next time somebody tells you that risk cannot be measured, remember that it can be. What they are really telling you is that they do not want their ability tested, in case they come up short.

Sunday, 12 May 2013

Are the poor exploited?

In 2012, US trade with Sub-Saharan Africa (SSA) came to a total of $48 billion [1] as a combination of both imports from and exports to those nations. This was mostly in the form of machinery and other capital equipment that could (if increased) help the African people develop. Trade with Africa accounts for a little less than 1.4% of overall US trade with the world.

We see this in the figure below.

[Figure: US trade with Sub-Saharan Africa as a share of total US trade]

Notice that, for all of the resources in Africa, these amounts are insignificant; if all trade with SSA stopped overnight (including South Africa), the US would hardly notice it.

Overall, in 2012, the US GDP was $14.99 trillion. Of this, only a small amount comes through trade with “poor” countries. That is the issue: a lack of trade, not exploitation.
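As a quick consistency check using only the numbers quoted in this post, the $48 billion figure and the “a little less than 1.4%” share imply a total US trade figure in the trillions, against which the SSA component barely registers:

# Quick check (Python) using only the figures quoted in this post.
ssa_trade = 48e9        # 2012 US trade with Sub-Saharan Africa, from [1]
share = 0.014           # "a little less than 1.4%" of total US trade
us_gdp = 14.99e12       # US GDP figure as quoted above

implied_total_trade = ssa_trade / share
print(f"Implied total US trade: ${implied_total_trade / 1e12:.1f} trillion")  # $3.4 trillion
print(f"SSA trade relative to quoted US GDP: {ssa_trade / us_gdp:.2%}")       # 0.32%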

Overall, the GDP of Africa as a whole (including the oil nations and South Africa) is tiny when compared to that of the USA. We see this below.

[Figure: GDP of the African continent compared with the GDP of the USA]

The entire African continent produces less than the US. As we have seen, this does not come about through trade-based exploitation (there is not enough trade for that), but through a lack of markets.

Next time you hear that the poor are exploited, know that it is through their own leaders and failed political systems, not through trade. It is trade that could help them cease to be poor.

GDP is not the best measure of trade and growth, for a number of reasons I will not address here, but it is sufficient for this purpose.

[1] http://www.agoa.gov/build/groups/public/@agoa_main/documents/webcontent/agoa_main_003964.pdf