Thursday, 15 November 2007

An analysis of the Australian Computer Crime and Security Survey 2006.

I was going to wait until there was a new survey, but the Government has decided that it will not fund this report any longer and AUSCERT has cancelled these reports (I wonder why - what is a little bias between friends?). I will have to pick on an FBI survey now.

The Australian Computer Crime and Security Survey 2006 has been taken as authoritative in the IT industry. However, there are several flaws in its methodology that bias the results and damage the effectiveness of the report.

The survey has the aim of discovering “what or who are the most potentially dangerous internal and external threats to an Australian organisation?” so that the threats to the National Information Infrastructure (NII) can be assessed. The paper claims that it “presents a snapshot of Australian computer crime and security trends now and in the future.”

Contrary to the survey method, this question needs to be specified in terms of sector and industry. It is unlikely that the broad-brush statements which have been made will apply equally across organisations or reflect the true threats.

The survey seeks to furnish additional data of use in developing risk management strategies, in order to assess the correct level of investment in security. A good understanding of the likelihood of an attacker impacting an undertaking, and of the impact if one does, would be an economic boon. It would allow an undertaking's risk to be evaluated by comparing the experiences of other undertakings with comparable systems and characteristics. Such comparisons could enable a competitive analysis and support the principles of due care and diligence in defending the undertaking's assets.
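
As a rough illustration of the kind of calculation such data would support, the sketch below estimates an annualised loss expectancy (ALE) from an attack frequency and a per-incident impact. All figures in it are invented for illustration; none come from the survey.

```python
# Minimal sketch: annualised loss expectancy (ALE) from hypothetical figures.
# ALE = ARO (annualised rate of occurrence) x SLE (single loss expectancy).
# All numbers here are illustrative assumptions, not figures from the survey.

def annualised_loss_expectancy(aro: float, sle: float) -> float:
    """Expected yearly loss from one threat class."""
    return aro * sle

# Hypothetical: comparable organisations report 0.8 DoS incidents per year,
# with an average impact of $45,000 per incident.
ale = annualised_loss_expectancy(aro=0.8, sle=45_000)
print(f"Estimated annual loss from DoS: ${ale:,.0f}")

# In this simple model, a security investment is economically defensible
# only if its annual cost is below the reduction in ALE it delivers.
control_cost = 20_000
ale_with_control = annualised_loss_expectancy(aro=0.2, sle=45_000)
print(f"Net benefit of control: ${ale - ale_with_control - control_cost:,.0f}")
```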

The survey was sent to organisations using reply paid envelopes. These organisations were chosen by ACNielsen from its existing database of organisations for which it held data on the IT managers.

In selecting the 2,024 organisations to which the survey was sent, no attempt was made either to randomise the selection or to ensure that the sample was representative of the population as a whole. Rather, members of the Trusted Information Sharing Network (TISN) were targeted and given preference. TISN members were invited to complete the survey using an online survey web site. Members of the TISN are, however, not representative of the Australian information security population.

Of the 2,024 reply paid envelopes, only 238 responses (11.75%) were received. Of the over 1,000 TISN members (the exact number is not disclosed), 151 (fewer than 15%) online submissions were received.

The survey was anonymous, and no information regarding the organisation or the names of the parties was collected. Further, no mechanism was put in place to prevent TISN respondents from submitting multiple responses. Many TISN member organisations have multiple information security personnel, all of whom could have submitted the survey online and all of whom were invited to do so.
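
For what it is worth, anonymity and a one-response-per-organisation rule are not mutually exclusive. A minimal sketch of one possible mechanism, using single-use opaque tokens (purely hypothetical; the survey used nothing of the kind):

```python
# Hypothetical sketch: single-use survey tokens that preserve anonymity while
# preventing an organisation from submitting more than one response.
import secrets

def issue_tokens(n_orgs: int) -> set[str]:
    """One opaque token per invited organisation; no mapping back is kept."""
    return {secrets.token_urlsafe(16) for _ in range(n_orgs)}

valid_tokens = issue_tokens(2024)

def accept_response(token: str, responses: list[dict], payload: dict) -> bool:
    """Accept a response only if its token has not been used before."""
    if token in valid_tokens:
        valid_tokens.discard(token)   # burn the token: one response per org
        responses.append(payload)
        return True
    return False
```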

The target population and sampled population
The target population consisted of the 2,024 organisations in the existing ACNielsen database with a designated IT manager. Additionally, members of the Trusted Information Sharing Network (TISN) were targeted.

The stated goal of the research was a sample population representative of the overall population of organisational information technology users (including government, business and industry, and other private sector organisations).

As noted above, only 238 (11.75%) of the 2,024 posted surveys were returned, and 151 (fewer than 15%) online submissions were received from the more than 1,000 TISN members (the exact number is not disclosed).

The survey’s authors reported an overall 17% response rate. In this they did not take the number of TISN members into account: they took the 389 responses and divided them by the 2,024 surveys sent by reply paid envelope. However, when the approximate number of TISN members is included in the total, the number of organisations invited to participate exceeds 3,024. In reality, this is a response rate of 12.8% or lower.
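
The corrected figure is simple arithmetic, as the sketch below shows (the undisclosed TISN membership is taken at its stated lower bound of 1,000, so the corrected rate is itself an upper bound):

```python
# Recomputing the response rate with TISN invitees included.
# The TISN membership is undisclosed; "over 1,000" is used as a lower bound,
# so the corrected rate below is an upper bound on the true rate.
responses = 238 + 151    # posted returns + online submissions = 389
invited = 2024 + 1000    # reply paid envelopes + TISN members (at least)

print(f"Corrected response rate: {responses / invited:.2%}")
# -> 12.86%, and lower still if TISN membership exceeds 1,000
```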

A copy of the distribution of responses is attached below:

[Figure: distribution of responses]

It should be further noted that “notable changes in survey demographics, size and industry representation” have occurred on a year-to-year basis. Significant variations across the responses have occurred each year from the initial survey in 1996 to the last survey in 2006.

The survey conclusions
"Interestingly, having more security measures did not mean a reduction in attacks. In fact there was a significantly positive correlation between the number of security measures employed and the number of Denial of Service (DoS) attacks."

The survey found that around 20% of organisations have experienced electronic attacks in the last 12 months. This is stated to be far lower than the previous two surveys (with 35% and 49% in 2005 and 2004 respectively).

It was asserted that 83% of organisations that were attacked electronically suffered external attacks. Conversely, only 29% were alleged to have suffered internal attacks.

It was noted that there was an overall reduction in the level of attacks. The survey counted infections from viruses and worms as attacks; 45% of respondents who had been attacked stated that they were attacked in this manner.

It was also asserted that 19% of the computer crimes reported to Australian law enforcement agencies resulted in charges being laid. This statistic does not correspond to the number of “cyber crime” convictions in any justifiable manner. It is thus difficult to believe these findings.

Are the conclusions justified?
Nothing is noted in the survey to account for the confidence of the tests or the calculated Type I error. According to the responses, approximately 1 in 3 Australian organisations would be critical to national infrastructure; if this were the case, given the number of system failures that occur on a daily basis, Australia should be in a constant state of panic.
The justifications offered by the survey cannot be considered valid. The available public references, however, cite this report as authoritative.
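
For comparison, even taking the responses at face value, the sampling error the report never states is easy to compute. A sketch of a normal-approximation 95% confidence interval for one reported proportion, assuming (counterfactually) a simple random sample:

```python
# Illustrative: 95% confidence interval for a reported proportion, using the
# normal approximation, as if the 389 responses were a simple random sample
# (which, as argued above, they are not).
import math

def proportion_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation confidence interval for a sample proportion."""
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

lo, hi = proportion_ci(p=0.20, n=389)   # "20% experienced electronic attacks"
print(f"95% CI: {lo:.1%} to {hi:.1%}")  # roughly 16% to 24%
```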

Antivirus, antispyware, firewalls, and antispam are principally designed to defend against external threats. In considering internal threats it is necessary to take into account file access controls, role-based access, segregation of duties (SoD), logging and log file configuration, internal use of IDS/IPS, change management, change monitoring of critical systems, administration of user access (new users, role changes, and terminations), reconciliation of access, roles, and SoD, background checks[1], and other such controls.
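
As a toy illustration of one of these internal controls, a segregation of duties check can be expressed as a conflict set over roles. The roles and conflicts below are hypothetical examples, not drawn from COBIT or the survey:

```python
# Toy segregation-of-duties (SoD) check: flag users holding conflicting roles.
# The role names and the conflict set are hypothetical examples.
CONFLICTS = {
    frozenset({"create_vendor", "approve_payment"}),
    frozenset({"develop_code", "deploy_to_production"}),
}

def sod_violations(user_roles: dict[str, set[str]]) -> dict[str, list]:
    """Return, per user, any conflicting role pairs that the user holds."""
    return {
        user: [c for c in CONFLICTS if c <= roles]
        for user, roles in user_roles.items()
        if any(c <= roles for c in CONFLICTS)
    }

print(sod_violations({
    "alice": {"create_vendor", "approve_payment"},   # violation
    "bob": {"develop_code"},                         # fine
}))
```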

The survey takes no account of randomness. The selection method was not randomised and there was no validation to ensure that TISN members did not submit the survey multiple times. In fact, several TISN members have multiple security personnel, all of whom were invited to participate, but who would have been counted as a single organisation.

Had the survey been randomised, there would have been an equal chance of every member of the population being selected. However, where the population is predefined, as in this survey, and the sample is drawn non-randomly, the sample must be described as biased.
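
The effect of non-random selection is easy to demonstrate by simulation. In the hypothetical population below, the over-sampled group (critical infrastructure operators) is assumed to detect more attacks, and the convenience sample duly overstates the population attack rate:

```python
# Simulation: random sampling vs. a biased (convenience) sample.
# Hypothetical population: 10% of organisations are "critical infrastructure"
# and, by assumption, more likely to detect attacks.
import random

random.seed(1)
population = (
    [{"critical": True,  "attacked": random.random() < 0.5} for _ in range(1_000)] +
    [{"critical": False, "attacked": random.random() < 0.2} for _ in range(9_000)]
)

def attack_rate(sample):
    return sum(o["attacked"] for o in sample) / len(sample)

random_sample = random.sample(population, 400)
biased_sample = ([o for o in population if o["critical"]][:200] +
                 random.sample([o for o in population if not o["critical"]], 200))

print(f"Population rate:         {attack_rate(population):.1%}")     # ~23%
print(f"Random sample rate:      {attack_rate(random_sample):.1%}")  # ~23%
print(f"Convenience sample rate: {attack_rate(biased_sample):.1%}")  # inflated, ~35%
```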

There is no reason to conclude that the respondents are a representative sample of information security practitioners, or that the undertakings they are attached to are representative of the population of organisations within Australia.

Sources of bias within the survey
The overall response rate for this survey was reported as approximately 17% (389 of 2,024). However, the true rate was 12.8% or lower. Of the 2,024 reply paid envelopes, only 238 responses (11.75%) were received. Of the over 1,000 TISN members (the exact number is not disclosed), 151 (fewer than 15%) online submissions were received, and no steps were taken to ensure that multiple people from the same organisation did not respond.

Nothing was noted as to whether the 12.8% who did answer were different in any other way from the more than 87% who did not respond.

In this instance, there is certainly a clear bias towards TISN members. There is no indication that TISN members, who are likely to be running critical infrastructure networks, are in any way representative of the information security initiatives of the population of Australian organisations as a whole.

The failure to isolate, separately report, or otherwise take into account the unmonitored responses from TISN members makes them a potential confounding variable. Further, there is nothing to suggest that the people who respond to surveys are, in this case, representative of the larger population of all people involved with information security across all companies and other organisations, as was desired.

For instance, what if the individuals who were willing to answer the survey are also those who have better security in place, and thus better detective mechanisms and logging, than those who did not respond? In that case the respondents would confound the reported quality of security with the number of cyber crime attacks detected. The biased sample could mislead us into overestimating or underestimating the number, or impact, of cyber crime on the organisations within Australia. We could also be overestimating the level of information security controls in place within the desired population.

Next, the year-on-year comparisons of the surveys are based on changing demographics, as reported. These changing demographics do not reflect the changes in the population as a whole.
Different individuals and different methodologies have been used in each year's survey. This may introduce further confounding variables, as responses may change over time, with the nature of the people surveyed, or with the survey techniques.

Further, the surveys asked different questions and were conducted by different research groups in each of the years. The differences in the questions asked over the years are a possible confounding variable in themselves. Nothing has been done to take these confounding changes in methodology into account.

When the survey response rate is low, there is a risk that the sample will be biased. The authors of the survey needed to lessen this risk by ensuring that the sample incorporated an appropriate representation of organisation size, industry sector and location, and by weighting the data accordingly.
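
Post-stratification weighting of this kind is straightforward. A minimal sketch, assuming the population's sector shares are known (the sector names and proportions are invented):

```python
# Minimal post-stratification sketch: re-weight respondents so each sector's
# weight matches its (hypothetical) share of the target population.
population_share = {"government": 0.15, "finance": 0.10, "other": 0.75}
respondents = [
    {"sector": "government", "attacked": True},
    {"sector": "government", "attacked": False},
    {"sector": "finance",    "attacked": True},
    {"sector": "other",      "attacked": False},
]

sample_share = {s: sum(r["sector"] == s for r in respondents) / len(respondents)
                for s in population_share}
weights = [population_share[r["sector"]] / sample_share[r["sector"]]
           for r in respondents]

# Weighted estimate: over-sampled sectors count for less, under-sampled for more.
weighted_rate = (sum(w * r["attacked"] for w, r in zip(weights, respondents))
                 / sum(weights))
print(f"Weighted attack rate: {weighted_rate:.1%}")  # 17.5% vs. 50% unweighted
```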

Interviews and other social-sciences research methodologies can suffer from a systematic tendency for respondents to shape their answers to please the interviewer, or to express opinions that are closer to the norm of whatever group they see themselves belonging to. Thus, if it is well known that every organization ought to have a business continuity plan, some respondents may misrepresent the state of their business continuity planning to look better than it really is.

In addition, survey instruments may distort responses by phrasing questions in a biased way; for example, the question “Does your business have a completed business continuity plan?” may elicit more accurate responses than the question, “Does your business comply with industry standards for having a completed business continuity plan?” The latter question is not neutral and is likely to increase the proportion of “yes” answers.

The sequence of answers may bias responses; exposure to the first possible answers can inadvertently establish a baseline for the respondent. For example, a question about the magnitude of virus infections might ask “In the last 12 months, has your organization experienced total losses from virus infections of (a) $1M or greater; (b) less than $1M but greater than or equal to $100,000; (c) less than $100,000; (d) none at all?” To test for bias, the designer can create versions of the instrument in which the same information is obtained using the opposite sequence of answers: “In the last 12 months, has your organization experienced total losses from virus infections of (a) none at all; (b) less than $100,000; (c) less than $1M but greater than or equal to $100,000; (d) $1M or greater?”

The sequence of questions can bias responses; having provided a particular response to a question, the respondent will tend to make answers to subsequent questions on the same topic conform to the first answer in the series. To test for this kind of bias, the designer can create versions of the instrument with questions in different sequences.
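
One way to run such a test is to generate per-respondent instrument versions with both the question order and the answer order randomised, then compare the answer distributions across versions. An illustrative sketch (the questions and options are hypothetical):

```python
# Sketch: generate per-respondent instrument versions with the question order
# and the answer options randomised, so order effects can be measured by
# comparing response distributions across versions.
import random

QUESTIONS = {
    "virus_losses": ["$1M or greater",
                     "less than $1M but >= $100,000",
                     "less than $100,000",
                     "none at all"],
    "dos_attacks": ["weekly", "monthly", "yearly", "never"],
}

def instrument_version(seed: int) -> list[tuple[str, list[str]]]:
    rng = random.Random(seed)
    questions = list(QUESTIONS.items())
    rng.shuffle(questions)                    # randomise question order
    return [(q, rng.sample(opts, len(opts)))  # randomise answer order
            for q, opts in questions]

for q, opts in instrument_version(seed=42):
    print(q, opts)
```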

Another instrument validation technique inserts questions with no valid answers or with meaningless jargon to see if respondents are thinking critically about each question or merely providing any answer that pops into their heads. For example, one might insert the nonsensical question, “Does your company use steady-state quantum interference methodologies for intrusion detection?” into a questionnaire about security and invalidate the results of respondents who answer “Yes” to this and other diagnostic questions.
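
Scoring such diagnostic items is trivial to automate. The sketch below discards any respondent who affirms a trap question; the trap question and field names are hypothetical:

```python
# Sketch: invalidate respondents who affirm nonsense "trap" questions.
# The trap question and the field names are hypothetical.
TRAP_QUESTIONS = {"uses_quantum_interference_ids"}

def valid_responses(responses: list[dict]) -> list[dict]:
    """Keep only respondents who answered 'no' to every trap question."""
    return [r for r in responses
            if not any(r.get(q, False) for q in TRAP_QUESTIONS)]

survey = [
    {"id": 1, "uses_quantum_interference_ids": True,  "has_bcp": True},   # dropped
    {"id": 2, "uses_quantum_interference_ids": False, "has_bcp": False},  # kept
]
print(valid_responses(survey))
```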

Ultimately, as rare as it may be in the real world, independent substantiation of responses offers strong evidence as to the truth of a respondent’s answers.

[1] COBIT V4.0 (ISACA) provides a detailed list of internal information systems controls.
