"Just out of curiosity, how many people here thinks that PCI does
anything to protect you from the real world threat?"
Are you "REALLY" compliant. Or is the organisation doing just enough to fool the auditors.
From experience, I see 80-90% in the latter. Most auditors are not experienced enough to know when they are being BS'd.
As for the question, I have not seen a compromise of a truly compliant organisation. Every single organisation that has been compromised has actually also had a flaw that should have failed them.
In some instances, the testing was completed on an alternate system that was given to the auditor in place of the real one. In others, they pointed out the good systems and missed many.
I guess that people do not understand that in this case less than full disclosure is actually criminal fraud. In Australia, the Corporations Act provisions (similar to the SEC provisions in the US) make it a criminal offence to mislead the auditor.
To answer the question: yes, I think that getting to this level (truly this level) would help. The issue is: which companies are?
Take for instance a validated firewall. What does this mean?
A validated firewall is one that is tested. This is where you use hping or a similar packet-crafting tool to fire packets through ALL interfaces of the firewall and validate the firewall policy against what actually passes and what is dropped.
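As a sketch of what validating the policy means in practice (the rule format and packet tuples here are hypothetical; a real validation crafts live packets with hping through every interface and compares observed behaviour against this expected table):

```python
# Hypothetical firewall policy: first matching rule wins, default deny.
POLICY = [
    {"src": "10.0.0.", "dport": 443, "action": "allow"},   # internal -> HTTPS
    {"src": "10.0.0.", "dport": 25,  "action": "deny"},    # block direct SMTP
    {"src": "",        "dport": 80,  "action": "allow"},   # anyone -> web tier
]

def verdict(src_ip: str, dport: int) -> str:
    """Return the action of the first rule matching this packet."""
    for rule in POLICY:
        if src_ip.startswith(rule["src"]) and dport == rule["dport"]:
            return rule["action"]
    return "deny"  # default deny

# Crafted "packets" and the verdict the written policy says we should see.
tests = [
    ("10.0.0.5", 443, "allow"),
    ("10.0.0.5", 25, "deny"),
    ("203.0.113.9", 80, "allow"),
    ("203.0.113.9", 22, "deny"),
]

failures = [(ip, p) for ip, p, want in tests if verdict(ip, p) != want]
print("policy validated" if not failures else f"mismatches: {failures}")
```

The point is the comparison step: the written policy is the expected result, the fired packets are the observation, and any mismatch fails the validation.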
When you read these standards, think how a lawyer will read them, not an IT person. This is because it is a lawyer who is the judge and prosecutor.
Many QSAs, even after their wham-bam-thank-you-ma'am intro-to-audit-and-now-you-are-a-QSA process, are not ready to audit systems. Many do not know the systems they audit. "How do I copy a directory on Unix?" is a real quote from the principal at a MAJOR PCI specialty firm.
Passing an audit is NOT compliance!
Let me say that again...
Passing an audit is NOT compliance!
The fun of having just completed my LLM in commercial law is understanding these issues a little better. An audit is a risk report to management. It does nothing to stop the lawyers rolling over you if you are not compliant. The practice of trying to BS the auditors is itself BS. The auditors are NOT the enemy; they are the ones stopping the courts from rolling over you.
You are either compliant 100% to the standards you need to meet, or you (both the individual AND the organisation) are at risk.
I do not think that many on the list understand the issues. Take another compliance topic, SOX: you tell management and the auditors that a system is compliant. The auditors are a little clueless (as many are) and do not test the system (as the law requires them to).
Who is to blame when the system is compromised?
You are and management is - both.
In fact, your ignorance as to the system security is not a good defence. You have defrauded the auditors and the worst case is 20 years with a new hubby called bubba... (or the equivalent for the female of bubba).
To demonstrate this: I have a SOX client who states that they do not need logs because they have never been compromised.
I have another (who has for 4 years received a clean bill of health from a Big 4 firm) who has more services running on the finance database than come out of the box. They run the Archie filesystem because a consultant thought it would be cool. They have also not patched it against the last 113 remote root-level exploits.
In Australia, not securing payroll and finance information (eg tax file numbers of employees) is a criminal offence.
I will say it again.
Passing an audit is NOT compliance!
Choosing a firm who will pass you is dumb. It is false economy. It is buying a broken umbrella in case it rains.
Posted by Craig Wright at Saturday, May 24, 2008
Tuesday, 20 May 2008
I started a Post Grad Diploma in Law and followed this last year with an honors Masters degree.
My area of study was based around:
- Competition Law (Anti-Trust and merger law)
- E-Commerce Law (Internet law)
- Commercial Law (Banking Law)
- The Law of International Trade
- International Trade Finance Law
In this process I have received 2 awards for my legal papers associated with the path towards the final dissertation. One of these was published by the British Computer Society (Risk Journal) and others have been included with sources such as the IIA. I have been asked to present one of these papers at IT Security World 2008 in San Francisco. This is an international IT security conference that draws 20,000-30,000 attendees. I am being marketed as a "featured speaker".
My dissertation covered the convergence of International laws and looked at:
- Australian Law (and some NZ Law)
- Canadian Law
- US Law
- UK Law
- ECC Law
I was awarded my honors LLM (Masters in Law) degree with commendation.
I shall be starting a PhD in Economics/Law later this year looking at the risk effects and costs of regulation. This is a combination of the negative and positive effects.
Sunday, 18 May 2008
Quantitative risk requires statistics (a rant).
It is not hard to do Quantitative risk modelling in IT Security as long as you have the maths. The difficulties are missing values (requiring longitudinal data analysis and multivariate methods) and incomplete risk profiling (requiring Bayesian methods).
The risk is a survival function with compounding time factors (heteroscedasticity).
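A minimal sketch of risk as a survival function, with hazard rates that change over time to reflect the compounding time factors (all rates here are hypothetical):

```python
import math

def survival(hazards):
    """S(T) = exp(-sum of per-period hazards): probability of no compromise
    across periods whose hazard rates may change over time."""
    return math.exp(-sum(hazards))

# Hypothetical quarterly hazard rates that grow as the system ages unpatched.
quarterly_hazard = [0.05, 0.08, 0.12, 0.20]

p_survive_year = survival(quarterly_hazard)
print(f"P(no compromise over the year) = {p_survive_year:.3f}")
print(f"P(compromise)                  = {1 - p_survive_year:.3f}")
```

Note that the hazard is not constant: the later quarters contribute more to the cumulative hazard, which is exactly what a naive "1 in N per year" figure throws away.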
Pulling a number "out of your ass" is qualitative. If another person cannot re-calculate the same value, it is qualitative and NOT quantitative. Quantitative methods are not based in subjectivity.
Why is this not commonly done in IT risk assessment (Basel II DOES require a quantitative risk assessment)?
- Lack of math skills in IT people
- SAS and other quant people earn more (2.5-3x typical IT salaries)
Hence few quantitative risk reviews.
Some standards (BASEL II, GLBA) have requirements for quantitative risk. This is mainly banks, hedge funds etc. Few others can afford it.
The large ones do some. The smaller ones issue fake numbers more often than not.
As for being delusional if you trust in quant-based methods: delusion is trusting a qualitative assessment where people pull numbers. Such assessments measure perceived risk; they do not assess risk. There is a distinction.
Qualitative = Perceived risk
Quantitative = Risk (within confidence bounds)
Some say that ncircle IP360 is quantitative; it does nothing of the sort. ncircle IP360 is a fluffy qualitative assessment.
You need to feed in all the data you can and do a little dimensionality reduction, letting the numbers choose the factors and including the errors.
If you want to start learning how:
It has been stated in one form or another that:
"Bottom line: I personally do not believe that it is possible to do a quantitative risk assessment and anyone who thinks otherwise either does not understand today's risk environment, or is delusional."
No, the opposite. Qualitative risk is for those who like to think they know. The data is far too complex to be assessed by ANY person and requires computational methods. I have yet to see a qualitative assessment that comes close when compared to a REAL quantitative one. The issue is the many naive qualitative methods that are falsely called quantitative.
Look at ARO, ALE etc. These rely on a risk calculation: the likelihood of an event for the type of organisation. The ONLY way to do this is to use survival analysis with multivariate analysis taking compounding factors into account. The issue is that people pull a figure out of their proverbial, as was stated. ANY addition of non-quantitative data makes the ENTIRE calculation qualitative. ALE is ONLY a quant measure if the likelihood calculations are completed using hazard factors and survival calculations.
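A minimal sketch of the distinction (all figures hypothetical): ALE only stays quantitative when the annual likelihood comes out of a hazard/survival model rather than being asserted.

```python
import math

def p_event_within(hazard_rate: float, years: float = 1.0) -> float:
    """Probability of at least one event by time t, from the survival
    function S(t) = exp(-lambda * t) of a constant-hazard process."""
    return 1.0 - math.exp(-hazard_rate * years)

def ale(sle: float, hazard_rate: float) -> float:
    """Annualised Loss Expectancy = Single Loss Expectancy x ARO, where the
    ARO is the expected events per year (lambda) from the hazard model."""
    return sle * hazard_rate

# Hypothetical: a compromise costs $25,000; the fitted hazard is 0.2/year.
print(f"ALE                    = ${ale(25_000, 0.2):,.2f}")
print(f"P(event within a year) = {p_event_within(0.2):.3f}")
```

Swap the fitted hazard for a guessed one and the arithmetic still runs, but the result is qualitative: nobody else can re-derive it.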
The difficulty is the cost. I have seen PCA, PLS, SIR and k-dimensional factorisation for 80+ dimensions take a few weeks of computer time, and this costs $. Look at the rates of C++ programmers with quant skills. The question is why use these skills for security risk when market risk pays $600-$800 an hour. Even for the security risk calcs, few want to pay. My charge rate for this is $370 ex tax. At 80-plus hours of work per system, the cost of the process is often greater than the asset value and risk for smaller firms.
However, once done, the model generally only needs to be updated yearly, with the principal 5-6 components accounting for over 98% of risk by asset. This leaves an error of 1-2%, which is not material for most organisations.
In theory and practice.
How do you model historical data? The answer is multivariate means: longitudinal data analysis.
The whole basis of what most people consider to be probability theory starts from a flawed foundation and is being built without substance. The chance-based theory being implemented bases its determinations on methods from the 17th century (literally). Although we still teach this in high school, it is not the basis of a modern curriculum in statistics.
The how is the same how used in heteroscedastic financial modelling, biostatistics and similar disciplines.
You are starting with a qualitative assumption. You have decided in your own mind on the risk factors. As I specified, you cannot do this. This is another flaw in understanding quantitative methods. You need to use a dimensional reduction technique and allow the data itself to determine the correlative factors.
"For example: Historically, the chances of a Windows box on a secure network getting rooted were less than 1 in 100,000."
I am sorry, how do people make these types of figures up? I see no basis in reality. These are perceptions from analogy. The assumptions also incorrectly collapse this into a single dimension. Wrong. "Bad Zutt, Naughty Zutt".
You need all the data. For a start:
- Type of industry
- Traffic volume and patterns
- Router and firewall rulesets
- (and it is easy to feed these into a correlation engine)
- ... (no skimping)
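As a sketch of what letting the data choose the factors looks like, here is PCA via SVD on a synthetic factor matrix (the data, dimensions and seed are made up; a real engagement feeds in the kinds of factors listed above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "risk factor" matrix: 200 observations of 10 factors, where
# only 2 latent drivers actually matter and the rest is noise.
latent = rng.normal(size=(200, 2))
loadings = rng.normal(size=(2, 10))
X = latent @ loadings + 0.1 * rng.normal(size=(200, 10))

# PCA via SVD on the centred data: the data, not the analyst, picks the factors.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()

print("variance explained by top 2 components:", explained[:2].sum())
```

Because only two latent drivers generated the data, the top two components recover nearly all the variance; the analyst never had to decide in advance which of the ten observed factors mattered.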
I did a paper on the use of ARIMA (autoregressive integrated moving average) methods for the prediction of malware a couple of years ago. Although people such as yourself scoff at this type of modelling, my predicted model is still accurate after 2 years (at a 95% confidence level).
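ARIMA proper needs a statistics package, but the autoregressive core that drives such a prediction can be sketched in plain Python (the series here is synthetic with a fixed seed, not the malware data from the paper):

```python
import random

random.seed(42)

# Synthetic AR(1) series: x[t] = phi * x[t-1] + noise, with phi = 0.8.
true_phi = 0.8
x = [0.0]
for _ in range(2000):
    x.append(true_phi * x[-1] + random.gauss(0.0, 1.0))

def fit_ar1(series):
    """Least-squares estimate of phi: sum(x[t]*x[t-1]) / sum(x[t-1]**2)."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

phi_hat = fit_ar1(x)
print(f"estimated phi = {phi_hat:.3f} (true value 0.8)")
```

The fitted coefficient is then what drives the forward prediction; a full ARIMA model adds differencing and moving-average terms on top of this.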
"Except perhaps for risks associated with Mother Nature. And with climate change"
Please, are people kidding? IT risk is simple compared to weather modelling. The dimensionality in IT risk is at most on the order of 60-100 factors. Weather modelling comes in the tens of thousands.
The problem with this type of attitude is that people see this as hocus pocus simply because they do not understand it. Yet the maths is in many cases the same as that which keeps a GPS from accumulating the drift that relativity theory dictates it must due to velocity differentials relative to Earth. It is the same maths that allows our phones to work.
You cannot make the dimensional reduction to "a Windows host has a 1 in X chance of being compromised".
You need to model EACH host:
- Workstations in network A,
- Servers on DMZ with config A,
- Servers on DMZ with config A that are patched a week later,
- Workstations on a hub
- Workstations on a switch
- Workstations on the same network as a win 95 box
Most who have the maths end up doing BI (Business Intelligence) modelling for banks and telcos, predicting client churn. Same maths, but IT people with maths are rare. I am not talking about a B.Sc.; I mean a good postgrad research maths degree.
In Australia, we produce less than 250 of these per year. Of those, about 5% go into IT in any form, and most of that goes to bioinformatics.
So is there a great volume of quant snake oil? Answer yes and you are correct. The issue is that few can do the maths to see if it works.
How do you tell what is real? Look at track records. Those who are willing to publish their models, who have a track record over the years, and whose work can be validated are more likely to keep delivering. Those who refuse to publish their models and algorithms because they are "proprietary" are basically snake oil sales organisations.
As for future aspects, my models take EVERYTHING into account, and I let a dimensional reduction method retain those factors that have a statistically significant effect.
As an example, I am already factoring the impact of 3d printing technology on IP (intellectual property) protection.
"how do you base risk on historical data"
Again, people are thinking high school stats. I have pointed out a few methods. LDA and other methods are used for missing data projections. These have been around for 15 years or so now and have proven themselves. My data analytics team is learning these; most never do. Just because most people do not know grad-level statistics does not make it magic.
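A minimal sketch of the Bayesian point (the counts are hypothetical): sparse data does not stop the calculation, it just widens the posterior range.

```python
import math

def beta_mean_sd(alpha: float, beta: float):
    """Mean and standard deviation of a Beta(alpha, beta) posterior."""
    mean = alpha / (alpha + beta)
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return mean, math.sqrt(var)

# Uniform Beta(1, 1) prior, then observe 3 compromises in 40 host-months.
a, b = 1 + 3, 1 + 37
mean, sd = beta_mean_sd(a, b)
print(f"posterior compromise rate: {mean:.3f} +/- {sd:.3f}")
```

More data shrinks the standard deviation; missing or sparse data simply leaves a wider, honest interval instead of a fabricated point estimate.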
Multivariate data analysis using Bayesian techniques accounts for the gaps in data. What you get is a range and confidence interval. As an example, a calculation would provide something of the type (based on real data):
System Expected Risk at 95% CI
Windows host A (patching daily) $3,521 - $4,210
Windows host A (patching weekly) $5,422 - $6,585
Windows host A (patching monthly) $13,895 - $15,510
System Expected Risk at 99% CI
Windows host A (patching daily) $3,219 - $4,512
Windows host A (patching weekly) $5,002 - $6,905
Windows host A (patching monthly) $10,275 - $22,130
The trade off is that the more accurate the confidence level, the wider the range. What this then allows is a determination of the benefits.
For instance, if the Windows host A daily patching cost at a 95% CI is set at $35 (+/- $2.50) per day, we have a yearly cost range for daily patching of ($11,862.50, $13,687.50). So we are 95% confident that patching the system on a daily basis will cost us between $11,862.50 and $13,687.50.
The calculated costs of patching weekly are ($4,225, $5,362.50)
The calculated costs of patching monthly are ($1,482.20, $1,596.21)
So looking at the expected benefits:
System Total cost (expected risk + patching) at 95% CI
Windows host A (patching daily) $16,640.50 (+/- $1,257.00)
Windows host A (patching weekly) $10,797.25 (+/- $1,150.25)
Windows host A (patching monthly) $16,241.71 (+/- $864.50)
So we see that for this organisation the additional effort of patching daily costs more than it saves, and that dropping back to monthly adds cost as well. The best (lowest cost) strategy is to patch weekly.
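The arithmetic above can be checked directly (numbers taken from the tables; each range is carried as a midpoint plus a half-width):

```python
def interval(lo, hi):
    """Represent a range as (midpoint, half-width)."""
    return ((lo + hi) / 2, (hi - lo) / 2)

def add(a, b):
    """Add two (midpoint, half-width) intervals."""
    return (a[0] + b[0], a[1] + b[1])

# Expected risk at 95% CI and yearly patching cost, from the tables above.
risk = {
    "daily":   interval(3_521.00, 4_210.00),
    "weekly":  interval(5_422.00, 6_585.00),
    "monthly": interval(13_895.00, 15_510.00),
}
patch_cost = {
    "daily":   interval(11_862.50, 13_687.50),  # $35 +/- $2.50 per day x 365
    "weekly":  interval(4_225.00, 5_362.50),
    "monthly": interval(1_482.20, 1_596.21),
}

total = {k: add(risk[k], patch_cost[k]) for k in risk}
for k, (mid, hw) in total.items():
    print(f"{k:8s} ${mid:,.2f} (+/- ${hw:,.2f})")

best = min(total, key=lambda k: total[k][0])
print("lowest-cost strategy:", best)  # weekly minimises total expected cost
```

Summing each risk interval with its patching-cost interval reproduces the totals in the table ($16,640.50, $10,797.25 and $16,241.71), and the minimum midpoint confirms weekly as the lowest-cost strategy.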
The results were statistically significant at the alpha = 5% level for the determination that the effort to patch daily would cost more than it saved. Equally, the cost "savings" of patching the system on a monthly basis added additional risk.
If the client had wanted to pay more, we could have modelled this to the inflection point and determined the exact benefits, but the fuller model was not significantly better than the simple model in any event and did not justify the additional cost.
(So Matt and others at iDefense, McAfee, the CERTs etc, this is what I do with that zero day data.)