Saturday, 7 June 2008

"PCI compliance only covers in-scope systems which handle, store, or process CC# data."

True in a way, but not really. Remember that in-scope systems have to be isolated under PCI. The problem is that people read the individual points in PCI in isolation from the requirement statement they sit under. This is a bad approach. The points are designed to support the requirement, not the other way around.

"This isn't a failure of being compliant but a failure of the compliance scope itself."

Again, it is a failure in testing and understanding. In an attempt to meet the "points" in PCI, people miss the aim of the requirements. This includes most of the parties doing the testing.

This is a serious problem. Take the example where the failure occurred through a breach of client systems, and remember the first requirement of PCI:

"Firewalls are computer devices that control computer traffic allowed into and out of a company’s network, as well as traffic into more sensitive areas within a company’s internal network. A firewall examines all network traffic and blocks those transmissions that do not meet the specified security criteria.

All systems must be protected from unauthorized access from the Internet, whether entering the system as e-commerce, employees’ Internet-based access through desktop browsers, or employees’ e-mail access. Often, seemingly insignificant paths to and from the Internet can provide unprotected pathways into key systems. Firewalls are a key protection mechanism for any computer network."

First, compliance is a legal term of art that has been bastardised into normal parlance. Think like a lawyer when reading compliance statements, as it is a lawyer who will run the case when a breach occurs.

The statement needs to be the first and foremost goal. To be compliant, you need to ensure that "All systems must be protected from unauthorized access from the Internet, whether entering the system as e-commerce, employees’ Internet-based access through desktop browsers, or employees’ e-mail access."

Clearly, when a breach occurs because an employee opened a link to a malware site, the system is NOT compliant. To have been compliant, the system would have needed controls: the PCI systems would have needed to be on a separate network from the users, or controls to stop this being possible would have needed to be in place.

Where people go wrong is in assuming that because they tick off the points one by one (in this case 1.1 to 1.5), this in itself makes them compliant.

This may have you pass an audit, but passing an audit is not compliance. The initial requirement statement is far more critical than the sub-points, and I think this is where people go wrong. They believe that ticking the boxes is compliance and that the main goal does not need to be met.

I can configure systems that will meet all of the sub-points in the PCI requirements. I can also do this in a manner that meets not one of the Requirement statements.

Too often people seek the easiest way to tick the boxes, but this rarely offers any hope of being compliant.

Truthfully, PCI-DSS is the LEAST of the requirements that people need to be concerned about. In Australia, Canada, the UK, the US and several other countries there is legislation about the integrity and confidentiality of financial data.

As an example, in Australia, the Privacy Commissioner has issued Tax File Number Guidelines under s.17 of the Privacy Act. The Guidelines are legally binding. A breach amounts to an interference with the privacy of an individual, who may complain to the Privacy Commissioner and, where appropriate, seek compensation. There are also possible criminal sanctions for company directors and these are strict liability.

In particular, taken from the attached Tax File Guidelines:

"6. Storage, security and disposal of tax file number information
6.1 Tax file number recipients shall ensure:
(a) that tax file number information is protected, by such security safeguards as it is reasonable in the circumstances to take, to prevent loss, unauthorised access, use, modification or disclosure, and other misuse; and

Commissioner’s note: Tax file number recipients need to be aware that tax file number information handling procedures and safeguards should anticipate all reasonably foreseeable risks to security. Some examples of tax file number security are physical and logical barriers such as building security, locked filing cabinets, user identity checks and password controls for computer systems."


This is "all reasonably foreseeable risks to security". This covers most of the means stated to make a breach. These days it is even arguable that some defence against zero day attacks is necessary. These are after all becoming more and more common- making them "reasonably foreseeable".

PCI carries only fines. Failing to protect data is a criminal offence in ALL of the above (and many other) countries.

As I have stated, the best thing companies have going for them is a lack of understanding of IT by prosecuting attorneys at the moment, but this is changing.

Friday, 6 June 2008

"But what if they (economics) were ignored?"

"But what if they (economics) were ignored?"

Economics cannot be ignored. We live in a world with limits. To speculate on a world without economic constraints is to speculate on one where there is no shortage of anything, where all people can have anything they want at any time.

This is fantasy. You may as well ask "what if dragons were real?" Who cares? I have too many real-world things to consider to bother with fantasy.

Finance is simply microeconomics. We are bound by limits. The Universe has limits on the speed we can travel. We have only so much energy per person on the earth. We have only so many materials. When we get into space and start mining Jupiter, this will increase, but supply and demand will bring costs back in line. Everything has a limit.

Time is also a limiting factor; we all have set limits to life. Mind you, the genetic possibilities may be larger, but that is sci-fi as yet. "Time is money" is stated for a reason.

A factor of finance is time. This is where the concept of the time value of money comes into play. When assessing risk and possible costs, at least an NPV (net present value) and IRR (internal rate of return) calculation needs to be factored in.
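
As a minimal sketch of both calculations (all figures are hypothetical: a control costing $100k that avoids an expected $35k of loss a year):

    # NPV and IRR for a hypothetical security spend decision.
    def npv(rate, cashflows):
        """Net present value of yearly cashflows, starting at year 0."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
        """Internal rate of return, found by bisection on the NPV function."""
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if npv(mid, cashflows) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Year 0 outlay, then four years of expected loss avoided.
    cashflows = [-100_000, 35_000, 35_000, 35_000, 35_000]
    print(f"NPV at 10%: {npv(0.10, cashflows):,.0f}")  # ~10,900: positive
    print(f"IRR: {irr(cashflows):.1%}")                # ~15%: vs hurdle rate

If the IRR clears the organisation's hurdle rate, the spend is justified on these assumptions; if not, the money belongs elsewhere.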

Pen testing is limited economically. Companies can go for frequent low-cost testing that rarely finds anything at one extreme, or infrequent tests by highly skilled individuals at the other. Rates thus range from $100 per hour people to $370-600 for the top people. The latter find more, but test less frequently, and the number of people who can do this is limited.

Limits pose constraints.

Risk should be a quantitative function - in many organisations (such as those under Basel II) it is required to be quantitatively defined, even for IT. This means calculus. Either risk is optimised at a stationary point that is a maximum or minimum, or the function is complicated by saddle points. Merely assigning numbers does not make an assessment quantitative - that is a perception exercise. Risk needs to be scientifically calculated within defined confidence levels.
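
As a toy illustration (the loss model is a loud assumption, not a claim about any real system): suppose expected annual loss decays exponentially with control spend x, so total cost is f(x) = x + L0*exp(-k*x). Setting the derivative to zero gives the optimum:

    import math

    L0 = 500_000.0   # hypothetical expected annual loss with no controls
    k = 1 / 80_000   # hypothetical effectiveness per dollar of controls

    def total_cost(x):
        """Control spend plus the expected loss remaining at that spend."""
        return x + L0 * math.exp(-k * x)

    # f'(x) = 1 - L0*k*exp(-k*x) = 0 at the interior minimum.
    x_star = math.log(L0 * k) / k
    print(f"Optimal control spend: {x_star:,.0f}")               # ~146,600
    print(f"Total cost at optimum: {total_cost(x_star):,.0f}")   # ~226,600

Spending less than x_star leaves too much expected loss on the table; spending more costs more than the loss it removes.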

If we take a pen test team with 10 members, all working on a large site with 500 hosts, and give them 5 days per host (a large budget here), the test time is the entire working year. Either a sample is taken or the test takes a year.

This means systems are tested at best yearly.

A full test generally takes more than 5 days for a system, so I am being conservative. I do not want to get into "but I broke x in 10 minutes" arguments.
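
The arithmetic, for what it is worth (the 250-day working year and 8-hour day are my assumptions; the host counts and rates are from above):

    hosts = 500
    days_per_host = 5
    testers = 10
    working_days = 250                    # assumed working days per year

    tester_days = hosts * days_per_host   # 2,500 tester-days of work
    years = tester_days / (testers * working_days)
    print(f"{tester_days} tester-days = {years:.1f} working years")  # 1.0

    # Cost at the hourly rates quoted above, 8-hour days assumed.
    for rate in (100, 370, 600):
        print(f"At ${rate}/hr: ${tester_days * 8 * rate:,}")  # $2.0M to $12M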

Security and risk, like finance, have a time factor. How long it takes for a consultant to come in and test, and how frequently they do this, is important.

Testing is a detective control and a validation. Validation is only effective in two ways:
  • to find and rectify a control that has failed; and
  • to ensure that a control is in place.

Testing is not generally done scientifically. We seek to prove a negative. In pen testing this can be simple, as the standards are frequently low enough already: testing a broken model (a system with poor controls) will lead to discoveries. The problem is that it cannot state that a system is secure.

A pen test can at best find and exploit a flaw. At worst it can make speculations. In any event, it does not test the full complement of control failures.

The alternatives need to be in place. An infrequent detective control with no preventative controls is less than useless. A pen test is only effective in pointing out holes and control flaws. Even with zero days, there is defence in depth to be considered.

A good control framework will detect and stop even most zero-days, just as it stops many other threats. Good logging and monitoring at the host and network level, with competent people, is a more effective control than pen testing.

I will discover a breach faster with a combination of live integrity checks and database triggers than I will through pen testing.
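
A minimal sketch of the integrity-check side of this (the target path, scheduling, error handling and baseline protection are all left out; /etc is only an illustrative target):

    import hashlib, os

    def snapshot(root):
        """Map each file under root to the SHA-256 of its contents."""
        hashes = {}
        for dirpath, _dirs, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    hashes[path] = hashlib.sha256(f.read()).hexdigest()
        return hashes

    def diff(baseline, current):
        """Files added, removed or modified since the baseline was taken."""
        added = set(current) - set(baseline)
        removed = set(baseline) - set(current)
        modified = {p for p in set(baseline) & set(current)
                    if baseline[p] != current[p]}
        return added, removed, modified

    baseline = snapshot("/etc")   # taken while the system is known-good
    # ... later, run on a schedule or on a trigger ...
    added, removed, modified = diff(baseline, snapshot("/etc"))
    for p in sorted(modified):
        print(f"ALERT: {p} changed since baseline")

The point is the reporting interval: this runs as often as you like, where a pen test reports quarterly or yearly.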

Monitoring and baselining network traffic is more cost effective if done correctly than testing.

Yes testing and audit have their place. Their place is to ensure that other controls exist.

Thursday, 5 June 2008

Musings

Compliance and security are related.

I would split off "compliance" and "perception of compliance". Passing an audit is evidence that a system could be compliant. A compromise of a system using a known vulnerability is strong evidence that it is not compliant.

A system that is breached due to a complex password and secured key that was "guessed" is possible, though unlikely. This would be one of the few examples of a compliant system that is also breached. Basically, it will be rare to find a compliant system (to any jurisdiction) that is easily compromised.
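
To put a rough number on "unlikely" (the password policy and attacker budget here are purely illustrative assumptions):

    # A random 12-character password over 94 printable characters,
    # against an attacker with a billion guesses.
    keyspace = 94 ** 12
    guesses = 10 ** 9
    print(f"Keyspace: {keyspace:.2e}")              # ~4.76e+23
    print(f"P(guessed): {guesses / keyspace:.0e}")  # ~2e-15: sheer luck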

What I learnt completing my LLM was how few systems are compliant, how little knowledge there is of the law and the legal frameworks already in place (even among politicians), and how widespread the lack of due care is. An ABSOLUTE baseline for a compliant system, one with no other attribute than being owned by a company, would be the CISecurity.org baselines at 100%.

The combination of technical people with no knowledge of the legal system, laws and processes, and lawyers who cannot turn on a PC, is an issue here.

"I would agree with Adriel that finding a worthwhile auditor is difficult". Actually so would I. Finding an staff with half a brain provides enough difficulty to want to give up on the whole idea.

"The problems are analyzed from a primarily financial and business risk avoidance perspective"
Here I have to disagree. I work with financial auditors, and I am yet to meet one who understands risk; I have met very few who have the faintest comprehension of finance. Audit and finance are NOT the same thing. I did finance at a masters level, and I think audit is wacky for the most part. For the rest, there is an approach of trying to find nothing wrong lest it upset the client.

I have developed statistically based continuous audit programs for financial systems. These have a significantly lower cost and deliver more. What I get back is "Craig, we are watch dogs and not blood hounds. Please try not to find so much". So I use these with the Insolvency teams and on forensic audits, but it is a hard sell to audit teams. Clients seem to love it though.
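
I will not detail the tests those programs use here, but as one classic example of the kind of statistical check that fits a continuous audit programme: comparing the leading digits of transaction amounts against Benford's law (the data feed and escalation hook below are hypothetical placeholders).

    import math
    from collections import Counter

    def first_digit(amount):
        """Leading digit (1-9) of a non-zero amount."""
        return int(f"{abs(amount):.10e}"[0])   # scientific-notation trick

    def benford_chi2(amounts):
        """Chi-squared statistic of observed vs Benford-expected digits."""
        counts = Counter(first_digit(a) for a in amounts if a)
        n = sum(counts.values())
        chi2 = 0.0
        for d in range(1, 10):
            expected = n * math.log10(1 + 1 / d)
            chi2 += (counts.get(d, 0) - expected) ** 2 / expected
        return chi2

    # amounts = load_ledger()            # hypothetical transaction feed
    # if benford_chi2(amounts) > 20.09:  # 99% critical value, 8 d.f.
    #     flag_for_review()              # hypothetical escalation hook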

"I'm curious as to what vulnerable points you're thinking of." Pen Testing is by nature externally focused. Many of the biggest issues are in the system. Static analysis of code, business process reviews and system walkthroughs all add additional layers of testing.

Many controls are not tested by pen testing in any effective manner. Take a banking application. Pen tests look at the system from a software and protocol implementation aspect. They do not go into the business process controls. In this case I would be asking: how do I get the money off the system? This requires an understanding of the controls in the application, and that is not something a pen test will provide.

An attacker can do this by compromising the system and modelling the application functions. Or the attacker could be internal and know them already. This takes time; it can take months or longer. The same process can be done in a matter of weeks with a crystal-box and business process approach. The pen test provides valuable information as to a known vulnerability, but that is where it stops.

When there are no obvious points of access that may be exploited, a pen test does nothing to state that a system is secure; it has merely failed to determine the state of the system, one way or the other.

Prof. Cohen developed the concept of protection testing over a decade ago. This mitigates many of the problems with a pen test methodology (problems that were noted as far back as 1977 by Distraka). The issue is that the tester needs more knowledge than a pen tester.

Protection testing really requires a combination of technical and business process skills. Teams can do this, but it increases cost and requires co-ordination as well as knowledge.

"what if the economics aspect were ignored"
Not my world. We live in a world where EVERYTHING is subject to economic constraints and interrelationships.

Time, for instance, is an economic constraint. The only way to remove it is to have an instantaneous pen test. A detective control that reports faster than a pen test is more effective. The length of the pen test is one factor, but so is the frequency: one test a quarter means detection every three months at best.
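
Putting numbers on it: if breaches arrive uniformly between tests, the mean exposure window is half the testing interval (a simplifying assumption).

    for interval_days in (7, 30, 90, 365):
        print(f"Test every {interval_days:>3} days -> "
              f"mean exposure {interval_days / 2:.0f} days")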

Tuesday, 3 June 2008

Plagiarism Steals Ideas

This is me self-plagiarising from my chapter in the Official CHFI book. Oh wait, it is not plagiarising as I am giving credit (even if to myself).

Copyright Infringement: Plagiarism
Webster's New World Dictionary describes plagiarism as taking the ideas of another and passing them off as "one's own". This section details the tools and detection factors involved when investigating plagiarism. A common misconception is that plagiarism hurts nobody. The reality is that it is a fraud and thus a criminal offense (see § 1341, Frauds and swindles). Plagiarism takes away from the effort of the author, and society suffers as a consequence.

The various plagiarism detection factors
Konstantinos Tripolitis, in his 2002 dissertation "Automated Detection of Plagiarism" (University of Sheffield, UK - note the referencing), addresses the various plagiarism detection factors that commonly occur. He includes the following as possible detection factors:

  • Changes of vocabulary: When the vocabulary an author uses varies significantly in the text under consideration, then there is a great possibility that the author has committed plagiarism.
  • Incoherent text: Inconsistency in the style of a text, such that parts of the text seem to be written by different people, can imply plagiarism.
  • Punctuation: When two texts exhibit extremely similar punctuation, plagiarism can be implied, as it is not possible that two authors could use punctuation in the same way.
  • Dependence on certain words and phrases: If particular words or phrases used by a certain author in a custom way are used consistently by another author, then plagiarism is possible, as authors tend to have different word preferences.
  • Amount of similarity between texts: Texts written by different people and sharing a great amount of similar text should be checked more thoroughly for plagiarism

This is a fundamental idea for a detection tool and is the one mainly used in the present work. A reliable detection tool should be able to provide a fairly accurate similarity measure in order to be useful.

  • Long sequences of common text: Long sequences of consecutive common characters or words found in the texts under test exhibit a fair possibility that plagiarism may have been committed. This is another fundamental idea that is used in the implementation of HERMES. It is also known as the ‘sequence comparison’ approach.
  • Order of similarity between texts: If two texts have the same order of matching words or phrases then plagiarism is a possibility.
  • Frequency of words: Finally, words used in the same frequency in two texts written by different authors suggest potential plagiarism
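
A minimal sketch of the "sequence comparison" approach from the list above, using Python's standard difflib rather than the HERMES tool the dissertation describes:

    import difflib

    def similarity(text_a, text_b):
        """Ratio of matching character sequences between two texts (0-1)."""
        return difflib.SequenceMatcher(None, text_a, text_b).ratio()

    original = ("A firewall examines all network traffic and blocks those "
                "transmissions that do not meet the specified security "
                "criteria.")
    suspect = ("A firewall examines all the network traffic, blocking any "
               "transmissions that do not meet specified security criteria.")

    print(f"Similarity: {similarity(original, suspect):.0%}")  # high: check

A high score does not prove plagiarism by itself, but it flags text pairs worth checking more thoroughly.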
I admit, I am a tattletale. In 1999 I wrote my first book chapters and was right royally taken with "Internet republications" without authority. So now I have an issue with blatant copies.

Today I read a published article by an ISACA branch president that was significantly copied. It was taken largely from the Information Systems Control Journal, Volume 5, 2001, “Harnessing IT for Secure, Profitable Use” by Erik Guldentops, CISA. Over 25% of the document was copied directly from that article, and the rest was slightly paraphrased.

I can see occasional misses. I have forgotten this myself at times and left sentences out of quotes. The difference is between 99%+ of quotes correctly referenced and not a single reference.

In the words of an Australian, Mr Hinch:
Shame, shame, shame...

Monday, 2 June 2008

GIAC .NET (GNET) certification.

Today I have finished the last in the collection (until they add some more at SANS). I think that makes 27 SANS certifications...

Congratulations!!

You have earned the GIAC .NET (GNET) certification.

You can take pride in your efforts, and in the fact that you have joined the ranks of a select group of professionals, who have demonstrated expertise in the information security field.

Your certification will be posted to the GIAC website within the next few days.

http://www.giac.org/certified_professionals