Saturday, 13 August 2011

System Baselining

In the coming weeks, I will be providing a few simple methods to baseline your network at a high level. Nearly all external attacks, and many internal ones, are initially based on the exploitation of a network service. Knowing the systems and services running over the network will greatly aid in securing the organization.

Breaking this process into manageable sections is the key to successfully completing it. Each stage of the overall process of creating a secure and compliant network is then “projectized” into controllable chunks.

You may have guessed from my collection of their certifications that I like the SANS model. The SANS audit strategy is defined using the following steps:

1. Determine Areas of Responsibility
2. Research Vulnerabilities and Risks
3. Secure the Perimeter
4. Secure the DMZ and critical systems
5. Eliminate Externally Accessible Vulnerabilities
6. Eliminate Internally Accessible Vulnerabilities
7. Search for Malware

These stages allow the organization to move from the outside in. Starting at the perimeter, the organization can test and provide a deeper level of defense of its systems in the most effective manner, locking external attacks out and reducing noise as the testing proceeds.

Along with the many other areas we will touch on, in coming weeks we will look at a phased approach to securing your network and systems.

A Thank you


Hello,
First, I have to thank the people who have left comments in the last week. I am happy to have inspired some of you. I will continue to post and build a free repository of security solutions and knowledge.

I hope we can all inspire one another to create working solutions and stop the problems we all face.

My blog (and, in time, some of the other things I am doing) will be monetised. Now, for where the money goes: by clicking and purchasing on the site, you help Burnside and Hackers for Charity. All monies earned will be split 50/50 between these two charities.

Regards,
Dr. Craig Wright GSE GSM LLM
Director,  Australia – Asia Pacific
GICSR | Global  Institute for Cyber Security + Research
Exploration Park, Kennedy Space Center
100 Spaceport Way, Cape Canaveral, FL 32920

“Sumer did ok without capitalism.”

In my many random rants and battles with the world at large, I have received a retort from one Nancy Weston, who argued that “Sumer did ok without capitalism.”

She has argued further that:
“Wealth is made by draining and using those Third world countries to the benefit of Capitalist countries, if we didnt have China making our consumer goods do you really believe Australia could afford to produce it at the same cost whilst paying our award wages?”

On this topic, of course, I posted about the lack of trade with Africa. It remains that the issue with Africa (and for that matter any third world country) is a lack of trade. The US has an economy of $14.772 trillion (2011) based on GDP. That is $47,275 (2010) per capita, for every man, woman and child.

Trade to Africa accounts for just 0.607% of the US economy. Even trade with China only accounts for 2.27% of the US economy. All up, adding all of the impoverished nations that trade with the US, the total trade just accounts for 3.25% of the US economy.

That is, 96.75% of the US economy is derived from sources other than the third world. I have to ask: how can “Wealth be made by draining and using those Third world countries to the benefit of Capitalist countries”?

Basically, the lack of trade and the lack of capitalist markets is the issue. Capitalism is not draining resources from the poor; the poor are simply not engaged in trade. They are poor because they do not trade.

Sumer…
To the topic of this discourse.

The Sumerian civilisation was founded upon the flood plain of the lower reaches of the Tigris and Euphrates Rivers about 4,000 B.C.E. Some argue for earlier dates, up to 5,000 B.C.E., but these were basically small tribal societies and did not in themselves form what we would call Sumerian civilisation.

The economy of the Sumerian city-states was based on agriculture and trade. Basically, they started as a merchant economy, a form of proto-capitalism. Writing was “invented” as a means to account for the state’s taxation system and as a means of controlling debt and contractual obligations.

As the society grew, it became more and more specialised and started to industrialise, creating manufactured goods in large factories. The Sumerian city state was also a slave culture, and there was a heavy reliance on the capture and enslavement of the surrounding peoples and tribal groups.

These city states imported the goods needed for a bronze age society. Copper was imported from as far away as what is now the UK. Tin was found throughout Europe and Asia Minor. Timber was sourced from the coastal regions of the Mediterranean, including Lebanon (as it is now called) and Northern Africa. These imported goods were exchanged for dried fish, wool, wheat and finished metal goods.

The machine of the time, the engine, was the slave, but this was a distinctly different slave culture from the one most people think of when they picture the slave-holding South of the USA.

Basically, Sumerian society thrived (ignoring the slavery) because of trade and an economy derived from proto-capitalism.

At about 3,000 B.C.E., the invention of the wheel enhanced trade, speeding movement inland and making it profitable to trade with those further from the coastal regions.

At the start, slavery was a minor part of Sumerian society. As the city states fortified and the priest and warrior castes started to be subsumed by the rise of the kings, slaves became more important in the economy, right to the point where the Sumerians disappeared from history around 2,000 B.C.E. This was a result of the military domination by the various Semitic peoples that surrounded the region. Most importantly, Sargon had established an empire in Mesopotamia (around 2,300 B.C.E.) and this consumed Sumer. That said, Semitic peoples had been encroaching into Sumer long before Sargon's conquest.

Basically, as trade was controlled more and more, and as more capital was consumed by the state for a military kingship, the nation moved from prosperity to collapse.

Many supporters of communism have proposed some new progressive structure of society that is really the same agrarian society as the middle Sumerian city states. That system had a priesthood controlling society, and it parallels the structure called for by those wanting a Marxist form of welfare state. In this case, the Party replaces the priest, but for all intents and purposes it remains the same.
Over 90% of the Sumerian people were involved in agriculture. By any standard, they were poor and disenfranchised.

Basically, any early agrarian society was poor and undernourished. The people who made the city work were the merchant class, and these became the first middle class.

To learn far more, I highly recommend the following book:
The Invention of Enterprise: Entrepreneurship from Ancient Mesopotamia to Modern Times (Kauffman Foundation Series on Innovation and Entrepreneurship) by David S. Landes, Joel Mokyr and William J. Baumol (Jan 11, 2010)
 
Money in the early Sumerian city state was in the form of small silver disks. The record systems and accounting used in these city states have led to Sumer being referred to as “the birthplace of economics”.
Basically, we have a society that started through trade. As trade (and an early form of proto-capitalism) made wealth, a type of political parasite took off: the Kings.

The Kings made more war, they needed more slaves and they drained more from the society.
So, Sumer started with what was an early form of capitalism and collapsed as a top heavy government took more and more and gave back less and less.

I have to end by asking, did Sumer really do ok without capitalism?


Trade... it is actually the lack of it that is the problem

Right now, trade in goods between the USA and Africa is lower than that between the USA and Canada.

From the graph below, we see that there is a trade imbalance between the US and Africa.

What we have is a continent with a population of 1,022,234,000 people [1] having less trade with the USA than a country (Canada) of 34,547,000 people [2]. It only comes close to parity with a distant country (Germany) of 81,471,834 people [3], a country with a distinctly different language and culture to the US.

The table below shows the total 2010 trade figures for these three geographic regions, in millions of US$. Imports are INTO the USA and exports are FROM the USA.

Country         Total Exports $   Total Imports $
USA - Africa    28,346.9          85,007.9
USA - Canada    249,105.0         277,647.5
USA - Germany   48,160.7          82,429.1

So we have a total of $85 billion being exported from the African continent to the USA, against only $28 billion being imported from the USA: a net trade surplus for Africa of roughly $57 billion. This is a good thing; the problem is simply that there is TOO LITTLE trade happening, not that these people are being exploited by US companies.
Looking at the graph above, it is clear that there is far less trade between the USA and Africa than between the USA and Canada. But is the trade situation between the USA and Germany so different to that with Africa? For this, we need to look at the trade figures per person, as displayed below.
What we see is that there is nearly NO trade with Africa on a per capita basis to the USA. The problem is NOT that the African people are exploited, but that they have too little trade with first world countries. This is not the fault of the USA and its people, or any other "first world" country, but of those in power in Africa and the exploitation of their own people. It is NOT the West that is exploiting Africans; it is predominantly other Africans.

Clearly, the African people are not suffering from excessive trade with the USA, nor can they be said to be exploited by the USA (they are by their own governments, but that is a separate issue).


In fact, the nature of money is that it can be used to purchase goods and services. The more trade there is in Africa, the more food and essential goods that the people there will be able to buy.


The simple answer: more trade will help these people come out of poverty, and that means more companies making goods that others want.

References:
[1] "World Population Prospects: The 2010 Revision" United Nations (Department of Economic and Social Affairs, population division)
[2] Statistics Canada
[3] CIA World Factbook

Today's Rant...

I write articles such as the following that are published and taken up in newspapers…

The comments on these generally degrade into an anti-capitalist tirade, even as those making the comments forget that they are using the product of this very system, a system that has enriched us all.


MY response… (and yes I am a Christian but I also tolerate others and their creeds).
 
Business is made, as with all else, by the people in business. It is a benefit in itself. People are enriched through trade. These silly ideas of exploitation are just that: silly. The fact is, the absence of trade is the biggest issue, not exploitation. People are not exploited through trade; they are exploited by small local governments that have a hold over them and limit open trade.
 
Places such as Africa suffer from a lack of trade. Look at the figures. There is more trade between Canada and the US than with the entire African continent.
The simple act of trade enriches. 
 
Ecclesiastes 11:1-2 (Good News Translation)
 
1 Invest your money in foreign trade, and one of these days you will make a profit. 2 Put your investments in several places - many places even - because you never know what kind of bad luck you are going to have in this world.
 
Or there is Luke 17:7-10; Matt 25:14-30
Paris Hilton is the idle rich in all this, but she is not a capitalist; she is simply the spoilt child of one. The middle class, and even a person in a profession, is not rich as is implied.
 
Next, business and companies are NOT the masters. ANY person who has EVER built a successful company knows that their customer is king. Business is all about servanthood (2 Tim 2:24).
 
The SIMPLE answer is that ANY person in a capitalist country can start a business. They can raise capital with an idea and supplant the existing incumbent. This occurs again and again.
 
Finally, Paul was a leather worker and tent maker. A trade that seems lowly now, but it was really up at the level of an IT worker 2,000 years ago. It was one of the “higher” roles. He worked where he went. He traded and made businesses. These communities did not simply survive on love, Byron; they practised Ecclesiastes and the Parable of the Talents. They traded and they worked.
 
Now, the fact that I am a capitalist (the libertarian part seems to be ignored) blinds others to any ethics I may have. I see responses basically assuming I have no moral conscience (and I am not accusing you of this, Byron). Yet do these people even take the time to understand? A simple no is an answer.
 
I came from abject poverty, not privilege. I worked and I studied in a system that allows me to do this (and this also has meant full fees for my studies). Like many others, I give as I CHOOSE to give. I do this as I create something new in what I do and make something that did not exist. I enrich. I have not taken something and exploited, I create something new. This means, I do not care about the exact fairness of distribution and allocating the slices of the pie. 
 
I (as with other capitalists) make a BIGGER pie. Yes the percentage may not be fair – who cares really, the slice is bigger for all. Even the poor! 
 
Do we want to quibble over how 10 people divide 1 fish or do we want to have 20 fish for those ten people with at least a fish each? This is the question. In the first, lots go without, in the latter, maybe one person gets 8 fish and most just have one fish, but the answer is, the latter makes ALL better off. Who cares what the percentages are!
 
Business does NOT make a moral society. It is NOT the role of business to do this. Business is a mirror to society. It offers people what they want. IF people are virtuous, then people will want virtuous services. IF they are not, they will want other things.
 
Through business, I get to help make people more virtuous. I have donated over 300 computers and placed the heads of over 100 families through TAFE courses and given them skills. I have ONLY been able to do this through business. Not exploitation, creation.
 
http://www.burnside.org.au/content/Caring%20newsletter%20summer%2005.pdf
 
Virtue is NOT the function of business to teach. It is the values of the people that make a business. That said, businesses DO help make people more virtuous, they make people interact and trade and this leads to increased understanding and acceptance.
 
So, here I am, the “big bad capitalist” who has donated over $10 million (more than my total net worth at any point) over the years to Burnside, St Vincent’s Community Services and a number of other charities. What have others done?
 
The founders of PayPal have given far more than I have. This is their created wealth, yet we are sitting here arguing about their virtues as well.
 
Simply, they provided a service. THEY created something and ENRICHED society.
 
ANYONE here can make their own business and compete. ANYONE. This IS democratic.
 
If you do not like PayPal, pull your finger out and make something better if you see something that can offer more. IF and I mean IF there is a service that people care more for, that is virtuous and as such people want for its virtue, they will use your service.
 
So, it is simple. Create something better and stop bitching about what we have as an armchair critic.
OTHERS have created the wealth that is here and is benefiting us all, and we sit here whining about the distribution of their created wealth and NOT making more.
 
Tell me, who is really doing more?

Friday, 12 August 2011

Windows 7 GOD mode…

Windows 7 has a little-known Easter egg called “GOD mode”. This is a useful (for administrators, we hope) way to get access to tools in one place.

Start by creating a folder on the desktop...






Next, rename the folder as:



"GodMe.{ED7BA470-8E54-465E-825C-99712043E01C}"





Once you have renamed this folder, it automagically populates with an all-in-one selection of useful administrative Windows tools...
Then, your new administrative tools folder is ready to use.

The thing is to also make sure that users cannot create this, or if they do... remove it.

Yes, if users have access to this, they still do not have admin rights and need a password to run the tools (local admin is bad for most people). That said, it is still a good idea to limit what users are exposed to.
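
If you want to script this (or hunt for copies that users have created), the folder can be created and found from the command line as well. This is just a sketch; the Desktop path and the C:\Users search location are assumptions for a default install:

md "%USERPROFILE%\Desktop\GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}"
dir /s /b "C:\Users\*.{ED7BA470-8E54-465E-825C-99712043E01C}"

The second command recursively lists any folder under C:\Users whose name ends in the GodMode GUID, which makes a quick check for users who have created their own copy.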

Network Mapping and Organizing the Mapping Results

It is important to plan the scope of all audit and system testing engagements, and network mapping is no different. A failure to adequately plan will quickly lead to being overwhelmed. Plan a risk-based approach to mapping the network. Start at the perimeter and work in towards the centre, gradually gaining more and more depth as each of the systems is audited.

It is important to ask where the real value lies within the organization. This is not a job for the auditor alone and management should consider the value of the data and information assets. Work from the outside in. With each step go deeper into mapping the weaknesses associated with the organization’s information assets. This should align with the following steps:

  1.  Map the network devices and perimeter,
  2.  Scan the internal systems and Servers,
  3.  Test and map Databases and Applications,
  4.  Create Images and baselines. 
Creating Network Maps
There are many tools that can be used to scan systems and make a network map. The best known of these tools is nmap, which is available from http://nmap.org/. There are many excellent sources of information for the auditor or security professional wanting to discover more about this tool. Other than the section in the Firewall chapter of this book, the nmap documentation itself should be one of the first stops in this process.
Though nmap has been ported to Windows, it works best under Linux or UNIX. Too many of the options available within nmap are “broken” by the Microsoft network stack.

We covered using nmap for individual scans in an earlier chapter, “Testing the Firewall”. In this section we look at how to automate the response and make this tool useful for reporting.

The prime limitation of nmap is its reporting capability. Nmap does provide output in a “grep’able” format, but there are far more effective tools that can query the data. PBNJ (this package includes ScanPBNJ and OutputPBNJ) can import nmap scan results from nmap’s XML output format (“-oX”) and provides the capability to query this data. The program is written in Perl and provides a means to quickly identify changes to the systems and network.

ScanPBNJ can be used to scan the network, calling nmap directly. Alternatively, using nmap to scan and then importing the output into ScanPBNJ requires the use of the nmap XML output format (-oX); ScanPBNJ with the “-x” option can import the results of the nmap XML report.
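
As a quick sketch of that workflow (the subnet and file name are examples only, and the binary may be installed in lower case as scanpbnj on some systems):

nmap -vv -O -P0 -sS -p 1-1025 -oX lan-scan.xml 192.168.1.0/24
ScanPBNJ -x lan-scan.xml

The first command produces an XML report of the scanned subnet; the second loads that report into the PBNJ database so that later scans can be compared against it.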

PBNJ
PBNJ is a suite of tools that provides the capacity to monitor change across a network over time. It has the capacity to save nmap results into a database and check for changes on the target host(s). It saves the details concerning the services running on these hosts as well as the service state. PBNJ can then parse the data from an nmap scan and store the results in the database. PBNJ uses nmap as its scanning engine. It is available from http://pbnj.sourceforge.net/.

The benefits of PBNJ include:
  • The ability to configure automated internal and external scans,
  • A configurable and flexible querying language and alerting system,
  • The ability to parse Nmap XML output files,
  • The ability to access Nmap output using a database (SQLite, MySQL or Postgres),
  • The ability to use distributed scanning with separate consoles and scan engines, and
  • The fact that PBNJ runs on Linux, BSD and Windows (Linux or UNIX are recommended over Windows in this instance).
ScanPBNJ default scan options
By default, ScanPBNJ runs an nmap scan using the command options “nmap -vv -O -P0 -sS -p 1-1025”. This is extremely verbose output with operating system identification set, and the host is not pinged first. The options above run an nmap SYN scan over TCP ports 1 to 1025.

It is possible to override the default options in ScanPBNJ using the “-a” switch. For instance, to scan all TCP ports on the host 10.50.20.10, the following command could be used:
ScanPBNJ -a "-A -sS -P0 -p 1-65535" 10.50.20.10
 
The other options in the previous command select a SYN scan (-sS), disable pinging of the host (-P0) and enable version scanning and operating system detection (-A). Any of the standard nmap switches and scan types may be used.

OutputPBNJ
The ability to query the ScanPBNJ results is provided by OutputPBNJ. OutputPBNJ uses a query YAML config file to perform queries against the information collected by ScanPBNJ. OutputPBNJ displays the results of the scans using a variety of formats (such as csv, tab and html).

A number of predefined queries have been included with OutputPBNJ. These may be used to query the nmap results. The configuration file “query.yaml” contains default queries that have been defined on the system.

By default, only a small number of queries are defined. It is possible both to modify the existing default queries and to query the database directly. An ODBC connection to the database could also be used to load data from the database into another tool.
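
For example, a predefined query can be run and the result exported as CSV along the following lines. The query name and switches here are from memory and may differ between PBNJ versions, so check outputpbnj --help and the query.yaml file shipped with your install:

OutputPBNJ -q latestinfo -t csv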

Understanding the Map
Networks change over time. New hosts, servers and services are added and removed. Network maps are not just Visio diagrams. It is nice to have a detailed visual map of what is running on the network, but a representation that can be automatically tested and used as a baseline is better.

The map of the network is the basis of being able to see what is authorized and what is not. Even if the systems on the network are not all tested and verified to be at an acceptable level of security, the map gives a way to get there.

Think of it this way: you have a system that is baselined but has not yet been tested and verified. You already know two things:

  1. You have a starting point to check for unauthorized changes,
  2. You have a set of details about the system such as a list of services that are running on the system and which operating system it is using. 
From here it is easier to make a project to test systems over time. Grouping systems also helps. If you have a series of DNS servers, they should be configured in a similar manner. Start by checking the “snowflakes”: why are they different? Each time you recheck a system, it is added to the updated baseline. This way, the network becomes more and more secure over time.

NDIFF
Another way to see changes to the network is with a tool called ndiff.

Ndiff is a tool that utilizes nmap output to identify the differences, or changes, that have occurred in your environment. Ndiff can be downloaded from http://www.vinecorp.com/ndiff/. The application requires that Perl is installed in addition to nmap. The fundamental use of ndiff entails comparing a baseline file against an observed file. This is achieved by selecting the baseline file with the “-b” option and the file to be tested with the “-o” option. The “-fmt” option selects the reporting format.

Ndiff can query the system’s port states or even test for types of hosts and Operating Systems using the “-output-ports” or “-output-hosts” options.

The options offered in ndiff include:
ndiff [-b|-baseline <file>] [-o|-observed <file>]
      [-op|-output-ports <ports>] [-of|-output-hosts <hosts>]
      [-fmt|-format <format>]
 
Ndiff output may be redirected to a web page:
ndiff -b base-line.txt -o tested.txt -fmt machine | ndiff2html > differences.html
 
The output file, “differences.html”, may be displayed in a web browser. This will separate hosts into three main categories:
  • New Hosts,
  • Missing Hosts, and
  • Changed Hosts.
The baseline file (base-line.txt) should be created as soon as a preliminary network security exercise has locked down the systems and mapped what is in existence. This would be updated based on the change control process. In this, any authorized changes would be added to the “map”. Any unauthorized changes or control failures with the change process will stand out as exceptions.

If a new host has appeared on the network map that has not been included in the change process and authorization, it will stand out as an exception. This reduces the volume of testing that needs to be completed.

Further, if a host appears in the “Changed Hosts” section of the report, you know what services have been added. This is again going to come back to a control failure in the change process or an unauthorized change. This unauthorized change could be due to anything from an internal user installing software without thinking to an attacker placing a trojan on the system. It still needs to be investigated, but catching an incident before the damage gets out of hand is always the better option.
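
A minimal sketch of how this might be automated (say from cron once a week) follows. The subnet, file names and report path are assumptions, and ndiff expects nmap's machine-readable output as its input format, so check the ndiff documentation against your nmap version:

nmap -vv -O -P0 -sS -p 1-1025 -oG observed.txt 192.168.1.0/24
ndiff -b base-line.txt -o observed.txt -fmt machine | ndiff2html > /var/www/html/differences.html

Once a change has been reviewed and approved through change control, the observed file simply becomes the new baseline.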

Thursday, 11 August 2011

Prioritizing Vulnerability Fixes

The unfortunate thing is that the easiest targets are rarely aligned with risk. A risk-based approach dictates that the vulnerabilities that pose the highest risk to the organization are addressed first.

To go about this it is necessary to build a prioritized list of vulnerabilities. SANS created a “Top 20” list for just this reason. For most organizations, this is a great place to start.

SANS sponsors the consensus top twenty vulnerability list. The list is available free from the web at http://www.sans.org/top-cyber-security-risks/. Just securing the network against the 20 exploits in this list will provide your organization with a greater level of security than most organizations achieve. A list of ports that should be blocked is also available. Start with the organization’s perimeter security. Address the top vulnerabilities first. Next, move down to the next riskiest level of vulnerabilities. The exercise may never end, but security has never been a point-in-time exercise.

In thinking about what to include in a vulnerability mitigation list consider the following:

  • Historical exploits,
  • Current exploits, and
  • Trojan programs and other malware.
Next consider any compensating controls that may be in place and how much effort is required to fix the vulnerability. At times, a compensating control may be more effective than fixing the vulnerability itself. For instance, it can be extremely difficult to fix a legacy application. An alternative to rewriting legacy code could be the implementation of an application firewall.

More DNS

DNS is that unknown worker which goes unnoticed until there is a problem. DNS resolves host names to IP addresses (and, conversely, IP addresses to host names). Without DNS the Internet would stop. This is a big claim until you realize that people do not remember numbers. We can remember several thousand names, but we cannot easily remember even 50 IP addresses.

Even within organizations, DNS is key to the security of access, as individuals connect to named servers and (usually) not to IP addresses. To secure a DNS server, it is essential to consider the following points (a BIND configuration sketch follows the list):

1. Restrict zone transfers. DNS zone transfers are needed from the primary DNS to the secondary. Never allow anything else, not even secondary to secondary transfers.
2. Disable recursive checks and retrievals. There is no reason to allow recursive queries from every host on the Internet. At best it is a waste of resources; at worst, an attack path.
3. Log ALL zone transfer attempts. Any attempt to do a zone transfer should be treated as an incident. This is always going to be someone or some program looking for information about the configurations of systems. This should never be permitted.
4. Restrict queries. Not all queries are necessary. Information should be restricted on a need-to-know basis.
5. Restrict dynamic updates. Only authorized hosts should be allowed to change DNS entries.
6. Deploy split DNS. Split DNS involves logically and physically separating the external and internal address spaces.

o External IP addressing should include only the information that is necessary for services on the Internet to function correctly.
o Internal IP addressing should be restricted to your organization's own systems.
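
To make the first three points concrete, here is a minimal BIND 9 sketch. The addresses, ACL name and log file name are assumptions; adjust them to your own environment and test before deploying:

acl "internal" { 192.168.0.0/16; 127.0.0.1; };
options {
    allow-transfer { 192.0.2.53; };    // the secondary server only
    allow-recursion { "internal"; };   // no recursion for Internet hosts
    allow-query { any; };              // tighten per zone as required
};
logging {
    channel xfer_log { file "xfer.log"; severity info; };
    category xfer-out { xfer_log; };   // outbound zone transfers
    category security { xfer_log; };   // denied transfer and query attempts
};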

Recursive
A DNS server is recursive when it assumes the duty of resolving the answer to a DNS query. DNS servers are generally recursive by default. Exposed recursive servers can be used by attackers (e.g. for cache poisoning attacks). At best, they waste system resources doing lookups for unrelated entities.

BIND versions 8.x and above provide the capability to configure the server to be non-recursive, with selected exceptions for explicit IP addresses. This allows the servers to answer recursive queries for the organization's own hosts while blocking recursive queries from unauthorized hosts on the Internet.

To configure DNS correctly:

  • Recursive queries can be allowed for internal DNS
  • Recursive queries should be blocked for external hosts
Where there are exceptions (for roaming hosts for instance) these can be configured separately.

Zone Transfers 

Secondary DNS servers use the zone transfer function to update changes to the DNS zone databases. These changes are received from the primary (or SOA, Start of Authority) DNS servers.

Only allow zone transfers between the primary and secondary DNS servers. Secondary DNS servers should never be allowed to respond to a zone transfer request.

Do not block TCP 53 and think that you are ok. TCP is used for valid DNS queries. Blocking TCP port 53 breaks DNS; it does not fix zone transfers.

Split DNS
Split DNS involves the logical separation of the external and internal name resolution functions.
  • Information that is necessary for hosts on the Internet is maintained on the external DNS servers.
  • Information about the internal hosts and IP space is maintained and resolved using the internal DNS servers.
  • When a system is required to support reverse PTR lookups, generic information should be provided. 
PTR records do not matter; they just need to resolve to something. For reverse PTR lookups to work, a name is required… ANY name. It does NOT have to be the real internal name.
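
Where separate internal and external servers are not available, BIND 9 “views” are one common way to implement a split DNS on a single server. A minimal sketch, assuming an internal range of 192.168.0.0/16 and the zone example.com (the zone file paths are assumptions):

view "internal" {
    match-clients { 192.168.0.0/16; localhost; };
    recursion yes;                              // internal clients may recurse
    zone "example.com" {
        type master;
        file "zones/internal/example.com.db";   // full internal records
    };
};
view "external" {
    match-clients { any; };
    recursion no;                               // Internet hosts get answers only
    zone "example.com" {
        type master;
        file "zones/external/example.com.db";   // public records only
    };
};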



Split-Split DNS

A split-split DNS is the ideal DNS architecture. In figure 1, the split-split DNS architecture is displayed. This involves a back-to-back private address DMZ segment with two firewalls (it is possible to do this with a single firewall and three interfaces as well). Between them, the DMZ network and the internal private network have:

• Two DNS Advertiser hosts on the DMZ
• Two DNS Resolver hosts on the DMZ
• Two internal DNS servers on the internal network



Figure 1 Split-Split DNS 

There are at least two of each kind of server to provide for fault tolerance and load balancing. At least one of each type will be a primary and the other a secondary DNS server (Windows Active Directory DNS servers do not use this system). Zone transfers are only allowed to occur between the primary and secondary servers. That is:
o External DNS. Acts as an advertiser and resolver system
o Internal DNS. Acts as to resolve queries for internal client hosts
o Each zone needs its own Primary and Secondary DNS. Zone transfers should only be allowed from primary servers to secondary servers (and not the other way)

Split-split DNS has multiple DNS servers located in the DMZ. Separate DNS servers provide name and domain advertising and resolution. A pair of DNS servers are positioned within the internal network as well. These are all run as duplicates to provide fault tolerance and load balancing.

A total of at least six DNS servers (three primary and three secondary servers) are required for a split-split DNS configuration. The three classes of DNS servers are:

· DNS Resolvers. DNS resolvers provide only DNS caching. These systems are configured to be DNS forwarders and allow access only from the internal network hosts. DNS resolvers do not maintain a DNS zone database and are not authoritative for any domains. This setup allows split-split DNS to aid in stopping DNS hijacking attacks.
· DNS Advertisers. DNS advertisers maintain the organization's domains that are “advertised” over the Internet (the organization's authoritative zones). DNS advertisers do not allow recursive queries to be performed.
· Internal DNS Servers. Internal DNS servers resolve queries that originate from the internal network hosts. Internal DNS servers function identically to internal DNS servers in a “split DNS” setup.

Wednesday, 10 August 2011

Logging and retention

The following documents the logs that are created by a fairly normal company and the related time that these logs should be maintained.

Security and audit logs and the reporting on the systems that produce these logs must be centralised. It is necessary to have these records centralised so that systematic attacks across the servers can be pinpointed in a timely fashion. It also significantly lowers the risk of the logs being compromised.

The security logs must be backed up and kept for the duration of the audit period (e.g. 1 Year minimum). This can be extended with other requirements and should be deemed to be the absolute minimum requirement.

For instance, in Victoria, changes to the Crimes Act (1958) [Crimes (Document Destruction) Act 2006; Act No. 6/2006] have created “a new offence in relation to the destruction of a document or other thing that is, or is reasonably likely to be, required as evidence in a legal proceeding”. This offence, punishable on indictment by a term of up to five years’ imprisonment, affects anyone who destroys or authorises the destruction of any document that may be used in a legal proceeding (including potential future legal proceedings).

The distinction between direct and circumstantial evidence is that direct evidence categorically establishes the fact. Circumstantial evidence, on the other hand, is only suggestive of the fact. Authentication logs are generally accepted as direct evidence, short of proof that another party used the access account. The consequence of this is that system logs can be critical to a legal case and hence must be maintained.

A section noting the “minimum document retention guidelines” has been included below.

Logging Options

The logging requirements for many companies are defined in this document. In order to meet these needs, the following types of questions will have to be answerable:
  •  “Can you tell me when Joe Bloggs logged in and logged out?”
  •  “Can you see if someone has tried to open this file?”
  •  “Can you tell me who the last person to open this file was?”
  •  “We need to know if this person has been chatting (Office Communicator) with this person outside the company.”
  •  “We need to know what these people have been chatting about (Office Communicator) both inside and outside the company.”
  •  “We need to know when people add external contacts to their Communicator list.”
  •  “Who had this IP address on
  •  The concern is leaking of information, more people resigning and things of that nature. What can you give me?
Three options that can provide the required solution have been noted.

Systems and Logs to maintain

Ideally, it would be best to be able to save a copy of all system and security log files. The requirements of running a business and efficiently maintaining client systems can make this difficult at best. As such, the following is set as a guide as to the minimum logging that most organisations should maintain.

In addition to normal system logs, an operating log must be maintained that records any significant events and action taken by the system operator or administrator. These logs should be maintained on a centralised and isolated server. Proper recording would indicate whether operators were following instructions for halts in programs, change control, etc.

There are more reasons to maintain logs than for security alone. Some of the reasons to maintain system and network logging include:
  •  Optimizing system and network performance and recording the actions of users;
  •  Identifying security incidents, policy violations, fraudulent activities, and operational problems;
  •  Performing audits and forensic analyses;
  •  Supporting internal investigations;
  •  Establishing baselines; and
  •  Identifying operational trends and long-term problems.
Note: Windows Server 2008 supports syslog logging natively. Windows 2000 and Windows 2003 server require a third party logging client. A decision as to the state of logging on Windows 2003 and earlier systems (i.e. the need for an agent or an alternative) needs to be made.

General Logging

Where possible, logs for devices should be enabled and sent to a central system.
The following should be used as a guideline for storing logs. The table sets the times for local storage. The central logging system should maintain all logs for at least 12 months online and maintain an offline backup for longer periods.
How long to retain log data?
  Low impact: 1 to 2 months | Moderate impact: 3 to 6 months | High impact: 12 to 36 months
How often to rotate logs?
  Low impact: optional (if performed, at least every week or every 25 MB) | Moderate impact: every 6 to 24 hours, or every 2 to 5 MB | High impact: every 15 to 60 minutes, or every 0.5 to 1.0 MB
How frequently should log data be transferred to the log management infrastructure (if not automatic, e.g. syslog-ng)?
  Low impact: every 3 to 24 hours | Moderate impact: every 15 to 60 minutes | High impact: at least every 5 minutes
How often does log data need to be analysed (through automated or manual means)?
  Low impact: every 7 days | Moderate impact: every 12 to 24 hours | High impact: at least 6 times a day
Does log file integrity checking need to be performed for rotated logs?
  Low impact: optional | Moderate impact: yes | High impact: yes
Do rotated logs need to be encrypted?
  Low impact: optional | Moderate impact: optional | High impact: yes
Do log data transfers to the log management infrastructure need to be encrypted or performed on a separate logging network?
  Low impact: optional | Moderate impact: yes, if feasible | High impact: yes

The analysis of the logs should be automated and use a correlation system to make review simpler.
Entries from the logging system that are deemed to be of particular interest should be retained on the system and also transmitted to the log management infrastructure. Reasons for having the logs in both locations include the following:

· If either the system or infrastructure logging should fail, the other should still have the log data. For example, if a log server fails or a network failure prevents logging hosts from contacting it, logging to the system helps to ensure that the log data is not lost.

· During an incident on a system, the system’s logs might be altered or destroyed by attackers; however, usually the attacker will not have any access to the infrastructure logs. Incident response staff can use the data from the infrastructure logs; also, they can compare the infrastructure and system logs to determine what data was changed or removed, which may indicate what the attacker wanted to conceal.

System or security administrators for a particular system are often responsible for analysing its logs, but not for analysing its log data on the infrastructure log servers. Accordingly, the system logs need to contain all data of interest to the system-level administrators.

All logs could (and should) be sent to a SQL database. Syslog, syslog-ng and many other logging systems can write to MySQL, Informix and other databases. This provides the capability to create custom queries against the database in order to retrieve selected logs.
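
As a sketch of the centralised piece, a client-side syslog-ng configuration along these lines forwards everything to a central log host (the host name and port are assumptions; rsyslog or plain syslog can do the same job):

source s_local { unix-stream("/dev/log"); internal(); };
destination d_central { tcp("loghost.example.com" port(514)); };
log { source(s_local); destination(d_central); };

On the central host, a matching tcp() source and file or database destination then collects the logs from every client in one place for correlation and retention.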

Security Software

Most companies (and other organisations) use several types of network-based and host-based security software to detect malicious activity, protect systems and data, and support incident response efforts. Accordingly, security software is a major source of computer security log data. Common types of network-based and host-based security software include the following:

· Anti-malware Software. The most common form of antimalware software is antivirus software, which typically records all instances of detected malware, file and system disinfection attempts, and file quarantines. Additionally, antivirus software might also record when malware scans were performed and when antivirus signature or software updates occurred. Antispyware software and other types of antimalware software (e.g., rootkit detectors) are also common sources of security information.

· Intrusion Detection and Intrusion Prevention Systems. Intrusion detection and intrusion prevention systems record detailed information on suspicious behaviour and detected attacks, as well as any actions intrusion prevention systems performed to stop malicious activity in progress. Some intrusion detection systems, such as file integrity checking software, run periodically instead of continuously, so they generate log entries in batches instead of on an ongoing basis. This section includes Tripwire and a NIDS (such as Cisco Secure IDS or SNORT).

· Remote Access Software. Remote access is often granted and secured through virtual private networking (VPN). VPN systems typically log successful and failed login attempts, as well as the dates and times each user connected and disconnected, and the amount of data sent and received in each user session. VPN systems that support granular access control, such as many Secure Sockets Layer (SSL) VPNs, may log detailed information about the use of resources.

· Web Proxies. Web proxies are intermediate hosts through which Web sites are accessed. Web proxies make Web page requests on behalf of users, and they cache copies of retrieved Web pages to make additional accesses to those pages more efficient. Web proxies can also be used to restrict Web access and to add a layer of protection between Web clients and Web servers. Web proxies can keep a record of all URLs accessed through them. This would include the ISA systems and other Web Filtering systems.

· Vulnerability Management Software. Vulnerability management software, which includes patch management software and vulnerability assessment software, typically logs the patch installation history and vulnerability status of each host, which includes known vulnerabilities and missing software updates. Vulnerability management software may also record additional information about hosts’ configurations. Vulnerability management software typically runs occasionally, not continuously, and is likely to generate large batches of log entries.

· Authentication Servers. Authentication servers, including directory servers and single sign-on servers, typically log each authentication attempt, including its origin, username, success or failure, and date and time. This would include the Security and Application events from the Active Directory DNS servers.

· Routers and Switches. Routers may be configured to permit or block certain types of network traffic based on a policy. Routers that block traffic are usually configured to log only the most basic characteristics of blocked activity (as is the case generally). These logs should be increased to incorporate both allowed and dropped traffic. Switch traffic should log unusual events associated with VLANs and any violations of ACLs.

· Firewalls. Like routers, firewalls permit or block activity based on a policy; however, firewalls use much more sophisticated methods to examine network traffic. Firewalls can also track the state of network traffic and perform content inspection. Firewalls tend to have more complex policies and generate more detailed logs of activity than routers. In many companies, logging is disabled for both allowed and blocked traffic rendering the logs of little use. The Firewall logging needs to be changed to record both ALLOWED and DENIED Traffic to and from the network.

General Servers and Services

The following services used within many organisations create log files that are essential to maintain. The logs for all of these services need to be centralised and maintained for at least 12 months for all production systems that run them.
  • DNS (All active servers)
  • DHCP (All active servers)
  • LDAP
  • Network Devices
  1.  Firewalls (Cisco ASA, Load Balancers)
  2.  Cisco Router and Switch Logging (Both internal and external switches)
  •  SAMBA logging (Unix/Linux SMB Servers)
  •  Web server logs (Apache, IIS, TomCat, PHP etc)
  •  Proxy Logs (Websense, ISA)
  •  Tripwire logs (Servers)
  •  Mail Server logs (Sendmail, Exchange etc)
  •  Databases (MySQL, Informix, MSSQL)

Operating System (Server) logging

Operating systems (OS) for servers, and networking devices (e.g., routers, switches) usually log a variety of information related to security. The following security-related OS data should be maintained for at least 1 year (if not longer):

· System Events. System events are operational actions performed by OS components, such as shutting down the system or starting a service. Typically, failed events and the most significant successful events are logged, but many OSs permit administrators to specify which types of events will be logged. The details logged for each event also vary widely; each event is usually time-stamped, and other supporting information could include event, status, and error codes; service name; and user or system account associated with an event.

· Audit Records. Audit records contain security event information such as successful and failed authentication attempts, file accesses, security policy changes, account changes (e.g., account creation and deletion, account privilege assignment), and use of privileges. OSs typically permit system administrators to specify which types of events should be audited and whether successful and/or failed attempts to perform certain actions should be logged. The Windows Domain systems should be configured to log security events.

Client Systems

In general, the centralised logging of all client services would be too onerous for most companies. As such, it is recommended that organisations maintain the following client logs as a minimum:
  • AV logging (malware detected etc.)
  • Deep Freeze logs (and similar products)
  • Authentication logs (from Domain Controllers)

Minimum Document Retention Guidelines

Basic commercial contracts: Australia/NZ, 6 years after discharge or completion; USA, 4 years after discharge or completion; UK, 6 years after discharge or completion
Deeds: Australia/NZ, 12 years after discharge; USA, a minimum of 6 years after discharge; UK, 12 years after discharge
Land contracts: Australia/NZ, 12 years after discharge; USA, 6 years after discharge; UK, 12 years after discharge
Product liability: Australia/NZ, a minimum of 7 years; USA, permanent; UK, a minimum of 10 years
Patent deeds: Australia/NZ, 20 years; USA, 25 years; UK, 20 years
Trade marks: Australia/NZ, life of trade mark plus 6 years; USA, life of trade mark plus 25 years; UK, life of trade mark plus 6 years
Copyright: Australia/NZ, 75 years after author's death; USA, 120 years after author's death; UK, 50 years after author's death
Contracts and agreements (government construction, partnership, employment, labour, etc.): Australia/NZ, a minimum of 6 years; USA, permanent; UK, a minimum of 7 years
Capital stock and bond records: Australia/NZ, 7 years after discharge; USA, permanent; UK, 12 years after discharge

Any logging from systems associated with the above listed areas should be maintained for the period listed as a minimum.

Issues and reasons to take logging seriously

The maintenance of system and security logs is critical to ensuring the security of both a system and network. In addition, the failure to maintain an adequate audit trail can have severe repercussions. A couple of these are noted below.

Due Care and Due Diligence

Management is required to implement and preserve a suitable set of internal controls to check illegal and unscrupulous goings-on. A failure to implement due care and due diligence can constitute negligence.

PCI-DSS

Section 10 of the PCI-DSS states that:
“Logging mechanisms and the ability to track user activities are critical in preventing, detecting, or minimizing the impact of a data compromise. The presence of logs in all environments allows thorough tracking, alerting, and analysis when something does go wrong. Determining the cause of a compromise is very difficult without system activity logs.”

This section of the PCI requirements then sets out the formats required for logging.

Tuesday, 9 August 2011

Man-In-The-Middle Attacks

MITM attacks have always been a favourite of the average hacker, as they are relatively easy to perform using readily available tools. Furthermore, once an MITM attack has been launched successfully, it opens the way for more malicious and aggressive types of attacks, potentially resulting in more damage and irreversible harm.

Several areas of concern exist around MiTM attacks. These include administrative systems (such as Databases and UNIX servers) that run insecure protocols. This post, in the end, hopes to raise awareness and thus improve the overall site security at your organisation.

Man-In-The-Middle (MITM) Attack – an overview

An MITM attack is an attempt to intercept the communication between two computers with the purpose of reading or altering the information passing between them, preventing data from being sent to either or both parties, or creating (“crafting”) data and sending it to either or both parties.


In the ideal scenario above, information within a session passes privately between the client and server, and vice versa.


In a man-in-the-middle attack, an attacker hijacks the supposedly private session. The attacker makes it appear to the client that it is still communicating directly with the server, while in truth, any information being sent to, and received from, the server is being read and maybe even being modified by the attacker. The same deception is performed on the server, where it thinks that it is still communicating with a valid source (the client) when in fact the requests for information are coming from the attacker.

MITM proxy tools
As the term implies, these tools generally perform “proxy only” functions, where they merely act as “sniffers” to read the data passing between the Client and the Server. The harvested information can be used for more aggressive attacks later on.
  • Achilles (http://www.mavensecurity.com/achilles)
  • WebScarab and its followers
  • Paros Proxy (http://www.parosproxy.org)
  • Burp
  • Spike Proxy
  • ProxyFuzz
  • Odysseus
  • Fiddler
MITM attack tools
These tools are used for more aggressive, and usually more damaging, attacks. Aside from establishing a proxy between the Client and the Server, these tools can also re-route traffic, inject commands, introduce specially crafted packets, and alter information in the data stream.
  • PacketCreator
  • Ettercap (http://ettercap.sourceforge.net/download.php)
  • Dsniff

MITM risks with clear-text protocols

The table below lists some protocols that send information in plain or “clear” text and that I have noted in use within the networks I have audited.

Protocols Port Numbers
Telnet TCP 23
FTP TCP 20/21
POP3 TCP 110
HTTP TCP 80
IMAP TCP 143
SMTP TCP 25
MySQL TCP 3306
Of particular concern is Telnet to administratively sensitive devices. This is common in the management of routers and switches. Several UNIX hosts have the telnet service running as well as some network infrastructure. Being located within an internal network should not be seen as a fix. The MySQL protocol is also of concern. This can be easily secured using SSL or through other forms of encryption.

Obviously, sending information in unencrypted form makes it possible for an attacker employing MITM to easily obtain that information. With relative ease using various tools and techniques, the attacker can then tamper with the data’s integrity (modify the data) and authenticity (make it appear that the data came from a legitimate source).
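
You can demonstrate the exposure to yourself without any attack tools at all. Something like the following (the interface name is an assumption) will print telnet, FTP and POP3 payloads, credentials included, in plain ASCII on any host that can see the traffic:

tcpdump -A -i eth0 'tcp port 23 or tcp port 21 or tcp port 110'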

As an example, let us say that the Client has been using telnet to connect to a remote Server. Let us also assume that an Attacker has previously done some reconnaissance and obtained the IP addresses of the client and the remote server. The following is a likely scenario for carrying out the MITM attack:
  1. The attacker launches a TCP sniffer tool such as Hunt, Juggernaut or Ettercap and waits for a session to be established between the Client and the Server.
  2. The client logs in to the server using Telnet.
  3. The attacker is alerted of a new connection.
  4. The attacker performs packet sniffing (read all information passing through the session). With Ettercap, usernames and passwords can be easily obtained. The attacker saves the harvested information for later use.
  5. The attacker can then inject commands into the data stream without the Client knowing, and which the Server will interpret as valid commands from the Client.
  6. When the session is terminated by the client, the attacker can log in to the server using the previously obtained username and password of the Client.
  7. If the account has administrator privileges, the attacker can install more malicious tools such as rootkits to further compromise the integrity of the server.
Attack Mitigation:
To mitigate MITM attacks against clear-text protocols, it is advised that their use be minimised or discontinued altogether. There are also more secure protocols that can be used in their place as listed below:
Clear-text Protocol / Port Encrypted Alternative / Port
Telnet / 23 SSH / 22
FTP / 20, 21 SCP or SFTP / 22
POP3 / 110 POP3S / 995
HTTP / 80 HTTPS / 443
IMAP / 143 IMAPS / 993
SMTP / 25 SMTP over TLS/SSL / 465
MySQL / 3306 Encrypt the traffic (e.g. SSL)

MITM risks with SSL

Secure Sockets Layer (SSL) protocol was designed to provide security for client/server communications over networks. This is accomplished by providing endpoint encryption and authentication.

SSL is widely implemented in web (HTTP) sessions, such as in online banking and eCommerce. Despite its well-publicised susceptibility to MITM attacks, HTTPS (HTTP over SSL) continues to be a popular choice for web security implementations.

The main flaw with SSL (without client side certificates, which are still rare) is that it only authenticates the server, not the client. The client is given a level of assurance of the authenticity of the server through the browser's validation of the server's digital certificate, but not vice versa.

The MITM attack is carried over an HTTPS connection by establishing independent SSL sessions – one between the Attacker and the Server, and another between the Attacker and the Client. When the Client attempts to establish an HTTPS connection with the Server, it is actually the Attacker it is connecting to, which at this point is masquerading as the Server. Usually, the client’s browser warns the user that the digital certificate used is not valid. Below are some examples of warnings issued by some of the more common web browsers:
Firefox:

Google Chrome


Internet Explorer:


Some users, particularly the less savvy, may ignore the warning, usually because of a lack of understanding of the risks involved. In some cases there are no warnings at all, as in situations where the Server certificate itself has been compromised by the attacker or, although less likely but still possible, where the attacker's certificate was issued by a trusted Certificate Authority or CA (such as VeriSign, Thawte, etc.) and the Common Name (CN) is the same as that of the original web site.

When the warning is ignored, the user in effect accepts the site as legitimate. The browser will then proceed with loading the website. If the website is in fact a bogus site, every transaction performed by the Client will be read and open to tampering by the man-in-the-middle attacker. From the Server’s point of view, since it does not authenticate the client, it does not know or care whether the computer it is transacting with is legitimate or not, so it treats every transaction as valid.

Attack Mitigation:
To mitigate MITM attacks against SSL, users are advised to at least pause and investigate warnings issued by their browsers before proceeding on accessing a website, particularly when the website will be used to facilitate confidential and/or sensitive transactions.

On the server side of things, it is recommended that all SSL deployments be upgraded to the latest version, which is currently SSL version 3, or ideally be replaced with TLS 1.0.
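
On Apache with mod_ssl, for example, the protocol versions offered can be restricted with a single directive; this sketch drops SSLv2 and allows only SSLv3 and TLS 1.0 (other web servers have equivalent settings):

SSLProtocol -ALL +SSLv3 +TLSv1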

MITM risks with SSH

Secure Shell (SSH) was designed as a replacement for some of the clear-text protocols, specifically Telnet, rsh and rlogin, which have been proven to be vulnerable to MITM attacks.

In an SSH environment, computers or network devices exchange information inside a secure channel. SSH uses public-key cryptography to authenticate the remote host and to negotiate a session key, which is then used to encrypt the traffic.

Currently, SSH version 1 (SSH1) and its upgrade SSH2 both exist; however, the two are regarded as entirely different protocols. Furthermore, SSH2 is deemed more secure than SSH1, the latter being known to be vulnerable to MITM attacks. Fortunately, very few SSH servers use SSH1 nowadays.

One of the more popular MITM attacks exploits the weaknesses of the SSH1 protocol, in what is known as an “SSH downgrade” attack. This type of attack tricks the server and the client to communicate via SSH1 instead of SSH2.
The following is a likely SSH downgrade attack scenario.
  1. The attacker, using Ettercap or other ARP poisoning tools, sends fake ARP requests across the network which will enable the attacker to capture and modify the packets intended for the target machine.
  2. The client initiates a request for an SSH connection to the SSH server. Usually, it requests an SSH2 connection.
  3. The attacker sends a reply back to the client saying that the server supports only SSH1.
  4. If the client accepts, it will establish an SSH1 connection with the server. Due to the weak encryption protocols of SSH1, any information such as usernames and passwords can be read by the attacker.
It should be noted that this attack is only successful if the server has SSH1 enabled and the client accepts the request to “downgrade” from SSH2 to SSH1.

Attack Mitigation:
To mitigate MITM attacks against SSH, server and client configurations should be made to support SSH2 only. Any transaction requests to the contrary should be treated with suspicion and should warrant an elevated level of caution and investigation.
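
With OpenSSH this is a one-line setting on each side (the paths below are the usual defaults):

# /etc/ssh/sshd_config on the server
Protocol 2
# /etc/ssh/ssh_config (or ~/.ssh/config) on the clients
Protocol 2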

There are also tools that can be employed to detect ARP and DNS poisoning.

Articles...

My latest...

LulzSec, Anonymous … freedom fighters or the new face of evil?

Monday, 8 August 2011

SUDO

There are ways to improve on the Linux “all-or-nothing” security model.

Root is almost always connected with the global privilege level. In some extraordinary cases (such as specialised UNIX variants running Mandatory Access Controls) this is not true, but these are rare. The super-user or “root” account (designated universally as UID “0”) includes the capacity to do practically anything on a UNIX system. RBAC (role-based access control) can be implemented to provide for the delegation of administrative tasks (and tools such as “SUDO”, or super-user do, also provide this capability). RBAC provides the ability to create roles. Roles, if configured correctly, greatly limit the need to use the root user privilege. RBAC both limits the use of the “su” command and the number of users who have access to the root account. Tools such as SUDO successfully provide similar types of control, but RBAC is more granular than tools such as SUDO, allowing for a far greater number of roles on any individual server. It will come down to the individual situation within any organization as to which particular solution is best.

For now, sudo is a simple free tool that will allow us to create a good level of control over what the users on a Linux system can access.

Basically, you do not want to have to give all your users the root password and hope for the best. The simple answer for this is “sudo.”

The purpose of sudo is to allow users to run selected (or all) commands with privilege and enhanced logging. When configured, users use their own password and not the root password.

More, the Administrator (root usually) can set up separate groups of commands and access to the system for different users and groups.

You never need to issue the root password to the users!
On top of this, sudo can be configured so that an alert is issued when a user runs an unauthorised command. This could be an email, for instance.

Sudo solves three primary issues in Linux. These are:

  1. Least Privilege
  2. Accountability
  3. Termination
We can restrict users to only have access to the parts of the operating system where they have a need. More, we can log what is done and also alert when users try to exceed their privileges.

Finally, when a user leaves, as they do not have the root password, there is no need to run about changing passwords on systems; just lock or alter the individual user's account (and believe me, it can be a real pain to change root passwords).

There are methods that can be used to bypass sudo (such as a vi shell break) but these are beyond today’s post.

The access restrictions and alerting are configured using the "/etc/sudoers" file:
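As a small taste of what is coming, a minimal /etc/sudoers sketch might look like the following. Edit it with visudo only; the group, user and command paths are assumptions for illustration:

# Let the "webadmins" group restart Apache and nothing else
%webadmins  ALL = /etc/init.d/apache2 restart
# Let user "alice" refresh the package lists without a password prompt
alice       ALL = NOPASSWD: /usr/bin/apt-get update
# Mail root whenever a user runs a command they are not permitted to run
Defaults    mail_no_perms, mailto="root"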

In coming days, I will load a few examples of how to configure this file and how to use SUDO to restrict access to key files.


See the following page for some more information on the command:
http://linux.about.com/od/commands/l/blcmdl8_sudo.htm