Wednesday, 4 November 2009

Quiet lately.

I have not posted much of late. This is about to change. I have been working on the data collection for my second doctorate (a PhD on the quantification of information systems risk). I am going to start posting some of the data and results on a daily basis.

For those interested, the research proposal is as follows (a small sample of the data will be posted following this):

Introduction

For decades, information security practitioners have engaged in qualitatively derived risk practices due to the lack of a scientifically valid quantitative risk model. This has led to both a misallocation of valuable resources with alternative uses and a corresponding decrease in the levels of protection for many systems.

Using a combination of modern scientific approaches and the advanced data mining techniques that are now available, this research effort aims to create a game theoretic quantitative model for information systems risk that incorporates both contract theory and the methods developed within Behavioural Economics.
Theory

The introduction of Game Theory and Behavioural Economics has created a foundation for the rationalisation of information security processes, leading to an improved allocation of economic resources. The optimal distribution of economic resources across information system risk allocations can lead to more secure systems at a lower overall cost.

This research will incorporate the game theoretic multi-player decision problem. Agents in the model will be deemed rational, with well-defined preferences, the ability to reason strategically using their knowledge and beliefs about other players, and the capacity to act according to a combination of both economic "first thought" and deep strategic thinking. Solutions to these models will be sought through a combination of the following game devices (a small illustrative sketch follows this list):

– Equilibrium: evolutive (steady state) games

– Heterogeneous sequential games

– Rationalizability: deductive reasoning
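
As a small worked illustration of the strategic-form reasoning above, the following R sketch eliminates strictly dominated strategies for one player of a hypothetical two-player defender/attacker game. The strategy names and payoffs are illustrative assumptions, not data from this research.

```r
# A minimal sketch (hypothetical payoffs): eliminating strictly dominated
# strategies for the defender in a two-player game. Rows are defender
# strategies, columns are attacker strategies, and entries are the
# defender's payoffs; all numbers are illustrative only.
defender <- matrix(c(3, 2, 1,
                     4, 3, 2,
                     1, 1, 0),
                   nrow = 3, byrow = TRUE,
                   dimnames = list(c("patch", "patch+monitor", "ignore"),
                                   c("scan", "exploit", "wait")))

strictly_dominated <- function(payoff) {
  # a strategy i is strictly dominated if some other row j pays strictly
  # more against every opposing strategy
  dominated <- sapply(seq_len(nrow(payoff)), function(i)
    any(sapply(seq_len(nrow(payoff))[-i], function(j)
      all(payoff[j, ] > payoff[i, ]))))
  rownames(payoff)[dominated]
}

strictly_dominated(defender)  # "patch" and "ignore" are eliminated here
```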

The models will identify strictly dominant strategies where these exist in information security practices and propose methods to improve these models. The information security practices of existing organisations will be classified into the following game types:

– Non-cooperative vs. cooperative game

– Strategic vs. extensive game

– Perfect vs. imperfect information

In this process, bounded rationality, behavioural game aspects and other feedback effects will be investigated. Social capital based on fairness and reciprocity will be defined as it applies to the economically efficient application of risk processes associated with Information systems.

Contract Theory is used to explain the creation of agreements and "contracts" in the presence of information asymmetry. This is approached through the combination of adverse selection, moral hazard and the "signalling game". Here, adverse selection is defined as the principal not having been informed of the other agent's private information ex-ante, as in George Akerlof's "Market for Lemons". This application of game theory can be shown to explain many aspects of the software industry's predisposition to create insecure software. Possible solutions to the software insecurity problem will be approached.

The issue of moral hazard, where the principal is not informed of the agent's private information ex-post, will be investigated and applied to insurance, monitoring, employees' effort, and other aspects of information security. The model of the signalling game (where the agent tries to convey useful information to the principal) will be investigated as a means to signal ability (a secure system) whilst jointly avoiding the problem of free riding by other parties to the model.

Behavioural Economics is based on the foundation that people are predictably irrational (from the perspective of external parties) and explains many of the observed departures from theoretical economics. Research (such as the experiment in which a 25 cent chocolate bar was shown to be adequate compensation for 70% of individuals to reveal a password) will be aligned with the misattribution of risk and expected utility when compared to prospect theory.

The behavioural effect of loss aversion (defined as the propensity of information security professionals to minimise the impact of loss even against risks with a greater expected gain) will be explored in association with concepts of social capital and cognitive biases such as the endowment effect (for instance, where an individual is "willing to reveal" at a high price but only "willing to protect" at a low price). These issues will be appraised against the psychological propensities for anchoring and adjustment and the status quo bias (the predisposition to resist changing an established behaviour unless the incentive is overwhelmingly compelling).

Finally, the valence effect (an individual's overestimation of the likelihood that favourable events will occur and affect oneself) will be modelled in terms of its impact and causal relationship with respect to information security, together with the feedback effects from rational ignorance and the hot-cold empathy gap.
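
As a compact illustration of the loss aversion discussed above, the following R sketch evaluates the Kahneman-Tversky prospect theory value function. The parameter values are the commonly cited Tversky and Kahneman (1992) estimates, used here purely for illustration.

```r
# A minimal sketch of the prospect theory value function, illustrating
# loss aversion; alpha, beta and lambda are the Tversky and Kahneman
# (1992) estimates, not values fitted in this research.
value <- function(x, alpha = 0.88, beta = 0.88, lambda = 2.25) {
  ifelse(x >= 0, x^alpha, -lambda * (-x)^beta)
}

value(100)   # subjective value of a 100-unit gain: approx.  57.5
value(-100)  # subjective value of a 100-unit loss: approx. -129.5
# abs(value(-100)) > value(100): losses loom larger than equal gains
```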

These models will be evaluated using large volume datasets and bootstrapping algorithms.
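
A minimal sketch of the bootstrap evaluation step, using simulated survival times (all figures are illustrative, not research data):

```r
# Nonparametric bootstrap of the median system survival time, reporting
# a standard error and a percentile confidence interval rather than a
# bare point estimate. Data are simulated for illustration.
set.seed(11)
survival_days <- rexp(1000, rate = 1 / 45)  # hypothetical times to compromise

boot_medians <- replicate(2000, median(sample(survival_days, replace = TRUE)))

median(survival_days)                    # point estimate
sd(boot_medians)                         # bootstrap standard error
quantile(boot_medians, c(0.025, 0.975))  # 95% percentile interval
```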

Central Concepts
Information security is commonly seen as a technical solution to an overarching issue that must be addressed in all instances; the idea being that we must ensure safety at any cost. The reality is far from this, and in fact this fallacy has led to a gross misallocation of funds and less secure systems.
The reality is that, like all safety issues, information security is based on a set of competing trade-offs between economic constraints.

For instance, the issue of misaligned incentives can easily be demonstrated in online banking and bank card fraud. These issues vary across jurisdictions (e.g. US banks are liable for the costs of card fraud unless proved otherwise, while UK banks are not liable without proof of negligence). In the UK, banking staff propound that their systems are "more secure", leading to a greater propensity to assert that their clients are lying or mistaken. This provides banks in the UK with a lower incentive to react. As with PIN cards in the past, this can be shown to lead to a condition where complaints are far less likely to be taken seriously by bank personnel. The result is a paradox in which banks in the UK expend greater sums of capital on securing their systems (leading to the assertion that they are more secure than US banks) whilst simultaneously facing a greater volume of fraud cases.

In the past, this propensity has been attributed to careless (or lazy) staff resting on their 'more secure' laurels. The reality may differ greatly from these perceived situations, but little quantification of the overall costs has been completed. For instance, no study linking the uptake and use of card based and online banking systems in communities with the decrease in physical crime (assaults, robbery and muggings) was noted at the time of writing, although the data to support this assertion is readily available. It can easily be shown that there is a high correlation (with calculated correlations of between r = 0.78 and r = 0.93 in various western countries from 1995 to 2007).

The goal of any economically based quantitative process should be to minimise cost and hence minimise risk through the appropriate allocation of capital expenditure. To do this, the correct assignment of economic and legal liability to the parties best able to manage the risk (i.e. the lowest cost insurer) is essential and needs to be assessed. This will allow insurance firms to develop expert systems that can calculate risk management figures associated with information risk, and in turn allow for the correct attribution of information security insurance products that can be provided to businesses generally. Like all insurance, the addition of co-payment and cost reduction features (such as the reduction in home insurance premiums that comes with the installation of a back-to-base alarm system) will provide organisations with the incentive to protect their users, clients and systems.

Externality, or the quantitative and qualitative effect on parties that are affected by, but not directly involved in, a transaction, is likewise seldom quantified, but it is an integral component of any risk strategy. The costs (negative) or benefits (positive) that apply to third parties are an oft overlooked feature of economics and risk calculations. For instance, network externality (a positive effect that can be related to Metcalfe's law: the value of a network is proportional to the square of its number of users) confers positive benefits on most organisations at little associated cost to themselves. In these calculations, the time-to-market and first-mover advantages are critical components of the overall economic function, with security playing both positive and negative roles at all stages of the process.

These externalities also have negative effects, such as those in the software industry where the "Ship it Tuesday and get it right by version 3" mentality has become a commonplace paradigm. This archetype has also extended into other negative externalities (such as malware with the advance of botnets). In the past, malware was seen as little more than a nuisance by many infected users, whose hosts would send spam emails to others and grow the infection base. As the user was not directly affected, there was often little incentive for the user to react. This led to known malware (such as BonziBuddy and Whack-a-Mole) being actively installed by many users.

Interdependent risk issues also apply and must be incorporated into any valid quantitative risk model for information systems. The issue of "free riding" (where one party gains from the economic expenditure of another) is also a critical feature of this form of model.

Information asymmetry, such as the adverse selection of poorly created and maintained software, or the contradiction of sites that seek and obtain TRUSTe certification being significantly less trustworthy than those that forego such assertions[1], is also a key component of the model. It will be demonstrated that the software and information security industries are a "market for lemons". As companies can rarely differentiate between a secure, well tested software product or security vendor and a poorly developed, insecure alternative, there is little incentive for an organisation to invest in the secure option.

Issues of law, online privacy and behavioural targeting, and associated issues including the general predisposition of people not to read privacy statements, EULAs and click-through contracts, and the impact of these behaviours, will also be addressed.
Research Objective

The goal of this research project is to create a series of quantitative models for information security. Mathematical modelling techniques that can be used to model and predict information security risk will be developed using a combination of techniques including:

· Economic theory and econometrics,

· Quantitative financial modelling,

· Behavioural economics,

· Algorithmic game theory, and

· Statistical hazard/survival models.

The models will account for heteroscedastic confounding variables and include appropriate transforms such that variance homogeneity is assured for non-normal distributions. Process models for an integrated Poisson continuous-time risk process, driven through the hazard rate, will be developed using a combination of the following (a short sketch of the count-model stage follows this list):

· Business financial data (company accountancy and other records),

· Anti-virus industry data,

· Legal databases for tortious and regulatory costs, and

· Insurance datasets.
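
A minimal sketch of the count-model stage referred to above, using simulated data to show a standard overdispersion check before committing to a simple Poisson process (all variable names and figures are illustrative assumptions):

```r
# Check daily attack counts for overdispersion with a quasi-Poisson GLM;
# a dispersion well above 1 flags variance heterogeneity that a plain
# Poisson process would understate. Data are simulated.
set.seed(42)
days     <- 365
exposure <- runif(days, 0.5, 1.5)             # hypothetical exposure index
mu       <- exp(0.2 + 0.8 * exposure)         # assumed log-linear rate
attacks  <- rnbinom(days, mu = mu, size = 2)  # over-dispersed counts

fit <- glm(attacks ~ exposure, family = quasipoisson)
summary(fit)$dispersion  # values well above 1 signal overdispersion
```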

This data will be coupled with hazard models created using honeynets (e.g. the Honeynet Project) and reporting sites such as the Internet Storm Center and iDefense. The combination of this information will provide the foundation for the first truly quantitative security risk framework.

Support has been sought and received from SANS (including DShield), CIS (the Center for Internet Security) and the Honeynet Project. At present, the DShield storm centre receives logs from over 600,000 organisations. This is a larger quantity of data than is used for actuarial purposes in the insurance industry; the problem is that this information is not collated or analysed in any quantitatively sound manner. This data will provide the necessary rigour with which to model survival times for types of applications. There is also a body of research into quantitative code analysis for risk that could be incorporated.

The aim of this research is to create a series of models (such as those used within mechanical engineering, materials science, etc.) and hence to move information risk modelling towards a science (instead of an art). Stengel, R.F. (1984, 1996) "Optimal Control and Estimation" provides an indication of such a framework in systems engineering.

Some of the methods used in the creation of the risk framework will include the following (see the sketch after this list):

· Random forest clustering,

· K-means analysis,

· Other classification algorithms, and

· Network associative maps in text analysis and forensic work.
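
As an illustration of the clustering step, a minimal k-means sketch in R over simulated host profiles (the feature names are illustrative assumptions, not fields taken from the datasets described above):

```r
# k-means clustering of host profiles as one candidate classification
# step; features are standardised before clustering. Data are simulated.
set.seed(1)
hosts <- data.frame(
  open_ports = rpois(200, 6),      # count of listening services
  patch_lag  = rexp(200, 1 / 30),  # days since last patch
  traffic_mb = rlnorm(200, 4, 1)   # daily traffic volume
)

km <- kmeans(scale(hosts), centers = 3, nstart = 25)
table(km$cluster)                                             # cluster sizes
aggregate(hosts, by = list(cluster = km$cluster), FUN = mean) # cluster profiles
```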

The correlation of reference data (such as IP and functional analysis data) between C&C (Command and Control) systems used in “botnets” is one aspect of this research.

Starting from the outside (the cloud and perimeter) and working inwards, the risk model would begin by assessing external threats and move on to internal threat sources, becoming gradually more granular as one moves from the network to individual hosts and finally to people (user behaviour) and application modelling.

The eventual result will be the creation of a model that can incorporate the type of organisation, its size, location, the applications and systems used, and user awareness levels to create a truly quantitative risk model. Results would be reported with a standard error (SE) and confidence level rather than as a point estimate.

To begin, a number of questions can be answered and a number of related papers can be published on this topic. For instance, the following are all associated research topics in this project:

1. Is a router/firewall stealth rule effective?

2. What types of DMZ are most effective for a given cost?

3. How economical is the inclusion of additional router logging outside the perimeter firewall?

4. Are "drop" or "reject" rules more effective at limiting attacks, by type?

5. How do firewalls, IDS and IPS influence and impact system survival times?

A classification model that allows for the remote determination of application versions for DNS software (to be expanded to other applications and devices, e.g. routers) has already been completed and will be published on the SANS Reading Room site soon.

I would like to have a collection of data from the Honeynet Project aligned with this information, so that survival times for types of applications can be collected and modelled.

Code to import data from hosts and networks, using raw pcap traces, will be developed such that system statistics and other data can be collated into a standardised format. This code will be developed in R and C++.
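
A minimal sketch of the ingestion step, assuming the tshark command-line tool (from Wireshark) is installed: it converts a raw pcap trace to CSV, loads it into R, and collates a simple per-port summary. The file name and field choices are illustrative assumptions.

```r
# Convert a pcap trace to delimited text with tshark, then collate in R.
fields <- c("frame.time_epoch", "ip.src", "ip.dst", "tcp.dstport")
cmd <- paste("tshark -r capture.pcap -T fields -E separator=,",
             paste("-e", fields, collapse = " "))

raw <- read.csv(text = system(cmd, intern = TRUE),
                header = FALSE, col.names = fields)

# collate into a standardised summary: unique sources per destination port
by_port <- aggregate(ip.src ~ tcp.dstport, data = raw,
                     FUN = function(x) length(unique(x)))
names(by_port) <- c("dst_port", "unique_sources")
head(by_port[order(-by_port$unique_sources), ])
```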

This will enable the creation and release of actuarially sound threat risk models that incorporate heterogeneous tendencies in variance across multidimensional determinants while maintaining parsimony. I foresee a combination of heteroscedastic predictors (GARCH/ARIMA, etc.) coupled with non-parametric survival models. I expect that this will result in a model where the underlying hazard rate (rather than survival time) is a function of the independent variables (covariates). Cox's proportional hazards model with time-dependent covariates would be a starting point, moving to non-parametric methods if necessary.
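
A minimal sketch of that starting point, assuming the standard 'survival' package: a Cox proportional hazards fit in (start, stop] counting-process form, which accommodates time-dependent covariates. The data are simulated and the covariate names are illustrative assumptions.

```r
library(survival)

set.seed(7)
n <- 500
systems <- data.frame(
  start    = 0,
  stop     = rexp(n, 0.05),      # observed days until compromise/censoring
  event    = rbinom(n, 1, 0.7),  # 1 = compromised, 0 = censored
  patched  = rbinom(n, 1, 0.5),  # host patched at observation start
  exposure = runif(n)            # external exposure index
)

fit <- coxph(Surv(start, stop, event) ~ patched + exposure, data = systems)
summary(fit)  # hazard ratios with standard errors and confidence intervals
cox.zph(fit)  # diagnostic for the proportional-hazards assumption
```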

The end goal will be to create a framework, and possibly a program, that can assess data streams based on a number of dependent variables (threat models, system survival, etc.) and covariates and return a quantified risk forecast with a standard error.
Research Questions

The creation of a quantitative, econometric and game theoretic model for information security and risk has widespread applications; those listed in the prior section are just the tip of the iceberg.

For instance, questions such as the following should be readily modelled and hence answered:

– Do health providers (such as hospitals) have adequate and efficient incentives to protect patient medical records?

– Is Software certification effective?

– Do Social networks help in collaborative security efforts?

– What are the Economics of Malware, Piracy and related topics?

– What are the most economically effective methods to control and optimise levels of cyber-crime?

– What is the optimum capital investment in information security or when is a system too secure or not secure enough?

– Can we create effective metrics to quantify risk?

– Are Vulnerability markets effective?

– Should software and online contracts provide contractual provisions that refund some or all of their value if a security vulnerability is found or exploited? Is this approach economically efficient?

– Is re-insurance necessary for information and cyber-risk insurance due to the global nature of information systems risk?

Clearly, the creation of an efficient model for information risk needs to incorporate costs outside the realm of information systems alone. As online payment systems can be shown to lead to a reduction in contact crimes (as previously noted), any information systems risk model must account for these related externalities.

Chapter outline of dissertation and intended output

1. Theoretical background

i. Risk, Probability and Games of Economic value

ii. Methodology

2. Probability Theory and Analysis

i. Concepts and Theories

ii. Probability distributions and models

iii. Monte Carlo Analysis (stochastic analysis)

iv. Time series and heteroscedasticity

3. Game theory

i. Games people play

ii. How risk suffers

iii. Cryptography and game theory

iv. Algorithmic mechanism design

v. Market equilibria

4. Advanced Concepts

i. Bayesian implementations

ii. Pricing and resource allocation – game Theoretic Models

iii. Misaligned incentives and informational asymmetries

iv. Game theoretic approaches to cyber crime

5. Introduction to the research areas

i. Sources of data

ii. Strictly Dominating Models

iii. Methodology

iv. Analysis of Data

6. Economic Fundamentals – Rationality and Efficiency – as they apply to information technology

i. Rational Behaviour, Preferences and Prices

ii. Pareto Optimality vs. Utilitarianism

iii. Cost Benefit and Hazard Survival

iv. Transaction Costs

7. Economic effects of legislation and regulation and their effect on information system risk

i. Property rights or Communal rights of knowledge

ii. Compensation for regulation?

iii. Cost Minimisation and liability rules

8. Secure Design and implementation

i. Optimisation in Design and operation

ii. Maintenance and Support

iii. Test, Evaluation and Validation

iv. Models for Economic Evaluation

9. System Monitoring and Controls

i. Queuing theory and Analysis

ii. Design for reliability vs. maintainability & supportability (serviceability) vs. usability – setting the efficient optimum

iii. Design for Producibility and disposability

iv. Design for affordability

10. Software Security

i. Asymmetric Information and the market for lemons

ii. A market for secure software?

iii. Designing for security

11. Modelling and measuring risk

i. Network and system controls

ii. Expert systems (PCap and automation)

iii. Systems measurement

iv. Overall risk and economic constraints

12. Conclusions

1. Aim: not more than 15 pages A4 per chapter. 12 × 15 = 180, plus 20 pages for introduction, contents, etc. = 200 A4 pages ≈ a 230-page book.

2. Added to the dissertation:

a. CD-ROM with modelling program that can be used for assessment and risk quantification.

b. Data and collected first stage and second stage models.

c. Statistical programs and correlation engine.

3. Intended output:

a. PhD dissertation: 230 page book (approx)

b. Journal articles: the topics/contents of 10 out of 12 chapters will be designed to be suitable for publishing as separate articles.

c. PCap module written in R and C that can take direct network feeds (TCP/IP) and report on anomalous traffic (with a learning feature and feedback cycle to minimise error with use).
References

· A. N. Kolmogorov, C. R. Acad. Sci. USSR 30, 301 (1941); ibid. 32, 16 (1941).

· W. Du and M. J. Atallah, “Protocols for secure remote database access with approximate matching,” in 7th ACM Conference on Computer and Communications Security (ACMCCS 2000), The First Workshop on Security and Privacy in E-Commerce, Athens, Greece, November 1-4 2000. [Online]. Available: http://portal.acm.org

· --, “Privacy-preserving data mining,” in Proceedings of the 2000 ACM SIGMOD Conference on Management of Data. Dallas, TX: ACM, May 14-19 2000, pp. 439–450. [Online]. Available: http://doi.acm.org/10.1145/342009.335438

· A. C. Yao, “How to generate and exchange secrets,” in Proceedings of the 27th IEEE Symposium on Foundations of Computer Science. IEEE, 1986, pp. 162–167.

· A. Calvó-Armengol, C. Ballester and Y. Zenou (2004) “Who's who in crime networks. Wanted: the key player” IUI Working Paper Series 617, 2004, The Research Institute of Industrial Economics

· A. Gorban, I. Karlin, H. Öttinger, Non-classical additive entropy functions and their maximizers, ETH-Zurich preprint, June 2002.

· A. Gorban, I. Karlin, Phys. Rev. E 67 (2003) 016104.

· Olemski, Statistical theory of self-similar time series [arXiv:cond-mat/0210667].

· A. Ozment and S.E. Schechter (2006) “Milk or Wine: Does software security improve with age?” In 15th USENIX Security Symposium, July 2006

· A. Robledo, Criticality in non-linear one-dim maps: RG universality map and nonextensive entropy [arXiv:cond-mat/020209].

· A. Schrijver, A combinatorial algorithm minimizing submodular functions in strongly polynomial time, Journal of Combinatorial Theory, Series B 80 (2000), 346–355.

· Abreu, D., Brunnermeier, M., 2002. Synchronization risk and delayed arbitrage. Journal of Financial Economics 66.

· Aliprantis, C., Border, K., 1999. Infinite Dimensional Analysis: A Hitchhiker's Guide.

· Araújo, A., Mas-Colell, A., 1978. Notes on the smoothing of aggregate demand. Journal of Mathematical Economics 5.

· Arrow, Kenneth J. (1974) “Essays in the Theory of Risk-Bearing” NY: American Elsevier

· Ausubel, Lawrence M., Peter Cramton, and Raymond J. Deneckere (2002) “Bargaining with Incomplete Information” In Handbook of Game Theory, Vol. 3. Amsterdam: North-Holland

· Baird, Douglas, Robert Gertner and Randal Picker (1994) “Game Theory and the law” Cambridge, MA: Harvard University Press

· Becker, Gary S (1968) “Crime and Punishment: An Economic Approach” 76 Journal of Political Economy 169

· Boehm, B.W., Software Engineering Economics, Prentice Hall, Upper Saddle River, NJ 1981

· Brock, W., Durlauf, S., 2001. Interactions-based models. In: Heckman, J., Leamer, E. (Eds.), Handbook of Econometrics, vol. 5. Elsevier, pp. 3297–3380 (Chapter 55).

· Brown, John P. (1973) “Toward an Economic Theory of Liability” 2 Journal of Legal Studies 323

· Brunnermeier, M., Morgan, J., 2004. Clock games: Theory and experiment. Working Paper. Princeton University.

· C. Castro (2004) On non-extensive statistics, chaos and fractal strings, Physica A

· C. Clifton, “Using sample size to limit exposure to data mining,” Journal of Computer Security, vol. 8, no. 4, pp. 281–307, Nov. 2000. [Online]. Available: http://iospress.metapress.com/openurl.asp?genre=article&issn=0926-227X&volume=8&issue=4&spage=281

· C. Clifton, M. Kantarcioglu, X. Lin, J. Vaidya, and M. Zhu, “Tools for privacy preserving distributed data mining,” SIGKDD Explorations, vol. 4, no. 2, pp. 28–34, Jan. 2003. [Online]. Available: http://www.acm.org/sigs/sigkdd/explorations/issue4-2/contents.htm

· Camerer, Colin and Richard H. Thaler, Anomalies: Ultimatums, Dictators, and Manners. Journal of Economic Perspectives 9, no. 2 (Spring 1995): 209-219

· Chhikara, R., Folks, J., 1989. The Inverse Gaussian Distribution. Marcel Dekker.

· Communicating the Economic Value of Security Investments: Value at Security Risk, Rolf Hulthén, WEIS 2008.

· Competitive Cyber-Insurance and Internet Security, Nikhil Shetty, Galina Schwartz, Mark Felegyhazi, Jean Walrand, WEIS 2009.

· D. Agrawal and C. C. Aggarwal, “On the design and quantification of privacy preserving data mining algorithms,” in Proceedings of the Twentieth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems. Santa Barbara, California, USA: ACM, May 21-23 2001, pp. 247–255. [Online]. Available: http://doi.acm.org/10.1145/375551.375602

· D. W.-L. Cheung, J. Han, V. Ng, A. W.-C. Fu, and Y. Fu, “A fast distributed algorithm for mining association rules,” in Proceedings of the 1996 International Conference on Parallel and Distributed Information Systems (PDIS'96). Miami Beach, Florida, USA: IEEE, Dec. 1996, pp. 31–42.

· D. W.-L. Cheung, V. Ng, A. W.-C. Fu, and Y. Fu, “Efficient mining of association rules in distributed databases,” IEEE Transactions on Knowledge and Data Engineering, vol. 8, no. 6, pp. 911–922, Dec. 1996.

· Dan Ariely, Predictably Irrational, 2008.

· Dick, Andrew R. (1995) “When does Organised Crime Pay? A Transaction Cost Analysis” 15 International Review of Law and Economics

· Dixit, A., Pindyck, R., 1994. Investment Under Uncertainty. Princeton U. Press.

· E. Cohen, M. Datar, S. Fujiwara, A. Gionis, P. Indyk, R. Motwani, J. D. Ullman, and C. Yang, “Finding interesting associations without support pruning,” in Proceedings of the 16th International Conference on Data Engineering, San Diego, California, Feb. 28 – Mar. 3, 2000. [Online]. Available: http://www.computer.org/proceedings/icde/0506/05060489abs.htm

· Eckstein, Z., Wolpin, K., 1989. The specification and estimation of dynamic stochastic discrete choice models: A survey. Journal of Human Resources 15.

· Epstein, Richard (1973) “A Theory of Strict Liability” 2 Journal of Legal Studies 151

· F. Zambonelli, M. Gleizesb, M. Mameia, and R. Tolksdorf. Spray computers: Explorations in self-organization. Pervasive and Mobile Computing, 1:1–20, March 2005

· Fabrycky, W.J., G.J. Thuesen, and D. Verma (1998) “Economic Decision Analysis” 3rd Ed. Prentice Hall, Upper Saddle River NJ.

· FINK, Evelyn C., Scott Gates & Brian D. Humes. Game Theory Topics: incomplete Information, Repeated Games and N-Player Games, Thousand Oaks, CA Sage Publications 1998

· Fiorentini, Gianluca, and Sam Peltzman (Eds) (1995) “The Economics of Organised Crime” Cambridge: Cambridge University Press

· Fournié, E., Lasry, J.-M., Lebuchoux, J., Lions, P.-L., Touzi, N., 1999. Applications of Malliavin calculus to Monte Carlo methods in finance. Finance and Stochastics 3.

· G.A. Akerlof (1970) “The Market for 'Lemons': Quality Uncertainty and the Market Mechanism” Q. J. Econ., 84(3):488-500

· G.J. Woeginger, When does a dynamic programming formulation guarantee the existence of a fully polynomial time approximation scheme (FPTAS)?, INFORMS Journal on Computing

· G.P. Morriss and L. Rondoni, J. Stat. Phys. 75, 553 (1994).

· Geistfeld, Mark (2000) “Reforming Products Liability” In Boudewijn Bouckaert and Gerrit De Geest (Eds) “Encyclopaedia of Law and Economics” Cheltenham: Edward Elgar

· Gihman, I., Skorohod, A., 1972. Stochastic Differential Equations. Springer-Verlag.

· Glaeser, E., Scheinkman, J.A., 2002. Non-market interactions. In: Dewatripont, M., Hansen, L., Turnovsky, S. (Eds.), Advances in Economics and Econometrics: Theory and Applications, Eighth World Congress. Cambridge University Press.

· Glasserman, P., 2004. Monte Carlo Methods in Financial Engineering. Springer.

· Granovetter, M., 1978. Threshold models of collective behavior. American Journal of Sociology 83.

· H. Barendregt and H. Geuvers, Proof-checking using dependent type systems, pp. 1149–1238 in: J.A. Robinson and A. Voronkov (eds.), Handbook of Automated Reasoning, Volume 2, Chapter 18, Elsevier, 2001.

· H. Kunreuther and G. Heal (2003) “Interdependent security” Journ. Risk and Uncertainty, 26(2-3):231-249, March-May 2003.

· H. Varian, “System reliability and free riding” In L. Jean Camp and Stephen Lewis (Eds) Economics of Information Security, Advances in Information Security, 12:1-15. Kluwer Academic Publishers, 2004

· H.A. Lorentz, Proc. Amst. Acad. 7, 438 (1905).

· H.P. van Ditmarsch. Descriptions of game actions. Journal of Logic, Language and Information, 11:349–365, 2002.

· H.P. van Ditmarsch. Knowledge games. PhD thesis, University of Groningen, 2000. ILLC Dissertation Series DS-2000-06.

· H. Yang, X. Meng, and S. Lu. Self-organized network layer security in mobile ad hoc networks. In Proceedings of the 1st ACM Workshop on Wireless Security, pages 11–20, Atlanta, USA, 2002.

· Hartley J. R (1998) “Concurrent Engineering: Shortening Lead Times, Raising Quality and Lowering Costs” Productivity Press NY, NY 1998

· He, H., Keierstead, W., Rebholz, J., 1998. Double lookbacks. Mathematical Finance 8.

· Hong, H., Kubik, J., Stein, J., 2004. Social interactions and stock-market participation. Journal of Finance 59.

· Hopenhayn, H., Squintani, F., 2004. Preemption games with private information. UCLA Working Paper.

· Huang, J., Subrahmanyam, M., Yu, G., 1996. Pricing and hedging American options: A recursive integration method. Review of Financial Studies 9.

· Hylton, Keith N. (1990) “The Influence of Litigation Costs on Deterrence under Strict Liability and under Negligence” 10 International Review of Law and Economics 161

· IEEE 1413-1998, IEEE Standard Methodology for Reliability Predictions and Assessment for Electronic Systems and Equipment, IEEE NY. NY. 1998

· Information Security Economics – and Beyond, Ross Anderson and Tyler Moore, Information Security Summit 2008.

· ISO/IEC 15939 “Software Engineering – Software Measurement Process” 2002

· Iyengar, S., 1985. Hitting lines with two-dimensional Brownian motion. SIAM Journal of Applied Mathematics 45.

· J. Lloyd, M. Niemeyer, L. Rondoni and G.P. Morriss, CHAOS 5, 536 (1995).

· J. Machta and R. Zwanzig, Phys. Rev. Lett. 50, 1959 (1983).

· J.-P. Bouchaud and M. Potters, Theory of Financial Risks: From Statistical Physics to Risk Management (Cambridge University Press, Cambridge, 2000).

· Ju, N., 1998. Pricing an American option by approximating its early exercise boundary as a multipiece exponential function. Review of Financial Studies 11.

· K. Kannan and R. Telang (2004) “Economic analysis of market for software vulnerabilities” In Proc. 3rd Workshop on the Economics of Information Security, May 2004

· Karatzas, I., Shreve, S., 1991. Brownian Motion and Stochastic Calculus. Springer.

· Kobila, T., 1993. A class of solvable stochastic investment problems involving singular controls. Stochastics and Stochastics Reports 43.

· M. J. Atallah, E. Bertino, A. Elmagarmid, M. Ibrahim, and V. Verykios, “Disclosure limitation of sensitive rules,” in Knowledge and Data Engineering Exchange Workshop (KDEX'99), Chicago, Illinois, Nov. 8 1999, pp. 25–32. [Online]. Available: http://ieeexplore.ieee.org/iel5/6764/18077/00836532.pdf?isNumber=18077&prod=CNF&arnumber=00836532

· J.P. Hubaux, L. Buttyán, and S. Capkun. The quest for security in mobile ad hoc networks. In Proceedings of the 2nd ACM international symposium on Mobile ad hoc networking & computing, pages 146–155, Long Beach, USA, 2001.

· M. J. Atallah and W. Du, “Secure multi-party computational geometry,” in Seventh International Workshop on Algorithms and Data Structures (WADS 2001), Providence, Rhode Island, USA, Aug. 8-10 2001. [Online]. Available: http://www.cerias.purdue.edu/homes/duw/research/paper/wads2001.ps

· M. Lapidus, M. Frankenhuysen, Fractal strings, Complex Dimensions and the Zeros of the Zeta Function, Birkhauser, New York, 2000

· M. Laurent, A comparison of the Sherali-Adams, Lovász-Schrijver and Lasserre relaxations for 0-1 programming, Mathematics of Operations Research 28 (2003), 470-496.

· M. Laurent, Lower bound for the number of iterations in semidefinite relaxations for the cut polytope, Mathematics of Operations Research 28 (2003), 871–883.

· M. Li and P.M.B. Vitányi, An introduction to Kolmogorov complexity and its applications, Second Edition, Springer-Verlag, 1997.

· M. Sparrow (1990) “The application of network analysis to criminal intelligence: An assessment of the prospects” Social Networks, 13:253-274

· M. Haque and S.I. Ahamed. Security in pervasive computing: Current status and open issues. International Journal of Network Security, 3(3):203–214, 2006.

· M.H. Albert, R.E.L. Aldred, M.D. Atkinson, H.P. van Ditmarsch, and C.C. Handley. Safe communication for card players by combinatorial designs for two-step protocols. Australasian Journal of Combinatorics, 33:33–46, 2005.

· M.L. Katz and C. Shapiro (1985) “Network Externalities, competition and compatibility” Amer. Econ. Rev, 75(3):424-440, June 1985

· Mamer, J., 1987. Monotone stopping games. Journal of Applied Probability 24.

· Managing Online Security Risks, Hal Varian, New York Times, Jun 1, 2000.

· Manski, C., 1993. Identification of endogenous social effects: The reflection problem. Review of Economic Studies 60.

· Mark Nelson & Jean-loup Gailly, The Data Compression Book, 2nd edition, Cambridge Press.

· McConnell, S., “Software Project Survival Guide” Microsoft Press Redmond WA, 1998

· McManus, C., 1992. How common is identification in parametric models? Journal of Econometrics 53.

· Merton, R., 1990. Continuous-Time Finance. Blackwell Publishers.

· Morris, S., 1995. Co-operation and timing. CARESS Working Paper #95-05.

· Musa, J.D. (1998) “Software Reliability Engineering: More Reliable Software, Faster Development and Testing” McGraw Hill, NY NY

· O. Goldreich, S. Micali, and A. Wigderson, “How to play any mental game – a completeness theorem for protocols with honest majority,” in 19th ACM Symposium on the Theory of Computing, 1987, pp. 218–229. [Online]. Available: http://doi.acm.org/10.1145/28395.28420

· P. Cvitanović, J.-P. Eckmann, and P. Gaspard, Chaos, Solitons and Fractals 6, 113 (1995).

· P. Wang and M. C. González (2009) Understanding spatial connectivity of individuals with non-uniform population density. Phil. Trans. R. Soc. A 367, 3321-3329

· Pakes, A., Pollard, D., 1989. Simulation and the asymptotics of optimization estimators. Econometrica 57.

· Paula, Áureo de (2009) “Inference in a synchronization game with social interactions”, Journal of Econometrics 148, 56–71

· Peter Sestoft (2008) Systematic software testing IT University of Copenhagen, Denmark1 Version 2, 2008-02-25

· Plott, Charles R., and Kathryn Zeiler (2005) “The Willingness to Pay/Willingness to Accept Gap” 95 American Economic Review 530

· Polinsky, A. Mitchell and Steven Shavell (1998) “Punitive Damages: An Economic Analysis” 111 Harvard Law Review 869

· Port, S., Stone, C., 1978. Brownian Motion and Classical Potential Theory. Academic Press.

· Priest, George L. (1981) “A Theory of the Consumer Product Warranty” 90 Yale Law Journal 1297

· Priest, George L. (1987) “The Current Insurance Crisis and Modern Tort Law” 96 Yale Law Journal 1521

· Pu Wang, Marta C. González, César A. Hidalgo, Albert-László Barabási (2009) “Understanding the Spreading Patterns of Mobile Phone Viruses” Science 22 May 2009: Vol. 324, no. 5930, pp. 1071–1076. DOI: 10.1126/science.1167053

· R. Agrawal and R. Srikant, “Fast algorithms for mining association rules,” in Proceedings of the 20th International Conference on Very Large Data Bases. Santiago, Chile: VLDB, Sept. 12-15 1994, pp. 487–499. [Online]. Available: http://www.vldb.org/dblp/db/conf/vldb/vldb94-487.html

· R. Albert, H. Jeong and A.-L. Barabási (2000) “Error and attack tolerance of complex networks” Nature 406:378-382, 2000

· R. Anderson (2001) “Why information security is hard – an economic perspective” In 17th Annual Computer Security Applications Conf., Dec 2001, New Orleans LA.

· R. Artuso, Phys. Lett. A 160, 528 (1991).

· R. Böhme & G. Kataria (2006) “Models and measures for correlation in cyber-insurance” In Proc. of 5th Workshop on Economics of Information Security, June 2006, Cambridge UK

· R. E. L. Aldred, M. D. Atkinson, H. P. van Ditmarsch, C. C. Handley, D. A. Holton, and D. J. McCaughan. Permuting machines and priority queues. Theoretical Computer Science, 349:309–317, 2005.

· R. N. Mantegna and H. E. Stanley, An Introduction to Econophysics: Correlations and Complexity in Finance (Cambridge University Press, Cambridge, 2000).

· R. Wirth, M. Borth, and J. Hipp, “When distribution is part of the semantics: A new problem class for distributed knowledge discovery,” in Ubiquitous Data Mining for Mobile and Distributed Environments workshop associated with the Joint 12th European Conference on Machine Learning (ECML'01) and 5th European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD'01), Freiburg, Germany, Sept. 3-7 2001. [Online]. Available: http://www.cs.umbc.edu/~hillol/pkdd2001/papers/wirth.pdf

· R.J.F. Cramer and V. Shoup, A practical public key cryptosystem provably secure against adaptive chosen ciphertext attack, pp. 13–25 in: Proceedings of Crypto 1998, LNCS 1462, Springer-Verlag, 1998.

· Rubinstein, Ariel (1982) “Perfect Equilibrium in a Bargaining Model” 50 Econometrica 97

· Russel, R.S. (2002) “Operations Management” 4th Ed. Prentice Hall, Upper Saddle River NJ.

· S. Nagaraja and R. Anderson “The topology of covert conflict” In 5th Workshop on the Economics of Information Security, June 2006

· S.E. Schechter (2002) “How to buy better testing” LNCS 2437:73-87 Springer

· Sanchez, N. & Nugent, J.B. (2000) “Fence Laws vs Herd Laws: A Nineteenth Century Kansas Paradox” 76 Land Economics

· Schroeder, M. Fractals, Chaos, Power Laws: Minutes from an Infinite Paradise. NY. WH Freeman 1991

· Scotchmer, Suzanna (2004) “Innovation and Incentive” Cambridge, MA: MIT Press

· Smith, GM. (2001) “Statistical Process Control and Quality Improvement” 4th Ed. Prentice Hall, Upper Saddle River NJ.

· Spontaneous Order: The Self-organization of Economic Life ... Osborne, Martin (2004)

· Strogatz, S. Sync: The Emerging Science of Spontaneous Order. NY Hyperion 2003

· T. Bohr, M. H. Jensen, G. Paladin and A. Vulpiani, “Dynamical Systems Approach to Turbulence” (Cambridge University Press, Cambridge, 1998).

· T. Hoeholdt, J.H. van Lint, and R. Pellikaan, Algebraic geometry codes, pp. 871–961 in: V.S. Pless and W.C. Huffman (eds.), Handbook of Coding Theory, Part 1, Elsevier, 1998.

· T. Moore (2005) “Counteracting hidden-action attacks on networked systems” In Proc. 4th Workshop on the Economics of Information Security, June 2005

· Taylor, R.P., Order in Pollock's Chaos, Scientific American, Dec 2002, pp. 116-121.

· Thuesen, G.J. and W.J. Fabrycky (2001) “Engineering Economy” 9th Ed. Prentice Hall, Upper Saddle River NJ.

· U. Frisch, “Turbulence: The Legacy of A. N. Kolmogorov” (Cambridge University Press, 1995).

· Viscusi, W. Kip. (1991) “Reforming Products Liability” Cambridge, MA: Harvard University Press

· W. Du, “A study of several specific secure two-party computation problems,” Ph.D. dissertation, Purdue University, West Lafayette, Indiana, 2001. [Online]. Available: http://www.cerias.purdue.edu/homes/duw/research/duthesis.pdf

· W.N. Vance, Phys. Rev. Lett. 69, 1356 (1992).

· When 25 Cents is too much: An Experiment on Willingness-To-Sell and Willingness-To-Protect Personal Information, Jens Grossklags and Alessandro Acquisti, WEIS 2007.

· White, Michelle and Donald Wittman (1983) “A Comparison of regulation and Liability Rules under Imperfect Information” 12 Journal of Legal Studies 413

· Wittman, Donald (1977) “Prior Regulation vs. Post Liability: The Choice between Input and Output monitoring” 6 Journal of Legal Studies 193

· Woodward, Susan E. (1985) “Limited Liability in the Theory of the firm” 141 Journal of Institutional and Theoretical Economics 601

· N. Li, J.C. Mitchell, and W.H. Winsborough. Beyond proof-of-compliance: Security analysis in trust management. Journal of the ACM, 52(3):474–514, May 2005.

[1] See “Adverse Selection in Online 'Trust' Certifications” by Benjamin Edelman, WEIS 2006.
