Tuesday, 9 November 2010

Legal rules and software

It has been argued [11, 13] that negligence rules are required to force software vendors to act optimally. However, the effect of reputation and the marginal cost of reduced sales are themselves incentives for the vendor to act optimally. The supply of imperfect information to the vendor through incomplete and inadequate feedback channels is a problem that can be solved, in part, through the creation of a market for vulnerability research. The effect of reputation on the vendor and the assignment of risk through this process result in fewer negative externalities than occur under a legislative approach.

There are good reasons for this allocation of risk. What little the software vendor can do is generally already being done (as noted, the vendor has an interest in reducing the damage to its reputation). That is, the vendor's level of care, C(Omega), is already as close to the optimal level as is economically efficient for all parties, whereas the user-specific component, C(Omega_i), can be achieved more cheaply through the user's self-insurance (mitigation, compensating controls, etc.). The user has a better idea of the environment and the uses to which they intend to expose the software. Hence, the user is a better estimator (and reducer) of consequential damages than the software vendor.

Damage may be expressed as a function of both the user (x) and the vendor (y) as D(x,y), where the expected damage is E[D(x,y)] = P(x)P(y)D(x,y). Here the probability P(x) is controlled by the user, P(y) is a function of the vendor, and D(x,y) is hence related to the actions of both the user and the vendor. For instance, the vendor may increase the testing of the software, and the user can install compensating controls designed to mitigate software flaws or to detect and intercept exploits as they occur.
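To make the interaction concrete, the following short Python sketch evaluates the expected-damage function for a hypothetical scenario. Every probability and the damage figure below are illustrative assumptions, not values taken from any study.

    # Sketch of the expected-damage model E[D(x,y)] = P(x) * P(y) * D(x,y).
    # All numeric inputs are illustrative assumptions.

    def expected_damage(p_user, p_vendor, damage):
        """Expected loss where p_user is the breach probability the user controls (P(x)),
        p_vendor the probability the vendor controls (P(y)), and damage is D(x,y)."""
        return p_user * p_vendor * damage

    # Vendor testing lowers P(y); user patching and compensating controls lower P(x).
    baseline  = expected_damage(p_user=0.30, p_vendor=0.20, damage=100_000)  # about 6,000
    mitigated = expected_damage(p_user=0.10, p_vendor=0.10, damage=100_000)  # about 1,000
    print(baseline, mitigated)

Either party can reduce the expected loss, but the user's controls act on both P(x) and the consequential damage D(x,y), which is the basis of the allocation argument above.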

The level of damages suffered by a user depends on the pre-breach behavior of both the user and the vendor. The vendor is in a position where reputation influences sales (demand) and hence the willingness to add layers of testing and additional controls (all of which increase the cost of the software). As the market for software varies in its elasticity [11, 19], from highly inelastic in small markets with few competitors (e.g. electricity markets) to highly elastic (e.g. games), the user is best placed to determine their own needs. The user may select customized software with warranties designed to reduce the level of breaches that can occur [11, 13], but this comes at an increased cost.

Software vendors (unless specifically contracted otherwise) do not face strict liability for the damage associated with a breach due to a software vulnerability. Although negligence rules for software vendors have been called for [2], this creates a sub-optimal outcome. The user can (excepting certain isolated instances):
  1. select different products with an expectation of increased security,
  2. add external controls (by introducing external devices, creating additional controls, or using other software that enhances the capability of the primary product), or
  3. increase monitoring for attacks that may be associated with the potentially vulnerable services (such as through the use of an IDS).
By limiting the scope of the user's responsibility, the user's incentive to protect their systems is also limited. That is, the user does not have the requisite incentive to take the optimal level of precautions. Most breaches are not related to zero-day attacks [7]. Where patches have been created for known vulnerabilities that could lead to a breach, users will act in the manner (rational behavior) that they expect will minimize their costs. Whether risk seeking or risk averse, the user aims to minimize the costs that they will experience. This leads to a wide range of behavior, with risk-averse users taking additional precautions while risk-neutral users may accept the risk by minimizing their upfront costs (which may lead to an increased loss later). In any event, the software vendor, as the cause of a breach, is not liable for any consequential damages. This places the appropriate incentive on the user to mitigate the risk. As is noted below, the vendor has the incentive to minimize the risk to its reputation.

The software vendor could offer to be liable for consequential damages (where they would guarantee against losses from all software flaws), but this would increase the cost of the testing process. As there is no way to ensure, and hence guarantee, the security of software [12, 20] (that is, to make it bug-free), there is no way for the vendor to be certain they have found all the flaws in their software. As each bug is discovered, the effort required to find the next bug increases geometrically [17, 20].
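The following sketch illustrates, under an assumed growth ratio rather than a measured one, how geometric growth in per-bug effort quickly makes exhaustive bug-finding uneconomic:

    # Illustrative only: cumulative effort when each successive bug costs `ratio`
    # times the effort of the previous one. The ratio of 1.5 and the unit of
    # "effort" are assumptions, not measured values.

    def cumulative_effort(first_bug_effort, ratio, bugs_found):
        return sum(first_bug_effort * ratio ** k for k in range(bugs_found))

    for n in (10, 20, 30):
        print(n, round(cumulative_effort(1.0, 1.5, n), 1))
    # 10 bugs -> 113.3, 20 -> 6648.5, 30 -> 383500.1 effort units
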
The software vendor could insure against these losses and offer consequential damages to the user, but this would entail a greater cost. The fact that few users seek out vendors willing to provide (and that few users would be willing to pay for) this level of assurance is strong evidence that users are unwilling to pay the extra costs that such a level of liability would entail. Industry custom is generally a high-quality indicator of the efficient allocation of liability in contracts in the absence of trade-restrictive government regulation. Software markets involve bargaining and contractual negotiation in which costs can be efficiently transferred to third parties.
A market system for the discovery of bugs is at a comparative disadvantage, as the tester does not (in general) have access to source code and is constrained by existing legislative instruments (such as the US DMCA) that make the reverse engineering of software expensive and, in some instances, illegal. The large number of potential vulnerability researchers, as well as the existing pool of researchers, creates opportunities both within organized companies and for the creation of new ventures. This has an additional consequential benefit: many of the potential recruits into cybercrime organizations are the same people who could become vulnerability researchers.
This leads to a more optimal allocation of people into the role of vulnerability researcher and decreases the supply of recruits to criminal organizations. A consequential side effect is an increase in the marginal costs of cybercrime, making it more expensive for criminals to conduct their operations. The increase in costs for criminal organizations moves the indifference curves and consumption optima of the cybercrime organization in a way that reduces crime.

Blanchard & Fabrycky [4] remind us that:
"When the producer is not the consumer, it is less likely that potential operation problems will be addressed during development. Undesirable outcomes too often end up as problems for the user of the product instead of the producer."

Software has an "agency problem" associated with its production: shrink-wrapped software moves issues that an internal development house would mitigate into the mainstream. Regarding the rise of the shrink-wrapped software industry, Brooks [8] notes:

For the developer in the shrink-wrapped industry, the economics are entirely different from those in the classical industry: development cost is divided by large quantities; packaging and marketing costs loom large. In the classical in-house development industry, schedule and the details of function were negotiable, development cost might not be; in the fiercely competitive open market, schedule and function quite dominate development cost.

Brooks [8] further asserted that the distinct development environments of in-house and shrink-wrapped software give rise to divergent programming cultures. Shrink-wrapped software can cost a fraction of what it costs to develop the same software in-house. Even when large numbers of users run the same software (such as Microsoft Word in many organizations), developing the software in-house generally costs around three times as much [8] as an equivalent off-the-shelf package.

Even in-house developed software is not bug-free; in fact, in-house software is more likely to contain bugs on release [8]. As development budgets are generally fixed in in-house projects, schedule slippages commonly result in cuts to the testing and debugging stages of the project. This has the disastrous effect of leaving more bugs in the final product.

In-house software may seem to have a lower rate of bugs and vulnerabilities, but in reality the more users a piece of software has, the greater the number of bugs that will be uncovered in any period. This may appear to favor in-house developed software, but the interconnectivity of systems greatly increases the number of people who can find a bug in the application. The distinction is that external parties will have a far lower rate of reporting than internal users (that is, attackers will sit on a discovered vulnerability and maintain it as a zero-day attack path against the organization).
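A toy model (not drawn from any of the cited works) illustrates the effect of user numbers on discovery: if each user independently stumbles on a given latent bug with some small probability in a period, the expected number of distinct bugs uncovered rises quickly with the size of the user base. The latent bug count and per-user probability below are assumptions for illustration only.

    # Toy model only: latent bug count and per-user discovery probability
    # are assumed values used for illustration.

    def expected_bugs_found(latent_bugs, users, p):
        """Expected distinct bugs uncovered in a period when each of `users`
        independently finds any given bug with probability `p`."""
        return latent_bugs * (1 - (1 - p) ** users)

    for n in (10, 1_000, 100_000):
        print(n, round(expected_bugs_found(latent_bugs=100, users=n, p=0.0001), 1))
    # 10 users -> ~0.1 bugs, 1,000 users -> ~9.5, 100,000 users -> ~100.0

The model says nothing about whether a finder reports the bug, which is precisely the distinction drawn above between internal users and external attackers.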

Widely deployed shrink-wrapped software, on the other hand, is likely to have the same number of bugs or fewer; that is:

Number of bugs in shrink-wrapped software <= number of bugs in in-house software

The issue comes from the cost of minimizing software bugs. The cost of conducting program verification is generally around 10x the cost of developing the software. As shrink-wrapped software is generally around a third of the cost of in-house software (for mainstream applications), the cost of developing secure software comes to over 30x the cost of an off-the-shelf product. More reasonable solutions can instead be deployed to minimize the losses associated with the remaining bugs. Hence, it is rarely cost-effective to ensure that software is perfect.
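The arithmetic behind the 30x figure follows directly from the two multipliers quoted above; a minimal sketch:

    # Both multipliers are the figures quoted in the text above.
    off_the_shelf_cost = 1.0                  # normalised price of a shrink-wrapped product
    in_house_cost = 3 * off_the_shelf_cost    # in-house development ~3x off-the-shelf [8]
    verification_multiplier = 10              # program verification ~10x development cost
    verified_cost = verification_multiplier * in_house_cost
    print(verified_cost)                      # 30.0 -> over 30x the off-the-shelf price
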

References...
2. Arora, A., Telang, R. & Xu, H. (2004) “Optimal Time Disclosure of Software Vulnerabilities”, Conference on Information Systems and Technology, Denver CO, October 23-2; see also Arora, A. & Telang, R. (2005), “Economics of Software Vulnerability Disclosure”, IEEE Security and Privacy, 3(1), 20-2
4. Blanchard, B.S., & Fabrycky, W. J., (2006) “Systems Engineering and Analysis”. 4th ed. Upper Saddle River, N.J.: Pearson Prentice Hall
7. BeyondTrust Software (2010) “BeyondTrust 2009 Microsoft Vulnerability Analysis: 90% of Critical Microsoft Windows 7 Vulnerabilities are Mitigated by Eliminating Admin Rights”, http://www.beyondtrust.com/downloads/whitepapers/documents/wp039_BeyondTrust_2009_Microsoft_Vulnerability_Analysis.pdf
8. Brooks, F. P. (1995) “The Mythical Man-Month”. Addison-Wesley
11. Wittman, D. (2006) "Economic Foundations of Law and Organization", Cambridge University Press
13. Durtschi, C., Hillison, W., & Pacini, C. (2002) “Web-Based Contracts: You Could Be Burned!”, Journal of Corporate Accounting & Finance, 13(5), pp. 11-18
17. Mills, H. D. (1971) "Top-down programming in large systems", in Debugging Techniques in Large Systems, R. Rustin (Ed.), Englewood Cliffs, N.J.: Prentice-Hall
18. Reed, Chris (2004) “Internet Law Text and Materials”, 2nd Edition, Cambridge University Press, UK
19. Stolpe, M. (2000). Protection Against Software Piracy: A Study Of Technology Adoption For The Enforcement Of Intellectual Property Rights. Economics of Innovation and New Technology, 9(1), 25-52.
20. Turing, A. (1936) “On computable numbers, with an application to the Entscheidungsproblem”, Proceedings of the London Mathematical Society, Series 2, 42, pp. 230-265

Sunday, 7 November 2010

Programming Verification

For smaller programs, formal verification does show promise in determining the security of many code samples. It remains an immensely expensive exercise [6], however, and the economics of software make it unlikely to become a standard process. Nor does verification guarantee a bug-free program: the mathematical proofs can be faulty, or the design itself can be flawed. In addition, the use of external libraries and even flaws in a compiler can introduce bugs.

Verification can reduce the testing load, but it does not remove it. In addition, verification is an NP-complete problem [14]. As the size of the program increases, the effort and difficulty involved in verification rise; worse, the increase in difficulty is geometric. For these reasons, verification is an incredibly expensive exercise that (outside of selected military systems, many of which have displayed flaws) does not provide the answer to creating secure software.

Model checking is also only as effective as the people who validate the model. For instance, a recent flaw in US military drones was due to a flawed model [16].

At present, only two operating systems have been formally verified:
  1. the Secure Embedded L4 (seL4) microkernel by NICTA, and
  2. the INTEGRITY operating system by Green Hills Software.
Adams [1] noted that a third of all software faults take more than 5,000 execution-years to manifest themselves. The small sample of EAL6+ software considered here is not statistically representative of all software, but it does provide evidence of the costs involved. This also demonstrates why only two operating system vendors have ever completed formal verification. The "Secure Embedded L4 microkernel" by NICTA comprises 9,300 lines of code, of which 80% has been formally verified at a cost of 25 person-years of work. The resulting cost of roughly US$700 per line of code (the low estimate) demonstrates that formal verification is not a feasible solution for most software.
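As a rough cross-check on the per-line figure, the cost can be reproduced from the quoted effort once a fully loaded cost per person-year is assumed. The US$210,000 figure used below is an assumption for illustration, not a number from the text.

    # Reproducing the ~US$700 per-line-of-code estimate from the quoted effort.
    # The fully loaded cost per person-year is an assumed value.
    total_loc = 9_300
    verified_loc = int(0.8 * total_loc)      # 80% formally verified -> 7,440 lines
    person_years = 25
    cost_per_person_year = 210_000           # assumption, US$

    total_cost = person_years * cost_per_person_year   # 5,250,000 US$
    print(round(total_cost / verified_loc))             # ~706 US$ per verified line
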

For instance, Microsoft's operating systems run to over 10,000,000 lines of code. At 20x the current development rates, Microsoft Windows 7 Professional would sell for $7,000 instead of the usual $350. This is clearly not an economically supportable position. Further, many incidents (in fact the majority) are not the direct result of software flaws and would not be mitigated by perfect software (if such a thing could even be thought possible). With over 150,000,000 Microsoft Windows users, the cost of creating a secure version of Windows (ignoring add-on software such as Adobe's) would be on the order of $110 billion. This figure exceeds the $60 billion in losses that have been estimated to occur annually from computer crime and other flaws directly attributable to software bugs.
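A minimal sketch of this pricing arithmetic, using only the figures quoted above (the 20x multiplier, the $350 list price, and the two aggregate estimates):

    # All inputs are the figures quoted in the text above.
    current_price = 350                      # approximate Windows 7 Professional price, US$
    rate_multiplier = 20                     # 20x current development rates
    print(rate_multiplier * current_price)   # 7,000 US$ per license

    secure_windows_cost = 110e9              # stated estimate for a "secure" Windows, US$
    annual_crime_losses = 60e9               # stated annual loss estimate, US$
    print(secure_windows_cost / annual_crime_losses)   # ~1.8x the estimated annual losses
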

The result is that there is a market for formally verified software (as can be seen from those who have undertaken it), but it is small and certainly not one that should be forced on all software users. Worse, the additional cost would reduce the amount that users can spend on securing their systems. As mitigating bugs without securing the system design will not remove all security flaws, this approach will not result in more secure systems.

References...
1. Adams, N. E. (1984) "Optimizing preventive service of software products", IBM Journal of Research and Development, 28(1), pp. 2-14
6. Beach, J. R., & Bonewell, M. L. (1993) "Setting-up a successful software vendor evaluation/qualification process for 'off-the-shelve' commercial software used in medical devices", Proceedings of the Sixth Annual IEEE Symposium on Computer-Based Medical Systems
14. Garey, Michael R. & Johnson, David S. (1979) “Computers and Intractability: A Guide to the Theory of NP-Completeness” W. H. Freeman, USA
16. MacAskill, Ewen (2009) "US drones hacked by Iraqi insurgents", guardian.co.uk, 17 December 2009, http://www.guardian.co.uk/world/2009/dec/17/skygrabber-american-drones-hacked