In economic terms, we want to assign liability so that the parties adopt the optimal damage-mitigation strategy. The victim will mitigate their own damages where no damages for breach are recoverable in respect of losses that the optimal strategy and payoffs would have avoided. The rule that creates the best incentives for both parties is the doctrine of avoidable consequences (marginal-cost liability).
Blanchard & Fabrycky (2006) note:
"When the producer is not the consumer, it is less likely that potential operation problems will be addressed during development. Undesirable outcomes too often end up as problems for the user of the product instead of the producer."
Both software and security in general have an "agency problem". Only by measuring and reporting these costs in financial and economic terms can we start to truly fix the problems.
Mitigation of damages is concerned both with the post-breach behaviour of the victim and with the actions each party takes to minimise the impact of a breach. In software parlance, this imposes costs on the user of the software, who must adequately secure their systems. This again is a trade-off. Before the breach (through software failures and vulnerabilities that can lead to a violation of a system's security), the user has an obligation to install and maintain the system in a secure state.
The user is likely to have the software products of several vendors installed on a single system. As a consequence, the interactions of the software selected and installed by the user span multiple sources, and no single software vendor can account for all possible combinations and interactions.
As such, any pre-breach behaviour of the vendor and the user of software needs to incorporate the capability of the vendors to minimise not only the flaws in their own products, but also the interactions with other products installed on a system.
There are several options that can be deployed in order to minimise the effects of a breach due to a software problem prior to the discovery of a vulnerability. These include:
- The software vendor can implement protective controls (such as firewalls)
- The user can install protective controls
- The vendor can provide accounting and tracking functions
- The vendor can employ more people to test software for vulnerabilities
- The software vendor can add additional controls
This is not to say that no liability does or should apply to the software vendor. The vendor in particular faces a reputational cost (discussed later) if they fail to maintain a satisfactory level of controls, do not respond to security vulnerabilities quickly enough, or suffer too many problems.
The accumulation of a large number of software vulnerabilities by a vendor has both a reputational cost to the vendor and a direct cost to the user (time to install patches and the associated downtime and lost productivity). As a consequence, the accumulation of software vulnerabilities, and the associated difficulty of patching or otherwise mitigating them, is a cost to the user that can be investigated prior to a purchase (and is hence a cost that is assigned to new vendors even if they experience an exceptionally low rate of patching/vulnerabilities). As users are rational in their purchasing actions, they will incorporate the costs of patching their systems into the purchase price.
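The pricing effect described above can be sketched numerically. This is an illustration only: the function name and all dollar figures are hypothetical, chosen simply to show how expected patching costs discount a rational buyer's willingness to pay.

```python
# Illustrative only: a rational buyer discounts the purchase price by the
# expected lifetime cost of patching. All figures below are hypothetical.

def willingness_to_pay(gross_value, patches_per_year, cost_per_patch, years):
    """Buyer's maximum price: the value the software delivers minus the
    expected patching cost (downtime, labour) over its service life."""
    expected_patch_cost = patches_per_year * cost_per_patch * years
    return gross_value - expected_patch_cost

# A product worth $10,000 that ships 12 patches a year, each costing the
# user $50 in downtime and labour, over a 5-year service life:
print(willingness_to_pay(10_000, 12, 50, 5))  # -> 7000
```

A vendor with a worse patching record thus receives a lower price even before any liability rule applies.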
The probability of a vulnerability occurring in a software product will never approach zero. Turing and Dijkstra demonstrated that it is not possible to prove that a software product is bug-free. As a consequence, the vendor's testing process can be modelled as a hazard model. In this, it is optimal for the vendor to maximise their returns such that the cost of software testing is balanced against their reputation.
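Consistent with the note below that this will be demonstrated as a Poisson-distributed function, the hazard view can be sketched as a Poisson discovery process. The discovery rate used here is a hypothetical placeholder; the point is only that the breach probability stays strictly positive however much testing lowers the rate.

```python
import math

# Sketch of the hazard-model view: vulnerability discoveries arrive as a
# Poisson process with rate lam per unit time. Testing reduces lam but
# cannot drive it to zero, so P(at least one discovery) never reaches 0.
# The rate below is hypothetical, for illustration only.

def p_at_least_one(lam, t):
    """P(N(t) >= 1) for a Poisson process with rate lam over time t."""
    return 1.0 - math.exp(-lam * t)

# Even a low residual discovery rate of 0.5 per year leaves a substantial
# probability of at least one vulnerability surfacing within a year:
print(round(p_at_least_one(0.5, 1.0), 4))
```

Halving the rate through extra testing only halves the exponent, so the residual risk decays but never vanishes.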
The cost of finding vulnerabilities can also be expressed as an optimal function through the provision of a market for vulnerabilities. In this way, the software vendor maximises their testing through a market process. This will result in the vendor extending their own testing to the point where they cannot efficiently discover more bugs. Those bugs that are sold on the market are priced, and the vendor has to pay either to purchase these from the vulnerability researcher (who has a specialisation in uncovering bugs) or to increase their own testing. The vendor will continue to increase the amount of testing that they conduct until the cost of their testing exceeds the cost of purchasing the vulnerability.
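The stopping rule in that paragraph can be sketched as a simple comparison of marginal in-house testing cost against the market price of a vulnerability. The cost curve and price here are hypothetical; the code only illustrates the "test until it is cheaper to buy" equilibrium.

```python
# Toy sketch of the market stopping rule: the vendor keeps testing while
# the marginal cost of finding the next bug in-house is at or below the
# market price of buying it from a researcher. Figures are hypothetical.

def bugs_found_in_house(marginal_costs, market_price):
    """Number of bugs the vendor finds itself before the next bug
    becomes cheaper to buy on the vulnerability market."""
    found = 0
    for cost in marginal_costs:  # marginal cost of each successive bug
        if cost > market_price:
            break  # buying from researchers is now cheaper
        found += 1
    return found

# Rising marginal testing costs against a $5,000 market price per bug:
costs = [500, 900, 1_800, 3_500, 6_500, 12_000]
print(bugs_found_in_house(costs, 5_000))  # -> 4
```

Because marginal testing costs typically rise as the easy bugs are exhausted, the vendor's in-house effort stops exactly where the market becomes the cheaper discovery channel.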
This market also acts as an efficient transaction process for the assignment of negligence costs. The user still has to maintain the optimal level of controls that are under their influence (installation, patching frequency etc.), whilst the vendor is persuaded to pay the optimal level of costs for testing and mitigation.
The vendor should not be liable for avoidable consequences. Where the user has failed to patch, to install and configure controls, and to otherwise mitigate the possible damages that they can suffer, the vendor has no responsibility. These costs of mitigation are part of the total cost of ownership for the software.
The optimal amount of precaution occurs when the last dollar expended by each party on the reduction or prevention of damages reduces the expected damage by at least $1. This can be expressed formally.
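The condition above can be formalised as the standard joint-precaution minimisation. The notation here is mine, not the source's: x and y denote precaution spending by vendor and user, p(x, y) the probability of a breach, and D the damage if one occurs.

```latex
% x: vendor precaution spend, y: user precaution spend,
% p(x, y): probability of breach, D: damage if a breach occurs.
% The socially optimal precaution levels minimise total expected cost:
\min_{x,\, y} \; x + y + p(x, y)\, D
% First-order conditions: each party spends until the last dollar of
% precaution reduces expected damage by exactly one dollar:
-\frac{\partial p(x, y)}{\partial x}\, D = 1,
\qquad
-\frac{\partial p(x, y)}{\partial y}\, D = 1
```

While the marginal dollar still reduces expected damage by more than a dollar, further precaution pays; past that point, it is wasted.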
 See behavioural economics and rational behaviour.
 It may be demonstrated that sub-optimal behaviour does exist where users limit maintenance (patching) in certain conditions.
 I will let you Wiki these…
 This will be demonstrated as a Poisson distributed function
 Demonstrate that reputation has value to a vendor
 The breaching party is never liable for the damage that could have been mitigated under the doctrine of avoidable consequences
* Blanchard, B.S. & Fabrycky, W.J. (2006) Systems Engineering and Analysis, 4th Edition, Pearson Prentice Hall, Upper Saddle River, NJ, USA