For smaller programs, formal verification shows promise in establishing the security of many code samples. It remains an immensely expensive exercise, however, and the economics of software make it unlikely that this will ever become a standard process. Nor does verification guarantee a bug-free program: the mathematical proofs can themselves be faulty, as can the design being verified. Beyond this, the use of external libraries and even flaws in the compiler can introduce bugs.
Verification can reduce the testing load, but it does not eliminate it. In addition, verification is an NP-complete problem. As the size of a program increases, the effort and difficulty of verifying it rise, and the increase is geometric rather than linear. For these reasons, verification is an incredibly expensive exercise that (outside of selected military systems, many of which have nevertheless displayed flaws) does not provide the answer to creating secure software.
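The geometric growth described above can be made concrete with a toy sketch. It assumes, purely for illustration, that a program's state depends on some number of independent boolean flags, so an exhaustive checker faces 2^n reachable states; the figures are illustrative of the scaling, not a cost model.

```python
# Toy illustration of geometric growth in verification effort:
# a program whose state depends on n independent boolean flags has
# 2**n reachable states that an exhaustive checker must consider.
# (Illustrative assumption only; real programs are more complex.)

def state_space_size(n_flags: int) -> int:
    """Number of distinct states for n independent boolean flags."""
    return 2 ** n_flags

for n in (10, 20, 30, 40):
    print(f"{n} flags -> {state_space_size(n):,} states")
# 10 flags ->                 1,024 states
# 40 flags -> 1,099,511,627,776 states
```

Adding ten flags multiplies the state space by roughly a thousand, which is why verification costs rise so much faster than program size.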
Model checking is likewise only as effective as the people who validate the model. For instance, a recent flaw in US military drones was the result of a flawed model.
At present, only two operating systems are formally verified:
- the Secure Embedded L4 (seL4) microkernel, by NICTA;
- Integrity, by Green Hills Software.
By contrast, Microsoft's operating systems contain over 10,000,000 lines of code. At a cost of 20 times current development rates, Microsoft Windows 7 Professional would sell for $7,000 instead of the usual $350. This is clearly not an economically supportable position. Further, the majority of security incidents are not the direct result of software flaws and would not be mitigated by perfect software (if such a thing were even thought possible). With over 150,000,000 Microsoft Windows users, the cost of creating a secure version of Windows (ignoring add-on software such as Adobe's) would be in the order of $110 billion. This figure exceeds the $60 billion in losses estimated to occur annually from computer crime and other incidents directly attributable to software bugs.
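The cost comparison above can be checked directly. Every input below comes from the figures given in the passage (the 20x development premium, the $350 retail price, and the $110 billion and $60 billion estimates); none is independent data.

```python
# Sanity check of the cost figures quoted in the text.
# All inputs are taken from the passage, not from external sources.

RETAIL_PRICE = 350        # usual price of Windows 7 Professional (USD)
COST_MULTIPLIER = 20      # quoted premium for formal verification

verified_price = RETAIL_PRICE * COST_MULTIPLIER
print(f"Verified retail price: ${verified_price:,}")  # $7,000

VERIFICATION_COST = 110e9  # quoted cost of a verified Windows (USD)
ANNUAL_LOSSES = 60e9       # quoted annual losses from software flaws (USD)
shortfall = VERIFICATION_COST - ANNUAL_LOSSES
print(f"Verification cost exceeds annual losses by ${shortfall:,.0f}")
```

The point of the comparison is that the up-front verification cost is nearly double the annual loss figure it would be meant to prevent.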
The result is that a market for formally verified software does exist (as those who have produced it demonstrate), but it is small and certainly not one that should be forced on all software users. Worse, the additional cost would reduce what users can spend on securing their systems. As removing bugs without securing the system's design does not mitigate all security flaws, this approach will not result in more secure systems.
1. Adams, N.E. (1984) "Optimizing preventive service of software product," IBM Journal of Research and Development, 28(1), pp. 2-14.
6. Beach, J. R. & Bonewell, M. L. (1993) "Setting-up a successful software vendor evaluation/qualification process for 'off-the-shelf' commercial software used in medical devices," Proceedings of the Sixth Annual IEEE Symposium on Computer-Based Medical Systems.
14. Garey, M. R. & Johnson, D. S. (1979) "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman, USA.
16. MacAskill, E. (2009) "US drones hacked by Iraqi insurgents," guardian.co.uk, 17 December 2009, http://www.guardian.co.uk/world/2009/dec/17/skygrabber-american-drones-hacked