Monday, 14 January 2008

Systems Review

Many people seem to make the over-simplification that all code can be written correctly and tested completely: that no matter how long or complex a program is, there is a way of determining its error rate. This is a fallacy.

The majority of libraries used in development (excluding open source, e.g. Linux) are compiled object code. In expecting that all code be verified, are we expecting the world to stop using any code for which it lacks the source? That all source be checked? I fail to see how this could be completed; hence the issues with GCC a number of years back.

Dijkstra developed the method "correct by construction". He also did extensive work on the mathematical proof of algorithms. References to these works are attached below.
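As a minimal sketch (my own illustration, not taken from Dijkstra's texts) of what proof-oriented programming looks like, here is integer division by repeated subtraction, annotated in the Hoare/Dijkstra style: each assert states the precondition, loop invariant or postcondition, so the code carries its own correctness argument.

```python
def divide(x, y):
    """Return (q, r) with x == q * y + r and 0 <= r < y."""
    assert x >= 0 and y > 0                  # precondition
    q, r = 0, x
    while r >= y:
        assert x == q * y + r                # loop invariant
        q, r = q + 1, r - y
    assert x == q * y + r and 0 <= r < y     # postcondition
    return q, r
```

The invariant x == q * y + r holds before and after every iteration, and combined with the exit condition r < y it yields the postcondition; that is the whole proof, written inline.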

Kurt Gödel, Alan Turing and Alonzo Church (GTC) did work which resulted in computability theory. They discovered that certain basic problems cannot be solved by computers. Cohen, Hollingworth and Dijkstra all developed this theory further.

Now I get to error determination. GTC demonstrated in computability theory that it is not possible to create a machine that can determine whether an arbitrary mathematical statement is true or false. All code and programming is a mathematical statement or algorithm, and the determination of a code's function is a mathematical proof (see Cohen and Dijkstra).

As it is not possible for either an automaton or a Turing machine to determine the correctness of an arbitrary programme, it is not possible to determine the effects of code in general; only the simplest of automata are decidable.
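Turing's argument for this is short enough to sketch in code. Suppose someone hands us a decider that claims to answer "does f(x) halt?"; the function names here are my own, and the decider is a stub, since by Turing's result no correct implementation can exist. Building a program that does the opposite of whatever the decider predicts about it forces a contradiction:

```python
def would_halt(func, arg):
    # Hypothetical halting decider (a stub for illustration).
    # Turing showed no correct version of this function can exist;
    # this one simply answers True so the contradiction is observable.
    return True

def diagonal(f):
    # Do the opposite of whatever the decider predicts for f run on itself.
    if would_halt(f, f):
        raise RuntimeError("simulating an infinite loop")
    return "halted"

# Ask the decider about diagonal(diagonal), then watch it be wrong:
predicted = would_halt(diagonal, diagonal)
try:
    diagonal(diagonal)
    actually_halted = True
except RuntimeError:
    actually_halted = False
# Whichever answer would_halt gives, diagonal(diagonal) does the opposite,
# so no decider can be right about every program.
```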

Dijkstra started work on formal verification in the 1970s. The prevailing opinion at the time was that one should first write a program and then provide a mathematical proof of its correctness.

"On the Cruelty of Really Teaching Computing Science" (Dijkstra, 1988) saw Dijkstra trying to push computable correctness. This missed the need for engineers to compromise on the one hand with the physical world and on the other with cost control.

This is the issue: to move ahead and develop the code people want, we cannot complete mathematical software verifications. No machine (at least none yet known) can verify arbitrary code. The term machine here refers to the computer science idea of a machine, not a physical item.

To state that all code should be verified would be great for me personally. I am a mathematician, and computers cannot verify code (see the theory of computation), so this would put my mathematical skills in greater demand and help the next time I go for a raise. (See Dijkstra, Turing et al. and all the other people who created the foundations of computer science.)

The use of finite state machines, labelled transition systems, Petri nets, timed automata, hybrid automata, process algebras, formal semantics of programming languages (operational semantics, denotational semantics, Hoare logic) or any other existing method of computational verification just adds to the complexity.
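To make the first item on that list concrete, here is a toy finite-state-machine check (my own example, not from any of the cited works): two traffic lights at a crossing, composed with free interleaving, exhaustively searched for the safety property "never both green". Even this tiny system needs a full reachability search, and real systems multiply states the same way.

```python
from collections import deque

# Each light cycles red -> green -> yellow -> red.
CYCLE = {"red": "green", "green": "yellow", "yellow": "red"}

def successors(state):
    a, b = state
    # Interleaving semantics: either light may take a step.
    yield (CYCLE[a], b)
    yield (a, CYCLE[b])

def check_safety(start):
    """Breadth-first search of all reachable states; False on violation."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == ("green", "green"):
            return False  # safety property violated
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True
```

Starting from ("red", "red"), the naive design fails: with uncoordinated stepping, a state where both lights are green is reachable, and the exhaustive search finds it.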

I recommend reading Dijkstra's paper "On the Cruelty of Really Teaching Computing Science", in which Dijkstra argues for formal verification and against software engineering. Yet we still trust in software security...


Cohen, Fred, 1998, “Protection Testing”, September 1998

Cohen, Fred, 1997, “Managing Network Security: Penetration Testing?”

Cohen, Fred, 1996, “National Info-Sec Technical Baseline, Intrusion Detection and Response”, Lawrence Livermore National Laboratory, Sandia National Laboratories, December 1996

Cohen, Fred, 1992, “Operating System Protection Through Program Evolution”, research for ASP, PO Box 81270, Pittsburgh, PA 15217, USA

Dijkstra, Edsger W. (1976). A Discipline of Programming. Englewood Cliffs, NJ: Prentice Hall

Hollingworth, D., S. Glaseman and M. Hopwood, "Security Test and Evaluation Tools: an Approach to Operating System Security Analysis," P-5298, The Rand Corporation, Santa Monica, CA., September 1974.
