The problem asks for an algorithm that takes, as input, a
statement and answers "Yes" or "No" according to whether the statement
is universally valid.
Alan Turing wrote his 1936 paper on answering this question.
These types of questions are not directly related to computers and
were asked before the first computer was even built. Is a question
only a question after it has been published? Did Thales of Miletus
(548 BC) ask this question? It might be that the answer to a
question cannot be put into words. It might be that the
answer is not something people can accept or understand.
We seem to attribute knowledge to the few people that have access to it.
The Turing test (1950) asks whether we can tell if we are conversing
with a computer or a human. René Descartes (1637) asked this same question.
We should be asking whether the computer can tell if we are human.
Should we be asking metaphysical questions of automata or calculators?
Does a gear in a clock have a soul? Should we be asking this
question before building the clock?
To think in science fiction terms is to waste time when building real machines.
"In just six weeks from the time the design was started, we had the
motor on the block testing its power." -- Orville Wright
"See everything, overlook a great deal, correct a little." -- Pope John XXIII
"Even if you do learn to speak correct English, whom are
you going to speak it to?" -- Clarence Darrow
"Children must be taught how to think, not what to think." -- Margaret Mead
"Maids want nothing but husbands, and when they have them, they want
everything." -- William Shakespeare
"It's simple, if it
jiggles, it's fat." -- Arnold Schwarzenegger
A computer can only execute one statement at a time and cannot
determine its correctness. It is up to the human to validate the
correctness of every instruction. Readability of code,
testing, tracing, and logging are required for every statement.
No current technology can verify the correctness of logic. We
can, independent of the technology, apply all kinds of analysis and
testing procedures after the fact, but this does not ensure correctness.
A = 1
We cannot tell if the above statement is correct. The computer
will execute this statement, and any analysis will have to assume
correctness. The only way to know correctness for any statement is
by human design. It is from this axiom that we must start.
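The point above can be sketched in code. This is a hedged illustration, not anyone's real system: the variable name and the intended value are invented here to show that the machine executes the assignment either way, and only a human-stated design makes correctness checkable at all.

```python
A = 1  # executes without complaint, but correct with respect to what?

# Nothing in the statement itself can tell us. Correctness only becomes
# checkable once a human states the design intent. Here we invent one:
# suppose A was meant to count completed retries, starting at zero.
INTENDED_INITIAL_VALUE = 0  # hypothetical design decision

def matches_design(value):
    # The judgment comes from the human-supplied design, not the machine.
    return value == INTENDED_INITIAL_VALUE

print(matches_design(A))  # False: the computer ran A = 1 anyway
```

The computer raised no error on `A = 1`; only the invented design statement let us call it wrong.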
A = 3, B = 4, C = 2, D = A * B * (C / 12)
The above statement cannot be understood: too much information is
missing. That information may not exist in the source code at all,
or it might exist only in the mind of the author.
A = 3 ft as height
B = 4 ft as width
C = 2 inches as depth
D = A * B * (C / 12 inches per foot) as ?
Humans can only determine the correctness when all relevant information
is known. Missing information forces an assumption. Hunting for the
location of information wastes time. Any interpretation requires knowledge from
external sources. Hidden information in proprietary systems and data
structures is a blindfold on correctness.
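The annotated form above can be made executable. This is a hedged sketch (the variable names are mine, and I read the "?" as a volume in cubic feet, which is what the "/ 12 inches per foot" conversion implies); the units live in the names, so the final line can be judged without outside information:

```python
INCHES_PER_FOOT = 12

height_feet = 3     # A = 3 ft as height
width_feet = 4      # B = 4 ft as width
depth_inches = 2    # C = 2 inches as depth

# Convert depth to feet so all three factors share one unit before multiplying.
volume_cubic_feet = height_feet * width_feet * (depth_inches / INCHES_PER_FOOT)

print(volume_cubic_feet)  # 2.0 (cubic feet)
```

With the units spelled out, a reader can verify the computation from this text alone: 3 ft x 4 ft x (2/12) ft = 2 cubic feet.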
The current computer languages fail in all aspects of verifiability
by humans. We must change computer technology to allow humans to
know if a program is correct. Not through analysis, but by providing
complete context for every line of source code. The actual
implementation of correctness comes from expanding our vocabulary,
which reduces the volume of words necessary for understanding.
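As a hedged sketch of what "expanding our vocabulary" could mean in practice (every name here is invented for the example, not taken from any real system): define a richer word once, with its conversion built in, and every later use needs fewer words to be understood.

```python
# Hypothetical illustration of expanding a programming vocabulary:
# one new "word" that permanently encodes the inches-to-feet conversion.

def volume_in_cubic_feet(height_feet, width_feet, depth_inches):
    """A vocabulary word: dimensions in, volume in cubic feet out."""
    return height_feet * width_feet * (depth_inches / 12)

# The call site now reads without any external documentation:
shelf_volume = volume_in_cubic_feet(height_feet=3, width_feet=4, depth_inches=2)
print(shelf_volume)  # 2.0
```

The vocabulary grew by one word, and in exchange every future use of it shrank to a single self-describing line.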
Readability of Code:
Reading software written with our current technologies is nearly
impossible. Our current compilers all perform the same level of work,
which gives the programmer a five-word vocabulary. From these five words
we group together new phrases, which we must interpret as we try to
relate them to reality. To make things worse, we must represent all
knowledge as numbers, whose meaning we must again interpret.
Source code should never require external information or comments. If
either is required, the code is not readable. A book that requires a
dictionary indicates a lack of knowledge on the part of the reader,
not on the part of the author. The same should be true for source code.
The lines of code for any application should be reduced by half
every 10 years.
We should be getting better at development, and our applications
should be easier to write, but this is not happening. It cannot happen
while every programmer must write from first principles. We should be
removing all assumptions and the need for external documentation.
DOS 1.0 had about 4,800 lines of code. Today Windows has about
50 million lines, and Linux about 12 million.
An application can be written in any number of languages. The purpose
of the application is the same in all of them, yet the logic changes
slightly from language to language. This should never happen, but it does.
Mechanical vs. Electronics:
Is there any logical difference between Charles Babbage's Analytical
Engine and an electronic computer? An AI abacus would be possible given
enough beads, or so I am told. Size and speed have no effect on logic.