Original by Chris Hillman
(Last modified by Chris Hillman 2 Feb 2001.)
A few years after his revolutionary application of Shannon's concept of
entropy to the study of dynamical systems, Kolmogorov saw how to introduce
a new concept of entropy, the algorithmic complexity
of a finite sequence of bits. At virtually the same time,
Ray Solomonoff devised a notion of algorithmic probability
and Martin-Löf used similar ideas to develop a theory of algorithmic randomness.
Meanwhile,
Gregory Chaitin
(then an undergraduate at CUNY) independently discovered many of the same ideas.
Later, Chaitin discovered a startling connection with the notorious
incompleteness theorem
of Kurt Gödel.
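The algorithmic complexity of a bit string is, roughly, the length of the shortest program that prints it; it is uncomputable in general, but any compressor gives an upper bound. As a minimal illustrative sketch (using zlib purely as a stand-in for a real description method, not as anything in the theory itself), a regular string compresses far better than a random one:

```python
import random
import zlib

def compressed_length(bits: str) -> int:
    """Length in bytes of the zlib-compressed string: a crude,
    encoder-dependent upper bound on its algorithmic complexity."""
    return len(zlib.compress(bits.encode()))

structured = "01" * 500  # highly regular: a short description exists

random.seed(0)
rand = "".join(random.choice("01") for _ in range(1000))  # no visible pattern

print(compressed_length(structured))  # small: the regularity is exploited
print(compressed_length(rand))        # much larger: little to exploit
```

The gap between the two numbers is the whole point: "random" is identified with "incompressible."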
Here are several expository papers explaining these ideas:
-
Lecture Notes on Descriptional Entropy and Randomness,
by
Peter Gacs
(Computer Sciences, Boston University).
A good introduction to the notion of algorithmic entropy (a.k.a.
"descriptional entropy" or "Kolmogorov complexity").
-
Randomness and Mathematical Proof,
by Gregory J. Chaitin (IBM Research).
This is an electronic reprint of a 1975 Scientific American article
in which Chaitin discussed the connection he discovered
between Gödel's Theorem and algorithmic complexity.
-
Randomness in Arithmetic,
by Gregory Chaitin (a reprint of a 1987 Scientific American article).
-
How to Run Algorithmic Information Theory on a Computer
by Gregory Chaitin,
who has continued to develop increasingly sophisticated versions of this theory over the past
thirty-odd years. In the "classical" versions of this theory,
it is very difficult to actually compute or even estimate the algorithmic entropy of a
specific binary string; indeed, these entropies are formally uncomputable.
In this talk, Chaitin proposes a new version of algorithmic information theory which gets around
this limitation by fixing a particular universal Turing machine (i.e., roughly speaking, working with
a specific general programming language related to LISP).
You can find a
web tutorial
which allows you to play around with the theory,
plus many, many other on-line papers/books at
Chaitin's home page.
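Chaitin's move of fixing one particular machine can be mimicked in miniature. The toy "description language" below is my own hypothetical example (deliberately non-universal, nothing like Chaitin's LISP-based machine): a program is a pair (pattern, count) meaning "repeat this pattern count times," and the complexity of a string relative to this machine is the length of its shortest such program:

```python
def shortest_description(s: str) -> int:
    """Length of the shortest (pattern, count) description of s in a toy,
    fixed, non-universal description language.  A program costs
    len(pattern) + len(str(count)); only exact repetitions compress."""
    best = len(s) + 1  # the literal program: (s, 1)
    for k in range(1, len(s) + 1):
        reps, rem = divmod(len(s), k)
        if rem == 0 and s[:k] * reps == s:
            best = min(best, k + len(str(reps)))
    return best

print(shortest_description("ab" * 100))  # pattern "ab", count 100: cost 5
print(shortest_description("abcdefgh"))  # no repetition: literal cost 9
```

Relative to this machine the complexities are trivially computable; the uncomputability Chaitin discusses only bites once the machine is universal.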
-
The Discovery of Algorithmic Probability,
by Ray Solomonoff (Oxbridge Research).
A fascinating and very personal account of the inception of algorithmic probability.
Computer scientists are, for (obvious?) reasons, interested in the mathematical theory
of logical reasoning, which turns out to have some connections to maximum entropy.
See:
-
Random Worlds and Maximum Entropy,
by Adam J. Grove (NEC Research Institute),
Joseph Y. Halpern (IBM Almaden Research Center),
Daphne Koller (Computer Science, Stanford University).
This paper studies a kind of axiomatic system, the "random world",
which includes not only axioms (specific statements
accepted as true) but also "statistical facts".
For each world, the authors define an entropy (relative to some statement)
and show how maximum-entropy methods can be used to compute the "degree of belief"
in the statement.
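The maximum-entropy computation underlying such degrees of belief can be sketched in a generic setting (this is a standard textbook calculation, not the authors' random-worlds machinery): among all distributions on a die's faces consistent with a given statistical fact, say a mean of 4.5, the maximum-entropy one has the exponential form p_i proportional to exp(lam * i), and the multiplier lam can be found by bisection:

```python
import math

FACES = range(1, 7)

def maxent_die(target_mean: float, lo: float = -10.0, hi: float = 10.0):
    """Maximum-entropy distribution on {1..6} subject to a mean constraint.
    The solution has the form p_i ~ exp(lam * i); since the mean is
    increasing in lam, bisection finds the right multiplier."""
    def mean(lam: float) -> float:
        w = [math.exp(lam * i) for i in FACES]
        return sum(i * wi for i, wi in zip(FACES, w)) / sum(w)

    for _ in range(100):
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in FACES]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)
# The mean of p is ~4.5, and higher faces get more weight; each p_i
# plays the role of a "degree of belief" in the corresponding outcome.
```

With an unconstraining mean of 3.5 the same routine returns the uniform distribution, which is the maximum-entropy answer when the statistical facts say nothing.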
Further Reading
-
An Introduction to Kolmogorov Complexity and its Applications,
by
Ming Li
and
Paul Vitanyi, Springer-Verlag, 1993.
This readable introduction is still the only textbook on algorithmic information theory;
the second edition has recently appeared.
-
The Limits of Mathematics,
by G. J. Chaitin (IBM Research), Springer, in press.
"This course uses algorithmic information theory to show that mathematics
has serious limitations, and features a new more didactic approach to
algorithmic information theory using lisp and Mathematica software. The
thesis of the book is that the incompleteness phenomenon discovered
by Gödel is much more widespread and serious than hitherto suspected."
Back to
Entropy on the World Wide Web.