"Et tu, Brute!" The correspondence between Alonzo Church and
Emil Post
Liesbeth De Mol
Universiteit Gent, Gent, Belgium
The year 1936 marks a fundamental episode for the rise of (theoretical)
computer science: Alonzo Church, Emil Post and Alan Turing each proposed
an identification between informal and intuitive notions such as effective
calculability or computability and some formal device such as
lambda-definability or Turing machines. Given the formal equivalence
between these different devices, these identifications are all logically
equivalent versions of the famous Church-Turing thesis. This thesis, if
accepted, poses a fundamental limitation on what is 'computable'. Put
informally, any problem that cannot be computed by a Turing machine (or
any other logically equivalent device) is considered uncomputable.
Despite the fact that these results are well-known and have been
studied in much detail, with most attention going to Turing's
contributions, the Church-Turing thesis is still a hot topic from both
a historical and a contemporary perspective. It is not so well known
that Church and Post corresponded extensively on this and other
related topics: their correspondence spans the period from 1936 until
Post's premature death in 1954, contains no fewer than 77 letters, and
is unique in its genre.
It offers direct access to the thoughts and doubts of two of the key
figures in computer science. The aim of this talk is to discuss some
aspects of the (unfinished) transcription and annotation of this
correspondence, with a focus on the Church-Turing thesis.
Search for the language for algorithms
Helena Durnová
Masaryk University, Brno, Czech Republic
One of the crucial landmarks in the history of programming is ALGOL 60, an 'algorithmic language'. This
language was explicitly designed to be 'as close as possible to mathematical notation'. What were the
reasons behind advocating mathematical notation for the formulation of algorithms? Do those reasons
share common traits with the reasons for calling computers mathematical machines?
Nowadays, the word algorithm denotes a solution to a problem that does not require creative thinking
on the part of the solver. Such solutions have been described in mathematical texts since antiquity,
and attempts to explain such procedures accurately can commonly be found in mathematical articles. In
the mathematics of the early 20th century, formalization of the language was one of the ways to avoid
the danger of making a mistake.
In my talk, I will show the echoes of Turing's 1936 and 1950 papers in the search for a language for
algorithms, and explore the connection between computer science and mathematics which seems so
obvious.
Materialization of Universal Turing Machines
Rainer Glaschick
Heinz Nixdorf MuseumsForum, Paderborn, Germany
A curious object is on display at the Heinz Nixdorf MuseumsForum: a
physical Turing machine constructed by Gisbert Hasenjäger.
Hasenjäger
(1919-2006) was a German mathematician and logician and professor of logic
at the universities of Münster and Bonn in Germany. During the second
world war, he was assigned to the central cryptographic unit of the German
army and unwittingly competed with Alan Turing, as he was assigned the
task of breaking the Enigma code (to test its security). He did not find
the vulnerabilities that Turing exploited in breaking the code, and later
said: "Fortunately! Otherwise the war would have lasted even longer!" In
the 1950s and 1960s, Hasenjäger built several demonstration models of
Turing machines. The machine on display at the Newton Institute is a
Turing machine with modifications by Moore and Wang. The speaker
investigated the details of the mechanism of this machine and other
physical Turing machines that are at the Heinz Nixdorf MuseumsForum in
Paderborn, Germany and will report on his experiences with these curious
machines.
The typical Turing degree
Andy Lewis
University of Leeds, Leeds, United Kingdom
The Turing degree of a real measures the computational difficulty of
producing its binary expansion. Since Turing degrees are tail sets, it
follows from Kolmogorov's 0-1 law that for any property which may or may
not be satisfied by any given Turing degree, the satisfying class will
either be of Lebesgue measure 0 or 1, so long as it is measurable. So
either the typical degree satisfies the property, or else the
typical degree satisfies its negation. Further, there is then some level
of randomness sufficient to ensure typicality in this regard. I shall
describe a number of results in a new programme of research which aims to
establish the (order theoretically) definable properties of the typical
Turing degree, and the level of randomness required in order to guarantee
typicality.
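The measure-theoretic argument sketched above can be summarized as follows (a sketch in standard notation; the symbols are my own choice, not the speaker's):

```latex
% A degree-theoretic property P determines a class of reals that is
% closed under changing finitely many bits of the binary expansion
% (a tail set), since such changes preserve the Turing degree.
% Kolmogorov's 0-1 law then forces its measure, if defined, to be 0 or 1:
\[
  \mathcal{A}_P = \{\, X \in 2^{\omega} : \deg_T(X) \text{ satisfies } P \,\}
  \ \text{is a tail set}
  \;\Longrightarrow\;
  \mu(\mathcal{A}_P) \in \{0, 1\}
  \quad \text{(if } \mathcal{A}_P \text{ is measurable).}
\]
```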
A similar analysis can be made in
terms of Baire category, where
a standard form of genericity now plays the role that randomness plays in
the
context of measure. This case has been fairly extensively examined in the
previous literature. I shall analyse how our new results for
the measure theoretic case contrast with existing results for Baire
category, and also provide some new results for the category
theoretic analysis.
This is joint work with George Barmpalias and Adam Day.
Logic in philosophy: The case of determinism
Thomas Müller
Universiteit Utrecht, Utrecht, The Netherlands
Logic, once a branch of philosophy, has found a home within
philosophy's
methodological toolbox. I will illustrate this development by looking at
philosophical discussions of determinism. Such discussions take place
within
various philosophical contexts: philosophy of language, philosophy of
science, textual interpretation in the history of philosophy, metaphysics,
and within philosophical logic conceived as a separate subdiscipline of
philosophy. I will trace out similarities and differences between these
discussions, highlighting the role of logic as a tool. In the end, I will
try to give an overview of open challenges for a logical approach to
determinism.
Comparing theories
Michael Rathjen
University of Leeds, Leeds, United Kingdom
The fact that "natural" theories, i.e., theories which have something
like an "idea" to them, are almost always linearly ordered with regard to
logical strength has been called one of the great mysteries of the
foundations of mathematics. There are various ways, though, of comparing
the strength of theories S and T. For example, one could take S ≤ T to
mean that the consistency of T implies the consistency of S (say,
provably in PA), or that S is interpretable in T, or that every
Fermat-type theorem of S is also a theorem of T, or ... . The talk will
survey different notions of comparison and their relationships.
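Two of the comparison notions mentioned above can be written out explicitly (a sketch; Con(·) denotes a standard arithmetized consistency statement, and the subscripted ordering symbols are my own labels):

```latex
% Consistency-strength ordering, with the implication provable in
% Peano Arithmetic:
\[
  S \le_{\mathrm{Con}} T
  \iff
  \mathrm{PA} \vdash \mathrm{Con}(T) \rightarrow \mathrm{Con}(S)
\]
% Interpretability ordering: the language of S can be translated into
% that of T so that T proves the translations of all theorems of S.
\[
  S \le_{\mathrm{int}} T
  \iff
  S \text{ is interpretable in } T
\]
```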
Using paradoxical methods, e.g., self-reference Rosser-style, one can
distill theories of incomparable logical strength and show that the
degree structure of logical strengths is dense, in that between any two
theories S < T one can always find a third theory Q such that S < Q < T.
But are there
"natural" examples of such phenomena? We also know how to produce a
stronger theory by adding the consistency of the theory. Can we get a
stronger theory by adding something weaker than consistency that is still
"natural"? These and other questions will be broached in the talk.