
Abstracts.
"Et tu, Brute!" The correspondence between Alonzo Church and Emil Post
Liesbeth De Mol
Universiteit Gent, Gent, Belgium
The year 1936 marks a fundamental episode for the rise of (theoretical)
computer science: Alonzo Church, Emil Post and Alan Turing each proposed
an identification between informal and intuitive notions such as effective
calculability or computability and some formal device such as
lambda-definability or Turing machines. Given the formal equivalence
between these different devices, these identifications are all logically
equivalent versions of the famous Church-Turing thesis. This thesis, if
accepted, poses a fundamental limitation on what is 'computable'. Put
informally, any problem that cannot be computed by a Turing machine (or
any other logically equivalent device) is considered uncomputable.
Although these results are well-known and have been studied in much
detail, with most attention going to Turing's contributions, the
Church-Turing thesis is still a hot topic from both a historical and a
contemporary perspective. It is not so well-known that Church and Post
communicated extensively on this and other related topics: the
correspondence between Church and Post spans the period from 1936 until
Post's premature death in 1954, contains no fewer than 77 letters, and is
unique in its genre.
It offers direct access to the thoughts and doubts of two of the key
figures in computer science. The aim of this talk is to discuss some
aspects of the (unfinished) transcription and annotation of this
correspondence, with a focus on the Church-Turing thesis.
Search for the language for algorithms
Helena Durnová
Masaryk University, Brno, Czech Republic
One of the crucial landmarks in the history of programming is ALGOL 60, an
'algorithmic language'. This language was explicitly designed to be 'as
close as possible to mathematical notation'. What were the reasons behind
advocating mathematical notation for the formulation of algorithms? Are
there any common traits between those reasons and the ones for calling
computers mathematical machines?
Nowadays, the word algorithm denotes a solution to a problem that does not
require creative thinking on the part of the solver. Such solutions have
been described in mathematical texts since antiquity, and attempts at
explaining such procedures accurately can commonly be found in
mathematical articles. In the mathematics of the early 20th century,
formalization of the language was one of the ways to avoid the danger of
making a mistake.
In my talk, I will show the echoes of Turing's 1936 and 1950 papers in the
search for a language for algorithms, and explore the connection between
computer science and mathematics, which seems so obvious.
Forcing Axioms
Mirna Džamonja
University of East Anglia, Norwich, United Kingdom
Forcing axioms are a well-known tool for obtaining consistency results
while avoiding iterations of complex forcing orders. These axioms are also
of interest in their own right. I shall discuss some forcing axioms at
ℵ_{1}, the cardinal where our knowledge of such axioms is best developed,
and at successors of singular cardinals, where we have only recently been
able to formulate such axioms.
Materialization of Universal Turing Machines
Rainer Glaschick
Heinz Nixdorf MuseumsForum, Paderborn, Germany
A curious object is on display at the Heinz Nixdorf MuseumsForum: a
physical Turing machine constructed by Gisbert Hasenjäger.
Hasenjäger (1919–2006) was a German mathematician and logician, and professor of logic
at the universities of Münster and Bonn in Germany. During the second
world war, he was assigned to the central cryptographic unit of the German
army, where he was given the task of breaking the Enigma code (to test
its security), unwittingly competing with Alan Turing. He did not find
the vulnerabilities that Turing exploited in breaking the code, and later
said: "Fortunately! Otherwise the war would have lasted even longer!" In
the 1950s and 1960s, Hasenjäger built several demonstration models of
Turing machines. The machine on display at the Newton Institute is a
Turing machine with modifications by Moore and Wang. The speaker
investigated the details of the mechanism of this machine and other
physical Turing machines that are at the Heinz Nixdorf MuseumsForum in
Paderborn, Germany, and will report on his experiences with these curious
machines.
Fields of generalized power series
Salma Kuhlmann
Universität Konstanz, Konstanz, Germany
Fields of generalized power series are central objects in Model Theory and Algebra.
They play an important role in:
- ordered algebraic structures (Hausdorff's lexicographic orders, Hahn's groups),
- non-standard models of arithmetic (integer parts),
- non-standard models of o-minimal expansions of the reals (exponentiation),
- model theory of valued fields (saturated and recursively saturated models, Ax-Kochen principles),
- real algebraic geometry (non-archimedean real closed fields),
- valuation theory (Kaplansky's embedding theorem),
- differential algebra (ordered differential fields, Hardy fields),
- difference algebra (automorphism groups),
- transcendental number theory (Schanuel's conjectures).
I will give an overview of my work on these fascinating objects over the last decade.
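The central object can be sketched as follows (Hahn's classical definition; the notation t^g for monomials is one common convention):

```latex
% Field of generalized power series k((G)), for a field k and an
% ordered abelian group G of exponents:
k((G)) \;=\; \Bigl\{\, s = \sum_{g \in G} s_g\, t^{g} \;:\;
    s_g \in k,\ \operatorname{supp}(s) \text{ is well-ordered} \Bigr\},
% where the support is
\operatorname{supp}(s) \;=\; \{\, g \in G : s_g \neq 0 \,\}.
% Addition is coefficientwise; multiplication is by convolution, which is
% well-defined precisely because supports are well-ordered.
```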
The typical Turing degree
Andy Lewis
University of Leeds, Leeds, United Kingdom
The Turing degree of a real measures the computational difficulty of
producing its binary expansion. Since Turing degrees are tailsets, it
follows from Kolmogorov's 0-1 law that for any property which may or may
not be satisfied by any given Turing degree, the satisfying class will
either be of Lebesgue measure 0 or 1, so long as it is measurable. So
either the typical degree satisfies the property, or else the
typical degree satisfies its negation. Further, there is then some level
of randomness sufficient to ensure typicality in this regard. I shall
describe a number of results in a new programme of research which aims to
establish the (order theoretically) definable properties of the typical
Turing degree, and the level of randomness required in order to guarantee
typicality.
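The dichotomy described above can be written out as follows (a standard formulation; μ denotes Lebesgue measure on Cantor space, and the notation A_P is introduced here only for illustration):

```latex
% A class A \subseteq 2^\omega is a tailset if membership is unchanged by
% altering finitely many bits; Turing-degree-invariant classes are tailsets.
% Kolmogorov's 0-1 law: for a measurable tailset A,
\mu(A) \in \{0, 1\}.
% Hence, for a property P of Turing degrees, putting
A_P \;=\; \{\, X \in 2^{\omega} : \deg_T(X) \text{ satisfies } P \,\},
% either \mu(A_P) = 1 (the typical degree satisfies P)
% or \mu(A_P) = 0 (the typical degree satisfies \neg P).
```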
A similar analysis can be made in terms of Baire category, where a
standard form of genericity now plays the role that randomness plays in
the context of measure. This case has been fairly extensively examined in
the previous literature. I shall analyse how our new results for the
measure-theoretic case contrast with existing results for Baire category,
and also provide some new results for the category-theoretic analysis.
This is joint work with George Barmpalias and Adam Day.
Logic in philosophy: The case of determinism
Thomas Müller
Universiteit Utrecht, Utrecht, The Netherlands
Logic, once a branch of philosophy, has found a home within philosophy's
methodological toolbox. I will illustrate this development by looking at
philosophical discussions of determinism. Such discussions take place
within
various philosophical contexts: philosophy of language, philosophy of
science, textual interpretation in the history of philosophy, metaphysics,
and within philosophical logic conceived as a separate subdiscipline of
philosophy. I will trace out similarities and differences between these
discussions, highlighting the role of logic as a tool. In the end, I will
try to give an overview of open challenges for a logical approach to
determinism.
Comparing theories
Michael Rathjen
University of Leeds, Leeds, United Kingdom
The fact that "natural" theories, i.e., theories which have something
like an "idea" to them, are almost always linearly ordered with regard to
logical strength has been called one of the great mysteries of the
foundations of mathematics. There are various ways, though, of comparing
the strength of theories S and T. For example, one could take S ≤ T to be
saying that the consistency of T implies the consistency of S (provably
in PA, say), or that S is interpretable in T, or that every Fermat-type
theorem of S is also a theorem of T, or ... . The talk will survey
different notions of comparison and their relationships.
Using paradoxical methods, e.g., self-reference Rosser-style, one can
distill theories with incomparable logical strengths and show that the
degree structure of logical strengths is dense, in the sense that between
two theories S < T one can always find a third theory Q with S < Q < T.
But are there
"natural" examples of such phenomena? We also know how to produce a
stronger theory by adding the consistency of the theory. Can we get a
stronger theory by adding something weaker than consistency that is still
"natural"? These and other questions will be broached in the talk.
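Two of the facts mentioned above can be stated schematically; here S < T abbreviates "T has strictly greater logical strength than S", an ordering whose precise definition varies with the notion of comparison the talk surveys:

```latex
% Density of the degree structure of logical strengths:
\forall S\, \forall T\; \bigl( S < T \;\Longrightarrow\;
    \exists Q\; ( S < Q \,\wedge\, Q < T ) \bigr).
% A canonical way to obtain a strictly stronger theory:
T \;<\; T + \mathrm{Con}(T)
% (by G\"odel's second incompleteness theorem,
%  for consistent r.e. T extending PA).
```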
Mathematics without proofs
Jean Paul Van Bendegem
Vrije Universiteit Brussel, Brussels, Belgium
Suppose a mathematical theory T is given, together with a set of
axioms and an underlying logic L. Obviously, in such a theory there is a
notion of proof, "T L-proves A", and a set Th of theorems. The question I
want to address is whether it is possible to characterize Th in such a way
that no notion of proof is required. In the talk, an example will be
presented that suggests a positive answer. It relies on the use of a
random walk on a two-dimensional lattice, inspired by ideas from cellular
automaton theory. At the same time, it will provide an example of
non-monotonic mathematics.
