
PhD EuroConference
FOUNDATIONS OF THE FORMAL SCIENCES III

Complexity in Mathematics and Computer Science


Abstracts of Talks:





Quantum algorithms and lower bounds for them


Andris Ambainis (Berkeley CA)


Quantum computing is an attempt to apply quantum mechanics to computer science. One of its main results is Shor's algorithm for factoring integers in polynomial time on a quantum computer. This means that if a quantum computer is built, today's public-key cryptography will become insecure. The other main result is Grover's algorithm, which provides a quadratic speedup for a general exhaustive search problem and is applicable to a large variety of problems.

In my talk, I will describe Grover's algorithm. Then, I will show several methods of proving lower bounds on quantum algorithms. These lower bound results show that Grover's algorithm is optimal both for the search problem originally considered by Grover and for several other problems.
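
As a concrete illustration of the quadratic speedup (a classical simulation sketch, not part of the abstract; the function and parameter names are illustrative only), the following code simulates Grover's algorithm on N = 2^n items; after about (pi/4) sqrt(N) iterations the marked item's amplitude is close to 1:

    import numpy as np

    def grover_search(n_qubits, marked):
        """Classically simulate Grover's algorithm on N = 2**n_qubits items."""
        N = 2 ** n_qubits
        state = np.full(N, 1.0 / np.sqrt(N))             # uniform superposition |s>
        iterations = int(round(np.pi / 4 * np.sqrt(N)))  # optimal iteration count
        for _ in range(iterations):
            state[marked] *= -1               # oracle: phase-flip the marked item
            state = 2 * state.mean() - state  # diffusion: reflect about the mean
        return int(np.argmax(state ** 2))     # measurement: most probable outcome

    print(grover_search(10, marked=123))      # finds 123 among 2**10 = 1024 items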




Scheduling for Parallel Data Servers: on Balls-into-Bins Games


Petra Berenbrink (Warwick)


In the last couple of years, the need to store huge amounts of data has increased dramatically. The introduction of enterprise resource planning, on-line transaction processing, e-business, data warehousing, and especially the increasingly media-rich content found in intranets and the Internet are heavily contributing to the data load, which is overwhelming traditional storage architectures. A common approach to handling this huge amount of data is to combine storage devices into a dedicated network, called a storage area network (SAN), that is connected to several LANs and/or servers.

The main problems occurring in the development of such storage area networks are data placement (distribution of the data over the disks), scheduling (scheduling of the disk accesses), and routing (determination of path systems and routing of the data through the network). In this talk, I will focus on the scheduling problem. I will show that the scheduling problem is very closely related to so-called balls-into-bins games. The study of balls-into-bins games in their different shades has a very long history, and these common models are used to derive a considerable number of results in probability theory. In general, the goal of a balls-into-bins game is to allocate a set of independent objects (tasks, jobs, data requests) to a set of resources (servers, bins, disks). The balls are assumed to be independent and to know nothing about the other balls. Every ball chooses, independently and uniformly at random, a subset of the bins and is then allocated to one of them. The aim is to minimize the maximum load of the bins.
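
As a minimal simulation sketch of such a game (illustrative parameters, not part of the abstract), the code below lets each ball pick d bins uniformly at random and go into the least loaded one; it exhibits the well-known dramatic drop in maximum load between one and two choices:

    import random

    def max_load(n_balls, n_bins, d):
        """Each ball picks d bins i.u.r. and goes into the least loaded one."""
        load = [0] * n_bins
        for _ in range(n_balls):
            choices = [random.randrange(n_bins) for _ in range(d)]
            load[min(choices, key=lambda b: load[b])] += 1
        return max(load)

    n = 100_000
    print("one choice :", max_load(n, n, 1))  # roughly log n / log log n
    print("two choices:", max_load(n, n, 2))  # roughly log log n: much smaller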

In this talk, I will present several balls-into-bins games. I will sketch two variations of witness tree proofs that can be used to analyse the performance of these games. The method of witness tree proofs has been developed at the University of Paderborn. The great advantage of this proof technique is that it can be used to analyse most of the balls-into-bins games.




Algebras of minimal rank: recent developments


Markus Bläser (Lübeck)


Let R(A) denote the rank (also called bilinear complexity) of a finite-dimensional associative algebra A. A well-known lower bound for R(A) is the so-called Alder-Strassen bound R(A) >= 2 dim A - t, where t is the number of maximal two-sided ideals of A. There exist algebras for which this bound is sharp; such algebras are called algebras of minimal rank. During the last two decades, a lot of effort has been spent on characterizing these algebras in terms of their algebraic structure. For some classes of algebras, like division algebras, commutative algebras, or local algebras, such a characterization could be obtained, see e.g. (1) and the references given there. Algebras A for which the quotient A/rad A contains factors of the form D^{n x n} with n >= 2 have withstood such characterization attempts for a long time. Recently, a characterization of semisimple algebras over arbitrary fields and of all associative algebras over algebraically closed fields has been obtained (2). We review some of the new techniques introduced in (2) and present examples that show the possibilities and limits of extending these techniques to obtain a complete characterization over arbitrary fields.

  1. P. Bürgisser, M. Clausen, M. A. Shokrollahi, Algebraic Complexity Theory, Springer, 1997.
  2. M. Bläser, Lower bounds for the bilinear complexity of associative algebras, Comput. Complexity, 9:73-112, 2000.
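
As a worked instance of the Alder-Strassen bound (an illustration, not part of the abstract):

    % For A = k^{2 x 2}, the full algebra of 2x2 matrices over a field k,
    % we have dim A = 4, and A is simple, so t = 1. The bound gives
    \[
      \mathrm{R}(k^{2\times 2}) \;\ge\; 2 \dim A - t \;=\; 2 \cdot 4 - 1 \;=\; 7 .
    \]
    % Strassen's algorithm multiplies 2x2 matrices with 7 bilinear
    % multiplications, so the bound is attained and k^{2 x 2} is an
    % algebra of minimal rank.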


Recent Developments in Iterated Forcing Theory


Jörg Brendle (Kobe)


Iterated forcing, first developed in the sixties by Solovay and Tennenbaum in the wake of Cohen's invention of the forcing method, in order to show the consistency of Suslin's hypothesis, soon evolved into a general and powerful tool for obtaining a plethora of independence results, not only in set theory but also in other areas of pure mathematics, notably in general topology, real analysis, and abstract algebra. In particular, many statements concerning the combinatorial or topological structure of the reals have been shown to be consistent. Yet almost all of these results have been obtained with two very specific iteration techniques, namely

  1. finite support iteration of ccc forcing, the technique originally developed by Solovay and Tennenbaum,
  2. countable support iteration of proper forcing, created by Shelah.

Both approaches are well understood, and both have rather obvious flaws that restrict the range of problems for which they can be used: an iteration of the first kind forces Martin's axiom for large Cohen algebras, while an iteration of the second kind makes the continuum at most aleph_2.

Recently, several alternative constructions have emerged, for example iterated forcing constructions with mixed support (typically used to get models where the continuum is large while the club principle holds), or Shelah's original theory of iterations along smooth templates (used for the consistency of a > d). The purpose of this lecture is to present the basic ideas underlying these recent techniques and to discuss open problems in iteration theory.




Classification problems in analysis and topology


Riccardo Camerlo (Torino)


It will be shown how descriptive set theory can provide a suitable framework to investigate complexity in mathematics, most notably in analysis and topology. Several examples will be presented, together with a few useful techniques. Open problems will be discussed.




Existential Second Order decision problems on graphs: Lower bounds, Complexity gaps, Time-Space trade-offs


Stefan Dantchev (London)


It is well known that the class of graph properties expressible as existential second-order logical sentences is exactly NP. We study the complexity of these problems within a weak computational model based on "restricted" branching programs. The model is inspired by proof complexity. It captures the time and space of computations as the number of nodes in the branching program and its pebbling number, respectively. An important special case, which we consider first, is when the branching program is a decision tree. We show how to interpret computations as two-player games and, based on this, develop a general method for proving tight lower bounds. We also discuss the general case. We formulate the analogue of Riis's complexity-gap theorem for tree-resolution. Within the decision tree model, it states that every problem in NP is solvable in either polynomial or exponential time, thus ruling out super-polynomial and sub-exponential complexities. Finally, we prove time-space trade-offs for some particular decision problems.




Using Easy Optimization Problems To Solve Hard Ones


Lars Engebretsen (Cambridge MA)


In combinatorial optimization one typically wants to minimize some cost function subject to a set of constraints that must be satisfied. A classical example is the so-called Traveling Salesman Problem (TSP), in which we are given a set of cities with certain distances between them and are asked to find the shortest possible tour that visits every city exactly once and then returns to the starting point. This problem is believed to be hard to solve exactly: it has been shown to belong to a certain class of notoriously hard problems, the so-called NP-hard problems. Despite huge efforts, there is no known efficient algorithm that solves any of the NP-hard problems, and it is widely believed that no such algorithm exists. However, it is always possible to find an approximation to the optimum tour by solving a much easier problem, the so-called Minimum Spanning Tree problem.
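
To make the MST shortcut concrete, here is a minimal sketch (illustrative code, not from the talk) of the classical 2-approximation for metric TSP: compute a minimum spanning tree and then shortcut a preorder walk of it.

    def mst_tsp_tour(dist):
        """Metric TSP 2-approximation: MST (Prim) + shortcut preorder walk.
        dist must be symmetric and satisfy the triangle inequality."""
        n = len(dist)
        children = {v: [] for v in range(n)}
        best = {v: (dist[0][v], 0) for v in range(1, n)}  # cheapest edge into tree
        while best:
            v = min(best, key=lambda u: best[u][0])
            _, parent = best.pop(v)
            children[parent].append(v)
            for u in best:
                if dist[v][u] < best[u][0]:
                    best[u] = (dist[v][u], v)
        tour, stack = [], [0]   # preorder walk; the triangle inequality bounds
        while stack:            # the shortcuts by the doubled tree edges
            v = stack.pop()
            tour.append(v)
            stack.extend(reversed(children[v]))
        return tour + [0]

    dist = [[abs(i - j) for j in range(4)] for i in range(4)]  # 4 cities on a line
    print(mst_tsp_tour(dist))   # [0, 1, 2, 3, 0], which here is optimal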

This is an example of a general paradigm in the field of approximation algorithms for optimization problems: instead of solving a very hard problem, we solve an easy one and then convert the optimal solution to the easy problem into an approximately optimal solution to the hard one. Obviously, this conversion has to be done with care. In many applications, the analysis of the algorithm is greatly simplified by the use of random selections: typically, we use information obtained from the solution to the easy problem to bias our selection of a solution to the hard one. Surprisingly, such a randomized algorithm can in many cases be converted into an algorithm that does not use any randomness.
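
As a minimal sketch of this derandomization step (my own illustration with a different problem, MAX-CUT, not necessarily one treated in the talk): a uniformly random assignment of vertices to two sides cuts half of all edges in expectation, and the method of conditional expectations turns this into a deterministic greedy algorithm with the same guarantee.

    def derandomized_max_cut(n, edges):
        """Method of conditional expectations: place each vertex on the side
        that cuts more edges to already placed neighbours; cuts >= len(edges)/2."""
        side = [None] * n
        for v in range(n):
            to_left = sum(1 for a, b in edges
                          if (a == v and side[b] == 0) or (b == v and side[a] == 0))
            to_right = sum(1 for a, b in edges
                           if (a == v and side[b] == 1) or (b == v and side[a] == 1))
            # Joining side 0 cuts the edges to side-1 neighbours and vice versa.
            side[v] = 0 if to_right >= to_left else 1
        return sum(1 for a, b in edges if side[a] != side[b])

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    print(derandomized_max_cut(4, edges))   # cuts 4 of 5 edges (>= 5/2)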

In this talk, I will describe several examples of the methodology outlined above, discuss commonly used techniques and some open problems.




The complexity of generalized model-checking problems


Martin Grohe (Edinburgh)


A fundamental algorithmic problem, playing an important role in different areas of computer science, is the following model-checking problem:

Given a finite relational structure A and a sentence S of some logic L, decide if A satisfies S.

The name model-checking is most commonly used for the appearance of the problem in automated verification. However, the problem of evaluating a query against a finite relational database is of the same type. Constraint satisfaction problems in artificial intelligence can also be seen as model-checking problems. Moreover, many of the best-known algorithmic problems can be directly translated into model-checking problems.
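
As a toy illustration (the mini-syntax is my own, not from the talk): the straightforward algorithm evaluates a first-order sentence by brute force, in time about |A|^k for k quantified variables, and this exponential dependence on the sentence is one reason general model-checking is hard.

    def holds(universe, E, phi):
        """Brute-force model checking of a quantified sentence over a finite
        structure (universe, E). Formulas are nested tuples; each quantifier
        binds the next variable index, and ('E', i, j) tests the relation."""
        def ev(phi, env):
            if phi[0] == 'exists':
                return any(ev(phi[1], env + (a,)) for a in universe)
            if phi[0] == 'forall':
                return all(ev(phi[1], env + (a,)) for a in universe)
            if phi[0] == 'E':
                return (env[phi[1]], env[phi[2]]) in E
            raise ValueError(phi)
        return ev(phi, ())

    # "Some vertex has an edge to every vertex": exists x forall y E(x, y).
    universe = range(3)
    E = {(0, 0), (0, 1), (0, 2), (1, 2)}
    print(holds(universe, E, ('exists', ('forall', ('E', 0, 1)))))  # True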

Often, we are interested not only in a model-checking problem itself, but also in certain variants. One example is the evaluation of database queries: model-checking in the strict sense only corresponds to the evaluation of Boolean queries, that is, queries with a yes/no answer, but queries whose output is a set of tuples of database entries also need to be evaluated. Constraint satisfaction problems provide another example: usually, we are not only interested in whether a constraint satisfaction problem has a solution, but actually want to construct a solution. Sometimes we want to count the number of solutions, generate a random solution, or construct a solution that is optimal in some sense. We refer to such problems as generalized model-checking problems.

For many interesting logics, such as first-order logic or monadic second-order logic, generalized model-checking problems are computationally hard in general. However, there are a number of natural tractable restrictions. This talk is a survey of new algorithmic approaches to model-checking, mainly for first-order logic and monadic second-order logic.




Infinite Time Turing Machines


Joel David Hamkins (New York NY)


What could one compute with an infinitely fast computer? One natural model for supertasks, for computations with infinitely many steps, is simply to extend the classical Turing machine concept into transfinite ordinal time. The same Turing machine hardware - a mechanical head moving back and forth on an infinite paper tape according to a finite algorithm having finitely many states - is extended to transfinite time by simply specifying the limit behavior of the machines at limit ordinal times. The resulting computability theory yields a notion of computation on the reals, concepts of decidability and semi-decidability for sets of reals as well as individual reals, two kinds of jump operator, and an oracle notion of relative computability. The theory reveals a rich, vast hierarchy of degree complexity on both the collection of reals and the collection of sets of reals. Ultimately, by leveraging our intuitions from concrete finite computations to the ethereal transfinite, we gain an understanding of the relative complexity of infinite objects. Many interesting open questions remain.




On Information about Classifications in Channel Theory


Makoto Kikuchi (Kobe)


Channel Theory is a mathematical theory of information flow proposed by Barwise and Seligman in the 1990s, based on Situation Theory. Classifications, the basic structures of the theory, are generalizations of topological spaces and of certain algebraic structures, and information is represented on them. A classification consists of a set of types, a set of tokens, and a binary relation between them. In Channel Theory, two kinds of information about a classification are discussed: a theory, which stands for constraints about types, and a set of normal tokens. We shall show how a theory is defined from a set of normal tokens and vice versa, with or without complete knowledge of a classification, and give an abstract version of the completeness theorem, which characterizes the regular closure of a theory. We shall also give formulations of the non-deductive inferences called induction and abduction in the framework of Channel Theory.
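
As a toy illustration of these notions (the encoding is my own sketch, assuming the Barwise-Seligman definition that a sequent (Gamma, Delta) is a constraint when every token of all types in Gamma is of at least one type in Delta): for small classifications, the theory can be computed by brute force.

    from itertools import chain, combinations

    def subsets(xs):
        return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

    def theory(types, tokens, classifies):
        """All constraints Gamma |- Delta of a classification: every token
        that is of every type in Gamma is of at least one type in Delta."""
        return [(g, d) for g in subsets(types) for d in subsets(types)
                if all(any((tok, t) in classifies for t in d)
                       for tok in tokens
                       if all((tok, t) in classifies for t in g))]

    # Toy classification: two tokens, two types.
    types, tokens = ['raining', 'wet'], ['a', 'b']
    classifies = {('a', 'raining'), ('a', 'wet'), ('b', 'wet')}
    # 'raining' |- 'wet' is a constraint: every raining token is wet.
    print((('raining',), ('wet',)) in set(theory(types, tokens, classifies)))  # True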



On the decidability and complexity of languages for talking about metric spaces


Oliver Kutz (Leipzig)


Contrary to the situation in temporal logic, where we have not only an established model theory and proof theory but also a detailed analysis of the expressive power of temporal languages, starting with Kamp's classical result on the expressive completeness of temporal logic with respect to monadic first-order logic, the field of spatial logics is quite underdeveloped. While a number of logics of space have been introduced in the past, investigations into the expressive power of these languages have been almost entirely neglected. In the talk, I will briefly introduce a number of first-order and modal languages for reasoning about metric spaces, compare their expressive power, and give a short analysis of the complexity of the satisfiability problem.


Hausdorff dimension and Complexity Classes


Elvira Mayordomo Camara (Zaragoza)


Classical Hausdorff dimension (sometimes called fractal dimension) has recently been effectivized, thereby endowing various complexity classes with dimension structure and also defining the constructive dimensions of individual infinite binary sequences.

I will introduce a new characterization of Hausdorff dimension and show examples of how it can be used in the settings of complexity classes, binary sequences, and compression algorithms. I will then give a list of open problems.
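
One way to get a feel for the connection with compression (an illustrative sketch of my own, assuming the Kolmogorov-complexity characterization of constructive dimension as the lim inf of K(w)/|w| over prefixes w): a real compressor gives a crude upper-bound proxy for the dimension of a sequence.

    import random, zlib

    def dimension_estimate(bits):
        """Crude upper-bound proxy for K(w)/|w| via zlib's compression ratio."""
        packed = bytes(int(''.join(map(str, bits[i:i + 8])).ljust(8, '0'), 2)
                       for i in range(0, len(bits), 8))
        return 8 * len(zlib.compress(packed, 9)) / len(bits)

    random.seed(0)
    fair = [random.randrange(2) for _ in range(80_000)]
    biased = [int(random.random() < 0.1) for _ in range(80_000)]
    print(dimension_estimate(fair))    # close to 1
    print(dimension_estimate(biased))  # noticeably below 1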




Axiomatizability of algebras of relations


Szabolcs Mikulas (London)


Algebras of relations have been the subject of extensive research in algebraic logic, but their interdisciplinary character is reflected by their applications in linguistics (Lambek calculus) and various branches of computer science (databases, access control, qualitative spatial reasoning, etc.).

In this talk, we will concentrate on the finite axiomatizability problem for algebras of binary relations. It is well known that Tarski's representable relation algebras (RRAs) form a non-finitely axiomatizable equational class. On the other hand, some reducts of RRAs, i.e. algebras in which only a subset of the clone of RRA operations is definable, are finitely axiomatizable.

We present a step-by-step method for proving the finite axiomatizability of certain reducts of RRAs. We then show that ultraproduct constructions serve as a tool for establishing non-finite axiomatizability. Finally, we review some (longstanding) open problems.
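
For readers unfamiliar with the concrete operations, here is a minimal sketch (a toy example of my own, not from the talk) of composition and converse on binary relations, checking one equation that holds in every RRA:

    def compose(R, S):
        """Relational composition R ; S."""
        return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

    def converse(R):
        """Relational converse of R."""
        return {(b, a) for (a, b) in R}

    # An equation valid in every representable relation algebra:
    # the converse of (R ; S) equals (converse of S) ; (converse of R).
    R = {(0, 1), (1, 2)}
    S = {(1, 1), (2, 0)}
    print(converse(compose(R, S)) == compose(converse(S), converse(R)))  # True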




Sacks forcing and the Sacks property


Sandra Quickert (Bonn)


Given a forcing notion P, P has the Sacks property if and only if for every P-name r for a real (i.e., a member of omega^omega) and every condition p in P there are a stronger condition q <= p and a function I from omega to the finite subsets of omega with |I(n)| <= 2^n for all n such that q forces r(n) in I(n) for every n. Thus, any new real in the generic extension is covered by a small set in the ground model. In this talk we present some consequences of the Sacks property, present forcing notions having the Sacks property, and investigate under which circumstances a forcing notion can or cannot have the Sacks property and additional properties at the same time.




Models of set theory and their use


Ralf-Dieter Schindler (Wien)


We shall consider the most prominent types of models of set theory, forcing models and core models. Their use is ubiquitous in set theory. I will try to call your attention to applications in descriptive set theory. For instance, forcing models may kill any definable well-ordering of the reals, whereas core models can be used to create such definable well-orderings.


What Makes the Difference? A Constructive Critique of Identity


Peter Schuster (München)


Given scruples about whether proofs by contradiction may be permitted in a computational context, say, it is fairly natural to doubt Leibniz's principle of identity, according to which interchangeability follows from indistinguishability. Either property is further debatable on account of its universal and negative character, respectively.

Besides being related to epistemic and temporal issues, all this rests on the question of to what extent the law of excluded middle may be applied to the infinite. It is therefore reasonable to exhibit the problematic nature of identity by way of constructive analysis, some of whose basic concepts are illustrated in return, and to contemplate difference as a substitute for identity.

If at all, difference may be verified in a finite amount of time, by exhibiting some witness to distinction. In particular, difference constitutes a genuinely positive and existential property.




Refinements of the constructible hierarchy


Marc van Eijmeren (Bonn)


Refinements of the constructible hierarchy allow more precise control over how the hierarchy is built up. This kind of control is needed in several applications, most notably in core model theory.

Two different fine structure hierarchies will be presented, and it will be investigated how they relate to each other.


Complexity in Automated Verification


Helmut Veith (Wien)


During the last two decades, model checking has become an important tool for the verification of hardware and increasingly of software. The success of model checking is based on the combination of several techniques aimed at alleviating the inherent complexity of the problem. These techniques can be classified into two groups:

In this talk, we will survey complexity results and algorithms in both directions.




Descriptive Set Theory and Forcing


Jindra Zapletal (Gainesville FL)


I will show how large cardinal axioms can be used to organize many independence results in abstract analysis.



