Contributed Talks

Foundations of the Formal Sciences II



Informal rigour and the continuity principle


Mark van Atten


In the theory of Brouwer's choice sequences, the weak continuity principle for numbers says that a total function from choice sequences to natural numbers never needs more input than an initial segment to determine its output; hence all choice sequences sharing this segment yield the same value. Veldman has shown that from this principle one can derive that in intuitionistic analysis all fully defined functions are continuous. (Brouwer needed stronger methods to prove the same.) The principle has always been held to be plausible, but there was no rigorous justification. This talk aims to provide one; the argument is an example of Kreisel's `informal rigour' and is in fact an application of Husserl's phenomenology to mathematics.
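
For reference, a standard formulation of this principle (stated here for the reader's convenience, not quoted from the abstract), with \bar{\alpha}(m) denoting the initial segment of \alpha of length m, is:
\[
\forall\alpha\,\exists n\,A(\alpha,n)\;\rightarrow\;\forall\alpha\,\exists m\,\exists n\,\forall\beta\,\bigl(\bar{\beta}(m)=\bar{\alpha}(m)\rightarrow A(\beta,n)\bigr).
\]
On this reading, the value n assigned to \alpha depends only on the finite initial segment \bar{\alpha}(m).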



Applications of the general theory of definitions: rational choice


André Chapuis


The general theory of definitions proposed by Gupta and Belnap is a prime example of new logical tools that make us re-evaluate traditional philosophical views and that open up new philosophical possibilities. The application to the concept of truth has created a new and exciting perspective which has been at the center of the intense debate on truth and paradox during the past decade. These new logical tools are useful and illuminating not only in the context of truth but more generally in approaching circular phenomena. I will argue in this talk that rational choice involving interdependent decisions is one such phenomenon and that the general theory of definitions sheds new light on the concept of rational decision and its difficulties. The solution I propose will be illustrated by examples that make clear both how the logical apparatus works and how it improves over existing solutions.
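
As a rough indication of the logical apparatus involved (a schematic sketch, not the speaker's own example): in the Gupta-Belnap theory, a circular definition of a predicate G over a domain D,
\[
Gx =_{df} A(x,G),
\]
induces a revision rule \delta taking a hypothetical extension h \subseteq D to \delta(h) = \{d \in D : A(d,G) \text{ holds when } G \text{ is interpreted by } h\}, and the behaviour of the defined concept is read off the sequences obtained by iterating \delta. Interdependent decisions, in which each agent's rational choice is specified in terms of the others', have just this circular form.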



Two-dimensionalism and the metaphysical possibility of zombies


Daniel Cohnitz


Krister Segerberg, Robert Stalnaker and others developed a two-dimensional modal framework that enables us to represent the difference between metaphysical and epistemic possibilities, a distinction made by Saul Kripke and Hilary Putnam. In his {\it The Conscious Mind}, David Chalmers uses this two-dimensional framework to represent a modal argument against physicalism, i.e., the theory that all phenomenal truths can be reduced to physical ones. I will briefly describe the main idea behind two-dimensional modal logic; then I will turn to Chalmers' argument and show how he makes use of two-dimensional modalities. Finally, I will briefly outline why I think that his modal argument fails.
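
For orientation, one common way of setting up the framework (a schematic gloss in Chalmers' terminology, not quoted from the abstract): a two-dimensional intension assigns a truth value to a sentence S relative to a pair of worlds, the first considered as actual and the second as counterfactual,
\[
V_S : W \times W \to \{0,1\},
\]
where the primary intension is the diagonal \lambda w.\,V_S(w,w) (tracking epistemic possibility) and the secondary intension at the actual world w_0 is \lambda v.\,V_S(w_0,v) (tracking metaphysical possibility).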



Possible Worlds Semantics for Predicates of Sentences


Volker Halbach, Hannes Leitgeb


In standard modal logic, modal, deontic, temporal and similar notions are formalized as sentential operators. Possible worlds semantics for sentential operators can be provided in a straightforward way. As Kamp, Gupta and others have shown, possible worlds semantics is also feasible, to some extent, if these notions are conceived as predicates of sentences. While on the operator view models may be based on arbitrary frames, on the predicate view models cannot be based on certain frames. Possible worlds semantics for predicates excludes certain classes of frames, e.g., reflexive frames, while others, e.g., converse wellfounded frames, admit suitable models. We investigate the following problem: on which frames can one build possible-worlds models for predicates of sentences?
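
To indicate why reflexive frames are problematic on the predicate view, here is an illustrative argument (assuming the language is rich enough for diagonalization; this is not a summary of the authors' proofs). Consider a frame with a single world w such that wRw, and evaluate the predicate N by
\[
w \models N(\ulcorner\varphi\urcorner)\quad\text{iff}\quad v \models \varphi \text{ for every } v \text{ with } wRv.
\]
Diagonalization yields a sentence \lambda with \lambda \leftrightarrow \neg N(\ulcorner\lambda\urcorner); since the only world accessible from w is w itself, w \models \lambda iff w \not\models \lambda, so no model can be built on this frame.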



Bayesian Networks in Philosophy of Science and Epistemology


Stephan Hartmann, Luc Bovens


Bayesian Networks are a well-known tool in Artificial Intelligence. We show how this powerful tool facilitates solving problems in philosophy of science and epistemology. Among the problems in philosophy of science is the question under which circumstances a hypothesis can be confirmed even if the measuring instruments are not fully reliable. Here Bayesian Networks help to relax various idealizations made in the classical Bayesian accounts. Relaxing these idealizations has consequences for the standard Bayesian treatment of the Duhem-Quine problem and the variety-of-evidence problem. In epistemology, Bayesian Networks can be applied to give an analysis of the notion of coherence in the coherence theory of justification. We build on this analysis to construct a probabilistic theory of belief expansion, which avoids the idealization of the success postulate in the AGM-approach.
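
As a generic illustration of how such networks encode the relevant structure (a sketch of the general idea, not of the authors' specific models): let H be the hypothesis, R the proposition that the instrument is reliable, and E the instrument's report; if H and R are parentless nodes and E has parents H and R, the joint distribution factorizes as
\[
P(H,R,E) = P(H)\,P(R)\,P(E \mid H,R),
\]
so the confirmation E confers on H can be computed while the instrument's partial reliability is represented explicitly.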



Semantics as a Formal Science: A Case Study of if-sentences


Wolfram Hinzen


Model-theoretic semantics is sometimes conceived as a way of providing a formal science of meaning. In this talk I discuss two foundational ideas that this project presupposes, first abstractly, then concretely, as applied to the semantic analysis of conditionals. The first idea is that meaning determines conditions on reference and truth. The second, related idea is that mental states are individuated with respect to their intentional content. I argue against both ideas, building on recent arguments of Chomsky's. Applied to English if-sentences, in particular, they yield results that seem contrary to empirical fact in standard cases. In this way conditionals are a good reminder of the way in which "logical analysis" of language can lead to fiction rather than fact. I end with the question of what it could mean for a semantics to be formal.



Skolem + Ockham = Gödel - Plato(nism)


Walter Hoering


We first attempt to read Gödel closely and find in his work (1) a comparison of sets, as important for mathematics, with things, as important for physics, and (2) the hope of encountering ever further set-theoretic axioms. While we can agree with (1), (2) rests on a naive conception of the philosophy of physics. Where the intended analogy breaks down is shown in some detail.

The programme of ambitious set theory, which tries to arrive at a linearly ordered sequence of ever stronger set-theoretic systems, could be justified from a Platonist standpoint, or if one could hope to arrive at a linear sequence of ever stronger systems that are ever more useful for "mathematics". For both assumptions, however, there are no sufficient grounds.

On the contrary, the incompleteness results of Skolem, of Gödel himself, and also of Hao Wang point in a different direction. Indeed, even if one assumes a syntactically complete set of first-order set-theoretic statements, by Skolem this still does not suffice to single out ONE countable structure, let alone to pin down an uncountable structure.

Historically, set theory here seems to be following a path that geometry and logic have already taken before it: away from the naive assumption of ONE natural, best system, towards a pluralistic conception.



The so-called materially valid inferences and the logic of concepts


Ludger Jansen, Niko Strobach


So-called materially valid inferences have caused much discussion (cf. Haack's "Philosophy of Logics", ch. 2). Recently they have come to play a prominent role in Brandom's masterpiece "Making it Explicit". Without doubt, we do know that if Mary is a girl, she is a female child. However, neither the propositional calculus nor standard predicate logic can account for that inference. The talk (1) introduces a formal system CMP that combines concept logic with predicate logic, (2) explains how concept logic (and its models) can be used to represent conceptual knowledge, and (3) shows how the purported materially valid inferences can be given a formal interpretation within CMP. Finally, (4) different types of inferences using conceptual knowledge (based on logical or empirical conceptual knowledge) will be distinguished that are all treated alike by Brandom, and it will be argued that we had better keep them separate.



An analysis of design from the viewpoint of information flow


Makoto Kikuchi


Design is a creative activity that is not easy to capture logically. In engineering, it is the activity of defining an unknown entity that satisfies a given specification. The concept of design may also be used in biology in order to explain the functions of living organisms. Some foundational theories have been proposed by engineers, but they fall far short of a sufficient explanation of design. Channel Theory is a theory of information flow developed by Barwise and Seligman in the 1990s. In this talk, based on Channel Theory, I shall propose a logical framework for the analysis of design from the viewpoint of information flow.
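
For readers unfamiliar with Channel Theory, its basic notions (due to Barwise and Seligman; the notation here is mine) are as follows. A classification A = \langle \mathrm{tok}(A), \mathrm{typ}(A), \models_A \rangle relates tokens to types, and an infomorphism f : A \rightleftarrows B is a pair of maps f^{\wedge} : \mathrm{typ}(A) \to \mathrm{typ}(B) and f^{\vee} : \mathrm{tok}(B) \to \mathrm{tok}(A) satisfying
\[
f^{\vee}(b) \models_A \alpha \quad\text{iff}\quad b \models_B f^{\wedge}(\alpha).
\]
A channel is then a family of infomorphisms into a common core classification, whose tokens serve as the connections along which information flows, e.g. from a specification to the entity that realizes it.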



Kripke- versus Kripke-Type Semantics


Oliver Kutz


We compare the more traditional approaches to the semantics of Modal Predicate Logic, which use standard extensions of Kripkean Possible Worlds Semantics, with newer semantics such as Functor Semantics (cf. [gh91]), Metaframes (cf. [sksh93]), and the semantics of [kk00].

Standard Possible Worlds Semantics has turned out to be highly incomplete, as witnessed e.g. by a theorem of Ghilardi's (cf. [gh91]). Besides the philosophical objections to the usual semantic approaches, this widespread incompleteness has motivated several generalizations of standard semantics that have led to comprehensive completeness results.

These modified Kripke semantics basically generalize the notion of a ``modal individual'' and the notion of a ``counterpart''. Counterpart theory itself is a popular tool for investigating philosophical problems, but on the other hand it is not a suitable semantics for a general analysis of normal modal logics (cf. [hucr96] and [kutz00]).

We give a short description of how to prove the above-mentioned completeness results, discuss the different notions of ``modal individual'' involved, and illustrate the use of the ``deviant'' semantics by presenting a coherent model for ``Kripke's Puzzle''.

[gh91] Silvio Ghilardi, Incompleteness Results in Kripke Semantics, Journal of Symbolic Logic, No. 2, pp. 516--538, 1991.

[hucr96] G.E. Hughes and M.J. Cresswell, A New Introduction to Modal Logic, Routledge, London, 1996.

[kk00] Marcus Kracht and Oliver Kutz, Elementary Models for Modal Predicate Logic, accepted at AiML 2000, Leipzig.

[kutz00] Oliver Kutz, Kripke--Typ Semantiken für die modale Prädikatenlogik, Master's thesis, Humboldt--Universität zu Berlin, Berlin, 2000.

[sksh93] D.P. Skvortsov and V.B. Shehtman, Maximal Kripke-type semantics for modal and superintuitionistic predicate logics, Annals of Pure and Applied Logic 63, pp. 69--101, 1993.



The Gupta-Belnap Fixed-Point Problem and the theory of clones of functions


Jose Martinez Fernandez


The work of Kripke, Visser and others established that in a first-order language a truth predicate can be defined by means of a fixed-point of a certain function (the \textit{jump} function) if the operators of the language are monotone for a certain class of orders. In this area of research, the Fixed-Point Problem consists in the characterization of the first-order interpreted languages that can possess a truth predicate defined by means of a fixed-point of the jump function. It is one of the problems Anil Gupta and Nuel Belnap left open in their book The Revision Theory of Truth (MIT Press, 1993).

In order to simplify the search for a solution, we define a propositional version of the problem, using the Stipulation Logic of Albert Visser (cf. his "Semantics and the Liar Paradox", in Gabbay and Guenthner (eds.), Handbook of Philosophical Logic, vol. IV, Reidel, 1984). This version is refined by applying the notion of a clone of operations. The resulting problem is: characterize the (k-valued) clones such that all their stipulations have a consistent valuation.
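
A minimal example of the propositional phenomenon (my illustration, not the author's): the stipulation p := \neg p admits no consistent two-valued valuation, since a fixed point would have to satisfy
\[
v(p) = v(\neg p) = 1 - v(p),
\]
whereas under the three-valued strong Kleene scheme the valuation v(p) = \tfrac{1}{2} is a fixed point. The problem above asks, for each clone of truth functions, whether every stipulation built from its operations admits such a consistent valuation.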

The strategy for the solution consists in classifying the clones using a special relation (the Gamma-relation) which is associated with the set of unary functions of the clone. Using the Galois connection between functions and relations, every Gamma-relation determines a clone which includes all the clones with the same set of unary functions. As the set of Gamma-relations is finite, the problem is reduced to the analysis of a finite number of clones. This number can be reduced considerably using properties of the inner automorphisms of the algebra of finitary functions. This strategy can be used to solve the Fixed-Point Problem in the two- and three-valued propositional cases, but the extension to higher numbers of truth-values faces a huge increase in the number of clones to be considered. The general k-valued case is still an open problem.



Structural rules in DAT


Alice G.B. ter Meulen


Dynamic Aspect Trees (DAT, ter Meulen 1995, a.o.) constitute a dynamic system modelling how we reason in time about time. It is designed to take information presented in ordinary English texts as input and to report its conclusions, again in ordinary English sentences. A fundamental distinction is made in constructing DATs between structure-building dynamic information and structure-preserving static information. Natural deduction style rules introduce or eliminate temporal or aspectual information, based on the underlying context-dependent notion of situated inference. Common structural rules (Permutation, Monotonicity and Cut) are substantially constrained in DATs. A proper characterization of all valid situated inferences and their metalogical properties is still lacking, as is a soundness and completeness proof.



Generic Ontology of Linguistic Classification


Rainer Osswald


Classification is basic to linguistic theorizing. Examples range from taxonomies of traditional grammar and lexical classification by multiple inheritance hierarchies to feature-based grammatical theories. Formally, a classification T is a first-order theory consisting of universally quantified conditionals; antecedent and consequent predicates are assumed to be (finitary) geometric over a set G of one-place predicates, i.e. are built by finite conjunction and disjunction from members of G plus two predicates expressing existence and non-existence. Due to the lack of negation, first-order logic restricted to classificational statements is non-classical.

A model of T is generic iff T-indiscernible elements of its universe are identical and coextensive predicates are T-equivalent. The generic universe of T can be represented by the T-closed, consistent subsets of G. It carries the structure of a directed-complete partial order (dcpo). Different types of classifications correspond to different types of dcpos. For instance, classifications that employ only conjunctive predicates, i.e. Horn theories, correspond to bounded-complete algebraic dcpos; those with conditionals restricted to subsumption and pairwise incompatibility of atomic predicates correspond to pairwise bounded-complete distributive algebraic dcpos. One can use these results e.g. to show that certain types of disjunctive classifications are equivalent to Horn theories, subject to a switch of G, by considering their generic universe.

From a certain point of view, generic ontology helps to clarify the nature of linguistic entities. As pointed out by Quine, every theory determines an identity predicate and thus an ontology by identification of indiscernibles. Furthermore, the arbitrariness in representing generic entities reflects Quine's conception of ontological relativity. Finally, it is tempting to relate the assumption that different generic elements represent different linguistic entities to van Fraassen's notion of empirical adequacy.
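
A toy illustration of the Horn case (my example, not the author's): take G = \{noun, count, mass\} and the classification
\[
T = \{\,\textit{count} \rightarrow \textit{noun},\;\; \textit{mass} \rightarrow \textit{noun},\;\; \textit{count} \wedge \textit{mass} \rightarrow \bot\,\},
\]
where \bot is the predicate expressing non-existence. The T-closed, consistent subsets of G are \emptyset, \{noun\}, \{noun, count\} and \{noun, mass\}; ordered by inclusion they form a (finite, hence trivially algebraic) bounded-complete dcpo, which represents the generic universe of T.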



Some approaches to the problem of correct use of statistics in science


Vladimir Reznikov


The paper offers a logical and epistemological explanation of the problem of the correct use of probability theory in science. For this purpose, a formulation of the base properties of statistical objects is given. Base properties are characterized by two features. First, base properties are used in the proofs of all fundamental results in probability theory and mathematical statistics. Secondly, base properties are logically independent of other properties. The independence of events is an example of a base property in probability theory, and statistical homogeneity is an example of a base property in statistics. Owing to the logical independence of a base property, checking its presence in the data under investigation is a labour-intensive procedure. The paper offers a classification of statistical theories by the modes of introduction of base properties and by the possibilities these theories contain for the actual verification of these properties in the data under investigation.

The validity of using informal reasoning in choosing a formal model is studied. A formal statistical model possessing base properties is often accepted on the basis of informal reasoning, and such reasoning is not free of logical errors. For example, it is supposed that the substantive independence of the conditions of the conducted experiments proves the adequacy of a model with independent experiments. In fact, the following reasoning is more correct: if the formal model with independent experiments is adequate, there are certain grounds to suppose that the experiments were conducted under conditions which were actually independent.

It is shown that many methods of classical statistics are, in some technical sense, not of a probabilistic character but rather of a probabilistic-deterministic character. This is exhibited in the attribution of maximum joint probability to the realized events in the method of maximum likelihood, and in the ignoring of events with small probabilities in Kolmogorov's principle of practical reliability used for the falsification of hypotheses. For didactic reasons, the paper argues that the method of verification has greater significance than the method of falsification in statistical research.

The conclusion contains arguments for the special role of the methodologist in the field of statistics.



Locality and Category Theory


Andrei Rodin


The Cartesian methodological principle of treating all natural phenomena uniformly made it possible to overcome Aristotle's distinction between the "sublunar" and "celestial" worlds and formed the basis of modern science. Technically this principle was supported by the mathematical concept of the Cartesian frame, which makes it possible to locate any object or event (whether "sublunar" or "celestial") within one and the same frame. In spite of many remarkable successes of the Cartesian methodology, particularly in classical mechanics, the later development of science suggested certain corrections. Thus Bohr's Correspondence Principle weakens the Cartesian principle and allows situations in which objects of different scales are treated by different theories, provided that the theories are properly "translatable" into each other yet are still not parts of one general theory. General Relativity allows only local frames (which are Cartesian but only locally applicable). These new features force us not only to think more seriously about Aristotle's ideas but also to look for better mathematical tools to replace Cartesian frames, which are still ubiquitous in science. In my talk I am going to show that category theory provides us with such tools, which comply better with the methodological demands of contemporary science.



Abductive Logic and Justification


Daniel Schoch


Abduction is generally understood as inference to the best explanation. We call the quality of an explanatory structure, given by a set of non-deductive inference rules, its degree of coherence. In order to account for non-monotonic belief revision, we adopt a holistic view of justification: coherence is not a property of belief systems but of semantic models. It balances successful against unsuccessful instances of the inference rules under the given valuation of the propositions. The coherence relation induces an extension of classical logic where, under very general conditions, the set of propositions following abductively from a given premise, by forming a filter, fulfils the criteria of strongly rational beliefs.

We argue that abductive reasoning in this sense can only provide a justification of the acceptance of propositions by consensus. It does not guarantee that the propositions stand in explanatory relations to other accepted propositions. In order to achieve what I call strong justification of a proposition P, one has to add a condition saying that the abductive reasons for P align in each model of P.



World-Travelling and Mood Swings


Kai Wehmeier


Philosophical accounts of metaphysical possibility and necessity are often motivated by natural language examples such as: "If Nixon had bribed Senator X, he would have gotten Carswell through" (counterfactual); "Under certain circumstances, Nixon would have gotten Carswell through" (metaphysical possibility); "No matter how things might have gone, Nixon wouldn't have gotten Carswell through" (metaphysical necessity). It is characteristic of these examples that the verbs never occur in the indicative, but rather in the conditional mood (or the subjunctive mood, in the protasis of the counterfactual conditional). Such morphological distinctions are, by contrast, completely absent from the standard formal languages of modal logic. Thus, when "Nixon got Carswell through" is expressed by "nGc", the possibility statement mentioned above is to be represented as "Poss(nGc)", using the one predicate letter "G" for both the indicative ("got through") and the conditional ("would have gotten through") form of the predicate. In the lecture, I shall provide a natural explanation for the absence of the indicative/non-indicative distinction in such formal languages (as interpreted by standard possible worlds semantics), point to certain difficulties faced by this explanation, explore alternative languages of modal logic, and apply these frameworks to a number of logical and philosophical problems.



On Gupta-Belnap Revision Theories of Truth: the next stable set, and a rapprochement with Kripke


Philip D. Welch


We consider various concepts associated with the revision theory of truth of Gupta and Belnap. We categorize the notions definable using their theory of circular definitions as those notions universally definable over {\em the next stable set}. We give a simplified (in terms of definitional complexity) account of varied revision sequences - as a generalised algorithmic theory of truth. This enables something of a unification with the Kripkean theory of truth using supervaluation schemes.



Constraints for an algebraic theory of language understanding


Markus Werning


In linguistics as well as in the philosophy of language there is rich evidence that natural language expressions can, at least in part, be paraphrased by a modified first-order predicate logic. It can be argued, to be more precise, that natural languages have to be paraphrased by a formal language that comprises at least object and event constants, one-place and two-place predicates, as well as identity, non-identity, the copula, pairing, conjunction, subjunction, disjunction, and negation; some of those operations are apparently inter-definable. The language-of-thought doctrine, which has its roots in Noam Chomsky's Universal Grammar and has been developed by Jerry Fodor, claims that natural language expressions express mental concepts. The insight into the paraphrasability of natural languages together with the language-of-thought doctrine implies that mental concepts are combined by means of a modified first-order predicate logic with the features mentioned above.

The main constraint for the individuation of mental concepts is the principle of compositionality: except for finitely many idiomatic concepts, the semantical properties of a complex concept are determined by, and dependent on, its syntax and the semantical properties of its primitive constituent concepts. Given materialism, the properties of mental concepts (including their semantical properties) are determined by, and dependent on, neuronal properties of the brain. From those premises it can be inferred that there is a constituent-preserving isomorphism between an algebra that includes the above-mentioned logic and an algebra that comprises certain neuronal entities and operations.

Based on recent neurobiological research on synchronous oscillations and on psycholinguistic data, the paper suggests what such a neuronal algebra might look like. Employing the tools provided by universal algebra, the paper develops a theory of natural language understanding. It also provides reasons why quantifiers as well as negation and disjunction pose specific problems for linguistic theory.
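
The compositionality constraint appealed to here is standardly rendered as a homomorphism requirement (a schematic statement, not a quotation from the paper): for every syntactic operation \sigma there is a semantic operation r_{\sigma} such that
\[
\mu\bigl(\sigma(t_1,\ldots,t_n)\bigr) = r_{\sigma}\bigl(\mu(t_1),\ldots,\mu(t_n)\bigr),
\]
where \mu assigns to each (non-idiomatic) constituent its semantic value; the constituent-preserving isomorphism claimed above would then hold between two algebras of this kind, one logical and one neuronal.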



Scientific Method and Formal Learning Theory


Roger A. Young


In formal learning theory, following Gold, there is a school of thought (e.g. Jain, Osherson et al., Kelly) that treats a scientist as interacting with a potentially denumerably infinite data stream and as seeking an empirically adequate hypothesis about the system that is generating it. Given appropriate assumptions about the system, it is possible to prove that the scientist can have a reliable method that will, within finite time, generate an empirically adequate hypothesis. For example, if the system is a deterministic finite state machine, then there are reliable algorithms. In contrast, many think that this formal research is far removed from actual scientific method. The talk discusses various issues connected with this criticism. Are the assumptions made question-begging? What are we to say about analogue properties, or revision of the data? Do scientists in practice follow a reliable method? I argue that a version of the formal approach is viable.
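
The reliability notion at issue goes back to Gold's criterion of identification in the limit (stated here schematically, not quoted from the talk): a learner M identifies a class \mathcal{C} of possible data-generating systems iff
\[
\forall L \in \mathcal{C}\;\;\forall\, t \text{ for } L\;\;\exists n\;\;\forall m \geq n:\;\; M(t[m]) = M(t[n]) \text{ and } M(t[n]) \text{ is correct for } L,
\]
where t ranges over the data streams generated by L and t[m] is the finite initial segment seen by stage m; the learner must converge to a correct hypothesis after finitely many data, though it need not know when it has converged.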
