Abstracts and Slides
Invited Lectures and PhD Colloquium talks
- Dana Bartošová
Dual Ramsey Theorem and its
Applications in Topological Dynamics
The Dual Ramsey Theorem of Graham and Rothschild is a powerful partition
principle with many applications. We will present its recent applications to the
dynamics of the homeomorphism group of the Lelek fan, of the group of linear
isometries of the Gurarij space, and of the group of affine homeomorphisms of
the Poulsen simplex.
- Elisabeth Bouscaren
Model Theory and its applications to algebraic geometry
We will try to present some of the model-theoretic notions and tools that are important in the now classical
applications of model theory to Diophantine geometry, as well as one of the main concerns of modern model theory:
how to characterize classical mathematical structures by abstract conditions
(the Zilber trichotomy principle, Zariski structures). We will try to keep the talk as elementary as the subject allows.
- Ekaterina Fokina
Equivalence Relations and Computable Structures
Equivalence relations formalize the idea of resemblance between mathematical
objects. In this talk we discuss several approaches to studying the role of
equivalence relations in computable structure theory. Within one approach,
equivalence relations are used to measure the complexity of classification of
computable structures. Another approach is to employ degrees of atomic diagrams
of structures to characterize the inherent complexity of equivalence classes of
structures, up to various equivalence relations. Yet another direction is to
use equivalence relations to compare the informational content of universes for
effectively given structures. For each approach, we explain the context, give
an overview of the most important results and list some open questions.
- Laura Fontanella
Reflection of stationary sets, tree property and square
One of the most fruitful research areas in set theory is the study of so-called Reflection Principles.
Roughly speaking, a reflection principle is a combinatorial statement of the following form: given a structure S
(e.g. stationary sets, trees, graphs, groups...) and a property P of the structure, the principle establishes that
there exists a smaller substructure of S that satisfies the same property P. There is a tension between large cardinal
axioms and the axiom of constructibility
V = L at the level of reflection: on the one hand, large cardinals typically imply
reflection properties; on the other hand, L satisfies the square principles, which are anti-reflection properties. Two particular
cases of reflection have received special attention: the reflection of stationary sets and the tree property. We will discuss the
interactions between these principles and a version of square due to Todorcevic. This is joint work with Menachem Magidor
and Yair Hayut.
- Philipp Hieronymi
Diophantine approximation, scalar multiplication and decidability
It has long been known that the first order theory of the expansion (ℝ,<,+,ℤ)
of the ordered additive group of real numbers by just a predicate for the set
of integers is decidable. The result, arguably due to Skolem, can be deduced easily from
Büchi's theorem on the decidability of the monadic second-order theory of one successor,
and was later rediscovered independently by Weispfenning and Miller. However, it follows from Gödel's
famous first incompleteness theorem that when this structure is expanded by a symbol for multiplication,
the theory of the resulting structure (ℝ,<,+,*,ℤ) becomes undecidable.
This observation gives rise to the following natural and surprisingly still
open question: How many traces of multiplication can be added to (ℝ,<,+,ℤ)
without making the first order theory undecidable? We will give a complete answer
to this question when "traces of multiplication" is taken to mean scalar multiplication
by certain irrational numbers. To make this statement precise: for b in ℝ, let f_b : ℝ → ℝ
be the function that takes x to bx. We will show that the theory of (ℝ,<,+,ℤ,f_b)
is decidable if and only if b is quadratic. The proof rests on the observation that
many of the Diophantine properties (in the sense of Diophantine approximation) of b can be coded in these structures.
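The connection to Diophantine approximation is concrete: by Lagrange's theorem, a real number is a quadratic irrational exactly when its continued fraction expansion is eventually periodic, and this kind of periodic structure is what such decidability arguments can exploit. As an illustration (not part of the talk), the following sketch computes the continued fraction of √n by the standard integer recurrence:

```python
import math

def sqrt_continued_fraction(n, terms):
    """Continued fraction [a0; a1, a2, ...] of sqrt(n),
    using the classical integer recurrence (no floating point)."""
    a0 = math.isqrt(n)
    if a0 * a0 == n:
        return [a0]  # sqrt(n) is an integer: expansion terminates
    coeffs = [a0]
    m, d, a = 0, 1, a0
    for _ in range(terms - 1):
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        coeffs.append(a)
    return coeffs

# sqrt(2) = [1; 2, 2, 2, ...] -- eventually periodic, as Lagrange predicts
print(sqrt_continued_fraction(2, 6))   # [1, 2, 2, 2, 2, 2]
# sqrt(7) = [2; 1, 1, 1, 4, 1, 1, 1, 4, ...] with period (1, 1, 1, 4)
print(sqrt_continued_fraction(7, 9))
```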
- Rosalie Iemhoff
Logics and Rules
As is well-known, logics can be defined in various ways: via models, via
proof systems, and in more exotic ways as well. In the approach via proof
systems, rules of inference are the central notion. There are numerous proof
systems for a given logic; natural deduction and sequent calculi are two of
the most famous examples. But even if one considers one particular proof
system, there are countless ways to describe a given logic. To describe these
possibilities in a structured way, the admissible rules of a logic, the rules
under which the logic is closed, have to be established. This talk discusses
admissible rules, explains their connection with unification theory, treats
examples of logics where a full description of their admissible rules is known
and draws conclusions about possible proof systems for such logics.
- Dominik Klein
Social interaction -- a formal exploration
Abstract: TBA
- Daniel Körnlein
Quantitative analysis of iterative algorithms in fixed point
theory and convex optimisation
"What more do we know if we have proved a theorem by restricted means than if we
merely know that it is true?" In proof-mining practice, the additional information
is usually in the form of convergence rates or so-called rates of metastability.
Numerous results in this vein have appeared in the last 15 years, see e.g. [1,2].
The talk will focus on proof mining results obtained from proofs in fixed point theory
and optimization, particularly those using weak sequential compactness.
[1] U. Kohlenbach. Effective uniform bounds from proofs in abstract functional analysis.
In S. B. Cooper, B. Löwe, and A. Sorbi, editors, New computational paradigms, pages 223-258. Springer, New York, 2008.
[2] U. Kohlenbach. Recent progress in proof mining in nonlinear analysis. To appear in
a forthcoming book with invited articles by recipients of a Gödel Centenary Research
Prize Fellowship, 2016. Available from http://www.mathematik.tu-darmstadt.de/~kohlenbach/
- Gyesik Lee
Formal proofs, variable binding, and program extraction from proofs
We talk about some issues related to making use of formal proofs,
including variable binding and program extraction from proofs. First, we give a short introduction to formal proofs and explain why variable binding matters in doing formal proofs.
Then, we demonstrate how normalization by evaluation can be realized by program extraction from proofs. The example used for the demonstration is related to the cut elimination process of intuitionistic first-order predicate logic.
- Norbert Preining
Gödel Logics - a short survey
Gödel logics can be characterized in a rough-and-ready way as follows:
The language is a standard (propositional, quantified-propositional,
first-order) language. The logics are many-valued, and the
sets of truth values considered are (closed) subsets of [0,1] which
contain both 0 and 1. 1 is the `designated value,' i.e., a
formula is valid if it receives the value 1 in every interpretation.
The truth functions of conjunction and disjunction are minimum and
maximum, respectively, and in the first-order case quantifiers are
defined by infimum and supremum over subsets of the set of truth
values. The characteristic operator of Gödel logics, the Gödel
conditional, is defined by A → B = 1 if A ≤ B, and B otherwise.
Because the truth values are ordered, the semantics of Gödel logics is
suitable for formalizing comparisons. It is related in this respect to a
more widely known many-valued logic, Łukasiewicz (or `fuzzy') logic.
In contrast to Łukasiewicz logic, which might be considered a logic of
"absolute" or "metric comparison", Gödel logics are logics
of "relative comparison".
Approaching from a different angle, Gödel logic is one of the three basic
t-norm based logics which have received increasing attention in the
last 15 or so years (the others are Łukasiewicz and product logic).
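A minimal computational sketch of these truth functions (an illustration added here, not part of the survey): with truth values in [0,1], conjunction is minimum, disjunction is maximum, and the Gödel conditional returns 1 whenever the antecedent does not exceed the consequent.

```python
def g_and(a, b):
    """Gödel conjunction: minimum of the truth values."""
    return min(a, b)

def g_or(a, b):
    """Gödel disjunction: maximum of the truth values."""
    return max(a, b)

def g_impl(a, b):
    """Gödel conditional: 1 if a <= b, otherwise b."""
    return 1.0 if a <= b else b

# Relative comparison: only the order of the two values matters.
print(g_impl(0.3, 0.7))   # 1.0, since 0.3 <= 0.7
print(g_impl(0.7, 0.3))   # 0.3, the value of the consequent
# The linearity schema (A -> B) v (B -> A) always takes value 1:
print(g_or(g_impl(0.7, 0.3), g_impl(0.3, 0.7)))  # 1.0
```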
Yet Gödel logic is also closely related to intuitionistic logic:
it is the logic of linearly-ordered Heyting algebras.
In the propositional case, infinite-valued Gödel logic can
be axiomatized by the intuitionistic propositional calculus extended
by the axiom schema (A → B) ∨ (B → A). This connection
extends also to Kripke semantics for intuitionistic logic: Gödel
logics can also be characterized as logics of (classes of) linearly
ordered and countable intuitionistic Kripke structures with constant
domains. Furthermore, the infinitely valued propositional Gödel logic
can be embedded into the box fragment of LTL in the same way as
intuitionistic propositional logic can be embedded into S4.
We will give a short overview of this family, presenting propositional,
quantified propositional (weak second order), and first order logics,
including proof-theory, relation to intuitionistic logic, and recent
results on fragments.
- Szymon Toruńczyk
Computation with Atoms
The framework of Sets with Atoms has its roots in the works
of Fraenkel and Mostowski, and has been recently
rediscovered in the Computer Science community. I will show
how this framework can be used in order to represent
infinite structures in a finite way which can be processed
by algorithms. Other notions from model theory, notably
countable categoricity, play a role in the complexity
analysis of such algorithms. I will present some
applications to verification of infinite state systems. I
will also present two working tools which effectively
manipulate such infinite structures.
- Charlotte Werndl
Calibration and Confirmation in Climate Science
A hotly debated issue on confirmation in climate science (as well as in philosophy) is the requirement of use-novelty (i.e. that data can only confirm models if they have not already been used before, e.g. for calibrating parameters). This paper investigates the issue of use-novelty in the context of the mathematical methods provided by model selection theory. We will first argue that model selection theory provides a good method for evaluating many climate models. Then we will show that the picture model selection theory presents us with about use-novelty is more subtle and nuanced than the commonly endorsed positions by climate
scientists and philosophers. More specifically, we will argue that there are two main cases in model selection theory. On the one hand, there are methods such as cross-validation where the data are required to be use-novel. On the other hand, there are methods such as Bayesian confirmation or the Akaike Information Criterion (AIC) for which the data cannot be use-novel. Still, for some of these methods (like AIC) certain intuitions behind the use-novelty approach are preserved: there is a penalty term in the expression for the degree of confirmation by the data because the data have already been used for calibration. The common positions argued for in climate science and philosophy are either that data should always be use-novel or that the use-novelty criterion is irrelevant. According to model selection theory these positions are too simple: whether or not data should be use-novel depends on the specific method used. For certain methods data should be use-novel, but for others they cannot and thus need not be use-novel.
Contributed Talks
- Christopher Badura
Truth in Fiction via Non-Standard Belief Revision
Is the sentence 'Sherlock Holmes lives in Baker Street 221b' true? Most
people would agree, or would give the proviso 'well, he does, in the
fiction'. In literary studies, many claims of this form are made: Romeo
loves Juliet. They both live in Verona. And so forth. All these
sentences, so some hold, are not literally true but true in the fiction.
Thus, one prefixes sentences like the ones mentioned with a fiction
operator 'In the fiction f', which we represent by writing 'In f', and
then evaluates whether the resulting sentence of the form 'In f: p' is
true. The question then is: what are the truth conditions for a sentence
of this form?
Fiction operators such as 'In the fiction f' have seen applications
particularly in the philosophy of fiction, but more broadly in ontological and
metaphysical debates: for example, there are fiction-operator approaches
to modality, mathematics and morality. Giving a suitable analysis of
when a sentence of the form 'In f: p' is true is hence of importance. The
most famous approach is David Lewis's analysis. However, it has certain
shortcomings, especially when applied to inconsistent fictions in which
not everything is true.
We start by taking Lewis's Analysis 2 and give it a formal interpretation
that takes into account impossible worlds and ideas from belief revision
theory. Our formal framework comprises multi-agent plausibility models
with a domain of possible and impossible worlds, ordered by a group
plausibility ordering. This gives rise to Grove-style sphere models,
which are known to be models for the AGM axioms. We extend these models
to an impossible-world setting.
Then, a sentence of the form 'In f: p' is true under our interpretation of Analysis
2 iff every world that, after revising with the explicit content of the
fiction, is at least as plausible as any common-belief world and makes the
explicit content of the fiction true also makes p true.
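A toy rendering of a simplified variant of this truth condition (entirely illustrative; the worlds, ranks and valuation below are invented for the example, not taken from the talk): worlds carry a plausibility rank (lower = more plausible), revision by the fiction's explicit content F keeps only the F-worlds, and 'In f: p' holds when every most plausible surviving world satisfies p.

```python
# Simplified sketch of the clause: 'In f: p' holds iff the most plausible
# worlds making the fiction's explicit content F true also make p true.
# Impossible worlds are just worlds whose valuation may be inconsistent.

def in_fiction(worlds, rank, F, p):
    """worlds: list of world ids; rank: world -> int (lower = more
    plausible); F, p: world -> bool."""
    f_worlds = [w for w in worlds if F(w)]    # revise with explicit content
    if not f_worlds:
        return True                           # vacuously true
    best = min(rank(w) for w in f_worlds)     # most plausible F-worlds
    return all(p(w) for w in f_worlds if rank(w) == best)

# Hypothetical mini-model: w0 is an ordinary world, w1 an impossible world
# where Holmes both does and does not live in Baker Street.
valuations = {
    "w0": {"holmes_in_baker_st": True, "not_holmes_in_baker_st": False},
    "w1": {"holmes_in_baker_st": True, "not_holmes_in_baker_st": True},
}
ranks = {"w0": 0, "w1": 1}

F = lambda w: valuations[w]["holmes_in_baker_st"]   # explicit content
p = lambda w: valuations[w]["holmes_in_baker_st"]
print(in_fiction(list(valuations), lambda w: ranks[w], F, p))  # True
```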
On our models, in inconsistent fictions not everything is true. Moreover,
if F is the explicit content of the fiction, we have that In f: F is a
logical truth, that is, in the fiction its explicit content is true,
which is desirable. Furthermore, the import of background beliefs and
background knowledge is accounted for by the plausibility ordering.
However, within the scope of the operator 'In f:', we can invalidate
almost any logical inference. This accounts for the anarchy of fiction
with respect to logic: for any logic L, it seems, we can write a fiction
where L fails.
- Bernd Buldt
Mathematical practice and human cognition. A critical appraisal of Frank Quinn's Science of Mathematics
Frank Quinn (of Jaffe-Quinn fame, see [1]) worked out the basics of his own
account of mathematical practice, an account informed by an analysis of
contemporary mathematics and its pedagogy (see [2]). Taking this account as
our starting point, we can characterize the current mathematical practice of
acquiring and working with new concepts as a cognitive adaptation strategy
that, first, emerged to meet the challenges posed by the growing abstractness
of its objects and which, second, proceeds according to the following
three-pronged approach:
(i) sever as many ties to ordinary language as possible and limit
ordinary-language explanations to an absolute minimum; (ii) introduce
axiomatic definitions and bundle them up with a sufficient number of examples,
lemmata, propositions, etc. into small cognitive packages; (iii) practice hard
with one new cognitive package at a time.
Drawing on research in cognitive science, and especially in mathematics
education, I will then show how cognitive science provides supporting evidence
for the effectiveness of this mathematical practice. Time permitting, I will
then complement these findings from a phenomenological perspective by
exhibiting a fruitful convergence between mathematics, cognitive science, and
phenomenological analysis.
The full paper is divided into the following sections: Remark on mathematical
practice − Frank Quinn's Contributions − Mathematical practice: defining
concepts − Mathematical practice: acquisition of concepts − Mathematical
practice: corroboration by cognitive science − Mathematical practice:
convergence with phenomenology.
References
[1] Jaffe, Arthur, Quinn, Frank. "Theoretical Mathematics: Towards a synthesis
of mathematics and theoretical physics," Bulletin of the American Mathematical
Society NS 29:1 (1993), 1-13.
[2] Quinn, Frank. Contributions to a Science of Mathematics, manuscript (October 2011), 98pp.
- Bernd Buldt
On fixed points, diagonalization, and self-reference
We clarify the respective roles that fixed points, diagonalization, and
self-reference play in proofs of Gödel's first incompleteness theorem. We
first show that the usual fixed-point construction can be reduced to a double
diagonalization; this is done to address widely held views such as that fixed
points are "paradoxical" (Priest), or work by "black magic" (Soare), or that
their construction is "intuitively unclear" (Kotlarski). We then discuss three
notions of self-reference; this can be seen as an extension of a recent study
by Halbach and Visser and is meant to show that we do not (yet?) have a robust
theory that would allow us to establish a firm link between fixed points and
self-reference.
- Huimin Dong and Norbert Gratzl
Open Reading for Free Choice Permission: A Perspective in Substructural Logics
This talk proposes a new solution to the well-known Free Choice Permission
Paradoxes, combining ideas from substructural logic and non-monotonic
reasoning. Free choice permission is intuitively understood as "if it is
permitted to do A or B then it is permitted to do A and it is permitted
to do B." This is usually formalized as follows:
P (A ∨ B) ⊢ P A ∧ P B. (FCP)
There are many well-known problems associated with FCP. In this paper we
focus on three of them. First, in many deontic systems adding FCP allows
for a form of conjunctive inference which seems clearly unacceptable:
if it is permitted to order a lunch then it is permitted to order a
lunch and not pay for it. Second, many deontic logics become
resource-insensitive in the presence of FCP. They validate inferences of
the form "if the patient with stomach trouble is allowed to eat one
cookie then he is allowed to eat more than one," which are also
counter-intuitive. Third, in its classical form FCP entails that
classically equivalent formulas can be substituted within the scope of a
permission operator. This is also implausible: it is permitted to eat an
apple or not iff it is permitted to sell a house or not. The challenge
for a logic of free choice permission is to exclude such
counter-intuitive consequences while not giving up too much deductive
power. We need a suitable non-classical calculus for free choice
permission. We suggest that the right way to obtain one is to use a family of
substructural logics augmented with a principle borrowed from
non-monotonic reasoning.
The solution that we put forward in this talk is to build the theory of FCP
on top of a plausible calculus for action types. This calculus must, in our
view, be at the same time relevant, non-monotonic and resource-sensitive. So
the present talk can be seen as a follow-up to a suggestion made in [1]. It is
structured as follows. We first clarify various background concepts related to
free choice permission, and apply the semantic strategy suggested in [1] to
analyze the three problems above. In this section we emphasize that one of
the main goals of this talk is to balance the counter-intuitive consequences
of free choice permission against the deductive power of its logic. Our
solution is to use substructural "normality" logics which, we argue, preserve
enough deductive power while keeping counter-intuitive FCP inferences at bay.
These logics are presented in the third section, where it is shown that they
are sound and complete and satisfy cut elimination.
References
[1] Albert J.J. Anglberger,
Huimin Dong, and Olivier Roy. Open reading without free choice. In
Fabrizio Cariani, Davide Grossi, Joke Meheus, and Xavier Parent,
editors, Deontic Logic and Normative Systems, volume 8554 of Lecture
Notes in Computer Science, pages 19-32. Springer International
Publishing, 2014.
- Bernhard Fisseni
Perspectives as Vector Spaces
Perspective is an important concept in the informal modeling of narrative.
E.g., Schmid distinguishes five dimensions of perspective (Perception,
Ideology, Space, Time, Language). Formal modeling of perspective has
concentrated on identifying 'subjective statements' (Wiebe). We discuss a
conceptualization of perspective which builds on the metaphor of a
mathematical (vector) *space*. The main claim is that such a notion of
perspective makes it possible to compare perspectives and to merge and unify
perspectivized information by operationalizing the metaphor: projecting
between different semantic spaces, e.g. the 'objective' space and the
spaces of individual and temporal stances. (The 'objective' space is
also just a perspective in this framework.)
The conceptualization
builds on the assumption that for two descriptions to be considered
different perspectives on the same state of affairs, they have to have
some facts and descriptive aspects in common, which can be captured as
(applications of) a common semantic subspace, but may also have
conflicting information and categorizations. What sets this concept of
perspective apart from other notions of semantic spaces or mental
spaces is that (a) the space is not just a container that is different
from other spaces, but that its internal structure and the 'co-ordinate
system' or basis are of prime importance for the comparison and merging
of different perspectives, and that (b) the space that is modeled is not
the space of a concrete application of the semantic inventories, but
the inventories themselves.
- Christine Gaßner
BSS RAMs with ν-Oracle and the Axiom of Choice
We deal with some questions arising from the comparison of two different
models of computation over mathematical structures. One of these models is
our BSS RAM model, which results from a generalization of the Turing machine
and the BSS model of computation and allows one to describe algorithms on a
high level of abstraction. The other model is a machine-independent model of
abstract computation over first-order structures. Moschovakis defined it by
generalizing the classical concept of μ-recursive functions; in this setting
the μ-operator is replaced by a nondeterministic ν-operator. In order to
better understand the similarities and differences between the two models, we
extend our BSS RAMs by introducing such a ν-operator and study the new
machines for several structures. Finally, we discuss the consequences, in
particular for well-ordered structures, but also for unordered structures.
- Lev Gordeev
Proof compression and NP = PSPACE
The conjecture NP = PSPACE seems provable by elementary dag-like proof
compression techniques in a standard natural deduction proof system for
propositional minimal logic, using the intermediate sequent calculus of
Joerg Hudelmaier.
- Walter Hoering
On the Necessity of Chance [Über die Notwendigkeit des Zufalls] (The author would be pleased to send his paper if asked by e-mail: hoering(at)lmu(dot)de)
We present a presumably new kind of formal system in which random decisions
play an essential role.
- Radek Honzik
Definability of satisfaction in outer models
Let M be a transitive model of ZFC. We say that a transitive model of
ZFC, N, is an outer model of M if M ⊆ N and
ORD ∩ M = ORD ∩ N. The outer model theory of
M is the collection of all formulas with parameters from M which
hold in all outer models of M (which exist in a universe in which M
is countable; this is independent of the choice of such a universe).
Satisfaction defined with respect to outer models can be seen as a
useful strengthening of first-order logic. Starting from an
inaccessible cardinal κ, we show that it is consistent to have a
transitive model M of ZFC of size κ in which the outer model
theory is lightface definable, and moreover M satisfies V = HOD. The
proof combines the infinitary logic L∞,ω, Barwise's
results on admissible sets, and a new forcing iteration of length
strictly less than κ⁺ which manipulates the continuum function
on certain regular cardinals below κ. We also review some
unpublished results of Mack Stanley which are directly related to our
topic.
The work is joint with Sy-David Friedman.
- Nick de Hoog
Empirical investigations of mathematical styles
Every mathematician has their own style of doing mathematics. Such styles are
influenced by many factors, e.g. familiarity with research areas, proof
techniques and ideas, but also the expectations and valuations of peers. In
developing one's personal style, local circumstances seem to play an
important role. Anecdotal evidence suggests correlations between
non-mathematical cultural factors (e.g. nationality, gender, etc.) and
distinct mathematical research cultures. How might these claims be made
more precise and substantiated?
A preliminary report on two questionnaire studies on cultural influences in
mathematical research will be presented. These studies were conducted during
the DMV-Jahrestagung in Hamburg (September 2015) and the 7ECM in Berlin
(July 2016) and form the cornerstone of a study on national styles in
mathematical practice.
- Mirjana Ilic
A natural deduction system for positive contractionless relevant logic
We give a normalizing system of natural deduction for positive
contractionless relevant logic RW+o. Unlike other known natural deduction
systems for relevant logics, including RW+o, constructed by Brady, Dunn and
Urquhart, this one stands in a certain translational relationship to a
particular sequent calculus for RW+o. Namely, due to the fact that our system
is based on general, instead of standard (special), elimination rules, it is
possible to define translations from one system to the other, such that
normal natural deduction derivations correspond to cut-free sequent calculus
derivations and vice versa. Based on those translations, normalization and
closure under composition can be proved with the help of cut elimination.
- Guram Bezhanishvili, Nick Bezhanishvili and Julia Ilin
Superintuitionistic logics and extensions of lax logic
Propositional lax logic (PLL) is an intuitionistic modal logic. The modality
of PLL, called the lax modality, is quite peculiar, as it has features of both
the box and the diamond modality from classical modal logic. The lax modality
arises in several contexts. From an algebraic point of view, it corresponds
to a nucleus on a Heyting algebra. The axioms of the lax modality also
describe the logic of Grothendieck topologies, which was studied by Goldblatt.
Fairtlough and Mendler use the lax modality in the formal verification of
computer hardware. Wolter and Zakharyaschev studied PLL in the context of
preservation results from superintuitionistic to intuitionistic modal logics.
It is well known that PLL is Kripke complete, has the finite model property
and is decidable.
In this paper, we make a first attempt to study extensions of PLL. We will
develop the machinery of Zakharyaschev's canonical formulas for PLL. The
canonical formulas allow us to axiomatize the extensions of PLL in a uniform
way. We take an algebraic approach to this problem. Our main result concerns
extensions of PLL that arise by equipping a superintuitionistic logic L
(i.e., an extension of the intuitionistic propositional calculus (IPC)) with
the lax modality. These are extensions of the shape PLL+A, where A is a set
of formulas in the language of intuitionistic logic. We show that if
L = IPC+A has the finite model property, is tabular or is Kripke complete,
then PLL+A enjoys the same property. Moreover, if the logic L is Kripke
complete and decidable, then PLL+A is also decidable. As a corollary we
obtain (an alternative proof of the fact) that PLL has the finite model
property and is decidable.
Similar preservation results for extensions of the basic intuitionistic modal
logic IntK have been obtained by Wolter and Zakharyaschev. Their proofs are
based on embedding intuitionistic modal logics into fusions of classical
modal logics. Our proofs, on the other hand, rely on the canonical formulas
for PLL.
- Maciej Kleczek
A Fully Compositional Semantics for First-Order Languages
Our aim is to provide a semantics for first-order languages (with no
individual constants and function symbols), which takes into account
meanings of subatomic constituents of formulas.
The issue of the meaning of a first-order variable is a vexed one [2], [4].
However, we shall identify the meaning of each xi ∈ Varα in ℑ with the i-th
projection function. The meaning of a formula φ in ℑ is identified with
the set of satisfying sequences. A look at the existing literature reveals
that the concepts of the meaning of a first-order variable and of a
first-order formula in ℑ have been developed independently [5], [6], [7].
The meaning of an atomic formula, as a set of satisfying assignments, is
taken as primitive. This gives grounds to doubt whether the standard meaning
assignment to first-order formulas in ℑ is fully compositional.
In order to rectify this we claim that atomic formulas are both syntactically
and semantically complex. This idea is not a novelty; it circulates in the
algebraic literature [1], [8], [9]. Therein an atomic formula is regarded as
syntactically and semantically complex just in case the sequence of variables
succeeding a predicate symbol is not a proper initial segment of the
canonical well-order on Varα. Nevertheless, contrary to the algebraic
approach, we shall regard each atomic formula as complex.
Consequently we propose a constituent analysis with the following types of
expressions: (a) n-ary sequences of variables for each n ∈ ω, (b) predicate
symbols, (c) boolean operators, (d) indecomposable quantifier prefixes of the
form ∀xi and ∃xi for each i < α, and (e) formulas. Next we define the class
of σ-structures Strσ. Its elements are tuples of the form ℑ' := 〈Dℑ', Iℑ'〉
where Iℑ'(Pn) : (Dℑ')n → 2 for each Pn ∈ σ. The meaning of each predicate
symbol Pn (denoted by μℑ'(Pn)) is identified with the interpretation
function. Furthermore we define, by induction on the length of a sequence,
the meaning assignment to each member of Varα+:
(a) μℑ(xi) := xiℑ
(b) μℑ〈x0, ..., xn-1, xn〉 := μℑ〈x0, ..., xn-1〉 × μℑ(xn).
Crucially, the product operation is non-commutative. This allows us to
explain the non-synonymy of R(v2,v3) and R(v3,v2) where R is interpreted as
an asymmetric relation. Next we define inductively the satisfaction function
⊨' : ℑ × Lσ × (Dℑ)n → 2 with the atomic clause ⊨'(ℑ', α,
P(x0, ..., xn-1)) = 1 iff μℑ'(Pn) ∘ (μℑ〈x0, ..., xn-1〉)[α] = 1.
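A small executable sketch of this subatomic semantics (an illustration under our reading of the definitions; the variable indices, assignment sequence and relation below are invented for the example): a variable means a projection from assignment sequences, a sequence of variables means the ordered, non-commutative product of those projections, and an atomic formula is satisfied when the predicate's interpretation, composed with that product, yields 1.

```python
def mu_var(i):
    """Meaning of variable x_i: the i-th projection on assignment sequences."""
    return lambda seq: seq[i]

def mu_seq(indices):
    """Meaning of a variable sequence <x_i0, ..., x_ik>: the ordered
    (non-commutative) product of the corresponding projections."""
    return lambda seq: tuple(seq[i] for i in indices)

def satisfies(interp, indices, seq):
    """Atomic clause: P(x_i0, ..., x_ik) is satisfied by assignment seq
    iff interp composed with the meaning of the variable sequence gives 1."""
    return interp(mu_seq(indices)(seq))

# Toy interpretation: R is the asymmetric relation <, returning 1 or 0.
R = lambda pair: 1 if pair[0] < pair[1] else 0
assignment = (5, 7, 1, 2)   # values assigned to x0, x1, x2, x3

# Non-commutativity of the product explains non-synonymy of R(x2,x3), R(x3,x2):
print(satisfies(R, (2, 3), assignment))  # 1 < 2 holds, so 1
print(satisfies(R, (3, 2), assignment))  # 2 < 1 fails, so 0
```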
Moreover we extract the algebraic content of the satisfaction function. Given
Varα, we define for each ℑ' ∈ Strσ the tuple
Cf(ℑ')α := 〈Xf, ∧, ∨, *, ci, di,j, ∘, ×〉 with i,j < α.
Members of the resulting class of algebras are called partial cylindric
function algebras of dimension α with product and composition. The carrier
of Cf(ℑ')α consists of (1) functions corresponding to the meanings of (a)
n-ary sequences of variables for each n ∈ ω and (b) predicate symbols, and
(2) the set of functions satisfying formulas. Next we define an embedding
f : Cs(ℑ)α → Cf(ℑ')α, where Cs(ℑ)α is the cylindric set algebra of dimension
α generated by the standard first-order structure ℑ which corresponds to ℑ'.
Hence the subalgebra of Cf(ℑ')α generated by the image of f is locally finite
dimensional and satisfies the analogue of regularity. Furthermore, via this
embedding we recover an algebraic axiomatization of first-order logic.
Finally, relative to this class of algebraic structures we provide a fully
compositional meaning assignment in the manner of [4].
References:
[1] Andreka, Hajnal, et al. Algebraic Logic. In Handbook of Philosophical Logic, vol. 2, ed. Dov Gabbay et al. Kluwer, 2001.
[2] Fine, Kit. Semantic Relationism. Blackwell Publishing, 2007.
[3] Hodges, Wilfrid. Formal Features of Compositionality. Journal of Logic, Language and Information, vol. 10, 2001.
[4] Hodges, Wilfrid. Truth in a Structure. Proceedings of the Aristotelian Society, vol. 86, 1986.
[5] Janssen, Theo. Compositionality. In Handbook of Logic and Language, ed. Johan van Benthem et al. MIT Press, 1997.
[6] Kracht, Marcus. Interpreted Languages and Compositionality. Springer Verlag, 2011.
[7] Monk, Donald. Mathematical Logic. Springer Verlag, 1976.
[8] Monk, Donald. Substitutionless Predicate Logic with Identity. Archiv für Mathematische Logik und Grundlagenforschung, vol. 7, 1965.
[9] Tarski, Alfred. A Simplified Formalization of Predicate Logic with Identity. Archiv für Mathematische Logik und Grundlagenforschung, vol. 7, 1964.
[10] Visser, Albert. Contexts in Dynamic Predicate Logic. Journal of Logic, Language and Information, vol. 7, 1998.
-
Tomer Kotek and
Johann Makowsky
Integer Sequences arising from Graph Polynomials: An application of a Theorem by C. Blatter and E. Specker
A. Mani and R. Stones, in their paper from 2016 (SIAM Journal on Discrete
Mathematics), study the sequence of integers tca,b(n) = T(Kn,a,b),
where T(G,X,Y) is the Tutte polynomial and a,b ∈ ℤ. For the case
a=1 this gives a sequence C(n,b) which for b=2 counts the number
of labeled connected graphs of order n.
They give a description of
the modular behaviour of C(n,b) modulo prime powers pk for b ≠ 1
mod p. Their tools for the analysis of the modular behaviour of
C(n,b) are elementary number theory and group actions. Similar
methods were employed in 1981 by C. Blatter and E. Specker, who
additionally used tools from logic to analyze the modular behaviour of a very large
class of combinatorial counting functions. In fact it follows from their work that the sequence C(n,2)
is ultimately periodic modulo every m ∈ ℕ.
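As a concrete illustration of the sequence C(n,2) (a hypothetical side computation, not part of the abstract), the numbers of labeled connected graphs can be computed from the standard recurrence that classifies each labeled graph on n vertices by the connected component containing vertex 1, and their residues modulo a fixed m can then be inspected directly:

```python
from math import comb

def connected_graph_counts(n_max):
    """Numbers of labeled connected graphs on 1..n_max vertices,
    via the standard recurrence: every labeled graph on n vertices is
    determined by the connected component of vertex 1 (k vertices)
    and an arbitrary graph on the remaining n - k vertices."""
    c = [0, 1]  # c[1] = 1; index 0 unused
    for n in range(2, n_max + 1):
        total = 2 ** comb(n, 2)  # all labeled graphs on n vertices
        rest = sum(comb(n - 1, k - 1) * c[k] * 2 ** comb(n - k, 2)
                   for k in range(1, n))  # component of vertex 1 has k < n vertices
        c.append(total - rest)
    return c

c = connected_graph_counts(6)
print(c[1:])               # [1, 1, 4, 38, 728, 26704]
print([x % 3 for x in c[1:]])  # residues modulo 3
```

Computing c[n] % m for larger n gives a finite glimpse of the ultimate periodicity asserted above, though of course no finite initial segment proves it.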
A. Mani and R. Stones also formulate a conjecture concerning the modular
behaviour of tc(n,a,b). In this talk we study the modular behaviour of
tc(n,a,b) for a,b ∈ ℤ and a modulus m, and prove a modified form of their conjecture. We
also study the modular behaviour of P(Kn,a,b) for a wide class of
graph polynomials P. Our main technical contribution consists in showing
how the Specker-Blatter theorem can be applied in this new context.
-
Stephan Krämer.
The Logic of Ground
Many
philosophers believe that there is an important relation of objective
priority that can obtain between some facts Γ and another fact P, which
they call ground. Roughly, the relation is supposed to obtain when the
latter fact obtains in virtue of the former facts. Standard examples
include: the fact that the ball is scarlet grounds the fact that the
ball is red; the fact that Kant was unmarried and the fact that Kant was
male together ground the fact that Kant was a bachelor; and the fact
that Socrates was a philosopher grounds the fact that Socrates was a
philosopher or a pirate. Some philosophers then put the notion of ground
to heavy philosophical work by claiming, for instance, that all
normative facts are grounded by non-normative facts, or that all mental
facts are grounded by physical facts.
If the notion of ground is
to play this key role in metaphysical theorizing, we had better have a
good understanding of its logic. As it stands, however, the study of the
logic of ground is very much in its infancy. The talk will provide a
sketch of the state of the art in this debate, and a brief overview of
my own work on the topic, focusing on the so-called pure and
propositional logics of ground.
The pure logic of ground
describes the purely structural properties of ground. The orthodox view
on these may be summarized by saying that ground is irreflexive,
transitive, and satisfies an amalgamation principle according to which
the union of any two strict grounds of a fact is also a strict
ground. There are two semantics for the pure logic of ground for which
soundness and completeness results have been obtained. One, by Louis
de Rosset, uses acyclic directed hypergraphs. The other, Kit Fine's
truthmaker semantics, is a form of situation semantics distinguished by
the fact that verification of a sentence by a situation is not subject
to monotonicity.
The propositional logic of ground attempts to
also capture the interaction between ground and conjunction,
disjunction, and negation. The guiding idea behind all existing
proposals is that truthfunctionally complex facts are grounded by their
constituents. Thus, any fact grounds any disjunction in which it occurs,
and any two facts jointly ground their conjunction. As yet, no adequate
semantics for such systems has been formulated, but some of my own work
suggests that one can develop such a semantics by extending Fine's
truthmaker semantics using a suitably understood notion of a mode of
verification. The idea is that if P is grounded by Γ, then any situation
verifying P does so by verifying Γ, so Γ represents one way, or mode,
in which P may be verified. By interpreting a sentence not by a mere set
of verifying situations, but by its modes of verifications, we may
obtain an adequate semantics for a plausible propositional logic of
ground.
- Philipp Lücke.
Forcings that characterize large cardinals
The
relative consistency of many combinatorial principles in set theory can
be established by collapsing a large cardinal to be the successor of a
smaller cardinal. In many important cases, the resulting principle
implies that the relevant successor cardinal has the same large cardinal
properties in some canonical inner model and therefore the large
cardinal assumption was necessary for the consistency proof. We will
consider the question whether certain collapse forcings characterize
large cardinal properties through the validity of combinatorial
principles in their forcing extensions, in the sense that the collapse
forces the principle to hold if and only if the collapsed cardinal
possesses the corresponding large cardinal property in the ground model.
It
is easy to see that the Levy collapse cannot characterize inaccessible
cardinals in this way. In contrast, other canonical collapses can
characterize many important types of large cardinals. For example, the
forcing that first adds a Cohen real and then Levy collapses a cardinal
to be ω2 can characterize inaccessible, Mahlo, weakly compact,
indescribable, measurable, supercompact and huge cardinals. In the case
of larger large cardinals, these characterizations make use of results
of Apter and Hamkins on induced measures in ground models. The
characterization of smaller large cardinals relies on classical
combinatorial characterizations of these cardinals. Since there are no
suitable combinatorial characterizations for indescribable cardinals,
the forcing characterizations of these cardinals rely on new results
that identify these cardinals as the images of the critical points of
certain elementary embeddings.
This is joint work in progress with Peter Holy (Bonn).
-
Luca San Mauro
Classifying Relations via Computable Reducibility
Computable
reducibility provides a natural way of ranking binary relations on
positive integers according to their complexity. This reducibility is
defined as follows: given two binary relations R and S, we say that R
is computably reducible to S iff there is a computable function f such
that, for all x and y, the following holds: xRy if and only if
f(x)Sf(y).
Computable reducibility has been an object of study
for decades, being mostly applied to the case of equivalence relations.
In particular, a prominent problem in the area has been that of
characterizing universal equivalence relations, i.e. relations to which
all other relations of a given complexity can be computably reduced
(see [1]).
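To illustrate the definition with a toy example (decidable relations chosen for illustration, not drawn from the talk): congruence modulo 2 is computably reducible to congruence modulo 4 via the computable function f(x) = 2x, since x ≡ y (mod 2) iff 2x ≡ 2y (mod 4). A short Python sketch checks the biconditional on a finite sample:

```python
def reduces(R, S, f, sample):
    """Check x R y  <=>  f(x) S f(y) on a finite sample of pairs
    (a finite sanity check of the reduction, not a proof)."""
    return all(R(x, y) == S(f(x), f(y)) for x in sample for y in sample)

R = lambda x, y: x % 2 == y % 2   # congruence modulo 2
S = lambda x, y: x % 4 == y % 4   # congruence modulo 4
f = lambda x: 2 * x               # witness: x = y (mod 2) iff 2x = 2y (mod 4)

print(reduces(R, S, f, range(20)))             # True: f is a reduction
print(reduces(R, S, lambda x: x, range(20)))   # False: identity is not
```

The identity map fails because 0 and 2 are congruent modulo 2 but not modulo 4, so a genuinely different witness f is needed.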
In this talk, we address the problem of universality
for a more general context than that of equivalence relations. First,
we prove that, contrary to the case of equivalence relations and
preorders (see [2]), for each level of the arithmetical hierarchy there
is a universal binary relation. Then, we define natural examples of
universal Σ0n binary relations and of universal Π0n
binary relations, obtained by fairly simple manipulations of the most
fundamental set-theoretic relations.
References
[1] Uri Andrews, Steffen Lempp, Joseph S. Miller, Keng Meng Ng, Luca San Mauro, Andrea Sorbi, Universal computably enumerable equivalence relations, The Journal of Symbolic Logic, vol. 79 (2014), no. 1, pp. 60-88.
[2] Egor Ianovski, Russell Miller, Keng Meng Ng, Andre Nies, Complexity of equivalence relations and preorders from computability theory, The Journal of Symbolic Logic, vol. 79 (2015), no. 3, pp. 859-881.
[3] Luca San Mauro, Informal Proofs and Computability, PhD Thesis, SNS, Pisa (2015).
- Klaus Meer
Real interactive proofs
Interactive proofs provide a
powerful tool in complexity theory to verify statements. An almighty prover interacts in several rounds with a
verifier to convince the latter of a statement. The verifier usually is a randomized polynomial time machine and
should detect erroneous proofs with high probability. Shamir's famous result characterizes the class IP of
problems that can be accepted by such interactive proofs as PSPACE or, equivalently, by the class PAR of
problems decidable in parallel polynomial time.
We study interactive proofs in the framework of real number
complexity theory as introduced by Blum, Shub, and Smale. Since space resources alone are known not to make much
sense in real number computations, the question arises whether a real version of IP can be similarly
characterized by real number complexity classes. We shall derive both an upper and a lower bound for the
power of real interactive proofs. In particular, we show that Shamir-like techniques can be applied to give
interactive protocols for problems that can be solved in real parallel polynomial time. This is joint
work with M. Baartse.
- Hugo Nobrega
Tree games for Baire classes and partition classes
We report on ongoing work for characterizing subclasses of Borel functions in Baire space by games.
Among
previously known results are Wadge's game for continuous functions,
Duparc's game for Baire class 1 functions, Andretta's game for the class
of functions preserving Σ02 under preimages, and Semmes's games
for Baire class 2 functions, for the class of functions preserving
Σ03 under preimages, for the class of functions pulling back
Σ02 to Σ03 under preimages, and for the class of Borel
functions itself.
In this work, we characterize the Baire class
α functions---i.e., those functions which pull back Σ01 to
Σ0α + 1 under preimages---via a game for each fixed countable
ordinal α. This game is obtained by first relaxing Semmes's
labeled tree game for Borel functions via a notion of bisimilarity of
labeled trees common in computer science. We then restrict the
relaxed labeled tree game in terms of two measures of complexity of
trees which we introduce, called the pruning rank and the finitely
branching rank of a tree. Wadge's and Duparc's games can be seen as
particular cases of our games, but interestingly Semmes's game for Baire
class 2 is significantly different from the one we obtain.
Furthermore,
given a subclass Γ of Borel functions which is characterized by a
restriction of the labeled tree game and which satisfies a certain weak
assumption, we obtain a game characterization of the class of functions
which are piecewise Γ on a countable Σ0α partition of Baire
space---i.e., the class of functions f for which there exists a
partition 〈An ; n ∈ ω〉 of Baire space into Π0α
sets such that the restriction of f to An is in Γ, for each n. Again,
this game is obtained by a suitable restriction of the labeled tree
game, and Andretta's game and Semmes's games for the class of functions
preserving Σ03 under preimages and for the class of functions
pulling back Σ02 to Σ03 under preimages are particular cases
of our construction.
- Stephane Le Roux,
Arno Pauly and Jean-Francois Raskin
On the decidability of Minkowski-games
Motivated
by control theory, we introduce Minkowski games as a notion of games
played in a Euclidean vector space. The game state is a vector. The
protagonist chooses from a finite list of subsets. Then the antagonist
picks the new state from the Minkowski sum of the previous state and
the chosen subset.
We consider two types of winning conditions,
and show that while both are determined, the winner is decidable for one
but not for the other.
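A single round of such a game can be sketched as follows (a toy finite example in the plane; the particular state and move sets are illustrative assumptions, not taken from the abstract):

```python
from itertools import product

def minkowski_sum(A, B):
    """Minkowski sum of two finite sets of vectors, given as tuples:
    { a + b : a in A, b in B } with componentwise addition."""
    return {tuple(x + y for x, y in zip(u, v)) for u, v in product(A, B)}

# One round of a toy Minkowski game in the plane:
state = (0, 0)
moves = [{(1, 0), (0, 1)}, {(-1, -1)}]  # protagonist's finite list of subsets

chosen = moves[0]                            # protagonist chooses a subset
successors = minkowski_sum({state}, chosen)  # antagonist picks the next state here
print(sorted(successors))                    # [(0, 1), (1, 0)]
```

The antagonist's freedom to pick any element of the Minkowski sum is what makes the winning conditions non-trivial to decide.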
-
Benjamin Rin
Realizability Semantics for Quantified Modal Logic
In
this talk, a non-Kripkean semantics for quantified modal logic is
presented that is based on Kleene's notion of realizability. This
semantics generalizes Flagg's 1985 construction of a model of a modal
version of Church's Thesis together with Shapiro's system of first-order
modal arithmetic (known as Epistemic Arithmetic). While the bulk of the
talk is devoted to developing the details of the semantics, to
illustrate the scope of this approach we observe that the construction
produces (i) a model of a modal version of Church's Thesis and a variant
of a modal set theory due to Goodman and Scedrov, (ii) a model of a
modal version of Troelstra's generalized continuity principle together
with a fragment of second-order arithmetic, and (iii) a model based on
Scott's graph model (for the untyped lambda calculus) which witnesses
the failure of the stability of non-identity. Of these, the talk will
focus primarily on (i).
-
Luis Rosa
A framework for testing the relevance of logic to reasoning
Much
has been discussed lately about the relevance of logic to the
normativity of human reasoning (both in the epistemology literature and
in the cognitive psychology literature). In this presentation, we will
flesh out a semantic framework that allows us to test whether and to
what extent certain logical systems are normatively binding with respect
to human reasoning. The framework uses families of models that
represent the cognitive situations of reasoners. It is then possible to
change the reasoner's inferential accessibility relations in accordance
with different logics, which will give rise to alternative verdicts as
to what is rational for the reasoner to believe or doubt. As we expect
to show, logic can be relevant to the normativity of human reasoning
after all -- but only if we accept a certain form of logical pluralism.
-
Philipp Schlicht
Random reals and infinite time Turing machines
We
prove an analogue to Sacks' theorem for infinite time Turing machines:
if a real is infinite time Turing computable relative to all reals in a
set of positive measure, then it is infinite time Turing computable. To
this end, we prove results about random forcing over admissible sets and
several other results about infinite time machines. This is joint work
with Merlin Carl.
- Julian Schlöder
Classical Negation and Weak Rejections
It
is an old debate in logic whether we should admit the principle of
double negation elimination (DNE) or not -- whether we can be classical
logicians or should be intuitionists. Gentzen (1934) proposed to explain
the meaning of the logical connectives through (harmonious)
introduction and elimination rules. Dummett (1991) argues that this
licenses intuitionism, but not classical logic. A defence of classical
negation due to Smiley (1996) and Rumfitt (2000) goes as follows.
Classical negation can be axiomatised in bilateralist logics that track
both asserted and rejected content; they write +A for an assertion of A
and −A for a rejection. Bilateralism faces some challenges:
(a)
Dummett (2002) finds Rumfitt's defence of DNE almost question-begging,
since he uses − as a sign of falsity and stipulates to eliminate
rejected negations in the rule (−¬E): −¬A ⊢ +A; (b) Dickie (2010)
criticises that bilateralists infer negated assertions from rejections;
she provides examples of so-called weak rejections to show that this is
unwarranted in general; (c) the Frege-Geach embedding problem, first
mentioned in Frege 1919: negation can embed, but rejection cannot.
I
present a weak rejection logic (WRL) that faces these problems head on
(joint work with Luca Incurvati). All rules from Rumfitt 2000 that are
invalid for weak rejections can be removed or appropriately weakened. To
solve the Frege-Geach problem, I show that a single principle is
sufficient: the Smilean inference +(¬ A → ¬ B), − A ⊢ − B. This rule has
been defended by Smiley (1996) and I provide further linguistic evidence
supporting it, building on Grice's (1991) notion of unassertability. I
show that, given the other bilateralist inference rules, Smilean
inference is equivalent to a weakened version of Rumfitt's (−¬E) that
is still sufficient to derive +(¬¬A → A). Thus, it is a natural
alternative to the rule Dummett finds objectionable.
WRL has a
number of interesting metalogical properties. First, it justifies
classical logic in the following sense. For each classical tautology φ,
it verifies that +φ is a WRL-tautology and that for a classically valid
inference φ ⊢ ψ it holds that +φ ⊢ +ψ. Thus, WRL restricted to +
is classical logic. Second, however, this does not hold for rejections;
most notably, a rejected premise breaks the Deduction theorem. Third,
WRL embeds into KD45 modal logic as follows: there is a natural mapping f
of the bilateralist language into the language of public commitment
modal logic (Asher and Lascarides, 2008). Then, WRL is sound and
complete for KD45 in the following sense: Γ ⊢ ± φ in WRL iff f [Γ] ⊨
f (± φ) in KD45.
References:
- Nicholas Asher and Alex Lascarides. Commitments, beliefs and intentions in dialogue. In SemDial 2008, 2008.
- Imogen Dickie. Negation, anti-realism, and the denial defence. Philosophical Studies, 150(2):161-185, 2010.
- Michael Dummett. 'Yes', 'No' and 'Can't say'. Mind, 111(442):289-295, 2002.
- Michael A. E. Dummett. The logical basis of metaphysics, volume 5. Harvard UP, 1991.
- Gottlob Frege. Die Verneinung: Eine logische Untersuchung. Beiträge zur Philosophie
des deutschen Idealismus, 1:143-157, 1919.
- Gerhard Gentzen. Untersuchungen über das logische Schließen. Mathematische Zeitschrift, 39, 1934.
- H Paul Grice. Studies in the Way of Words. Harvard University Press, 1991.
- Ian Rumfitt. 'Yes' and 'No'. Mind, 109(436):781-823, 2000.
- Timothy Smiley. Rejection. Analysis, 56(1):1-9, 1996.
- David Schrittesser
Maximal cofinitary groups
A
group of permutations of the set of natural numbers is cofinitary iff
the only group element with infinitely many fixed points is the
identity. A cofinitary group is maximal iff it is maximal under
inclusion among such groups. Can a maximal cofinitary group be definable? Kastermans showed that
in L, the answer is yes. In recent joint work with V. Fischer and A.
Törnquist, we showed that the same holds after adding arbitrarily many
Cohen reals to L.
- Andrei Sipos
Proof mining and positive-bounded logic
Proof
mining is a research program introduced by U. Kohlenbach in the 1990s,
which aims to obtain explicit quantitative information (witnesses and
bounds) from proofs of an apparently ineffective nature. This paradigm
in applied logic has so far successfully led to some
previously unknown effective bounds, primarily in nonlinear analysis and
ergodic theory. A large number of these are guaranteed to exist by a
series of logical metatheorems which cover general classes of bounded or
unbounded metric structures.
In order to apply these
metatheorems, the structures are typically formalized in higher-order
systems of arithmetic and analysis, using appropriate codings of real
numbers and related operations. The classes for which metatheorems have
already been proven include normed spaces and hyperbolic spaces.
Recently, Günzel and Kohlenbach have shown that, in principle, one
could obtain metatheorems for a plethora of classes of structures,
provided that they are formalized in positive-bounded logic (in the
sense of Henson and Iovino) and that some preparation of the axioms is
undertaken beforehand. We aim to extend this kind of result to other
classes of spaces formalizable in positive-bounded logic and to derive
concrete applications from existing and relevant prima facie
noneffective theorems for those spaces.
-
Florian Steinberg
Representations of spaces of integrable functions
We consider different ways to compute on spaces of integrable functions.
To make elements of function spaces accessible to digital computers, they can be encoded by string functions. Carrying
out computations on the function spaces can then be done by operating
on the codes, that is, computing functions on Baire space.
The framework of second order representations is a formalization of this devised by Kawamura and Cook.
The name "second order representation" is due to the use of second order complexity theory on Baire space.
Computationally it is equivalent to TTE, the accepted model of computation over continuous structures.
In
particular Kawamura and Cook investigated the space of continuous
functions on the unit interval and succeeded in showing that there is a
weakest second order representation such that evaluation is possible in
polynomial time. We use this result as a starting point for generalizations.
In
a first step we specify the weakest representation of the space of
integrable functions on the unit interval that renders integration
polynomial time computable and show that it is discontinuous with
respect to both the norm and the weak topology.
This is in contrast to the continuous functions where the norm is polynomial space computable.
We proceed to strengthen the representation enough to make it possible to compute the norm in polynomial space.
This necessitates talking about integral moduli. Integral
moduli can be understood to classify integrable functions by their
approximability by dyadic step functions, just like the modulus of
continuity does for continuous functions. This is reflected in the
Fréchet-Kolmogorov theorem, an Arzelà-Ascoli-style classification
theorem for the compact subsets of spaces of integrable functions.
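As a numerical illustration of approximation by dyadic step functions (a hypothetical sketch, not part of the talk), one can replace a function on [0,1] by its averages on dyadic intervals and watch the L1 error shrink as the level grows:

```python
def dyadic_step(f, n, x):
    """Value at x of the level-n dyadic step approximation of f:
    the (numerically estimated) average of f on the dyadic interval
    [k/2^n, (k+1)/2^n) containing x."""
    k = min(int(x * 2 ** n), 2 ** n - 1)  # index of the interval containing x
    a, b = k / 2 ** n, (k + 1) / 2 ** n
    m = 100  # midpoint samples used to estimate the average on [a, b)
    return sum(f(a + (b - a) * (i + 0.5) / m) for i in range(m)) / m

def l1_error(f, n, samples=2000):
    """Riemann-sum estimate of the L1 distance between f and its
    level-n dyadic step approximation over [0, 1]."""
    xs = [(i + 0.5) / samples for i in range(samples)]
    return sum(abs(f(x) - dyadic_step(f, n, x)) for x in xs) / samples

f = lambda x: x ** 0.5  # an integrable (indeed continuous) test function
print(l1_error(f, 2) > l1_error(f, 5))  # True: the error shrinks with the level
```

How fast this error decays as a function of the level is exactly the kind of quantitative information an integral modulus records.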
Finally
we use the strong counter examples provided by Friedman and Ko to show
that certain operators cannot be expected to be rendered polynomial
time computable by any reasonable representation. Furthermore we
point out a surprising connection between the complexity of integrating a
continuous function and multiplying integrable functions whose product
is well defined.
-
Sarka Stejskalova
The tree property and the continuum function
We
will review some of the more recent results we have obtained jointly
with Sy-David Friedman and Radek Honzik regarding the relationship
between the tree property and the continuum function. We will discuss
the following key areas:
(a) The possibility of obtaining a
strong limit cardinal κ with 2κ (arbitrarily) large and
with the tree property at κ++. The possibility of having
κ = ℵω in the previous result.
(b) The
possibility of obtaining the results in (a) from more optimal
large-cardinal assumptions (supercompacts vs. strong cardinals of low
degree).
(c) The possibility of obtaining a model where the
continuum function below a strong limit ℵω is as arbitrary
as possible with the tree property holding at (some/all) cardinals
ℵn, 1 < n < ω.
This work is a part of my PhD project.
-
Jan Walker
Axiomatizing Finitist Truth
Let
S be a theory in a language L. Define Ref(S) to be the theory in L + {T}
which is obtained by adding to S: firstly, for all formulae in which the
new predicate symbol T occurs, the corresponding substitution instances
of axiom schemata and rules of S; secondly, axioms to the effect that T
denotes partial truth as outlined by Saul Kripke [2]. In a seminal
article, Solomon Feferman [1] proposed to identify the statements in L
which are derivable in Ref(S) as the statements in L one ought to accept
if one has accepted S.
My talk is inspired by Feferman's
approach, but takes a different direction. It starts off with the
assumption that only theorems of finitist (primitive recursive)
arithmetic ought to be accepted and aims at answering the question: To
what extend can one reason about truth under this general assumption? To
be more specific, I will address truth-theoretic questions such as: How
to adapt the axioms for Kripke's notion of partial truth to what is
acceptable from the finitist perspective? How to devise finitist
variants of other predominant axiomatizations of truth to be found in
the literature? Is there a core feature shared by all these finitist
variants of axiomatizing truth?
In the second part of my talk, I will
turn to a proof-theoretic analysis of (Turing-like) progressions in
which reflection (or formal soundness) principles are successively added
to theories in a language of typed truth, where each type level has its
own truth predicate axiomatized in accordance with finitist standards.
References
[1] Solomon Feferman, Reflecting on incompleteness, The Journal of Symbolic Logic, vol. 56 (1991), no. 1, pp. 1-49.
[2] Saul Kripke, Outline of a theory of truth, The Journal of Philosophy, vol. 72 (1975), no. 19, pp. 690-716.
- Gunnar Wilken.
Gödel's T as a case study for ordinal assignment
The
variant of Gödel's system T of primitive recursive functionals of
finite type which is based on typed lambda-calculus can be seen as a
paradigm for higher order rewrite systems. In an article joint with
A.Weiermann (LMCS 2012) T was analyzed using an assignment technique
extending Howard's original assignment from 1970.
In my talk I am
going to explain two aspects of this technique: the non-unique
assignment needed in the argument, which goes back to Howard's original
treatment, and Weiermann's collapsing technique, which allowed for a
derivation lengths classification of T.
-
Wolfgang Wohofsky and Jörg Brendle.
Borel's conjecture for the Marczewski ideal
We will show the following ZFC theorem, which is recently finished joint work with Jörg Brendle: there
is no set of size continuum which is "s0-shiftable", i.e., which can
be translated away from every set in the Marczewski ideal s0 (where a set of reals is in s0
if for every perfect set there is a perfect subset disjoint from it). We will concentrate on the case of regular continuum, since the proof is easier in this case.
The
theorem is very much in contrast to the respective situation when s0
is replaced by the meager ideal: there are models (e.g., all models that
satisfy CH) with large meager-shiftable (i.e., strong measure zero)
sets.
Our original proof dealt with the reals in the sense of the
Cantor space 2ω. However, it can be generalized to other Polish
groups.