Abstracts.
Fine-grained Mathematical Justifications
Jesse Alama
From
a corpus of formalized mathematical knowledge (definitions, theorems,
and proofs), one can extract fine-grained information about what
principles are sufficient for certain linguistic and justificatory
tasks: for certain expressions to be well-defined, for a theorem to
be a well-formed formula, and for a proof to be successful. That they
are sufficient is clear: the proof checker has accepted a proof
without error. We report on some initial experiments in the other
direction: computing necessary principles for mathematical
knowledge. This task is evidently of a different character from
simply checking a proof. We describe our work on decomposing texts
in the Mizar proof-checking system to extract suitable information
for computing necessary principles.
An Empirical Perspective on Logical Constants
Denis Bonnay
The project of delineating a special class of logical constants is usually pursued from
the perspective of "pure" philosophy of logic. In this respect, it is not clear whether
being a logical constant constitutes a natural kind. One might rather think that the
joint
characterization of logical consequence and of logical constancy is nothing but the
output of a certain reflective equilibrium involving general theoretical considerations
as well as particular judgments about validity.
However, from a broader linguistic and cognitive perspective, logical constants do seem
to belong to a natural kind, namely the class of functional expressions in natural
languages. Functional words are grammatical expressions that "glue" the different
constituents of a sentence together, and they share a large number of
linguistic and psycholinguistic properties (e.g., functional categories
are not productive, psycholinguistic evidence suggests that access to
functional words differs from access to lexical words, etc.).
In this talk, my aim will be to clarify the connections between the
logico-philosophical
project of providing a principled characterization of the class of logical constants
and
the empirical project in linguistics of studying functional words and functional
categories as such. Tentatively, I will suggest that the hypothesis of
the innateness of grammatical notions and the hypothesis that (a
generalized version of) permutation invariance is a distinctive property
of logical constants provide mutual support to each other.
Multiplicative Quantifiers in First-Order Fuzzy Logics
Petr Cintula
Mathematical
fuzzy logics can be viewed as a special class of substructural logics,
and the most prominent ones lack the rule of contraction. There are
therefore two conjunctions: an "additive" conjunction ∧ (also called
lattice) and a "multiplicative" conjunction & (also called
residuated). Informally speaking, additive conjunction allows using
either (or any) of its conjuncts as a premise for further inference,
while multiplicative conjunction allows using both (or all)
conjuncts. There is a long-standing problem of extending this
distinction to first-order fuzzy logics, which should analogously
contain an additive universal quantifier ("any") and a multiplicative
one ("all"): while the additive quantifier can easily be defined
(semantically as the infimum in the lattice of truth values and
axiomatically by following Hájek's original approach), the same
cannot easily be done for the multiplicative quantifier. Formally, a
multiplicative quantifier should satisfy the following natural
conditions:
- from φ → ψ infer (Qx)φ → (Qx)ψ,
- from φ infer (Qx)φ,
- (Qx)φ → &{ φ(t) : t ∈ M } for each finite multiset M of terms.
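To make the additive/multiplicative contrast concrete, here is a minimal numeric sketch in standard Łukasiewicz [0,1] semantics (my own choice of illustrative logic; the abstract does not fix one):

```python
# Additive vs. multiplicative conjunction in Lukasiewicz [0,1] semantics.
def add_conj(a, b):
    """Additive (lattice) conjunction: the minimum of the two values."""
    return min(a, b)

def mult_conj(a, b):
    """Multiplicative (residuated) conjunction: the Lukasiewicz t-norm."""
    return max(0.0, a + b - 1.0)

a = 0.7
# Without contraction, using a premise twice is not free:
print(add_conj(a, a))             # 0.7 -- phi ^ phi keeps the value of phi
print(round(mult_conj(a, a), 2))  # 0.4 -- phi & phi "spends" phi twice
```

The gap between the two values is exactly what the multiplicative quantifier must track over arbitrarily many instances, which is why no infimum-style definition works for it.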
Relevant Agents
Marta Bílková, Ondrej Majer, Michael Peliš, Greg Restall
In [4], Majer and Peliš
proposed a
relevant logic for epistemic agents, providing a novel extension of the
relevant logic R with a distinctive epistemic modality K, which is
at one and the same time factive and an existential normal modal
operator. The intended interpretation is that Kφ holds (relative to a
situation s) if there is a resource available at s,
confirming φ. In this article we expand the class of models to the
broader class of "general epistemic frames". With this generalisation we
provide a sound and complete axiomatisation for the logic of general
relevant epistemic frames. We also show that each of the modal axioms
characterises a natural subclass of general frames.
A new approach to fuzzy logics with truth stressers and depressers
Francesc Esteva, Lluís Godo, Carles Noguera
A
number of papers have considered fuzzy logics with truth hedges, as
unary connectives (vt and st, for very true and
slightly true), which allow one to stress or depress the truth value of
any given proposition. The equivalent algebraic semantics for these
logics turned out to be, in all cases, a variety, i.e. an equational
class of algebras. This nice result was obtained at the cost of
adding the axiom vt(φ→ψ) → (vt(φ) → vt(ψ))
(and analogously for st). This amounts to a strong
restriction on the algebraic semantics, one that has no natural
interpretation. For instance, it implies that over Łukasiewicz logic
the only possible non-Boolean function to interpret a depressing
hedge is the identity function. In this talk we will generalize the
approach of these previous works, obtaining weaker fuzzy logics that
overcome this drawback. Given a core fuzzy logic L we
define an expansion Lh with a new unary
connective h defined by the following additional axioms in the
case of a truth stresser:
(VTL1) hφ→φ
(VTL2) h1,
or the following axioms in the case of a truth depresser:
(STL1) φ→ hφ
(STL2) ~h0
and, in both cases, the following additional inference rule:
(MON) from (φ→ψ) ∨ χ infer (hφ→hψ) ∨ χ.
From
this presentation one can easily prove that the logic is complete
with respect to a semantics of linearly ordered algebras where hedges
are interpreted as any subdiagonal (superdiagonal) monotonic
function mapping the maximum (minimum) element to itself.
Moreover, if L extends BL we can prove that the corresponding
class of algebras is a variety. Finally we show these expansions with
h preserve standard completeness properties, i.e. if L
is complete with respect to chains defined over the real unit
interval, then so is Lh.
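As a quick sanity check of the intended semantics on the standard [0,1] chain, a subdiagonal monotone function fixing the maximum, such as h(x) = x² (my own illustrative choice), validates the stresser conditions pointwise:

```python
# Check that h(x) = x**2 behaves as a truth stresser on [0,1]:
# subdiagonal (VTL1: h(x) <= x), h(1) = 1 (VTL2), and monotone (MON).
def h(x):
    return x * x

grid = [i / 100 for i in range(101)]            # sample points in [0,1]
assert all(h(x) <= x for x in grid)             # VTL1: h(phi) -> phi
assert h(1.0) == 1.0                            # VTL2: h(1) = 1
assert all(h(x) <= h(y)                         # MON: h is order-preserving
           for x in grid for y in grid if x <= y)
print("h(x) = x^2 is a subdiagonal monotone stresser fixing 1")
```

A superdiagonal counterpart, e.g. the square root, would play the analogous role for a truth depresser.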
Towards an experimental philosophy of uncertain reasoning
Niki Pfeifer
Experimental
philosophy is a recent trend in philosophy that applies empirical
methods to investigate philosophical intuitions. The main research
topics include intuitions on morality, consciousness, epistemology
and causation. The aim of my talk is to extend the domain of
experimental philosophy to uncertain reasoning. Specifically, I
critically survey previous philosophical and empirical work on
nonmonotonic reasoning and uncertain conditionals. I discuss how
"armchair philosophy" and experimental work can fruitfully
interact and illustrate my position with recent experimental results
on how people interpret and reason with conditionals.
A Simulation Based Analysis of Logico-Probabilistic Reasoning Systems
Paul Thorn and Gerhard Schurz
Systems
of logico-probabilistic (LP) reasoning characterize inference
from conditional assertions that are taken (semantically) to express
high conditional probabilities. There are several existent LP
systems. These systems differ in the number and type of inferences
they license. An LP system that licenses a greater number of
inferences offers the opportunity of deriving more true, informative
conclusions. But with this possible reward comes the risk
of drawing more false conclusions. By means of computer
simulations, we investigated four well known LP systems, systems O,
P, Z and QC, with the goal of determining which
system provides the best balance of reward versus risk.
In this talk, we explain why each of the four systems (O, P,
Z and QC) is a prima facie contender to be the
correct prescriptive theory of LP reasoning. We then present data
which suggests that (of the four systems) system Z has the
best claim to be the correct prescriptive theory of LP reasoning,
since it offers the best balance of reward versus risk.
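The reward/risk trade-off can be illustrated with a toy Monte Carlo sketch of my own (not the authors' actual experimental setup): sample random probability distributions over three atoms and check how often an aggressive rule such as transitivity, which the probabilistically cautious system P famously refuses to license, carries high-probability premises to a low-probability conclusion:

```python
import itertools
import random

def random_dist(rng, atoms=3):
    """A random probability distribution over the 2**atoms truth-value worlds."""
    worlds = list(itertools.product([False, True], repeat=atoms))
    weights = [rng.random() for _ in worlds]
    total = sum(weights)
    return {w: x / total for w, x in zip(worlds, weights)}

def cond_prob(dist, consequent, antecedent):
    """P(consequent | antecedent), or None if the antecedent has probability 0."""
    pa = sum(p for w, p in dist.items() if antecedent(w))
    if pa == 0:
        return None
    pab = sum(p for w, p in dist.items() if antecedent(w) and consequent(w))
    return pab / pa

def transitivity_risk(threshold=0.9, trials=5000, seed=0):
    """Among sampled distributions where P(B|A) and P(C|B) clear the
    threshold, count how often the transitive conclusion P(C|A) fails it."""
    rng = random.Random(seed)
    A, B, C = (lambda w: w[0]), (lambda w: w[1]), (lambda w: w[2])
    applicable = failures = 0
    for _ in range(trials):
        dist = random_dist(rng)
        pba, pcb = cond_prob(dist, B, A), cond_prob(dist, C, B)
        if pba is not None and pcb is not None and min(pba, pcb) >= threshold:
            applicable += 1
            pca = cond_prob(dist, C, A)
            if pca is None or pca < threshold:
                failures += 1
    return applicable, failures

applicable, failures = transitivity_risk()
print(f"premises held in {applicable} samples; conclusion failed in {failures}")
```

A system licensing transitivity gains a conclusion in every applicable sample (the reward) at the cost of the failing cases (the risk); the talk's comparison of O, P, Z, and QC quantifies this trade-off far more carefully.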
Faithfulness in formal modelling
Sara Uckelman
There
is a strong analogy between the process of mathematical modeling used
in science and engineering and the process of modeling used in the
construction of formal models for historical logical theories, and
this can be contrasted with the use of philosophical modeling found
more generally in mainstream philosophy, such as the use of thought
experiments. Both processes are primarily descriptive, and only
derivatively prescriptive, because what is being modeled are
objective facts in the world, as opposed to, e.g., our pre-theoretic
intuitions. The descriptive nature of the process gives rise to the
question of faithfulness: When are we allowed to say that our logical
model is a faithful representation of the historical logical theory?
We
discuss two different ways that a model or a description can be
faithful: It can be faithful to the content and it can be faithful to
the context. We focus on faithfulness to content, and discuss two
benchmarks that can be used to determine the degree of faithfulness
of a model: its level of generation and what we call structural or
procedural agreement. We illustrate this discussion with examples of
both good and bad logical models of historical theories: The
ontological argument of Anselm of Canterbury, the formalization of
Aristotelian syllogistics in first-order logic, and models of
medieval theories of obligationes.
Be tolerant about vagueness, because it is unavoidable!
Robert van Rooij
Vagueness
is a pervasive feature of natural language. Although logicians don't
like it because it gives rise to the (sorites) paradox, there is
hardly a term in natural language
that is not vague. To save language from paradox, most logicians have
proposed that natural language terms are not tolerant: the tolerance
principle, which states that `if x has property P and y is
indistinguishable from x, then y has to have property P as well', is
given up. But this is unfortunate, because the tolerance
principle seems to be constitutive of what it means to be vague. In
the first part of this
talk
it will be proposed that we should face the paradox by accepting
tolerance, rather than simply avoiding it. A new logical treatment
will be presented and defended.
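To see why tolerance, read classically, is paradoxical, here is a toy sorites slide of my own construction (the "tall" threshold and the step size are arbitrary illustrative choices):

```python
# Sorites slide: start from a clear case of "tall" and apply the
# tolerance principle classically, one indistinguishable step at a time.
def sorites_chain(start_cm=200, end_cm=150, step_cm=1):
    """Return the chain of heights that classical tolerance declares tall."""
    tall = [start_cm]                # 200 cm is clearly tall
    while tall[-1] > end_cm:
        # Tolerance: a 1 cm difference is indistinguishable, so the
        # next height down must count as tall too -- and this step
        # re-applies without end.  That is the paradox.
        tall.append(tall[-1] - step_cm)
    return tall

chain = sorites_chain()
print(f"classical tolerance forces 'tall' down to {chain[-1]} cm "
      f"in {len(chain) - 1} steps")
```

Rejecting tolerance blocks this slide but, as the abstract argues, at the cost of denying what seems constitutive of vagueness; the proposed logic instead keeps each single step while blocking the chained conclusion.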
In
the second part of the talk we seek to explain why vagueness is such
a pervasive feature of natural language in the first place. Making
use of evolutionary game theory and a new stochastic equilibrium
concept, it will be shown that vagueness is unavoidable once we take
our agents to be realistic, boundedly rational agents.