Sunday evening, 10 November 2013
19:00 Get-together and dinner at Hotel Steinsgarten
Monday morning, 11 November 2013
09:00 Welcome and opening remarks
09:15 Hans Rott: A Guided Tour Through Belief Dynamics (Abstract)
10:15 Gerhard Brewka: Handling Exceptions in Knowledge Representation: A Brief Introduction to Nonmonotonic Reasoning (Abstract)
11:15 Coffee break
11:45 Markus Knauff: Belief revision and Non-monotonic Reasoning in Cognitive Psychology (Abstract)
Monday afternoon, 11 November 2013
14:00 Andreas Glöckner: What is adaptive about adaptive decision making? (Abstract)
15:00 Gerhard Schurz: Realistic Belief Revision: Incremental Learning and Failure of Levi-Identity (Abstract)
15:30 Laura Martignon and Keith Stenning: The intensional origin of likelihoods in nonmonotonic reasoning (Abstract)
16:30 Coffee break
17:00 Matthias Unterhuber: Belief Revision and Weak Conditional Logics (Abstract)
17:30 Gerhard C. Bukow: Is the liar a problem for the revision debate? (Abstract)
Tuesday morning, 12 November 2013
10:00 Wolfgang Spohn: Why Ranks? A Very Brief Introduction into Ranking Theory (Abstract)
10:30 Gabriele Kern-Isberner: Multiple and iterated belief revision (Abstract)
11:30 Coffee break
12:00 Stefan Wölfl: Revision of spatial knowledge bases (Abstract) CANCELED; the following talks start 30 minutes earlier!
12:30 Steffen Hölldobler: A New Computational Model for Human Reasoning (Abstract)
13:00 Estefania Gazzo: Reasoning with legal conditionals (Abstract)
14:00 End of Workshop
Abstracts in the Order of Presentation
Hans Rott: A Guided Tour Through Belief Dynamics
This talk gives a survey of normative accounts of belief change developed within logico-philosophical research in this area. I start with the core ideas of Alchourrón, Gärdenfors and Makinson's classical theory of belief contraction and revision of the 1980s, continue with multiple and iterated belief change, present a choice-theoretic interpretation of the problem of belief change, and briefly address two more recent developments, viz. descriptor revision (Hansson) and two-dimensional belief change (Rott).
Gerhard Brewka: Handling Exceptions in Knowledge Representation: A Brief Introduction to Nonmonotonic Reasoning
We present some of the most influential approaches to nonmonotonic reasoning, in particular Poole systems and their extensions, McCarthy's circumscription and Reiter's default logic. We focus on the main underlying ideas. We also briefly sketch how the field has developed over the last three decades, and where the main research activities currently lie.
Markus Knauff: Belief revision and Non-monotonic Reasoning in Cognitive Psychology
How people change their opinions over time, or in the light of new information that conflicts with their current beliefs, is one of the most fascinating questions of psychology. In daily life, the underlying processes are highly complex and affected by several cognitive, emotional, motivational, and social factors. Given this complex interplay of many factors, one might be surprised that cognitive psychologists primarily use experimental paradigms in which, broadly speaking, an additional premise can lead to the suppression or withdrawal of a derived conclusion. However, cognitive psychologists adopt this approach for many reasons and indeed, by using this approach, have been quite successful in understanding some important aspects of human belief revision and non-monotonic reasoning. I will give an overview of these insights and show how they are related to opinion change in everyday life.
Andreas Glöckner: What is adaptive about adaptive decision making?
There is broad consensus that human cognition is adaptive. However, the vital question of how exactly this adaptivity is achieved has remained largely open. We contrast two frameworks which account for adaptive decision making, namely broad and general single-mechanism accounts versus multiple-strategy accounts. We propose and fully specify a single-mechanism model for decision making based on parallel constraint satisfaction processes and contrast it theoretically and empirically against a multiple-strategy account. To achieve sufficiently sensitive tests, we rely on a multiple-measure methodology including choice, reaction time, and confidence data as well as eye-tracking. Results show that manipulating the environmental structure produces clear adaptive shifts in choice patterns, as both frameworks would predict. However, results on the process level (reaction time, confidence), in information acquisition (eye-tracking) and from cross-predicting choice consistently corroborate single-mechanism accounts in general, and the proposed parallel constraint satisfaction model in particular.
Gerhard Schurz: Realistic Belief Revision: Incremental Learning and Failure of Levi-Identity
In the first part of the talk I argue that theories of belief revision of the AGM type say little about the cognitive process of belief revision and, where they do, make unrealistic assumptions, e.g. the assumption that no learning is involved in data-driven revision processes. In the second part I sketch models of belief revision that incorporate learning. Since revisionary learning methods are incremental, the Levi identity is violated.
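For reference, the Levi identity mentioned in the title defines revision * in terms of contraction and expansion:

```latex
K * \varphi \;=\; (K \div \neg\varphi) + \varphi
\qquad\text{i.e.}\qquad
K * \varphi \;=\; \mathrm{Cn}\bigl((K \div \neg\varphi) \cup \{\varphi\}\bigr)
```

Incremental learning methods do not, in general, factor a revision into a contraction step followed by an expansion step, which is why this identity fails for them.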
Laura Martignon and Keith Stenning: The intensional origin of likelihoods in nonmonotonic reasoning
This talk grows out of an argument that the field of judgment and decision making has an understandable but unfortunate tendency to assume that rationality has to be explicated in extensional systems, e.g. classical logic or probability. We argue that this is mistaking the theory of how rational action should be justified for the theory of rational action, loosely analogous to mistaking the police for morality.
The present talk presents an experiment descended from [Cummins, 1995] which reinterprets her findings, and our extensions, as derived from the workings of logic programming (LP) nets. LP is a radically intensional system, and if these findings are substantiated, they provide a source of broadly based frequency estimation by subjects. They put an intensional system back at the centre of rational action.
One question then becomes, what do we need probability theory for?
Matthias Unterhuber: Belief Revision and Weak Conditional Logics
Belief change and revision are central notions of AGM belief revision as well as of accounts of so-called epistemic conditionals, that is, conditionals which we evaluate relative to our epistemic states. While it seems plausible to combine both approaches, Gärdenfors has famously shown that we can account for conditionals in an AGM belief revision framework only on pain of triviality, at least when we adopt a full Ramsey test account of conditionals. While Gärdenfors and others argue that we should thus reject an account of conditionals based on the (full) Ramsey test, one can equally well endorse the less popular strategy and argue that we should rather reject some of the postulates of AGM belief revision.
In my talk I will pursue the latter approach. I will first give a Ramsey test interpretation of Chellas-Segerberg semantics, a weak modal conditional logic semantics. I will then inquire which conditional logics ensue when we translate the individual AGM postulates into such a weak conditional logic framework and whether some of these translations seem objectionable from a conditional logic perspective.
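For orientation, the (full) Ramsey test at issue here connects conditionals and revision by:

```latex
A > B \in K \quad\Longleftrightarrow\quad B \in K * A
```

Gärdenfors's triviality result shows that this equivalence, combined with the full set of AGM postulates, can hold only in trivial belief revision systems.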
Gerhard C. Bukow: Is the liar a problem for the revision debate?
The liar is a classical paradox of circularity: (P) Agent X believes that "P is false". Is P now true or false? What should X believe? Typically, circularity produces sequences of revisions of X's beliefs, whereby X believes that P, then believes that not-P, then believes that P, and so on. X tries to revise his beliefs continually. The liar is understood as an instance of belief instability, and there are different proposals trying to describe the unstable sequence of revisions. I argue that none of these proposals captures the instability in a normatively adequate way. A main concern is the lack of a decision procedure for deciding between different ways of handling the liar paradox. Further, I argue that an adequate way to handle the liar's revision must be a constructive one.
Wolfgang Spohn: Why Ranks? A Very Brief Introduction into Ranking Theory
Ranking theory offers a complete and most fruitful format for modeling epistemic states, and indeed for modeling them in such a way that they hold beliefs, have reasons, etc. This format is therefore interesting, if not obligatory, for anyone dealing theoretically with epistemic states. The talk will explain some basics of ranking theory in a way intelligible to non-insiders.
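By way of illustration (a minimal sketch with hypothetical world names and ranks, not the speaker's material), a negative ranking function and the derived notions of belief and conditional rank fit in a few lines:

```python
# A negative ranking function assigns each possible world a grade of
# disbelief; the most plausible worlds get rank 0.

worlds = {"w1": 0, "w2": 1, "w3": 2}  # illustrative ranks; minimum must be 0

def rank(prop):
    """Rank of a proposition = minimum rank over its worlds (inf if empty)."""
    return min((worlds[w] for w in prop), default=float("inf"))

def believed(prop):
    """A proposition A is believed iff its complement has positive rank."""
    complement = set(worlds) - set(prop)
    return rank(complement) > 0

def cond_rank(b, a):
    """Conditional rank kappa(B | A) = kappa(A and B) - kappa(A)."""
    return rank(set(a) & set(b)) - rank(a)

print(believed({"w1"}))                 # True: every non-w1 world has rank >= 1
print(cond_rank({"w3"}, {"w2", "w3"}))  # 1.0? No: ranks are ints here, prints 1
```

Note how belief comes out as a qualitative notion (disbelief in the complement) while conditional ranks behave like an order-of-magnitude analogue of conditional probability.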
Gabriele Kern-Isberner: Multiple and iterated belief revision
Belief revision has been shaped predominantly by the so-called AGM theory, named after a seminal paper by Alchourrón, Gärdenfors, and Makinson in 1985 that set up a framework of rationality postulates for reasonable belief change when the prior knowledge is a deductively closed set of propositions and the new information also comes in as a proposition. This theory prepared the grounds on which the field of modern belief revision grew. However, limitations of the AGM theory soon became apparent. First, AGM theory deals with just one step of revision, not caring about further revisions in the future. So the need for an extended framework also dealing with "iterated revision" became apparent and has been a topic of intense research since the 1990s. Further problems, caused not by the AGM approach itself but by the chosen framework of classical propositional logic, have been discussed in the broad community only quite recently: How should beliefs be changed rationally if both prior knowledge and new information need richer semantical frameworks than propositional logic? What should be done if multiple pieces of new information ("multiple revision") have to be integrated? This last problem in particular was ignored for a long time because in propositional logic a set of propositions is equivalent to the conjunction of the propositions, i.e., in classical logic one proposition can replace a set of propositions, so this case seemed to be covered by AGM theory as well. However, counterintuitive examples showed that unsatisfactory belief sets result from this simplification.
In this talk, I will present an approach to belief revision from a broader point of view that offers quite natural methods for iterated revision and tackles the problem of multiple revision right from the beginning. This approach also takes the ideas of AGM as a starting point but investigates belief revision in richer epistemic structures like probabilities or qualitative rankings. It is therefore compatible with AGM theory (and proposed extensions for iterated revision) in propositional logic, but is not trapped by the limitations caused by the classical propositional view. I will explain how this approach unifies belief revision in different semantical frameworks and offers powerful constructive approaches to belief revision even for very advanced scenarios, e.g., when an epistemic state has to be revised by a set of conditional beliefs. I will also briefly address the distinction between revision and update, and will illustrate the approach in various scenarios (multiple propositional belief revision, probabilistic belief revision, belief change of Spohn's ordinal conditional functions).
Stefan Wölfl: Revision of spatial knowledge bases
Most work in the literature on belief revision is set in the rather simple logical framework given by propositional logic. That is, a general-purpose, domain-independent logic is used that not only limits how beliefs and/or knowledge can be represented, but also restricts the logical machinery that is applicable for studying revision operators. In contrast, we find quite a variety of domain-specific logics in the AI literature on knowledge representation and reasoning, in particular logics that are tailored for representing, and reasoning about, temporal and spatial beliefs. But the question of how spatial and temporal belief sets can be revised has played only a minor role in the literature so far.
In my talk I will discuss some pros and cons of using domain-specific logics (in particular spatial logics) in the context of belief revision. We will see that such logics may require an adaptation of the AGM rationality postulates of belief revision. On the other hand, applying domain knowledge may allow for new revision operations that cannot be expressed in terms of the simple framework of propositional logic.
Steffen Hölldobler: A New Computational Model for Human Reasoning
There are various well-studied human reasoning tasks like the suppression and the selection task. I will present a new computational model based on the weak completion semantics of logic programs, which seems to be adequate for these tasks. I will compare the weak completion semantics to well-founded semantics and will discuss several open problems.
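To give a flavour of the idea (a hypothetical sketch, not the speaker's code), the least model of the weak completion of a small program can be computed by iterating a three-valued consequence operator in the style of Stenning and van Lambalgen; under weak completion, atoms with no defining clause stay unknown rather than becoming false:

```python
def least_model(program):
    """program maps each head atom to a list of bodies; a body is a list of
    literals (atom, positive).  "TRUE"/"FALSE" are reserved constants.
    Returns the sets of atoms true resp. false in the least model."""
    true, false = {"TRUE"}, {"FALSE"}

    def literal_value(atom, positive):
        if atom in true:
            return positive
        if atom in false:
            return not positive
        return None  # unknown

    def body_value(body):
        values = [literal_value(a, p) for a, p in body]
        if all(v is True for v in values):
            return True
        if any(v is False for v in values):
            return False
        return None

    changed = True
    while changed:  # iterate the consequence operator to a fixpoint
        changed = False
        for head, bodies in program.items():
            values = [body_value(b) for b in bodies]
            if any(v is True for v in values) and head not in true:
                true.add(head)
                changed = True
            elif all(v is False for v in values) and head not in false:
                false.add(head)
                changed = True
    return true, false

# Suppression-task flavoured example: "she studies if she has an essay to
# write and nothing abnormal is the case"; abnormality is assumed away.
program = {
    "study": [[("essay", True), ("ab", False)]],  # study <- essay and not ab
    "essay": [[("TRUE", True)]],                  # fact: she has an essay
    "ab":    [[("FALSE", True)]],                 # assumption: not abnormal
}
t, f = least_model(program)
print("study" in t, "ab" in f)  # True True
```

Adding a clause such as `"ab": [..., [("library_closed", True)]]` with `library_closed` unknown leaves `ab` (and hence `study`) unknown, which is the suppression effect the model aims to capture.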
Estefania Gazzo: Reasoning with legal conditionals
In everyday reasoning people often refuse to draw valid inferences if they can think of situations in which the consequent will not follow even if the antecedent holds. These so-called counterexamples have received a lot of attention in the psychological literature. However, the acceptance of a counterexample is not trivial: counterexamples are not always accepted as such if their acceptance would imply going against personal values or attitudes. Such is the case for legal conditionals, where counterexamples describing exculpatory circumstances are not accepted by laypeople as long as the transgression described in the legal conditional evokes high moral outrage. The aim of this talk is to present some results on reasoning with legal conditionals and to open a debate about which underlying mechanism may be responsible for the observed results.