Learning and inferences: a logical model of a cognitive agent
18 October, 1pm-3pm.
Maison de la Recherche, salle F1.07
In my talk I will present what I think is a suitable logic for modelling the inferences and learning processes of a cognitive agent.
First, I will recall the theoretical background of belief revision and non-monotonic inference. The most influential model of belief revision is the AGM model, named after its three originators, Carlos Alchourrón, Peter Gärdenfors and David Makinson. This model, however, suffers from a well-known problem: it does not support iterated revision. I will recall where this problem comes from and argue that, as a consequence, the AGM model is not suitable for modelling an agent's learning.

In the field of non-monotonic logic, an important account was given by Sarit Kraus, Daniel Lehmann and Menachem Magidor, with a family of logics known as the KLM logics, or inference relations. Among them are the so-called rational consistent inference relations, which have been shown to be strictly equivalent to AGM belief revision. I will recall this result and argue that learning must be seen not as the revision of belief sets, as in the AGM model, but rather as the revision of an agent's dispositions to infer: what are to be revised are the inference relations themselves. There have been a few attempts in this direction, but none of them seems fully satisfying. I will argue that this is because the rational consistent relations are not the ones to be revised; rather, it is the inference relations of a more general class, one which has not yet been characterized. I will present this class of inference relations, show how these relations can be revised in a natural and intuitive manner, and then present my own attempt at characterizing the class.
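To make the iteration problem concrete, here is a minimal toy sketch, not taken from the talk: a belief state is modelled as the set of possible worlds the agent entertains, and revision is the simple "drastic" operator (keep the compatible worlds if any, otherwise adopt the input's worlds). The atoms `p`, `q` and the function names are illustrative assumptions; AGM itself is an axiomatic theory, and this operator is only one concrete instance of it.

```python
from itertools import product

# Worlds over two atoms p and q, each world a truth assignment.
ATOMS = ("p", "q")
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=2)]

def models(formula):
    """Indices of worlds satisfying a formula given as a Python predicate."""
    return frozenset(i for i, w in enumerate(WORLDS) if formula(w))

def revise(K, phi):
    """Drastic revision: keep the K-worlds compatible with phi; if none, take all phi-worlds."""
    common = K & phi
    return common if common else phi

K = models(lambda w: w["p"])                    # agent believes p
K1 = revise(K, models(lambda w: w["q"]))        # consistent input: p and q both believed
K2 = revise(K, models(lambda w: not w["p"]))    # contradicting input: only not-p remains
```

After the contradicting input, `K2` is just another set of worlds: everything about how the agent previously ranked possibilities has been discarded, so the model itself gives no guidance on how `K2` should respond to the *next* input. That loss of revision-relevant structure is one way to see why plain AGM does not iterate.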