Logical Inference
Analogical inference in mathematics: from epistemology to the classroom (and back)
In this presentation, we will discuss adaptations of historical examples of mathematical research to bring out some of the intuitive judgments that accompany the working practice of mathematicians when reasoning by analogy. The main epistemological claim that we will aim to illustrate is that a central part of mathematical training consists in developing a quasi-perceptual capacity to distinguish superficial from deep analogies. We think of this capacity as an instance of Hadamard’s (1954) discriminating faculty of the mathematical mind, whereby one is led to distinguish between mere “hookings” (77) and “relay-results” (80): on the one hand, suggestions or ‘hints’, useful to raise questions but not to back up conjectures; on the other, more significant discoveries, which can be used as an evidentiary source in further mathematical inquiry. In the second part of the talk, we will present some recent applications of this epistemological framework to mathematics education projects for middle and high schools in Italy.
Children’s inference of verb meanings: Inductive, analogical and abductive inference
Children need inference in order to learn the meanings of words. They must infer the referent from the situation in which a target word is said. Furthermore, to be able to use the word in other situations, they also need to infer what other referents the word can be generalized to. As verbs refer to relations between arguments, verb learning requires relational analogical inference, which is challenging for young children. To overcome this difficulty, young children recruit a diverse range of cues in their inference of verb meanings, including, but not limited to, syntactic, social-pragmatic, and statistical cues. They also utilize perceptual similarity (object similarity) in progressive alignment to extract relational verb meanings and, further, to gain insights about them. However, just having a list of these cues is not enough: the cues must be selected, combined, and coordinated to produce the optimal interpretation in a particular context. This process involves abductive reasoning, similar to what scientists do when forming hypotheses from a range of facts or evidence. In this talk, I discuss how children use a chain of inferences to learn the meanings of verbs. I consider not only the process of analogical mapping and progressive alignment, but also how children use abductive inference to find the source of an analogy and gain insights into the general principles underlying verb learning. I also present recent findings from my laboratory showing that prelinguistic human infants use a rudimentary form of abductive reasoning, which enables the first step of word learning.
Logical Neural Networks
The work to be presented in this talk proposes a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning). Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation. Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning, including classical first-order logic theorem proving as a special case. The model is end-to-end differentiable, and learning minimizes a novel loss function capturing logical contradiction, yielding resilience to inconsistent knowledge. It also enables the open-world assumption by maintaining bounds on truth values which can have probabilistic semantics, yielding resilience to incomplete knowledge.
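To make the abstract's core ideas concrete, the following is a minimal, self-contained sketch, not the framework's actual implementation or API, of how a single weighted real-valued conjunction might propagate lower and upper truth bounds and how a contradiction penalty could be scored. The Łukasiewicz-style form of the conjunction, the parameter names weights and beta, and the helper functions clamp01, weighted_and, and contradiction_loss are illustrative assumptions introduced here.

```python
import numpy as np

def clamp01(x):
    """Clamp activations to the real-valued truth interval [0, 1]."""
    return np.clip(x, 0.0, 1.0)

def weighted_and(bounds, weights, beta):
    """Weighted real-valued conjunction over (lower, upper) truth bounds.

    Assumed Lukasiewicz-style form: AND(x_1..x_n) = clamp(beta - sum_i w_i * (1 - x_i)).
    Lower and upper bounds are propagated separately, so incomplete knowledge
    (wide bounds) remains explicit rather than being forced to a point value.
    """
    lowers, uppers = bounds[:, 0], bounds[:, 1]
    lower = clamp01(beta - np.sum(weights * (1.0 - lowers)))
    upper = clamp01(beta - np.sum(weights * (1.0 - uppers)))
    return np.array([lower, upper])

def contradiction_loss(bounds):
    """Penalty accrued whenever a lower bound exceeds its upper bound."""
    return float(np.sum(np.maximum(0.0, bounds[:, 0] - bounds[:, 1])))

# Example: two premises with partly unknown truth values.
premises = np.array([
    [0.9, 1.0],   # "A": believed true (lower=0.9, upper=1.0)
    [0.0, 0.6],   # "B": largely unknown (lower=0.0, upper=0.6)
])
conj = weighted_and(premises, weights=np.array([1.0, 1.0]), beta=1.0)
print("A AND B bounds:", conj)
print("contradiction loss:", contradiction_loss(np.vstack([premises, conj])))
```

In this sketch, learning would amount to adjusting weights and beta by gradient descent on a loss that includes the contradiction term, illustrating the sense in which inconsistent knowledge is penalized during training rather than breaking inference.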