Logical Neural Networks

Topic spotlight · World Wide

logical neural networks

Discover seminars, jobs, and research tagged with logical neural networks across World Wide.
8 curated items · 6 Seminars · 2 ePosters
Updated over 2 years ago
8 results
Seminar · Neuroscience

Quasicriticality and the quest for a framework of neuronal dynamics

Leandro Jonathan Fosque
Beggs lab, IU Bloomington
May 2, 2023

Critical phenomena abound in nature, from forest fires and earthquakes to avalanches in sand and neuronal activity. Since the 2003 publication by Beggs & Plenz on neuronal avalanches, a growing body of work suggests that the brain homeostatically regulates itself to operate near a critical point where information processing is optimal. At this critical point, incoming activity is neither amplified (supercritical) nor damped (subcritical), but approximately preserved as it passes through neural networks. Departures from the critical point have been associated with conditions of poor neurological health such as epilepsy, Alzheimer's disease, and depression. One complication with this picture is that the critical point assumes no external input, yet biological neural networks are constantly bombarded by external input. How, then, is the brain able to homeostatically adapt near the critical point? We'll see that the theory of quasicriticality, an organizing principle for brain dynamics, can account for this paradoxical situation. As external stimuli drive the cortex, quasicriticality predicts a departure from criticality while maintaining optimal properties for information transmission. We'll see that simulations and experimental data confirm these predictions, and we'll describe new ones that could be tested soon. More importantly, we will see how this organizing principle could help in the search for biomarkers for use in clinical studies.
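
To make the subcritical/critical/supercritical distinction concrete, here is a minimal branching-process sketch (an illustration under standard assumptions, not code from the talk): each active unit triggers on average σ descendant activations, so cascades are damped for σ < 1, roughly preserved at σ = 1, and amplified for σ > 1.

```python
import numpy as np

def simulate_branching(sigma, start=100, n_steps=50, seed=0):
    """Galton-Watson branching process: each active unit activates
    on average `sigma` units at the next time step."""
    rng = np.random.default_rng(seed)
    active, trace = start, [start]
    for _ in range(n_steps):
        # Sum of `active` independent Poisson(sigma) offspring counts.
        active = int(rng.poisson(sigma * active)) if active > 0 else 0
        trace.append(active)
    return trace

for sigma in (0.9, 1.0, 1.1):  # subcritical, critical, supercritical
    trace = simulate_branching(sigma)
    print(f"sigma={sigma}: activity {trace[0]} -> {trace[-1]} after {len(trace) - 1} steps")
```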

Seminar · Neuroscience · Recording

Merging insights from artificial and biological neural networks for neuromorphic intelligence

Charlotte Frenkel
TU Delft
Nov 9, 2022
Seminar · Neuroscience · Recording

NMC4 Keynote: An all-natural deep recurrent neural network architecture for flexible navigation

Vivek Jayaraman
Janelia Research Campus
Nov 30, 2021

A wide variety of animals and some artificial agents can adapt their behavior to changing cues, contexts, and goals. But what neural network architectures support such behavioral flexibility? Agents with loosely structured network architectures and random connections can be trained over millions of trials to display flexibility in specific tasks, but many animals must adapt and learn with much less experience just to survive. Further, it has been challenging to understand how the structure of trained deep neural networks relates to their functional properties, an important objective for neuroscience. In my talk, I will use a combination of behavioral, physiological and connectomic evidence from the fly to make the case that the built-in modularity and structure of its networks incorporate key aspects of the animal’s ecological niche, enabling rapid flexibility by constraining learning to operate on a restricted parameter set. It is not unlikely that this is also a feature of many biological neural networks across other animals, large and small, and with and without vertebrae.

Seminar · Neuroscience · Recording

Memory, learning to learn, and control of cognitive representations

André Fenton
New York University
May 6, 2021

Biological neural networks can represent information in the collective action-potential discharge of neurons and store that information in the synaptic connections between the neurons that both comprise the network and govern its function. The strength and organization of synaptic connections adjust during learning, but many cognitive neural systems are multifunctional, making it unclear how continuous activity alternates between transient, discrete cognitive functions, like encoding current information and recollecting past information, without changing the connections among the neurons. This lecture will first summarize our investigations of the molecular and biochemical mechanisms that change synaptic function to persistently store spatial memory in the rodent hippocampus. I will then report on how entorhinal cortex-hippocampus circuit function changes during cognitive training that creates memory, as well as during learning to learn in mice. I will then describe how the hippocampus system operates like a competitive winner-take-all network that, based on the dominance of its current inputs, self-organizes into either the encoding or the recollection information-processing mode. We find no evidence that distinct cells are dedicated to these two functions; rather, activation of the hippocampal information-processing mode is controlled by a subset of dentate spike events within the network of learning-modified entorhinal-hippocampal excitatory and inhibitory synapses.
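
The winner-take-all picture can be illustrated with a toy two-unit rate model (a sketch under assumptions of my own, not the speaker's model): two modes, "encoding" and "recollection", excite themselves and inhibit each other, so whichever mode receives the stronger input suppresses the other.

```python
import numpy as np

def winner_take_all(input_enc, input_rec, w_self=0.5, w_inh=1.2,
                    dt=0.05, steps=400):
    """Two competing rate units; the one with the stronger input wins."""
    r = np.array([0.1, 0.1])                  # rates of [encoding, recollection]
    inputs = np.array([input_enc, input_rec])
    for _ in range(steps):
        drive = inputs + w_self * r - w_inh * r[::-1]  # self-excitation, cross-inhibition
        r = r + dt * (-r + np.maximum(drive, 0.0))     # rectified rate dynamics
    return r

print(winner_take_all(1.0, 0.6))  # encoding input dominates    -> [~2.0, ~0.0]
print(winner_take_all(0.6, 1.0))  # recollection input dominates -> [~0.0, ~2.0]
```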

Seminar · Neuroscience · Recording

Logical Neural Networks

Ndivhuwo Makondo
IBM Research-Africa & the University of the Witwatersrand
Oct 20, 2020

The work presented in this talk proposes a novel framework that seamlessly provides key properties of both neural networks (learning) and symbolic logic (knowledge and reasoning). Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable, disentangled representation. Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning, including classical first-order logic theorem proving as a special case. The model is end-to-end differentiable, and learning minimizes a novel loss function capturing logical contradiction, yielding resilience to inconsistent knowledge. It also enables the open-world assumption by maintaining bounds on truth values, which can have probabilistic semantics, yielding resilience to incomplete knowledge.
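
As a rough illustration of the idea that each neuron acts as a weighted real-valued logical connective operating on truth-value bounds, here is a toy Łukasiewicz-style weighted AND with a contradiction penalty. This is a sketch under assumptions of my own; `WeightedAnd` and `contradiction_loss` are illustrative names, not the API of IBM's LNN implementation.

```python
import torch

class WeightedAnd(torch.nn.Module):
    """Toy weighted real-valued conjunction over truth-value bounds in [0, 1]."""
    def __init__(self, n_inputs):
        super().__init__()
        self.raw_weights = torch.nn.Parameter(torch.ones(n_inputs))
        self.bias = torch.nn.Parameter(torch.tensor(1.0))

    def forward(self, bounds):
        # bounds: (n_inputs, 2) rows of [lower, upper] truth values.
        w = torch.nn.functional.softplus(self.raw_weights)  # keep weights positive
        out = self.bias - (w.unsqueeze(1) * (1.0 - bounds)).sum(dim=0)
        return out.clamp(0.0, 1.0)  # [lower, upper] bounds on the conjunction

def contradiction_loss(bounds):
    """Penalty whenever a lower bound exceeds its upper bound (a contradiction)."""
    return torch.relu(bounds[..., 0] - bounds[..., 1]).sum()

and_gate = WeightedAnd(2)
facts = torch.tensor([[0.9, 1.0],   # proposition A: nearly certainly true
                      [0.2, 0.5]])  # proposition B: uncertain, partly unknown
conj = and_gate(facts)              # differentiable bounds on truth of (A AND B)
print(conj, contradiction_loss(conj.unsqueeze(0)))
```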

Seminar · Neuroscience · Recording

The geometry of abstraction in artificial and biological neural networks

Stefano Fusi
Columbia University
Jun 10, 2020

The curse of dimensionality plagues models of reinforcement learning and decision-making. The process of abstraction solves this by constructing abstract variables that describe features shared by different specific instances, reducing dimensionality and enabling generalization in novel situations. We characterized neural representations in monkeys performing a task in which a hidden variable described the temporal statistics of stimulus-response-outcome mappings. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training. This type of generalization requires a particular geometric format of neural representations. Neural ensembles in dorsolateral prefrontal cortex, anterior cingulate cortex, and hippocampus, and in simulated neural networks, simultaneously represented multiple hidden and explicit variables in a format reflecting abstraction. Task events engaging cognitive operations modulated this format. These findings elucidate how the brain and artificial systems represent abstract variables, which are critical for the generalization that, in turn, confers cognitive flexibility.
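
The operational definition of abstraction, training a decoder on some task conditions and testing it on held-out conditions, can be sketched on synthetic data (an illustration of the cross-condition generalization idea, not the study's analysis code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50

# Two binary task variables (A, B) encoded along independent random axes,
# plus trial-to-trial noise: an "abstract" (factorized) geometry.
axis_a, axis_b = rng.normal(size=(2, n_neurons))

def sample(a, b):
    mean = a * axis_a + b * axis_b
    return mean + 0.5 * rng.normal(size=(n_trials, n_neurons))

# Decode variable A after training only on conditions with B = 0,
# then test on conditions with B = 1 (never seen during training).
X_train = np.vstack([sample(0, 0), sample(1, 0)])
X_test = np.vstack([sample(0, 1), sample(1, 1)])
y = np.repeat([0, 1], n_trials)

clf = LogisticRegression(max_iter=1000).fit(X_train, y)
print("cross-condition generalization accuracy:", clf.score(X_test, y))
```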

ePoster

Intrinsic dimension of neural activity: comparing artificial and biological neural networks

Jacopo Fadanni, Giacomo Gasparotto, Rosalba Pacelli, Marco Dal Maschio, Marco Salamanca, Marica Albanesi, Pietro Rotondo, Michele Allegra

Bernstein Conference 2024

ePoster

Dynamical consequences of non-random connectivity in biological neural networks

Archishman Biswas, Arvind Kumar

COSYNE 2025