Topic: Neuro · constraints

42 Seminars · 13 ePosters

Latest

Seminar · Neuroscience

Neurobiological constraints on learning: bug or feature?

Cian O’Donnell
Ulster University
Jun 11, 2025

Understanding how brains learn requires bridging evidence across scales—from behaviour and neural circuits to cells, synapses, and molecules. In our work, we use computational modelling and data analysis to explore how the physical properties of neurons and neural circuits constrain learning. These include limits imposed by brain wiring, energy availability, molecular noise, and the 3D structure of dendritic spines. In this talk I will describe one such project testing whether wiring motifs from fly brain connectomes can improve the performance of reservoir computers, a type of recurrent neural network. The hope is that these insights into brain learning will lead to improved learning algorithms for artificial systems.
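
The idea can be sketched with a minimal echo-state network, in which a structural mask gates the recurrent weights; the sparse random mask, the sizes, and the 5-step memory task below are illustrative placeholders, not the fly-connectome motifs or tasks from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # reservoir size

# Hypothetical structural mask standing in for connectome-derived wiring:
# here it is sparse random; the talk's experiments would substitute
# fly-connectome motifs for this mask.
mask = rng.random((N, N)) < 0.1
W = rng.normal(0, 1, (N, N)) * mask
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo-state property)
W_in = rng.normal(0, 0.5, N)

def run_reservoir(u):
    """Drive the reservoir with input sequence u; return the state trajectory."""
    h = np.zeros(N)
    states = []
    for u_t in u:
        h = np.tanh(W @ h + W_in * u_t)
        states.append(h.copy())
    return np.array(states)

# Train only the linear readout (ridge regression), here on a toy
# memory task: predict the input from 5 steps earlier.
u = rng.normal(size=1000)
states = run_reservoir(u)
X, y = states[200:], u[195:-5]   # discard the initial transient
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
```

Only `w_out` is trained; the reservoir wiring stays fixed, which is what makes swapping in different structural masks a clean test of the wiring itself.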

Seminar · Neuroscience · Recording

Off the rails - how pathological patterns of whole brain activity emerge in epileptic seizures

Richard Rosch
King's College London
Mar 15, 2023

In most brains across the animal kingdom, brain dynamics can enter pathological states that are recognisable as epileptic seizures. Yet usually, brains operate within constraints, imposed by neuronal function and synaptic coupling, that prevent epileptic seizure dynamics from emerging. In this talk, I will bring together different approaches to identifying how networks in the broadest sense shape brain dynamics. Using illustrative examples ranging from intracranial EEG recordings and disorders characterised by molecular disruption of a single neurotransmitter receptor type to single-cell recordings of whole-brain activity in the larval zebrafish, I will address three key questions: (1) how does the regionally specific composition of synaptic receptors shape ongoing physiological brain activity; (2) how can disruption of this regionally specific balance result in abnormal brain dynamics; and (3) which cellular patterns underlie the transition into an epileptic seizure.

Seminar · Neuroscience

Spatially-embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings

Jascha Achterberg
University of Cambridge
Feb 1, 2023

Brain networks exist within the confines of resource limitations. As a result, a brain network must overcome metabolic costs of growing and sustaining the network within its physical space, while simultaneously implementing its required information processing. To observe the effect of these processes, we introduce the spatially-embedded recurrent neural network (seRNN). seRNNs learn basic task-related inferences while existing within a 3D Euclidean space, where the communication of constituent neurons is constrained by a sparse connectome. We find that seRNNs, similar to primate cerebral cortices, naturally converge on solving inferences using modular small-world networks, in which functionally similar units spatially configure themselves to utilize an energetically-efficient mixed-selective code. As all these features emerge in unison, seRNNs reveal how many common structural and functional brain motifs are strongly intertwined and can be attributed to basic biological optimization processes. seRNNs can serve as model systems to bridge between structural and functional research communities to move neuroscientific understanding forward.
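
A toy version of the spatial constraint, assuming a distance-weighted L1 wiring cost and a single proximal (soft-threshold) step on that cost alone; the actual seRNNs jointly optimise this kind of penalty together with a task loss during training.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64

# Embed units at random positions in a 3D Euclidean space
pos = rng.random((N, 3))
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)

W = rng.normal(0, 0.5, (N, N))

def wiring_cost(W, dist):
    """Spatial regularizer: an L1 penalty weighted by Euclidean distance,
    so long-range connections are more expensive to maintain."""
    return np.sum(np.abs(W) * dist)

# One proximal-gradient (soft-threshold) step on the wiring cost,
# standing in for the full task-plus-cost objective used to train seRNNs.
lr = 0.2
W_new = np.sign(W) * np.maximum(np.abs(W) - lr * dist, 0.0)
```

Because the threshold grows with distance, the surviving connectome is biased toward short, metabolically cheap connections — the pressure from which the modular, small-world structure emerges during full training.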

Seminar · Neuroscience · Recording

Convex neural codes in recurrent networks and sensory systems

Vladimir Itskov
The Pennsylvania State University
Dec 14, 2022

Neural activity in many sensory systems is organized on low-dimensional manifolds by means of convex receptive fields. Neural codes in these areas are constrained by this organization, as not every neural code is compatible with convex receptive fields. The same codes are also constrained by the structure of the underlying neural network. In my talk I will attempt to provide answers to the following natural questions: (i) How do recurrent circuits generate codes that are compatible with the convexity of receptive fields? (ii) How can we utilize the constraints imposed by convex receptive fields to understand the underlying stimulus space? To answer question (i), we describe the combinatorics of the steady states and fixed points of recurrent networks that satisfy Dale’s law. It turns out that the combinatorics of the fixed points are completely determined by two distinct conditions: (a) the connectivity graph of the network and (b) a spectral condition on the synaptic matrix. We give a characterization of exactly which features of connectivity determine the combinatorics of the fixed points. We also find that a generic recurrent network that satisfies Dale's law outputs convex combinatorial codes. To address question (ii), I will describe methods based on ideas from topology and geometry that take advantage of convex receptive field properties to infer the dimension of (non-linear) neural representations. I will illustrate the first method by inferring basic features of the neural representations in the mouse olfactory bulb.
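
The flavour of these fixed-point questions can be illustrated with a small threshold-linear network obeying Dale's law (each neuron's outgoing weights share one sign); the weights and sizes below are arbitrary, and the "code word" is simply the support of the steady state.

```python
import numpy as np

rng = np.random.default_rng(2)
n_e, n_i = 6, 2   # excitatory and inhibitory neurons
n = n_e + n_i

# Dale's law: column j holds the outgoing weights of neuron j,
# all positive (excitatory) or all negative (inhibitory)
W = np.abs(rng.normal(0, 0.1, (n, n)))
W[:, n_e:] *= -1
np.fill_diagonal(W, 0)

def steady_state(W, b, steps=5000, dt=0.01):
    """Integrate the threshold-linear dynamics x' = -x + [Wx + b]_+ ."""
    x = np.zeros(len(b))
    for _ in range(steps):
        x += dt * (-x + np.maximum(W @ x + b, 0))
    return x

b = rng.random(n)
x_star = steady_state(W, b)
support = tuple(np.flatnonzero(x_star > 1e-6))  # the combinatorial code word
```

The talk's results characterise which such supports can appear across all inputs, directly from the connectivity graph and a spectral condition, without simulating.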

Seminar · Neuroscience · Recording

Behavioral Timescale Synaptic Plasticity (BTSP) for biologically plausible credit assignment across multiple layers via top-down gating of dendritic plasticity

A. Galloni
Rutgers
Nov 9, 2022

A central problem in biological learning is how information about the outcome of a decision or behavior can be used to reliably guide learning across distributed neural circuits while obeying biological constraints. This “credit assignment” problem is commonly solved in artificial neural networks through supervised gradient descent and the backpropagation algorithm. In contrast, biological learning is typically modelled using unsupervised Hebbian learning rules. While these rules only use local information to update synaptic weights, and are sometimes combined with weight constraints to reflect a diversity of excitatory (only positive weights) and inhibitory (only negative weights) cell types, they do not prescribe a clear mechanism for how to coordinate learning across multiple layers and propagate error information accurately across the network. In recent years, several groups have drawn inspiration from the known dendritic non-linearities of pyramidal neurons to propose new learning rules and network architectures that enable biologically plausible multi-layer learning by processing error information in segregated dendrites. Meanwhile, recent experimental results from the hippocampus have revealed a new form of plasticity—Behavioral Timescale Synaptic Plasticity (BTSP)—in which large dendritic depolarizations rapidly reshape synaptic weights and stimulus selectivity with as little as a single stimulus presentation (“one-shot learning”). Here we explore the implications of this new learning rule through a biologically plausible implementation in a rate neuron network. We demonstrate that regulation of dendritic spiking and BTSP by top-down feedback signals can effectively coordinate plasticity across multiple network layers in a simple pattern recognition task. By analyzing hidden feature representations and weight trajectories during learning, we show the differences between networks trained with standard backpropagation, Hebbian learning rules, and BTSP.

Seminar · Neuroscience

Lifelong Learning AI via neuro-inspired solutions

Hava Siegelmann
University of Massachusetts Amherst
Oct 27, 2022

AI embedded in real systems, such as satellites, robots, and other autonomous devices, must make fast, safe decisions even when the environment changes, or under limitations on the available power; to do so, such systems must be adaptive in real time. To date, edge computing has no real adaptivity. Rather, the AI must be trained in advance, typically on a large dataset and with substantial computational power; once fielded, the AI is frozen: it is unable to use its experience to operate if the environment proves to be outside its training, or to improve its expertise. Worse, since datasets cannot cover all possible real-world situations, systems with such frozen intelligent control are likely to fail. Lifelong Learning is the cutting edge of artificial intelligence, encompassing computational methods that allow systems to learn in runtime and incorporate that learning for application in new, unanticipated situations. Until recently, this sort of computation has been found exclusively in nature; thus, Lifelong Learning looks to nature, and in particular neuroscience, for its underlying principles and mechanisms, and then translates them to this new technology. Our presentation will introduce a number of state-of-the-art approaches to achieving AI adaptive learning, including from DARPA's L2M program and subsequent developments. Many environments are affected by temporal changes, such as the time of day, week, or season. A way to create adaptive systems which are both small and robust is by making them aware of time and able to comprehend temporal patterns in the environment. We will describe our current research in temporal AI, while also considering power constraints.

Seminar · Neuroscience · Recording

Computation in the neuronal systems close to the critical point

Anna Levina
Universität Tübingen
Apr 29, 2022

It has long been hypothesized that natural systems might take advantage of the extended temporal and spatial correlations close to the critical point to improve their computational capabilities. On the other hand, different distances to criticality have been inferred from recordings of nervous systems. In my talk, I discuss how including additional constraints on processing time can shift the optimal operating point of recurrent networks. Moreover, data from the visual cortex of monkeys during an attentional task indicate that they flexibly change how close the local activity is to the critical point. Overall, this suggests that, as common sense would lead us to expect, the optimal state depends on the task at hand, and that the brain adapts to it in a local and fast manner.
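
The link between distance to criticality and timescales can be illustrated with a linearised branching process; the AR(1) form and parameter values below are a standard simplification, not the inference method used on the recordings.

```python
import numpy as np

rng = np.random.default_rng(3)

def branching_timescale(m, T=100_000):
    """Simulate a linearised branching process x_{t+1} = m*x_t + noise and
    return its intrinsic timescale tau = -1/ln(rho), where rho is the
    lag-1 autocorrelation; tau diverges as m -> 1 (the critical point)."""
    x = np.empty(T)
    x[0] = 0.0
    noise = rng.normal(size=T)
    for t in range(T - 1):
        x[t + 1] = m * x[t] + noise[t]
    rho = np.corrcoef(x[:-1], x[1:])[0, 1]
    return -1.0 / np.log(rho)

tau_far = branching_timescale(0.80)   # far from criticality: short memory
tau_near = branching_timescale(0.99)  # near criticality: long correlations
```

Long timescales help tasks that integrate evidence over time but are wasteful when fast responses are required, which is one way a task-dependent optimal operating point arises.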

Seminar · Neuroscience · Recording

Flexible motor sequence generation by thalamic control of cortical dynamics through low-rank connectivity perturbations

Laureline Logiaco
Center for Theoretical Neuroscience, Columbia University
Mar 9, 2022

One of the fundamental functions of the brain is to flexibly plan and control movement production at different timescales to efficiently shape structured behaviors. I will present a model that clarifies how these complex computations could be performed in the mammalian brain, with an emphasis on the learning of an extendable library of autonomous motor motifs and the flexible stringing of these motifs in motor sequences. To build this model, we took advantage of the fact that the anatomy of the circuits involved is well known. Our results show how these architectural constraints lead to a principled understanding of how strategically positioned plastic connections located within motif-specific thalamocortical loops can interact with cortical dynamics that are shared across motifs to create an efficient form of modularity. This occurs because the cortical dynamics can be controlled by the activation of as few as one thalamic unit, which induces a low-rank perturbation of the cortical connectivity, and significantly expands the range of outputs that the network can produce. Finally, our results show that transitions between any motifs can be facilitated by a specific thalamic population that participates in preparing cortex for the execution of the next motif. Taken together, our model sheds light on the neural network mechanisms that can generate flexible sequencing of varied motor motifs.
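
The core mechanism — a thalamic unit whose loop weights add a rank-one term to the cortical connectivity when the unit is active — can be sketched as follows; the matrices and gain are arbitrary stand-ins, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100

# Shared cortical recurrent connectivity, scaled to be stable on its own
W = rng.normal(0, 1 / np.sqrt(N), (N, N)) * 0.8

# A motif-specific thalamocortical loop: one thalamic unit reads cortex
# through weights v and projects back through weights u, so activating it
# adds a rank-one term u v^T to the effective connectivity
u = rng.normal(0, 1, N) / np.sqrt(N)
v = rng.normal(0, 1, N) / np.sqrt(N)

def effective_connectivity(thalamic_gate):
    """Gating the thalamic unit on/off switches the loop in or out."""
    return W + thalamic_gate * 3.0 * np.outer(u, v)

ev_off = np.linalg.eigvals(effective_connectivity(0.0))
ev_on = np.linalg.eigvals(effective_connectivity(1.0))
```

Because each motif only needs its own (u, v) pair while W is shared, a library of motifs can grow by adding thalamic units rather than retraining cortex — the modularity the talk describes.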

Seminar · Neuroscience · Recording

Parametric control of flexible timing through low-dimensional neural manifolds

Manuel Beiran
Center for Theoretical Neuroscience, Columbia University & Rajan lab, Icahn School of Medicine at Mount Sinai
Mar 9, 2022

Biological brains possess an exceptional ability to infer relevant behavioral responses to a wide range of stimuli from only a few examples. This capacity to generalize beyond the training set has been proven particularly challenging to realize in artificial systems. How neural processes enable this capacity to extrapolate to novel stimuli is a fundamental open question. A prominent but underexplored hypothesis suggests that generalization is facilitated by a low-dimensional organization of collective neural activity, yet evidence for the underlying neural mechanisms remains wanting. Combining network modeling, theory and neural data analysis, we tested this hypothesis in the framework of flexible timing tasks, which rely on the interplay between inputs and recurrent dynamics. We first trained recurrent neural networks on a set of timing tasks while minimizing the dimensionality of neural activity by imposing low-rank constraints on the connectivity, and compared the performance and generalization capabilities with networks trained without any constraint. We then examined the trained networks, characterized the dynamical mechanisms underlying the computations, and verified their predictions in neural recordings. Our key finding is that low-dimensional dynamics strongly increases the ability to extrapolate to inputs outside of the range used in training. Critically, this capacity to generalize relies on controlling the low-dimensional dynamics by a parametric contextual input. We found that this parametric control of extrapolation was based on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds in activity space while preserving their geometry. Comparisons with neural recordings in the dorsomedial frontal cortex of macaque monkeys performing flexible timing tasks confirmed the geometric and dynamical signatures of this mechanism. 
Altogether, our results tie together a number of previous experimental findings and suggest that the low-dimensional organization of neural dynamics plays a central role in generalizable behaviors.
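
The low-rank constraint itself is easy to demonstrate: with connectivity of rank two, the autonomous dynamics collapse onto a two-dimensional subspace of the N-dimensional activity space. The construction below (a symmetric rank-2 matrix with gain above one) is a generic illustration, not the trained networks from the study.

```python
import numpy as np

rng = np.random.default_rng(5)
N, rank = 200, 2

# Rank-2 recurrent connectivity J = (g/N) * M M^T: the recurrent input
# always lies in the subspace spanned by the columns of M
g = 1.5
M = rng.normal(0, 1, (N, rank))
J = g * M @ M.T / N

x = rng.normal(0, 1, N)
for _ in range(500):                      # Euler steps of x' = -x + J tanh(x)
    x = x + 0.1 * (-x + J @ np.tanh(x))

# Residual of the final state outside span(M): components orthogonal to M
# receive no recurrent drive and simply decay away
Q, _ = np.linalg.qr(M)
residual = x - Q @ (Q.T @ x)
```

In the full model, a tonic contextual input moves the system along the resulting low-dimensional manifold without changing its geometry, which is what supports extrapolation beyond the training range.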

Seminar · Neuroscience · Recording

Why Some Intelligent Agents are Conscious

Hakwan Lau
RIKEN CBS
Dec 3, 2021

In this talk I will present an account of how an agent designed or evolved to be intelligent may come to enjoy subjective experiences. First, the agent is stipulated to be capable of (meta)representing subjective ‘qualitative’ sensory information, in the sense that it can easily assess how exactly similar a sensory signal is to all other possible sensory signals. This information is subjective in the sense that it concerns how the different stimuli can be distinguished by the agent itself, rather than how physically similar they are. For this to happen, sensory coding needs to satisfy sparsity and smoothness constraints, which are known to facilitate metacognition and generalization. Second, this qualitative information can under some specific circumstances acquire an ‘assertoric force’. This happens when a certain self-monitoring mechanism decides that the qualitative information reliably tracks the current state of the world, and informs a general symbolic reasoning system of this fact. I will argue that having subjective conscious experiences amounts to nothing more than qualitative sensory information acquiring an assertoric status within one’s belief system. When this happens, the perceptual content presents itself as reflecting the state of the world right now, in ways that seem undeniably rational to the agent. At the same time, without effort, the agent also knows what the perceptual content is like, in terms of how subjectively similar it is to all other possible percepts. I will discuss the computational benefits of this architecture, of which consciousness might have arisen as a byproduct.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Predictive coding is a consequence of energy efficiency in recurrent neural networks

Abdullahi Ali
Donders Institute for Brain
Dec 2, 2021

Predictive coding represents a promising framework for understanding brain function, postulating that the brain continuously inhibits predictable sensory input, ensuring a preferential processing of surprising elements. A central aspect of this view on cortical computation is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modelling to demonstrate that such architectural hard-wiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency, a fundamental requirement of neural processing. When training recurrent neural networks to minimise their energy consumption while operating in predictive environments, the networks self-organise into prediction and error units with appropriate inhibitory and excitatory interconnections and learn to inhibit predictable sensory input. We demonstrate that prediction units can reliably be identified through biases in their median preactivation, pointing towards a fundamental property of prediction units in the predictive coding framework. Moving beyond the view of purely top-down driven predictions, we demonstrate via virtual lesioning experiments that networks perform predictions on two timescales: fast lateral predictions among sensory units and slower prediction cycles that integrate evidence over time. Our results, which replicate across two separate data sets, suggest that predictive coding can be interpreted as a natural consequence of energy efficiency. More generally, they raise the question of which other computational principles of brain function can be understood as a result of physical constraints posed by the brain, opening up a new area of bio-inspired, machine learning-powered neuroscience research.
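
A cartoon of the energy objective, assuming a cost made of summed absolute unit activity ("spiking") plus summed absolute weights ("wiring"); the actual work trains the network end-to-end on predictable input, which this sketch does not do.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 32

def rollout(W, W_in, u):
    """Run a small tanh RNN over an input sequence u; return all states."""
    h = np.zeros(N)
    hs = []
    for u_t in u:
        h = np.tanh(W @ h + W_in @ u_t)
        hs.append(h.copy())
    return np.array(hs)

def energy(W, W_in, u):
    """Proxy for metabolic cost: summed absolute activity plus summed
    absolute weights. The exact terms here are illustrative."""
    hs = rollout(W, W_in, u)
    return np.abs(hs).sum() + np.abs(W).sum()

W = rng.normal(0, 0.3, (N, N))
W_in = rng.normal(0, 0.3, (N, 4))
u = rng.normal(size=(50, 4))

# Shrinking weights lowers both cost terms, so predictable input that can
# be "explained away" cheaply creates the pressure under which prediction
# and error units emerge during training.
e_before = energy(W, W_in, u)
e_after = energy(0.5 * W, W_in, u)
```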

Seminar · Neuroscience

An optimal population code for global motion estimation in local direction-selective cells

Miriam Henning
Silies lab, University of Mainz, Germany
Nov 4, 2021

Neuronal computations are matched to optimally encode the sensory information that is available and relevant for the animal. However, the physical distribution of sensory information is often shaped by the animal’s own behavior. One prominent example is the encoding of optic flow fields that are generated during self-motion of the animal and therefore depend on the type of locomotion. How evolution has matched computational resources to the behavioral constraints of an animal is not known. Here we use in vivo two-photon imaging to record from a population of >3,500 local direction-selective cells. Our data show that the local direction-selective T4/T5 neurons in Drosophila form a population code that is matched to represent optic flow fields generated during translational and rotational self-motion of the fly. This coding principle for optic flow is reminiscent of the population code of local direction-selective ganglion cells in the mouse retina, where four direction-selective ganglion cells encode four different axes of self-motion encountered during walking (Sabbah et al., 2017). However, in flies we find six different subtypes of T4 and T5 cells that, at the population level, represent six axes of self-motion of the fly. The four uniformly tuned T4/T5 subtypes described previously represent a local snapshot (Maisak et al. 2013). The encoding of six types of optic flow in the fly, as compared to four in mice, might be matched to the higher degrees of freedom encountered during flight. Thus, a population code for optic flow appears to be a general coding principle of visual systems, resulting from convergent evolution but matched to the individual ethological constraints of the animal.

Seminar · Neuroscience · Recording

Designing temporal networks that synchronize under resource constraints

Yuanzhao Zhang
Santa Fe Institute
Oct 22, 2021

Being fundamentally a non-equilibrium process, synchronization comes with unavoidable energy costs and has to be maintained under the constraint of limited resources. Such resource constraints are often reflected as a finite coupling budget available in a network to facilitate interaction and communication. In this talk, I will show that introducing temporal variation in the network structure can lead to efficient synchronization even when stable synchrony is impossible in any static network under the given budget. Our strategy is based on an open-loop control scheme and alludes to a fundamental advantage of temporal networks. Whether this advantage of temporality can be utilized in the brain is an interesting open question.
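
A toy version of the temporal advantage: two snapshot networks that are each disconnected — so neither can synchronize all nodes as a static network — drive the system to global consensus when alternated. This illustrates the flavour of the result, not the paper's coupling-budget construction.

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian of an unweighted undirected graph."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

# Two snapshot graphs on 4 nodes, each disconnected on its own
L1 = laplacian(4, [(0, 1), (2, 3)])
L2 = laplacian(4, [(1, 2), (0, 3)])

def simulate(Ls, x0, dt=0.05, steps=2000):
    """Diffusive coupling x' = -L(t) x, cycling through the snapshot list."""
    x = x0.copy()
    for t in range(steps):
        x = x - dt * Ls[t % len(Ls)] @ x
    return x

x0 = np.array([1.0, -1.0, 2.0, 0.0])
x_static = simulate([L1], x0)          # stuck: only within-component agreement
x_temporal = simulate([L1, L2], x0)    # switching: global synchronization
```

The union of the two snapshots is connected, so the time-averaged coupling can do what no single snapshot can — the essence of the temporal-network advantage.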

Seminar · Neuroscience

Generative models of the human connectome

Prof Alex Fornito and Dr Stuart Oldham
Jun 10, 2021

The human brain is a complex network of neuronal connections. The precise arrangement of these connections, otherwise known as the topology of the network, is crucial to its functioning. Recent efforts to understand how the complex topology of the brain has emerged have used generative mathematical models, which grow synthetic networks according to specific wiring rules. Evidence suggests that a wiring rule which emulates a trade-off between connection costs and functional benefits can produce networks that capture essential topological properties of brain networks. In this webinar, Professor Alex Fornito and Dr Stuart Oldham will discuss these previous findings, as well as their own efforts in creating more physiologically constrained generative models. Professor Alex Fornito is Head of the Brain Mapping and Modelling Research Program at the Turner Institute for Brain and Mental Health. His research focuses on developing new imaging techniques for mapping human brain connectivity and applying these methods to shed light on brain function in health and disease. Dr Stuart Oldham is a Research Fellow at the Turner Institute for Brain and Mental Health and a Research Officer at the Murdoch Children’s Research Institute. He is interested in characterising the organisation of human brain networks, with particular focus on how this organisation develops, using neuroimaging and computational tools.
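
The cost side of such a trade-off rule can be sketched as a distance-penalised generative model; real generative models add a topological "value" term (e.g. homophily) traded off against this cost, which is omitted here, and the positions and parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 60

# Node positions standing in for brain regions
pos = rng.random((n, 3))
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)

def generate(eta, n_edges=200):
    """Grow a synthetic network one edge at a time, sampling each new edge
    with probability proportional to exp(-eta * distance): a pure
    connection-cost rule."""
    A = np.zeros((n, n), bool)
    iu = np.triu_indices(n, 1)
    p = np.exp(-eta * d[iu])
    for _ in range(n_edges):
        p_open = np.where(A[iu], 0.0, p)          # only pairs not yet wired
        k = rng.choice(len(p_open), p=p_open / p_open.sum())
        i, j = iu[0][k], iu[1][k]
        A[i, j] = A[j, i] = True
    return A

A_cost = generate(eta=8.0)    # strong distance penalty
A_rand = generate(eta=0.0)    # no penalty: uniform random graph
```

Fitting `eta` (and a value term) so that the synthetic networks reproduce empirical topology is the core of the generative-modelling programme discussed in the webinar.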

Seminar · Neuroscience

The Brain’s Constraints on Human Number Concepts

Andreas Nieder
University of Tübingen
May 26, 2021

Although animals can estimate numerical quantities, true counting and arithmetic abilities are unique to humans and are inextricably linked to symbolic competence. However, our unprecedented numerical skills are deeply rooted in our neuronal heritage as primates and vertebrates. I argue that numerical competence in humans is the result of three neural constraints. First, I propose that the neuronal mechanisms of quantity estimation are part of our evolutionary heritage and can be witnessed across primate and vertebrate phylogeny. Second, I suggest that a basic understanding of number, what numerical quantity means, is innately wired into the brain and gives rise to an intuitive number sense, or number instinct. Third and finally, I argue that symbolic counting and arithmetic in humans is rooted in an evolutionarily and ontogenetically primeval neural system for non-symbolic number representations. These three neural constraints jointly determine the basic processing of number concepts in the human mind.

Seminar · Neuroscience

Co-tuned, balanced excitation and inhibition in olfactory memory networks

Claire Meissner-Bernard
Friedrich lab, Friedrich Miescher Institute, Basel, Switzerland
May 20, 2021

Odor memories are exceptionally robust and essential for the survival of many species. In rodents, the olfactory cortex shows features of an autoassociative memory network and plays a key role in the retrieval of olfactory memories (Meissner-Bernard et al., 2019). Interestingly, the telencephalic area Dp, the zebrafish homolog of olfactory cortex, transiently enters a state of precise balance during the presentation of an odor (Rupprecht and Friedrich, 2018). This state is characterized by large synaptic conductances (relative to the resting conductance) and by co-tuning of excitation and inhibition in odor space and in time at the level of individual neurons. Our aim is to understand how this precise synaptic balance affects memory function. For this purpose, we build a simplified, yet biologically plausible spiking neural network model of Dp using experimental observations as constraints: besides precise balance, key features of Dp dynamics include low firing rates, odor-specific population activity and a dominance of recurrent inputs from Dp neurons relative to afferent inputs from neurons in the olfactory bulb. To achieve co-tuning of excitation and inhibition, we introduce structured connectivity by increasing connection probabilities and/or strength among ensembles of excitatory and inhibitory neurons. These ensembles are therefore structural memories of activity patterns representing specific odors. They form functional inhibitory-stabilized subnetworks, as identified by the “paradoxical effect” signature (Tsodyks et al., 1997): inhibition of inhibitory “memory” neurons leads to an increase of their activity. We investigate the benefits of co-tuning for olfactory and memory processing, by comparing inhibitory-stabilized networks with and without co-tuning. We find that co-tuned excitation and inhibition improves robustness to noise, pattern completion and pattern separation. 
In other words, retrieval of stored information from partial or degraded sensory inputs is enhanced, which is relevant in light of the instability of the olfactory environment. Furthermore, in co-tuned networks, odor-evoked activation of stored patterns does not persist after removal of the stimulus and may therefore subserve fast pattern classification. These findings provide valuable insights into the computations performed by the olfactory cortex, and into general effects of balanced state dynamics in associative memory networks.
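
The "paradoxical effect" signature mentioned above can be reproduced in a two-population rate model in the inhibition-stabilized regime (recurrent excitation w_EE > 1); the weights below are illustrative, not fitted to Dp.

```python
import numpy as np

# Two-population linear rate model; w_EE > 1 puts it in the
# inhibition-stabilized regime
w_EE, w_EI, w_IE, w_II = 2.0, 2.5, 2.0, 1.0

def steady_state(b_E, b_I, dt=0.01, steps=20000):
    """Integrate r' = -r + W r + b to its fixed point."""
    r_E = r_I = 0.0
    for _ in range(steps):
        dE = -r_E + w_EE * r_E - w_EI * r_I + b_E
        dI = -r_I + w_IE * r_E - w_II * r_I + b_I
        r_E += dt * dE
        r_I += dt * dI
    return r_E, r_I

rE0, rI0 = steady_state(b_E=2.0, b_I=0.5)
# "Inhibit the inhibitory neurons": reduce their external drive
rE1, rI1 = steady_state(b_E=2.0, b_I=0.2)
```

The paradox: withdrawing drive from the inhibitory population *increases* its steady-state activity, because the destabilised excitatory population recruits more inhibition — the signature used to identify inhibitory-stabilized subnetworks.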

Seminar · Neuroscience

Smart perception?: Gestalt grouping, perceptual averaging, and memory capacity

Jennifer E. Corbett
Brunel University London
May 18, 2021

It seems we see the world in full detail. However, the eye is not a camera nor is the brain a computer. Incredible metabolic constraints render us unable to encode more than a fraction of information available in each glance. Instead, our illusion of stable and complete perception is accomplished by parsimonious representation relying on natural order inherent in the surrounding environment. I will begin by discussing previous behavioral work from our lab demonstrating one such strategy by which the visual system represents average properties of Gestalt-grouped sets of individual objects, warping individual object representations toward the Gestalt-defined mean. I will then discuss on-going work using a behavioral index of averaging Gestalt-grouped information established in our previous work in conjunction with an ERP-index of VSTM capacity (the CDA) to measure whether the Gestalt-grouping and perceptual averaging strategy acts to boost memory capacity above the classic “four-item” limit. Finally, I will outline our pre-registered study to determine whether this perceptual strategy is indeed engaged in a “smart” manner under normal circumstances, or compromises fidelity for capacity by perceptually-averaging in trials with only four items that could otherwise be individually represented.

Seminar · Neuroscience · Recording

Neural mechanisms of active vision in the marmoset monkey

Jude Mitchell
University of Rochester
May 12, 2021

Human vision relies on rapid eye movements (saccades) 2-3 times every second to bring peripheral targets to central foveal vision for high resolution inspection. This rapid sampling of the world defines the perception-action cycle of natural vision and profoundly impacts our perception. Marmosets have similar visual processing and eye movements as humans, including a fovea that supports high-acuity central vision. Here, I present a novel approach developed in my laboratory for investigating the neural mechanisms of visual processing using naturalistic free viewing and simple target foraging paradigms. First, we establish that it is possible to map receptive fields in the marmoset with high precision in visual areas V1 and MT without constraints on fixation of the eyes. Instead, we use an off-line correction for eye position during foraging combined with high resolution eye tracking. This approach allows us to simultaneously map receptive fields, even at the precision of foveal V1 neurons, while also assessing the impact of eye movements on the visual information encoded. We find that the visual information encoded by neurons varies dramatically across the saccade to fixation cycle, with most information localized to brief post-saccadic transients. In a second study we examined if target selection prior to saccades can predictively influence how foveal visual information is subsequently processed in post-saccadic transients. Because every saccade brings a target to the fovea for detailed inspection, we hypothesized that predictive mechanisms might prime foveal populations to process the target. Using neural decoding from laminar arrays placed in foveal regions of area MT, we find that the direction of motion for a fixated target can be predictively read out from foveal activity even before its post-saccadic arrival. 
These findings highlight the dynamic and predictive nature of visual processing during eye movements and the utility of the marmoset as a model of active vision. Funding sources: NIH EY030998 to JM, Life Sciences Fellowship to JY

Seminar · Neuroscience

Stereo vision in humans and insects

Jenny Read
Newcastle University
May 12, 2021

Stereopsis – deriving information about distance by comparing views from two eyes – is widespread in vertebrates but so far known in only one class of invertebrates, the praying mantids. Understanding a form of stereopsis that has evolved independently in such a different nervous system promises to shed light on the constraints governing any stereo system. Behavioral experiments indicate that insect stereopsis is functionally very different from that studied in vertebrates. Vertebrate stereopsis depends on matching up the pattern of contrast in the two eyes; it works in static scenes, and may have evolved to break camouflage rather than to detect distances. Insect stereopsis matches up regions of the image where the luminance is changing; it is insensitive to the detailed pattern of contrast and operates to detect the distance to a moving target. Work from my lab has revealed a network of neurons within the mantis brain which are tuned to binocular disparity, including some that project to early visual areas. This contrasts with previous theories, which postulated that disparity was computed only at a single, late stage, where visual information is passed down to motor neurons. Thus, despite their very different properties, the underlying neural mechanisms supporting vertebrate and insect stereopsis may be computationally more similar than has been assumed.

Seminar · Neuroscience · Recording

Stability-Flexibility Dilemma in Cognitive Control: A Dynamical System Perspective

Naomi Leonard
Princeton University
Mar 26, 2021

Constraints on control-dependent processing have become a fundamental concept in general theories of cognition that explain human behavior in terms of rational adaptations to these constraints. However, such theories lack a rationale for why these constraints would exist in the first place. Recent work suggests that constraints on the allocation of control facilitate flexible task switching at the expense of the stability needed to support goal-directed behavior in the face of distraction. We formulate this problem in a dynamical system, in which control signals are represented as attractors and in which constraints on control allocation limit the depth of these attractors. We derive formal expressions of the stability-flexibility tradeoff, showing that constraints on control allocation improve cognitive flexibility but impair cognitive stability. We provide evidence that human participants adopt higher constraints on the allocation of control as the demand for flexibility increases, but that participants deviate from optimal constraints. In continuing work, we are investigating how the collaborative performance of a group of individuals can benefit from individual differences defined in terms of the balance between cognitive stability and flexibility.
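
The depth-controlled attractor picture can be sketched with a one-dimensional double-well potential, where a depth parameter plays the role of the constraint on control allocation; the potential and parameter values are illustrative, not the paper's model.

```python
import numpy as np

def switch_time(depth, drive=0.9, dt=0.01, tmax=200.0):
    """Time for the control state x to travel from the task-A attractor
    (x < 0) to the task-B attractor (x > 0) under gradient dynamics on the
    double-well potential U(x) = x^4/4 - depth*x^2/2 - drive*x, once the
    drive toward task B is switched on. Deeper attractors are more stable
    against distraction but slower to leave."""
    x, t = -np.sqrt(depth), 0.0   # start in the task-A well
    while x < 0 and t < tmax:
        x += dt * (-(x**3) + depth * x + drive)   # x' = -U'(x)
        t += dt
    return t

t_shallow = switch_time(depth=0.5)   # weak attractor: flexible
t_deep = switch_time(depth=1.5)      # deep attractor: stable but sluggish
```

Lowering `depth` speeds up task switching while making the current goal easier to knock out — the stability-flexibility tradeoff in its simplest form.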

SeminarNeuroscienceRecording

The When, Where and What of visual memory formation

Brad Wyble
Pennsylvania State University
Feb 12, 2021

The eyes send a continuous stream of information to the brain along roughly two million nerve fibers, but only a fraction of this information is stored as visual memories. This talk will detail three neurocomputational models that attempt to explain how the visual system makes on-the-fly decisions about which information to encode. First, the STST family of models (Bowman & Wyble 2007; Wyble, Potter, Bowman & Nieuwenstein 2011) proposes mechanisms for the temporal segmentation of continuous input. The conclusion of this work is that the visual system can rapidly create brief episodes of attention that highlight important moments in time, and that it separates each episode from its temporally adjacent neighbors to benefit learning. Next, the RAGNAROC model (Wyble et al. 2019) describes a decision process for determining the spatial focus (or foci) of attention in a spatiotopic field, along with the neural mechanisms that enhance targets and suppress highly distracting information. This work highlights the importance of integrating behavioral and electrophysiological data to provide empirical constraints on a neurally plausible model of spatial attention. The model also shows how a neural circuit can make decisions in a continuous space, rather than among discrete alternatives. Finally, the binding pool (Swan & Wyble 2014; Hedayati, O’Donnell, Wyble in prep) provides a mechanism for selectively encoding specific attributes (i.e. color, shape, category) of a visual object into a consolidated memory representation. The binding pool is akin to a holographic memory system that superimposes selected latent representations corresponding to different attributes of a given object. Moreover, it can bind features into distinct objects by linking them to token placeholders.
Future work looks toward combining these models into a coherent framework for understanding the full measure of on-the-fly attentional mechanisms and how they improve learning.

SeminarNeuroscience

How do we find what we are looking for? The Guided Search 6.0 model

Jeremy Wolfe
Harvard Medical School
Feb 4, 2021

The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of Guided Search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. In GS6, the priority map is a dynamic attentional landscape that evolves over the course of search. In part, this is because the visual field is inhomogeneous. Part 3: That inhomogeneity imposes spatial constraints on search that are described by three types of “functional visual field” (FVF): (1) a resolution FVF, (2) an FVF governing exploratory eye movements, and (3) an FVF governing covert deployments of attention. Finally, in Part 4, we will consider that the internal representation of the search target, the “search template”, is really two templates: a guiding template and a target template. Put these pieces together and you have GS6.
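The timing logic of Part 1 (serial selection feeding overlapping recognition diffusions, raced against a quitting signal) can be sketched as follows. All numbers here (50 ms selection rate, 180 ms recognition time, 400 ms quitting deadline) are invented placeholders, not fitted GS6 parameters, and the finish times are made deterministic rather than diffusive for clarity.

```python
def search_trial(n_items, target_rank=None, select_every=50,
                 recog_time=180, quit_time=400):
    """Hybrid serial/parallel timing sketch: one item is selected every
    `select_every` ms, but each recognition process takes `recog_time` ms,
    so roughly recog_time / select_every items are 'in flight' at once.
    target_rank is the target's position in the guidance-derived priority
    order (None = target absent). Returns (outcome, reaction_time_ms)."""
    if target_rank is not None and target_rank < n_items:
        t_found = target_rank * select_every + recog_time
        if t_found < quit_time:            # target recognized before quitting
            return ("hit", t_found)
    return ("quit", quit_time)             # quitting signal ends the search

print(search_trial(8, target_rank=0))   # ('hit', 180): well-guided target
print(search_trial(8, target_rank=6))   # ('quit', 400): quitting fires first
```

The second call illustrates how a miss can arise purely from timing: the poorly guided target would have finished its diffusion at 480 ms, after the quitting threshold was reached, with no failure of recognition involved.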

SeminarNeuroscienceRecording

Consciousness, falsification and epistemic constraints

Johannes Kleiner
Munich Center for Mathematical Philosophy
Dec 11, 2020

Consciousness is a phenomenon unlike any other studied in natural science. Yet when building theories and designing experiments, we often proceed as if this were not the case. In this talk, I present two recent investigations of mine which explore the implications of consciousness' unique epistemic context for scientific theory building and experimental design. The first investigation is concerned with falsifications of theories of consciousness and identifies a rather deep problem in the usual scheme of testing theories. The second is an axiomatization and subsequent formalization of some of consciousness' more problematic epistemic features, which allows one to quantify precisely where the usual scientific methodology ceases to be applicable. In both cases, I indicate ways to resolve the problem.

SeminarNeuroscience

Monkey Talk – what studies about nonhuman primate vocal communication reveal about the evolution of speech

Julia Fischer
German Primate Center
Oct 21, 2020

The evolution of speech is considered one of the hardest problems in science. Studies of the communicative abilities of our closest living relatives, the nonhuman primates, aim to contribute to a better understanding of the emergence of this uniquely human capability. Following a brief introduction to the key building blocks that make up the human speech faculty, I will focus on the question of meaning in nonhuman primate vocalizations. While nonhuman primate calls may be highly context-specific, giving rise to the notion of ‘referentiality’, comparisons across closely related species suggest that this specificity is evolved rather than learned. Yet, as in humans, the structure of calls varies with arousal and affective state, and there is some evidence for effects of sensory-motor integration in vocal production. Thus, the vocal production of nonhuman primates bears little resemblance to the symbolic and combinatorial features of human speech, although basic production mechanisms are shared. Listeners, in contrast, are able to learn the meaning of new sounds. A recent study using an artificial predator shows that this learning may be extremely rapid. Furthermore, listeners are able to integrate information from multiple sources to make adaptive decisions, which renders the vocal communication system as a whole relatively flexible and powerful. In conclusion, constraints on the side of vocal production, including limits in social cognition and in the motivation to share experiences, rather than constraints on the side of the recipient, explain the differences in communicative abilities between humans and other animals.

SeminarNeuroscienceRecording

How behavioral and evolutionary constraints sculpt early visual processing

Stephanie Palmer
The University of Chicago
Sep 22, 2020

SeminarNeuroscienceRecording

The consequences and constraints of functional organization on behavior

Dwight Kravitz
George Washington University
Aug 12, 2020

In many ways, cognitive neuroscience is the attempt to use physiological observation to clarify the mechanisms that shape behavior. Over the past 25 years, fMRI has provided a system-wide and yet somewhat spatially precise view of the response in human cortex evoked by a wide variety of stimuli and task contexts. The current talk focuses on the other direction of inference: the implications of this observed functional organization for behavior. To begin, we must interrogate the methodological and empirical frameworks underlying our derivation of this organization, partially by exploring its relationship to, and predictability from, gross neuroanatomy. Next, across a series of studies, the implications of two properties of functional organization for behavior will be explored: 1) the co-localization of visual working memory and perceptual processing and 2) implicit learning in the context of distributed responses. In sum, these results highlight the limitations of our current approach and hint at a new general mechanism for explaining observed behavior in the context of the neural substrate.

SeminarNeuroscience

Neuronal morphology imposes a tradeoff between stability, accuracy and efficiency of synaptic scaling

Adriano Bellotti
University of Cambridge
Jul 20, 2020

Synaptic scaling is a homeostatic normalization mechanism that preserves relative synaptic strengths by adjusting them with a common factor. This multiplicative change is believed to be critical, since synaptic strengths are involved in learning and memory retention. Further, this homeostatic process is thought to be crucial for neuronal stability, playing a stabilizing role in otherwise runaway Hebbian plasticity [1-3]. Synaptic scaling requires a mechanism to sense total neuron activity and globally adjust synapses to achieve some activity set-point [4]. This process is relatively slow, which places limits on its ability to stabilize network activity [5]. Here we show that this slow response is inevitable in realistic neuronal morphologies. Furthermore, we reveal that global scaling can in fact be a source of instability unless responsiveness or scaling accuracy are sacrificed.
A neuron with tens of thousands of synapses must regulate its own excitability to compensate for changes in input. The time requirement for global feedback can introduce critical phase lags in a neuron’s response to perturbation. The severity of phase lag increases with neuron size. Further, a more expansive morphology worsens cell responsiveness and scaling accuracy, especially in distal regions of the neuron. Local pools of reserve receptors improve efficiency, potentiation, and scaling, but this comes at a cost: trafficking large quantities of receptors requires time, exacerbating the phase lag and instability. Local homeostatic feedback mitigates instability, but this too comes at the cost of reduced scaling accuracy.
Realization of the phase-lag instability requires a unified model of synaptic scaling, regulation, and transport. We present such a model with global and local feedback in realistic neuron morphologies (Fig. 1). This combined model shows that neurons face a tradeoff between stability, accuracy, and efficiency.
Global feedback is required for synaptic scaling but favors either system stability or efficiency. Large receptor pools improve scaling accuracy in large morphologies but worsen both stability and efficiency. Local feedback improves the stability-efficiency tradeoff at the cost of scaling accuracy. This project introduces unexplored constraints on neuron size, morphology, and synaptic scaling that are weakened by an interplay between global and local feedback.
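The phase-lag instability described above can be illustrated with a toy integral controller, not the poster's model: activity is sensed with a delay (standing in for trafficking and sensing time across a large morphology), and a global scaling factor is nudged toward a set-point based on that stale measurement. All parameter values are assumptions chosen for illustration.

```python
def scaling_error(delay, gain=0.3, drive=2.0, target=1.0, steps=400):
    """Delayed global homeostatic feedback: activity = scale * drive,
    and the scale integrates the set-point error measured `delay` steps
    in the past (the trafficking/sensing lag). Returns the worst deviation
    from the set-point over the final 50 steps."""
    scale = 1.0
    history = [scale * drive] * (delay + 1)
    for _ in range(steps):
        history.append(scale * drive)
        sensed = history[-(delay + 1)]     # stale activity measurement
        scale = max(scale + gain * (target - sensed) / drive, 0.0)
    return max(abs(a - target) for a in history[-50:])

# Instantaneous sensing: activity settles onto the set-point.
print(scaling_error(delay=0))    # effectively zero
# The same gain with a long lag: sustained oscillation, not regulation.
print(scaling_error(delay=10))   # large residual error
```

With the lag, the controller keeps correcting for activity the neuron no longer has, overshooting in both directions. Weakening the gain restores stability but slows the response, which is the stability-efficiency tradeoff the abstract describes.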

SeminarNeuroscienceRecording

Is Rule Learning Like Analogy?

Stella Christie
Tsinghua University
Jul 16, 2020

Humans’ ability to perceive and abstract relational structure is fundamental to our learning. It allows us to acquire knowledge ranging from linguistic grammar to spatial knowledge to social structures. How does a learner begin to perceive structure in the world? Why do we sometimes fail to see structural commonalities across events? To begin to answer these questions, I attempt to bridge two large, yet somewhat separate, research traditions in the study of human structural abstraction: rule learning (Marcus et al., 1999) and analogical learning (Gentner, 1989). On the one hand, rule-learning research has shown the domain-general ease with which humans, as early as 7 months of age, abstract structure from limited experience. On the other hand, work on analogical learning has shown robust constraints on structural abstraction: young learners prefer object similarity over relational similarity. To understand this seeming paradox between ease and difficulty, we conducted a series of studies using the classic rule-learning paradigm (Marcus et al., 1999) but with an analogical (object vs. relation) twist. Adults were presented for two minutes with sentences or events (syllables or shapes) containing a rule. At test, they had to choose between rule abstractions and object matches: the same syllable or shape they had seen before. Surprisingly, while adults were perfectly capable of abstracting the rule in the absence of object matches, their ability to do so declined sharply when object matches were present. Our initial results suggest that rule learning may be subject to the usual constraints and signatures of analogical learning: a preference for object similarity can dampen rule generalization. Human abstraction, in other words, is concrete at the same time.

SeminarNeuroscienceRecording

Spanning the arc between optimality theories and data

Gasper Tkacik
Institute of Science and Technology Austria
Jun 2, 2020

Ideas about optimization are at the core of how we approach biological complexity. Quantitative predictions about biological systems have been successfully derived from first principles in the context of efficient coding, metabolic and transport networks, evolution, reinforcement learning, and decision making, by postulating that a system has evolved to optimize some utility function under biophysical constraints. Yet as normative theories become increasingly high-dimensional and optimal solutions stop being unique, it becomes progressively harder to judge whether theoretical predictions are consistent with, or "close to", data. I will illustrate these issues using efficient coding applied to simple neuronal models as well as to a complex and realistic biochemical reaction network. As a solution, we developed a statistical framework which smoothly interpolates between ab initio optimality predictions and Bayesian parameter inference from data, while also permitting statistically rigorous tests of optimality hypotheses.
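One way to picture such an interpolation: weight each candidate parameter value by exp(β · utility) times the data likelihood, so that β = 0 recovers pure Bayesian inference and large β concentrates on the ab initio optimum. The exponential-weighting form below is an assumption for illustration, not the framework from the talk.

```python
import math

def interpolated_posterior(thetas, log_lik, utility, beta):
    """Grid sketch of a prior-from-optimality scheme: each parameter value
    is weighted by exp(beta * utility(theta)) times the data likelihood.
    Returns a normalized posterior over the grid `thetas`."""
    logw = [beta * utility(t) + log_lik(t) for t in thetas]
    m = max(logw)                          # shift for numerical stability
    w = [math.exp(l - m) for l in logw]
    z = sum(w)
    return [v / z for v in w]

# Toy example (assumed shapes): the data favor theta = 0.5,
# the optimality theory favors theta = 0.8.
thetas = [i / 100 for i in range(101)]
log_lik = lambda t: -20 * (t - 0.5) ** 2
utility = lambda t: -(t - 0.8) ** 2

p_data = interpolated_posterior(thetas, log_lik, utility, beta=0.0)
p_theory = interpolated_posterior(thetas, log_lik, utility, beta=1000.0)
print(thetas[p_data.index(max(p_data))])      # 0.5: inference alone
print(thetas[p_theory.index(max(p_theory))])  # near 0.8: optimality dominates
```

Sweeping β between these extremes traces out the interpolation, and comparing how well intermediate values explain the data gives one crude way to test whether the optimality hypothesis earns its keep.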

SeminarNeuroscienceRecording

Computational Models of Large-Scale Brain Networks - Dynamics & Function

Jorge Mejias
University of Amsterdam
Apr 22, 2020

Theoretical and computational models of neural systems have traditionally focused on small neural circuits, given the lack of reliable data on large-scale brain structures. The situation has begun to change in recent years, with novel recording technologies and large organized efforts to describe the brain at larger scales. In this talk, Professor Mejias from the University of Amsterdam will review his recent work on developing anatomically constrained computational models of large-scale cortical networks of monkeys, and how this approach can help to answer important questions in large-scale neuroscience. He will focus on three main aspects: (i) the emergence of functional interactions in different frequency regimes, (ii) the role of balance for efficient large-scale communication, and (iii) new paradigms of brain function, such as working memory, in large-scale networks.

ePosterNeuroscience

Dynamics of specialization in neural modules under resource constraints

Gabriel Béna, Dan Goodman

Bernstein Conference 2024

ePosterNeuroscience

Sequence learning under biophysical constraints: a re-evaluation of prominent models

Barna Zajzon, Younes Bouhadjar, Tom Tetzlaff, Renato Duarte, Abigail Morrison

Bernstein Conference 2024

ePosterNeuroscience

How coding constraints affect the shape of neural manifolds

Allan Mancoo, Christian Machens

COSYNE 2022

ePosterNeuroscience

Optimal search strategies under energetic constraints

Yipei Guo, Ann Hermundstad

COSYNE 2022

ePosterNeuroscience

Beyond task-optimized neural models: constraints from eye movements during navigation

Akis Stavropoulos, Kaushik Lakshminarasimhan, Dora Angelaki

COSYNE 2023

ePosterNeuroscience

Human-like capacity limits in working memory models result from naturalistic sensory constraints

Yudi Xie, Yu Duan, Aohua Cheng, Pengcen Jiang, Christopher Cueva, Guangyu Robert Yang

COSYNE 2023

ePosterNeuroscience

Modelling ecological constraints on visual processing with deep reinforcement learning

Sacha Sokoloski, Jure Majnik, Thomas Euler, Philipp Berens

COSYNE 2023

ePosterNeuroscience

Unveiling the cognitive computation using multi-area RNN with biological constraints

Kai Chen, Songting Li, Douglas Zhou, Yuxiu Shao

COSYNE 2025

ePosterNeuroscience

Conserved specific nuclear constraints in radial glia cells

José Pablo Soriano-Esqué, Carlos Borau, Jesús Asín, José Manuel García-Aznar, Soledad Alcántara

FENS Forum 2024

ePosterNeuroscience

Metabolic neural constraints provide resilience to noise in feed-forward networks

Ivan Bulygin, Chaitanya Chintaluri, Tim P. Vogels

FENS Forum 2024

ePosterNeuroscience

Postural constraints affect the optimal weighting of multisensory integration during visuo-manual coordination

Célie Dézé, Clémence Daleux, Mathieu Beraneck, Joseph McIntyre, Michele Tagliabue

FENS Forum 2024

ePosterNeuroscience

Thermal constraints on cognition in a poikilothermic brain

Felix Baier, Gilles Laurent

FENS Forum 2024
