Feedforward

Topic spotlight
Topic · World Wide

feedforward

Discover seminars, jobs, and research tagged with feedforward across World Wide.
42 curated items · 29 Seminars · 13 ePosters
Updated 10 months ago
42 results
Seminar · Neuroscience · Recording

Training Dynamic Spiking Neural Network via Forward Propagation Through Time

B. Yin
CWI
Nov 9, 2022

With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance competitive with standard recurrent neural networks. Still, these learning algorithms are limited to small networks of simple spiking neurons and modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models, and are incompatible with online learning. Taking inspiration from the concept of Liquid Time-Constants (LTCs), we introduce a novel class of spiking neurons, the Liquid Time-Constant Spiking Neuron (LTC-SN), whose functionality is similar to the gating operation in LSTMs. We integrate these neurons into SNNs that are trained with FPTT and demonstrate that LTC-SNNs trained in this way outperform various SNNs trained with BPTT on long sequences, while enabling online learning and drastically reducing memory complexity. We show this for several classical benchmarks whose sequence length can easily be varied, like the Add Task and the DVS-Gesture benchmark. We also show how FPTT-trained LTC-SNNs can be applied to large convolutional SNNs, where we demonstrate a new state of the art for online learning in SNNs on a number of standard benchmarks (S-MNIST, R-MNIST, DVS-GESTURE), and we show that large feedforward SNNs can be trained successfully in an online manner to performance near (Fashion-MNIST, DVS-CIFAR10) or exceeding (PS-MNIST, R-MNIST) the state of the art obtained with offline BPTT. Finally, the training and memory efficiency of FPTT enables us to directly train SNNs in an end-to-end manner at network sizes and complexities that were previously infeasible: we demonstrate this by training, end-to-end, the first deep and performant spiking neural network for object localization and recognition. Taken together, our contributions enable, for the first time, the training of large-scale, complex spiking neural network architectures online and on long temporal sequences.
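
The abstract above describes spiking neurons whose time constant is itself input-dependent, gating integration much like an LSTM gate. As a rough illustration of that idea only (not the authors' equations; FPTT training is not shown, and all names and constants below are assumptions), a leaky integrate-and-fire update with an input-dependent time constant could look like this:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ltc_sn_step(v, x, w_in, w_tau, b_tau, dt=1.0, v_th=1.0):
    # Hypothetical liquid-time-constant spiking neuron update (illustrative sketch).
    tau = 1.0 + 10.0 * sigmoid(w_tau @ x + b_tau)   # input-dependent ("liquid") time constant
    alpha = np.exp(-dt / tau)                       # leak factor derived from tau
    v = alpha * v + (1.0 - alpha) * (w_in @ x)      # gated leaky integration of the input current
    spike = (v >= v_th).astype(float)               # emit a spike on threshold crossing
    v = v * (1.0 - spike)                           # reset membrane potential after a spike
    return v, spike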

Seminar · Neuroscience

Restructuring cortical feedback circuits

Andreas Keller
Institute of Molecular and Clinical Ophthalmology, Basel
Nov 2, 2022

We hardly notice when there is a speck on our glasses; the obstructed visual information seems to be magically filled in. The mechanistic basis for this fundamental perceptual phenomenon has, however, remained obscure. What enables neurons in the visual system to respond to context when the stimulus is not available? While feedforward information drives the activity in cortex, feedback information is thought to provide contextual signals that are merely modulatory. We have made the discovery that mouse primary visual cortical neurons are strongly driven by feedback projections from higher visual areas when their feedforward sensory input from the retina is missing. This drive is so strong that it makes visual cortical neurons fire as much as if they were receiving a direct sensory input. These signals are likely used to predict input from the feedforward pathway. Preliminary results show that these feedback projections are strongly influenced by experience and learning.

Seminar · Neuroscience

From Computation to Large-scale Neural Circuitry in Human Belief Updating

Tobias Donner
University Medical Center Hamburg-Eppendorf
Jun 28, 2022

Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state, and to choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., without loss) integration of sensory information along purely feedforward sensory-motor pathways. Yet, natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a tradeoff between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG), across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency-band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation. Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
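
For readers unfamiliar with non-linear accumulation under change-points, the sketch below shows one standard normative form in which a hazard rate discounts old evidence before each new sample is added (a generic illustration; the talk's exact model and parameters are not specified here):

import numpy as np

def update_belief(L_prev, llr_obs, hazard):
    # Non-linear update of the log-posterior odds L for a changing environment.
    # As hazard -> 0 this approaches perfect linear accumulation; larger hazard
    # rates discount old evidence, trading stability for flexibility after change-points.
    k = (1.0 - hazard) / hazard
    prior = L_prev + np.log(k + np.exp(-L_prev)) - np.log(k + np.exp(L_prev))
    return prior + llr_obs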

Seminar · Neuroscience

Feedforward and feedback processes in visual recognition

Thomas Serre
Brown University
Jun 21, 2022

Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.

Seminar · Neuroscience

Feedback controls what we see

Andreas Keller
Institute of Molecular and Clinical Ophthalmology Basel
May 29, 2022

We hardly notice when there is a speck on our glasses; the obstructed visual information seems to be magically filled in. The visual system uses visual context to predict the content of the stimulus. What enables neurons in the visual system to respond to context when the stimulus is not available? In cortex, sensory processing is based on a combination of feedforward information arriving from sensory organs, and feedback information that originates in higher-order areas. Whereas feedforward information drives the activity in cortex, feedback information is thought to provide contextual signals that are merely modulatory. We have made the exciting discovery that mouse primary visual cortical neurons are strongly driven by feedback projections from higher visual areas, in particular when their feedforward sensory input from the retina is missing. This drive is so strong that it makes visual cortical neurons fire as much as if they were receiving a direct sensory input.

Seminar · Neuroscience · Recording

Meta-learning synaptic plasticity and memory addressing for continual familiarity detection

Danil Tyulmankov
Columbia University
May 17, 2022

Over the course of a lifetime, we process a continual stream of information. Extracted from this stream, memories must be efficiently encoded and stored in an addressable manner for retrieval. To explore potential mechanisms, we consider a familiarity detection task where a subject reports whether an image has been previously encountered. We design a feedforward network endowed with synaptic plasticity and an addressing matrix, meta-learned to optimize familiarity detection over long intervals. We find that anti-Hebbian plasticity leads to better performance than Hebbian and replicates experimental results such as repetition suppression. A combinatorial addressing function emerges, selecting a unique neuron as an index into the synaptic memory matrix for storage or retrieval. Unlike previous models, this network operates continuously, and generalizes to intervals it has not been trained on. Our work suggests a biologically plausible mechanism for continual learning, and demonstrates an effective application of machine learning for neuroscience discovery.
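
As a toy illustration of the anti-Hebbian mechanism described above (not the meta-learned model itself; the readout threshold and learning rate are arbitrary assumptions), repeated inputs come to evoke weaker responses, so low response energy can serve as a familiarity signal:

import numpy as np

def familiarity_step(W, x, eta=0.1, threshold=0.5):
    h = W @ x                               # feedforward response of the plastic memory layer
    familiar = np.mean(h ** 2) < threshold  # repetition suppression used as the familiarity readout
    W = W - eta * np.outer(h, x)            # anti-Hebbian update: weaken weights between co-active pairs
    return W, familiar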

Seminar · Neuroscience · Recording

Population coding in the cerebellum: a machine learning perspective

Reza Shadmehr
Johns Hopkins School of Medicine
Apr 5, 2022

The cerebellum resembles a feedforward, three-layer network of neurons in which the “hidden layer” consists of Purkinje cells (P-cells) and the output layer consists of deep cerebellar nucleus (DCN) neurons. In this analogy, the output of each DCN neuron is a prediction that is compared with the actual observation, resulting in an error signal that originates in the inferior olive. Efficient learning requires that the error signal reach the DCN neurons, as well as the P-cells that project onto them. However, this basic rule of learning is violated in the cerebellum: the olivary projections to the DCN are weak, particularly in adulthood. Instead, an extraordinarily strong signal is sent from the olive to the P-cells, producing complex spikes. Curiously, P-cells are grouped into small populations that converge onto single DCN neurons. Why are the P-cells organized in this way, and what is the membership criterion of each population? Here, I apply elementary mathematics from machine learning and consider the fact that P-cells that form a population exhibit a special property: they can synchronize their complex spikes, which in turn suppresses the activity of the DCN neuron they project to. Thus, complex spikes not only act as a teaching signal for a P-cell; through complex-spike synchrony, a P-cell population may also act as a surrogate teacher for the DCN neuron that produced the erroneous output. It appears that grouping of P-cells into small populations that share a preference for error satisfies a critical requirement of efficient learning: providing error information to the output layer neuron (DCN) that was responsible for the error, as well as the hidden layer neurons (P-cells) that contributed to it. This population coding may account for several remarkable features of behavior during learning, including multiple timescales, protection from erasure, and spontaneous recovery of memory.
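
The learning requirement described above, that error information must reach both the output layer and the hidden units feeding it, can be read off the gradients of an ordinary two-layer network. The code below is purely illustrative (a generic linear network, not a cerebellar model):

import numpy as np

def two_layer_gradients(x, y_target, W_hid, w_out):
    h = W_hid @ x                         # hidden-layer ("P-cell") activity
    y = w_out @ h                         # output ("DCN") prediction
    err = y - y_target                    # error arising at the output comparison
    grad_out = err * h                    # the output weights need the error signal...
    grad_hid = np.outer(err * w_out, x)   # ...and so do the hidden weights, via the output weights
    return grad_out, grad_hid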

Seminar · Neuroscience · Recording

Invariant neural subspaces maintained by feedback modulation

Henning Sprekeler
TU Berlin
Feb 17, 2022

Sensory systems reliably process incoming stimuli in spite of changes in context. Most recent models attribute this context invariance to an extraction of increasingly complex sensory features in hierarchical feedforward networks. Here, we study how context-invariant representations can be established by feedback rather than feedforward processing. We show that feedforward neural networks modulated by feedback can dynamically generate invariant sensory representations. The required feedback can be implemented as a slow and spatially diffuse gain modulation. The invariance is not present on the level of individual neurons, but emerges only on the population level. Mechanistically, the feedback modulation dynamically reorients the manifold of neural activity and thereby maintains an invariant neural subspace in spite of contextual variations. Our results highlight the importance of population-level analyses for understanding the role of feedback in flexible sensory processing.
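
To make the form of the proposed mechanism concrete, the sketch below shows a feedforward response multiplied by a slow, spatially smooth gain field; how that gain is set so that an invariant subspace emerges at the population level is the subject of the talk, and everything here is an illustrative assumption:

import numpy as np

def feedback_modulated_layer(W, x, gain):
    # Population response under multiplicative feedback gain modulation of the feedforward drive.
    return gain * (W @ x)

# Example of a slow, spatially diffuse gain profile across 100 neurons.
neurons = np.arange(100)
gain = 1.0 + 0.3 * np.sin(2 * np.pi * neurons / 100.0)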

Seminar · Neuroscience · Recording

Wiring Minimization of Deep Neural Networks Reveals Conditions in which Multiple Visuotopic Areas Emerge

Dina Obeid
Harvard University
Dec 14, 2021

The visual system is characterized by multiple mirrored visuotopic maps, with each repetition corresponding to a different visual area. In this work we explore whether such visuotopic organization can emerge as a result of minimizing the total wire length between neurons connected in a deep hierarchical network. Our results show that networks with purely feedforward connectivity typically result in a single visuotopic map, and in certain cases no visuotopic map emerges. However, when we modify the network by introducing lateral connections, with sufficient lateral connectivity among neurons within layers, multiple visuotopic maps emerge, where some connectivity motifs yield mirrored alternations of visuotopic maps, a signature of biological visual system areas. These results demonstrate that different connectivity profiles have different emergent organizations under the minimum total wire length hypothesis, and highlight that characterizing the large-scale spatial organization of tuning properties in a biological system might also provide insights into the underlying connectivity.
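
The optimization target in this line of work is the summed length of all wires between connected neurons, minimized over the neurons' spatial positions. The sketch below states that objective and a simple gradient step (the optimizer, the 2D layout, and the constants are assumptions, not details from the talk):

import numpy as np

def total_wire_length(positions, connections):
    # positions: (N, 2) array of neuron coordinates; connections: list of (i, j) index pairs.
    return sum(np.linalg.norm(positions[i] - positions[j]) for i, j in connections)

def wire_length_step(positions, connections, lr=0.01):
    grad = np.zeros_like(positions)
    for i, j in connections:
        d = positions[i] - positions[j]
        norm = np.linalg.norm(d) + 1e-9    # avoid division by zero for coincident neurons
        grad[i] += d / norm                # gradient of the distance pulls connected neurons together
        grad[j] -= d / norm
    return positions - lr * grad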

Seminar · Neuroscience · Recording

NMC4 Short Talk: The complete connectome of an insect brain

Michael Winding (he/him)
University of Cambridge
Dec 1, 2021

Brains must integrate complex sensory information and compare it to past events to generate appropriate behavioral responses. The neural circuit basis of these computations is unclear and the underlying structure unknown. Here, we mapped the comprehensive synaptic wiring diagram of the fruit fly larva brain, which contains 3,013 neurons and 544K synaptic sites. It is the most complete insect connectome to date: 1) Both brain hemispheres are reconstructed, allowing investigation of neural pathways that include contralateral axons, which we found in 37% of brain neurons. 2) All sensory neurons and descending neurons are reconstructed, allowing one to follow signals in an uninterrupted chain—from the sensory periphery, through the brain, to motor neurons in the nerve cord. We developed novel computational tools, allowing us to cluster the brain and investigate how information flows through it. We discovered that feedforward pathways from sensory to descending neurons are multilayered and highly multimodal. Robust feedback was observed at almost all levels of the brain, including descending neurons. We investigated how the brain hemispheres communicate with each other and the nerve cord, leading to the identification of novel circuit motifs. This work provides the complete blueprint of a brain and a strong foundation to study the structure-function relationship of neural circuits.

Seminar · Neuroscience · Recording

Refuting the unfolding-argument on the irrelevance of causal structure to consciousness

Marius Usher
Tel-Aviv University
Nov 30, 2021

I will build on Niccolo's discussion of the Blockhead argument to argue that having a feedforward network (FFN) respond like a recurrent network (RN) in a consciousness experiment is not enough to convince us that the two are the same with regard to the possession of mental states and conscious experience. I will then argue that a robust functional equivalence between FFNs and RNs is also not supported by the mathematical work on the Universal Approximation theorem, and is unlikely to hold, as a conjecture, given data in cognitive neuroscience; I will argue that an equivalence of RNs and FFNs may only apply to static functions between input/output layers and not to the temporal patterns or to the network's reactions to structural perturbations. Finally, I review data indicating that consciousness has functional characteristics, such as flexible control of behavior, and that cognitive/brain dynamics reveal interacting top-down and bottom-up processes, which are necessary for the mediation of such control processes.

Seminar · Neuroscience · Recording

Edge Computing using Spiking Neural Networks

Shirin Dora
Loughborough University
Nov 4, 2021

Deep learning has made tremendous progress in the last year, but its high computational and memory requirements impose challenges for using deep learning on edge devices. There has been some progress in lowering the memory requirements of deep neural networks (for instance, the use of half-precision), but there has been minimal effort in developing alternative, efficient computational paradigms. Inspired by the brain, Spiking Neural Networks (SNNs) provide an energy-efficient alternative to conventional rate-based neural networks. However, SNN architectures that employ the traditional feedforward and feedback passes do not fully exploit the asynchronous, event-based processing paradigm of SNNs. In the first part of my talk, I will present my work on predictive coding, which offers a fundamentally different approach to developing neural networks that are particularly suitable for event-based processing. In the second part of my talk, I will present our work on the development of approaches for SNNs that target specific problems like low response latency and continual learning.

References:
Dora, S., Bohte, S. M., & Pennartz, C. (2021). Deep Gated Hebbian Predictive Coding Accounts for Emergence of Complex Neural Response Properties Along the Visual Cortical Hierarchy. Frontiers in Computational Neuroscience, 65.
Saranirad, V., McGinnity, T. M., Dora, S., & Coyle, D. (2021, July). DoB-SNN: A New Neuron Assembly-Inspired Spiking Neural Network for Pattern Classification. In 2021 International Joint Conference on Neural Networks (IJCNN) (pp. 1-6). IEEE.
Machingal, P., Thousif, M., Dora, S., Sundaram, S., & Meng, Q. (2021). A Cross Entropy Loss for Spiking Neural Networks. Expert Systems with Applications (under review).

Seminar · Neuroscience · Recording

Disinhibitory and neuromodulatory regulation of hippocampal synaptic plasticity

Inês Guerreiro
Gutkin lab, Ecole Normale Superieure
Jul 27, 2021

The CA1 pyramidal neurons are embedded in an intricate local circuitry that contains a variety of interneurons. The roles these interneurons play in the regulation of excitatory synaptic plasticity remain largely understudied. Recent experiments showed that repeated cholinergic activation of 𝛼7 nACh receptors expressed in oriens-lacunosum-moleculare (OLM𝛼2) interneurons could induce LTP in SC-CA1 synapses. We used a biophysically realistic computational model to examine mechanistically how cholinergic activation of OLM𝛼2 interneurons increases SC to CA1 transmission. Our results suggest that, when properly timed, activation of OLM𝛼2 interneurons cancels the feedforward inhibition onto CA1 pyramidal cells by inhibiting fast-spiking interneurons that synapse on the same dendritic compartment as the SC, i.e., by disinhibiting the pyramidal cell dendritic compartment. Our work further describes the pairing of disinhibition with SC stimulation as a general mechanism for the induction of synaptic plasticity. We found that locally-reduced GABA release (disinhibition) paired with SC stimulation could lead to increased NMDAR activation and intracellular calcium concentration sufficient to upregulate AMPAR permeability and potentiate the excitatory synapse. Our work suggests that inhibitory synapses critically modulate excitatory neurotransmission and induction of plasticity at excitatory synapses. Our work also shows how cholinergic action on OLM interneurons, a mechanism whose disruption is associated with memory impairment, can down-regulate the GABAergic signaling into CA1 pyramidal cells and facilitate potentiation of the SC-CA1 synapse.

Seminar · Neuroscience

The Challenge and Opportunities of Mapping Cortical Layer Activity and Connectivity with fMRI

Peter Bandettini
NIMH
Jul 8, 2021

In this talk I outline the technical challenges and current solutions to layer fMRI. Specifically, I describe our acquisition strategies for maximizing resolution, spatial coverage, and time efficiency, as well as, perhaps most importantly, vascular specificity. Novel applications from our group are shown, including mapping feedforward and feedback connections to M1 during task and sensory input modulation and to S1 during a sensory prediction task. Layer-specific activity in dorsolateral prefrontal cortex during a working memory task is also demonstrated. Additionally, I’ll show preliminary work on mapping whole-brain layer-specific resting state connectivity and hierarchy.

Seminar · Neuroscience

Towards a neurally mechanistic understanding of visual cognition

Kohitij Kar
Massachusetts Institute of Technology
Jun 13, 2021

I am interested in developing a neurally mechanistic understanding of how primate brains represent the world through their visual systems and how such representations enable a remarkable set of intelligent behaviors. In this talk, I will primarily highlight aspects of my current research, which focuses on dissecting the brain circuits that support core object recognition behavior (primates’ ability to categorize objects within hundreds of milliseconds) in non-human primates. On the one hand, my work empirically examines how well computational models of the primate ventral visual pathways embed knowledge of the visual brain function (e.g., Bashivan*, Kar*, DiCarlo, Science, 2019). On the other hand, my work has led to various functional and architectural insights that help improve such brain models. For instance, we have exposed the necessity of recurrent computations in primate core object recognition (Kar et al., Nature Neuroscience, 2019), one that is strikingly missing from most feedforward artificial neural network models. Specifically, we have observed that the primate ventral stream requires fast recurrent processing via ventrolateral PFC for robust core object recognition (Kar and DiCarlo, Neuron, 2021). In addition, I am currently developing various chemogenetic strategies to causally target specific bidirectional neural circuits in the macaque brain during multiple object recognition tasks to further probe their relevance during this behavior. I plan to transform these data and insights into tangible progress in neuroscience via my collaboration with various computational groups and building improved brain models of object recognition. I hope to end the talk with a brief glimpse of some of my planned future work!

Seminar · Neuroscience · Recording

Visual processing of feedforward and feedback signals in mouse thalamus

Laura Busse
LMU Munich
Jun 6, 2021

Traditionally, the dorsolateral geniculate nucleus (dLGN) of the thalamus has been considered a feedforward relay station for retinal signals to reach primary visual cortex. The local and long-range circuits of dLGN, however, suggest that this view is not correct. Indeed, besides the thalamo-cortical relay cells, dLGN contains local inhibitory interneurons, and receives not only feedforward input from the retina, but also massive direct and indirect feedback from primary visual cortex. Furthermore, it is one of the earliest processing stages in the visual system that integrates visual information with neuromodulatory signals.

Seminar · Neuroscience · Recording

A fresh look at the bird retina

Karin Dedek
University of Oldenburg
May 30, 2021

I am working on the vertebrate retina, with a main focus on the mouse and bird retina. Currently my work is focused on three major topics: (1) functional and molecular analysis of electrical synapses in the retina, (2) circuitry and functional role of retinal interneurons (horizontal cells), and (3) circuitry for light-dependent magnetoreception in the bird retina.

Electrical synapses: Electrical synapses (gap junctions) permit fast transmission of electrical signals and passage of metabolites by means of channels, which directly connect the cytoplasm of adjoining cells. A functional gap junction channel consists of two hemichannels (one provided by each of the cells), each comprised of a set of six protein subunits, termed connexins. These building blocks exist in a variety of different subtypes, and the connexin composition determines permeability and gating properties of a gap junction channel, thereby enabling electrical synapses to meet a diversity of physiological requirements. In the retina, various connexins are expressed in different cell types. We study the cellular distribution of different connexins as well as the modulation induced by transmitter action or change of ambient light levels, which leads to altered electrical coupling properties. We are also interested in exploiting them as a therapeutic avenue for retinal degeneration diseases.

Horizontal cells: Horizontal cells receive excitatory input from photoreceptors and provide feedback inhibition to photoreceptors and feedforward inhibition to bipolar cells. Because of strong electrical coupling, horizontal cells integrate the photoreceptor input over a wide area and are thought to contribute to the antagonistic organization of bipolar cell and ganglion cell receptive fields and to tune the photoreceptor–bipolar cell synapse with respect to the ambient light conditions. However, the extent to which this influence shapes retinal output is unclear, and we aim to elucidate the functional importance of horizontal cells for retinal signal processing by studying various transgenic mouse models.

Retinal circuitry for light-dependent magnetoreception in the bird: We are studying which neuronal cell types and pathways in the bird retina are involved in the processing of magnetic signals. Likely, magnetic information is detected in cryptochrome-expressing photoreceptors and leaves the retina through ganglion cell axons that project via the thalamofugal pathway to Cluster N, a part of the visual wulst essential for the avian magnetic compass. Thus, we aim to elucidate the synaptic connections and retinal signaling pathways from putatively magnetosensitive photoreceptors to thalamus-projecting ganglion cells in migratory birds using neuroanatomical and electrophysiological techniques.

Seminar · Neuroscience · Recording

Inhibitory neural circuit mechanisms underlying neural coding of sensory information in the neocortex

Jeehyun Kwag
Korea University
Jan 28, 2021

Neural codes, such as temporal codes (precisely timed spikes) and rate codes (instantaneous spike firing rates), are believed to be used in encoding sensory information into spike trains of cortical neurons. Temporal and rate codes co-exist in the spike train and such multiplexed neural code-carrying spike trains have been shown to be spatially synchronized in multiple neurons across different cortical layers during sensory information processing. Inhibition is suggested to promote such synchronization, but it is unclear whether distinct subtypes of interneurons make different contributions in the synchronization of multiplexed neural codes. To test this, in vivo single-unit recordings from barrel cortex were combined with optogenetic manipulations to determine the contributions of parvalbumin (PV)- and somatostatin (SST)-positive interneurons to synchronization of precisely timed spike sequences. We found that PV interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are low (<12 Hz), whereas SST interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are high (>12 Hz). Furthermore, using a computational model, we demonstrate that these effects can be explained by PV and SST interneurons having preferential contribution to feedforward and feedback inhibition, respectively. Overall, these results show that PV and SST interneurons have distinct frequency (rate code)-selective roles in dynamically gating the synchronization of spike times (temporal code) through preferentially recruiting feedforward and feedback inhibitory circuit motifs. The inhibitory neural circuit mechanisms we uncovered here may have critical roles in regulating neural code-based somatosensory information processing in the neocortex.

Seminar · Neuroscience

Top-down Modulation in Human Visual Cortex

Mohamed Abdelhack
Washington University in St. Louis
Dec 16, 2020

Human vision flaunts a remarkable ability to recognize objects in the surrounding environment even in the absence of a complete visual representation of these objects. This process is done almost intuitively, and it was not until scientists had to tackle this problem in computer vision that they noticed its complexity. While current advances in artificial vision systems have made great strides, exceeding human level in normal vision tasks, they have yet to achieve a similar level of robustness. One source of this robustness is the brain's extensive connectivity, which is not limited to a feedforward hierarchical pathway similar to current state-of-the-art deep convolutional neural networks but also comprises recurrent and top-down connections. These connections allow the human brain to enhance the neural representations of degraded images in concordance with meaningful representations stored in memory. The mechanisms by which these different pathways interact are still not understood. In this seminar, studies concerning the effect of recurrent and top-down modulation on the neural representations resulting from viewing blurred images will be presented. Those studies attempted to uncover the role of recurrent and top-down connections in human vision. The results presented challenge the notion of predictive coding as a mechanism for top-down modulation of visual information during natural vision. They show that neural representation enhancement (sharpening) appears to be a more dominant process at different levels of the visual hierarchy. They also show that inference in visual recognition is achieved through a Bayesian process between incoming visual information and priors from deeper processing regions in the brain.

Seminar · Neuroscience

Feedforward and feedback computations in the olfactory bulb and olfactory cortex: computational model and experimental data

Zhaoping Li
Max Planck Institute for Biological Cybernetics, Tübingen, Germany
Dec 6, 2020

Seminar · Neuroscience

Crowding and the Architecture of the Visual System

Adrien Doerig
Laboratory of Psychophysics, BMI, EPFL
Dec 1, 2020

Classically, vision is seen as a cascade of local, feedforward computations. This framework has been tremendously successful, inspiring a wide range of ground-breaking findings in neuroscience and computer vision. Recently, feedforward Convolutional Neural Networks (ffCNNs), inspired by this classic framework, have revolutionized computer vision and been adopted as tools in neuroscience. However, despite these successes, there is much more to vision. I will present our work using visual crowding and related psychophysical effects as probes into visual processes that go beyond the classic framework. In crowding, perception of a target deteriorates in clutter. We focus on global aspects of crowding, in which perception of a small target is strongly modulated by the global configuration of elements across the visual field. We show that models based on the classic framework, including ffCNNs, cannot explain these effects for principled reasons and identify recurrent grouping and segmentation as a key missing ingredient. Then, we show that capsule networks, a recent kind of deep learning architecture combining the power of ffCNNs with recurrent grouping and segmentation, naturally explain these effects. We provide psychophysical evidence that humans indeed use a similar recurrent grouping and segmentation strategy in global crowding effects. In crowding, visual elements interfere across space. To study how elements interfere over time, we use the Sequential Metacontrast psychophysical paradigm, in which perception of visual elements depends on elements presented hundreds of milliseconds later. We psychophysically characterize the temporal structure of this interference and propose a simple computational model. Our results support the idea that perception is a discrete process. Together, the results presented here provide stepping-stones towards a fuller understanding of the visual system by suggesting architectural changes needed for more human-like neural computations.

Seminar · Neuroscience · Recording

Dimensions of variability in circuit models of cortex

Brent Doiron
The University of Chicago
Nov 15, 2020

Cortical circuits receive multiple inputs from upstream populations with non-overlapping stimulus tuning preferences. Both the feedforward and recurrent architectures of the receiving cortical layer will reflect this diverse input tuning. We study how population-wide neuronal variability propagates through a hierarchical cortical network receiving multiple, independent, tuned inputs. We present new analysis of in vivo neural data from the primate visual system showing that the number of latent variables (dimension) needed to describe population shared variability is smaller in V4 populations compared to those of its downstream visual area PFC. We successfully reproduce this dimensionality expansion from our V4 to PFC neural data using a multi-layer spiking network with structured, feedforward projections and recurrent assemblies of multiple, tuned neuron populations. We show that tuning-structured connectivity generates attractor dynamics within the recurrent PFC circuit, where attractor competition is reflected in the high-dimensional shared variability across the population. Indeed, restricting the dimensionality analysis to activity from one attractor state recovers the low-dimensional structure inherited from each of our tuned inputs. Our model thus introduces a framework where high-dimensional cortical variability is understood as “time-sharing” between distinct low-dimensional, tuning-specific circuit dynamics.
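
One common way to quantify the dimension of shared variability from trial-by-neuron spike counts is to count how many principal components of the trial-to-trial covariance are needed to explain most of the variance; the talk's actual analysis may differ (e.g., factor analysis), and the 90% criterion below is an assumption:

import numpy as np

def shared_variability_dimension(counts, var_threshold=0.9):
    # counts: (n_trials, n_neurons) spike counts for repeated presentations of one stimulus.
    centered = counts - counts.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]      # variance along each latent dimension
    cum = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(cum, var_threshold) + 1)   # components needed to reach the threshold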

Seminar · Neuroscience · Recording

The developing visual brain – answers and questions

Janette Atkinson & Oliver Braddick
UCL & Oxford
Oct 26, 2020

We will start our talk with a short video of our research, illustrating methods (some old and new) and findings that have provided our current understanding of how visual capabilities develop in infancy and early childhood. However, our research poses some outstanding questions. We will briefly discuss three issues, which are linked by a common focus on the development of visual attentional processing.

(1) How do recurrent cortical loops contribute to development? Cortical selectivity (e.g., to orientation, motion, and binocular disparity) develops in the early months of life. However, these systems are not purely feedforward but depend on parallel pathways, with recurrent feedback loops playing a critical role. The development of diverse networks, particularly for motion processing, may explain changes in dynamic responses and resolve developmental data obtained with different methodologies. One possible role for these loops is in top-down attentional control of visual processing.

(2) Why do hyperopic infants become strabismic (cross-eyed)? Binocular interaction is a particularly sensitive area of development. Standard clinical accounts suppose that long-sighted (hyperopic) refractive errors require accommodative effort, putting stress on the accommodation-convergence link that leads to its breakdown and strabismus. Our large-scale population screening studies of 9-month-old infants question this: hyperopic infants are at higher risk of strabismus and impaired vision (amblyopia and impaired attention), but these hyperopic infants often under- rather than over-accommodate. This poor accommodation may reflect poor early attention processing, possibly a ‘soft sign’ of subtle cerebral dysfunction.

(3) What do many neurodevelopmental disorders have in common? Despite similar cognitive demands, global motion perception is much more impaired than global static form across diverse neurodevelopmental disorders including Down and Williams Syndromes, Fragile-X, Autism, children with premature birth and infants with perinatal brain injury. These deficits in motion processing are associated with deficits in other dorsal stream functions such as visuo-motor co-ordination and attentional control, a cluster we have called ‘dorsal stream vulnerability’. However, our neuroimaging measures related to motion coherence in typically developing children suggest that the critical areas for individual differences in global motion sensitivity are not early motion-processing areas such as V5/MT, but downstream parietal and frontal areas for decision processes on motion signals. Although these brain networks may also underlie attentional and visuo-motor deficits, we still do not know when and how these deficits differ across different disorders and between individual children.

Answering these questions provides necessary steps, not only in increasing our scientific understanding of human visual brain development, but also in designing appropriate interventions to help each child achieve their full potential.

ePoster

Continual learning using dendritic modulations on view-invariant feedforward weights

Viet Anh Khoa Tran, Emre Neftci, Willem Wybo

Bernstein Conference 2024

ePoster

Non-feedforward architectures enable diverse multisensory computations

Marcus Ghosh, Dan Goodman

Bernstein Conference 2024

ePoster

Response variability can accelerate learning in feedforward-recurrent networks

Sigrid Trägenap, Matthias Kaschube

Bernstein Conference 2024

ePoster

Feedforward thalamocortical inputs to primary visual cortex are OFF dominant

COSYNE 2022

ePoster

Feedforward and feedback computations in V1 and V2 in a hierarchical Variational Autoencoder

COSYNE 2022

ePoster

An interpretable spline-LNP model to characterize feedforward and feedback processing in mouse dLGN

COSYNE 2022

ePoster

Dendritic modulation for multitask representation learning in deep feedforward networks

Willem Wybo, Viet Anh Khoa Tran, Matthias Tsai, Bernd Illing, Jakob Jordan, Walter Senn, Abigail Morrison

COSYNE 2023

ePoster

Functional consequences of highly shared feedforward inhibition in the striatum

Lihao Guo, Pascal Helson, Arvind Kumar

COSYNE 2023

ePoster

Cell-type specific auditory responses in the tail of the striatum shaped by feedforward inhibition

Mélanie Druart, Megha Kori, Corryn Chaimowitz, Catherine Fan, Tanya Sippy

FENS Forum 2024

ePoster

The NKCC1 inhibitor bumetanide restores cortical feedforward inhibition and lessens sensory hypersensitivity in early postnatal Fragile X mice

Nazim Kourdougli, Toshihiro Nomura, Michelle Wu, Anouk Heuvelmans, Zoë Dobler, Anis Contractor, Carlos Portera-Cailliau

FENS Forum 2024