Schedule
Wednesday, April 5, 2023
1:00 AM (America/New_York)
Recording provided by the organiser.
Format
Recorded Seminar
Recording
Available
Host
van Vreeswijk TNS
Duration
70 minutes
Abstract
Computation in neural circuits relies on a common set of motifs, including divergence of common inputs to parallel pathways, convergence of multiple inputs to a single neuron, and nonlinearities that select some signals over others. Convergence and circuit nonlinearities, considered individually, can lead to a loss of information about the inputs. Past work has detailed how to optimize nonlinearities and circuit weights to maximize information, but we show that selective nonlinearities, acting together with divergent and convergent circuit structure, can improve information transmission over a purely linear circuit despite the suboptimality of these components individually. These nonlinearities recode the inputs in a manner that preserves the variance among converged inputs. Our results suggest that neural circuits may be doing better than expected without finely tuned weights.
Gabrielle Gutierrez
Columbia University, New York
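
As a rough illustration of the motif described in the abstract (a toy sketch, not code from the talk): the snippet below assumes two quantized Gaussian inputs and a rectifying ON/OFF nonlinearity, and compares a purely linear convergent circuit against a divergent-then-convergent rectified one by the discrete entropy of their outputs. The architecture, parameters, and helper names (relu, entropy_bits) are illustrative assumptions, not the speaker's model.

import numpy as np

rng = np.random.default_rng(0)

# Two independent inputs, quantized to a 0.1 grid so that discrete
# entropy is well defined. (Toy setup assumed for illustration.)
n = 200_000
x = np.round(rng.standard_normal((n, 2)), 1)

def relu(v):
    return np.maximum(v, 0.0)

# Purely linear circuit: both outputs are (anti)correlated copies of
# the summed input, so the difference between the inputs is discarded.
y_lin = np.stack([x[:, 0] + x[:, 1],
                  -(x[:, 0] + x[:, 1])], axis=1)

# Divergent/convergent nonlinear circuit: each input splits into
# rectified ON and OFF channels, and like channels converge onto the
# two outputs.
y_nl = np.stack([relu(x[:, 0]) + relu(x[:, 1]),
                 relu(-x[:, 0]) + relu(-x[:, 1])], axis=1)

def entropy_bits(y, decimals=1):
    # Plug-in entropy of the quantized output vectors, in bits. Because
    # both circuits are deterministic, this equals the information the
    # outputs carry about the quantized inputs.
    _, counts = np.unique(np.round(y, decimals), axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

print(f"linear circuit:    {entropy_bits(y_lin):.2f} bits")
print(f"nonlinear circuit: {entropy_bits(y_nl):.2f} bits")

The nonlinear circuit scores higher because y_on - y_off recovers the linear sum x1 + x2, while y_on + y_off additionally encodes |x1| + |x2|, a variance-carrying signal that the linear sum discards, in the spirit of the recoding argument above.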
Contact & Resources