Schedule
Wednesday, April 5, 2023
1:00 AM America/New_York
Recording provided by the organiser.
Domain
Neuroscience
Host
van Vreeswijk TNS
Duration
70 minutes
Computation in neural circuits relies on a common set of motifs, including divergence of common inputs to parallel pathways, convergence of multiple inputs to a single neuron, and nonlinearities that select some signals over others. Convergence and circuit nonlinearities, considered individually, can lead to a loss of information about the inputs. Past work has detailed how to optimize nonlinearities and circuit weights to maximize information, but we show that selective nonlinearities, acting together with divergent and convergent circuit structure, can improve information transmission over a purely linear circuit despite the suboptimality of these components individually. These nonlinearities recode the inputs in a manner that preserves the variance among converged inputs. Our results suggest that neural circuits may be doing better than expected without finely tuned weights.
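As a minimal numerical sketch of the variance-preservation point (not code from the talk), the snippet below assumes two anticorrelated Gaussian inputs converging onto one neuron and rectified-linear pathway nonlinearities; all parameters are illustrative:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two anticorrelated input signals that converge onto one neuron.
s1 = rng.normal(size=n)
s2 = -0.9 * s1 + rng.normal(scale=0.4, size=n)

# Purely linear convergence: anticorrelated inputs cancel,
# so the summed output loses most of the inputs' variance.
y_linear = s1 + s2

# Selective (rectifying) nonlinearities applied before convergence:
# each pathway passes only its positive excursions, so the two
# inputs can no longer cancel each other when they are summed.
relu = lambda x: np.maximum(x, 0.0)
y_nonlinear = relu(s1) + relu(s2)

print(f"input variance (each):   {s1.var():.3f}, {s2.var():.3f}")
print(f"linear converged output: {y_linear.var():.3f}")
print(f"rectified-then-summed:   {y_nonlinear.var():.3f}")

Running this, the linear output's variance collapses toward the noise floor while the rectified-then-summed output retains a substantial share of the inputs' variance, illustrating how selective nonlinearities can recode converged inputs so that cancellation does not destroy signal variance.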
Gabrielle Gutierrez
Columbia University, New York
Contact & Resources