
Computational Principles


Discover seminars, jobs, and research tagged with computational principles across World Wide.
13 curated items · 10 Seminars · 2 Positions · 1 ePoster
Position

Caspar Schwiedrzik

Ruhr-University Bochum, German Primate Center
Dec 5, 2025

We are looking for a highly motivated PhD student to study neural mechanisms of high-dimensional visual category learning. The lab generally seeks to understand the cortical basis and computational principles of perception and experience-dependent plasticity in the brain. To this end, we use a multimodal approach including fMRI-guided electrophysiological recordings in rodents and non-human primates, and fMRI and ECoG in humans. The PhD student will play a key role in our research efforts in this area. The lab is located at Ruhr-University Bochum and the German Primate Center. At both locations, the lab is embedded into interdisciplinary research centers with international faculty and students pursuing cutting-edge research in cognitive and computational neuroscience. The PhD student will have access to a new imaging center with a dedicated 3T research scanner, electrophysiology, and behavioral setups. The project will be conducted in close collaboration with the labs of Fabian Sinz, Alexander Gail, and Igor Kagan.

The Department of Cognitive Neurobiology of Caspar Schwiedrzik at Ruhr-University Bochum is looking for an outstanding PhD student interested in studying the neural basis of mental flexibility. The project investigates neural mechanisms of high-dimensional visual category learning, utilizing functional magnetic resonance imaging (fMRI) in combination with computational modelling and behavioral testing in humans. It is funded by an ERC Consolidator Grant (Acronym DimLearn; “Flexible Dimensionality of Representational Spaces in Category Learning”). The PhD student’s project will focus on developing new category learning paradigms to investigate the neural basis of flexible multi-task learning in humans using fMRI. In addition, the PhD student will cooperate with other lab members on parallel computational investigations using artificial neural networks as well as comparative research exploring the same questions in non-human primates.

Seminar · Neuroscience

Decision and Behavior

Sam Gershman, Jonathan Pillow, Kenji Doya
Harvard University; Princeton University; Okinawa Institute of Science and Technology
Nov 28, 2024

This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus‐independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (Sidetrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
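The talk summary gives no equations, but Gershman's policy compression idea can be sketched as a rate-distortion-style trade-off: maximize expected reward while penalizing the policy's complexity I(S;A). Below is a minimal Blahut-Arimoto-style iteration under that objective; the reward matrix, state distribution, and function names are illustrative, not taken from the talk:

```python
import numpy as np

def compress_policy(R, p_s, beta, iters=200):
    """Capacity-limited policy via Blahut-Arimoto-style updates.

    Approximates the pi(a|s) maximizing E[reward] - (1/beta) * I(S;A):
    pi(a|s) is proportional to p(a) * exp(beta * R[s, a]), where p(a)
    is the policy's own action marginal.
    """
    n_s, n_a = R.shape
    pi = np.full((n_s, n_a), 1.0 / n_a)
    for _ in range(iters):
        p_a = p_s @ pi                       # marginal action distribution
        pi = p_a * np.exp(beta * R)          # tilt the marginal toward reward
        pi /= pi.sum(axis=1, keepdims=True)  # renormalize each state's policy
    return pi

R = np.array([[1.0, 0.0],                    # each state has its own best action
              [0.0, 1.0]])
p_s = np.array([0.5, 0.5])
tight = compress_policy(R, p_s, beta=0.1)    # low capacity
loose = compress_policy(R, p_s, beta=10.0)   # high capacity
```

At low beta the policy collapses toward the state-independent action marginal, which is exactly the stimulus-independent "default" actions and perseveration the summary describes; at high beta it becomes state-specific and reward-maximizing.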

Seminar · Neuroscience

From natural scene statistics to multisensory integration: experiments, models and applications

Cesare Parise
Oculus VR
Feb 8, 2022

To efficiently process sensory information, the brain relies on statistical regularities in the input. While generally improving the reliability of sensory estimates, this strategy also induces perceptual illusions that help reveal the underlying computational principles. Focusing on auditory and visual perception, in my talk I will describe how the brain exploits statistical regularities within and across the senses for the perception of space, time, and multisensory integration. In particular, I will show how results from a series of psychophysical experiments can be interpreted in the light of Bayesian Decision Theory, and I will demonstrate how such canonical computations can be implemented in simple and biologically plausible neural circuits. Finally, I will show how such principles of sensory information processing can be leveraged in virtual and augmented reality to overcome display limitations and expand human perception.
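The abstract does not spell out the model, but the canonical Bayesian account of multisensory integration it alludes to is reliability-weighted cue fusion: each cue is weighted by its inverse variance. A minimal sketch for two Gaussian cues (the function name and example numbers are illustrative):

```python
import numpy as np

def integrate_cues(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted (inverse-variance) fusion of two Gaussian cues.

    Returns the maximum-likelihood combined estimate and its variance,
    which is always smaller than either single-cue variance.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # weight of the auditory cue
    mu = w_a * mu_a + (1 - w_a) * mu_v
    var = 1 / (1 / var_a + 1 / var_v)
    return mu, var

# A precise auditory cue dominates an unreliable visual one:
mu, var = integrate_cues(mu_a=0.0, var_a=1.0, mu_v=4.0, var_v=4.0)
# mu = 0.8 * 0 + 0.2 * 4 = 0.8; var = 1 / (1 + 0.25) = 0.8
```

The same weighting predicts classic illusions such as ventriloquism: when vision is the more reliable cue, the fused estimate is captured by the visual location.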

Seminar · Neuroscience · Recording

NMC4 Short Talk: Predictive coding is a consequence of energy efficiency in recurrent neural networks

Abdullahi Ali
Donders Institute for Brain, Cognition and Behaviour
Dec 1, 2021

Predictive coding represents a promising framework for understanding brain function, postulating that the brain continuously inhibits predictable sensory input, ensuring a preferential processing of surprising elements. A central aspect of this view on cortical computation is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modelling to demonstrate that such architectural hard-wiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency, a fundamental requirement of neural processing. When training recurrent neural networks to minimise their energy consumption while operating in predictive environments, the networks self-organise into prediction and error units with appropriate inhibitory and excitatory interconnections and learn to inhibit predictable sensory input. We demonstrate that prediction units can reliably be identified through biases in their median preactivation, pointing towards a fundamental property of prediction units in the predictive coding framework. Moving beyond the view of purely top-down driven predictions, we demonstrate via virtual lesioning experiments that networks perform predictions on two timescales: fast lateral predictions among sensory units and slower prediction cycles that integrate evidence over time. Our results, which replicate across two separate data sets, suggest that predictive coding can be interpreted as a natural consequence of energy efficiency. More generally, they raise the question of which other computational principles of brain function can be understood as a result of physical constraints posed by the brain, opening up a new area of bio-inspired, machine learning-powered neuroscience research.
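As a toy illustration of the abstract's core claim (not the authors' actual model or training setup), consider a single unit whose lateral weight is trained only to minimize preactivation energy on a predictable AR(1) input stream. The weight converges toward the input's autocorrelation, so the unit ends up inhibiting the predictable part of its input and signalling only the surprise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Predictable sensory stream: an AR(1) process x_t = rho * x_{t-1} + noise.
rho = 0.8
x = np.zeros(5000)
for t in range(1, len(x)):
    x[t] = rho * x[t - 1] + 0.1 * rng.standard_normal()

# The unit's preactivation is its input minus a learned lateral
# "prediction" w * x_{t-1}. Gradient descent on the energy a_t**2
# alone drives w toward rho, i.e. toward cancelling whatever is
# predictable from the previous input.
w, lr = 0.0, 0.05
for t in range(1, len(x)):
    a = x[t] - w * x[t - 1]   # preactivation = prediction error
    w += lr * a * x[t - 1]    # descend d(a**2)/dw = -2 * a * x_{t-1}
# w is now close to rho
```

Nothing in the loss mentions prediction; the inhibition of predictable input falls out of energy minimization, which is the one-unit analogue of the self-organisation the abstract reports for full recurrent networks.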

Seminar · Neuroscience

Computational Principles of Event Memory

Ken Norman
Princeton University
Dec 1, 2021

Our ability to understand ongoing events depends critically on general knowledge about how different kinds of situations work (schemas), and also on recollection of specific instances of these situations that we have previously experienced (episodic memory). The consensus around this general view masks deep questions about how these two memory systems interact to support event understanding: how do we build our library of schemas, and how exactly do we use episodic memory in the service of event understanding? Given rich, continuous inputs, when do we store and retrieve episodic memory “snapshots”, and how are they organized so as to ensure that we can retrieve the right snapshots at the right time? I will develop predictions about how these processes work using memory-augmented neural networks (i.e., neural networks that learn how to use episodic memory in the service of task performance), and I will present results from relevant fMRI and behavioral studies.
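The memory-augmented networks the abstract refers to learn their storage and retrieval policies end to end; as a purely structural sketch, the episodic component is often modeled as a key-value store queried by similarity-based attention. The function names, vectors, and labels below are hypothetical:

```python
import numpy as np

def store(memory, key, value):
    """Append an episodic 'snapshot' as a (key, value) pair."""
    memory.append((np.asarray(key, float), value))

def retrieve(memory, query, temperature=0.1):
    """Soft retrieval: attention over snapshots by cosine similarity."""
    keys = np.stack([k for k, _ in memory])
    q = np.asarray(query, float)
    sims = keys @ q / (np.linalg.norm(keys, axis=1) * np.linalg.norm(q) + 1e-12)
    weights = np.exp(sims / temperature)   # sharper retrieval at low temperature
    return weights / weights.sum()

memory = []
store(memory, [1.0, 0.0], "restaurant episode")
store(memory, [0.0, 1.0], "airport episode")

# A query resembling the first key attends mostly to the first snapshot.
w = retrieve(memory, [0.9, 0.1])
best = memory[int(np.argmax(w))][1]
```

The open questions in the abstract map onto the two free choices here: when `store` is called (event segmentation) and how keys are built so that `retrieve` surfaces the right snapshot at the right time.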

Seminar · Neuroscience

Computational models of neural development

Geoffrey J. Goodhill
The University of Queensland
Jul 20, 2020

Unlike even the most sophisticated current forms of artificial intelligence, developing biological organisms must build their neural hardware from scratch. Furthermore they must start to evade predators and find food before this construction process is complete. I will discuss an interdisciplinary program of mathematical and experimental work which addresses some of the computational principles underlying neural development. This includes (i) how growing axons navigate to their targets by detecting and responding to molecular cues in their environment, (ii) the formation of maps in the visual cortex and how these are influenced by visual experience, and (iii) how patterns of neural activity in the zebrafish brain develop to facilitate precisely targeted hunting behaviour. Together this work contributes to our understanding of both normal neural development and the etiology of neurodevelopmental disorders.
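As a toy model of point (i), and not the lab's actual models (the gradient shape and all parameters here are invented), a growth cone can be simulated as comparing ligand concentration across its width and turning toward the higher reading:

```python
import numpy as np

TARGET = np.array([100.0, 0.0])

def ligand(pos):
    """Hypothetical exponential cue gradient peaking at the target cell."""
    return np.exp(-np.linalg.norm(pos - TARGET) / 25.0)

def grow_axon(start, steps=200, width=2.0, step_len=1.0, gain=5.0):
    """Chemotaxis: sample the cue at the two edges of the growth cone
    and turn toward whichever side reads a higher concentration."""
    pos, angle = np.asarray(start, float), 0.0
    path = [pos.copy()]
    for _ in range(steps):
        left = np.array([-np.sin(angle), np.cos(angle)])  # left normal
        c_l = ligand(pos + 0.5 * width * left)
        c_r = ligand(pos - 0.5 * width * left)
        # Fractional difference steers the cone (Weber-law-like sensing).
        angle += gain * (c_l - c_r) / (c_l + c_r)
        pos = pos + step_len * np.array([np.cos(angle), np.sin(angle)])
        path.append(pos.copy())
    return np.array(path)

path = grow_axon(start=[0.0, 30.0])  # trajectory curves toward the target
```

Using the fractional rather than absolute concentration difference keeps steering sensitivity roughly constant across the gradient, one simple way such sensors can remain useful over large concentration ranges.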

Seminar · Neuroscience

Neural and computational principles of the processing of dynamic faces and bodies

Martin Giese
University of Tübingen
Jul 7, 2020

Body motion is a fundamental signal of social communication. This includes facial as well as full-body movements. Combining advanced methods from computer animation with motion capture in humans and monkeys, we synthesized highly realistic monkey avatar models. Our face avatar is perceived by monkeys as almost equivalent to a real animal and does not induce an ‘uncanny valley effect’, unlike all other avatar models previously used in studies with monkeys. Applying machine-learning methods for the control of motion style, we were able to investigate how species-specific shape and dynamic cues influence the perception of human and monkey facial expressions. Human observers showed very fast learning of monkey expressions, and a perceptual encoding of expression dynamics that was largely independent of facial shape. This result is in line with the fact that facial shape evolved faster than neuromuscular control in primate phylogenesis. At the same time, it challenges popular neural network models of the recognition of dynamic faces that assume a joint encoding of facial shape and dynamics. We propose an alternative physiologically inspired neural model that realizes such an orthogonal encoding of facial shape and expression from video sequences. As a second example, we investigated the perception of social interactions from abstract stimuli, similar to the ones used by Heider & Simmel (1944), and also from more realistic stimuli. We developed and validated a new generative model for the synthesis of such social interactions, based on a modification of a human navigation model. We demonstrate that the recognition of such stimuli, including the perception of agency, can be accounted for by a relatively elementary physiologically inspired hierarchical neural recognition model that does not require the assumption of sophisticated inference mechanisms, as postulated by some cognitive theories of social recognition.
In summary, this suggests that essential phenomena in social cognition might be accounted for by a small set of simple neural principles that can be easily implemented by cortical circuits. The developed technologies for stimulus control form the basis of electrophysiological studies that can verify specific neural circuits, such as the ones proposed by our theoretical models.

ePoster

Computational principles of systems memory consolidation

COSYNE 2022