Computational Modeling
Dr. Jorge Mejias
The Cognitive and Systems Neuroscience Group at the University of Amsterdam is seeking a highly qualified and motivated candidate for a PhD position in Computational Neuroscience. The position falls under the Horizon Health Europe Consortium grant “Virtual Brain Twins for Personalized Treatment of Psychiatric Disorders”. This Consortium constitutes a large collaboration between different European institutions, aiming to develop personalized brain simulation software (“virtual brain twins”) to improve the diagnosis and treatment of schizophrenia. The main objective of this PhD project is to develop a biologically realistic computational model of the human brain and use it to study alterations in brain activity associated with schizophrenia. This model will make use of local neural mass models (developed by our Consortium partners) to simulate multiple brain areas and will bring them together using structural connectivity data from human subjects. The model will then be used to explore the effects of schizophrenia-related alterations in brain dynamics and function, and to derive patient-specific virtual brain simulations to improve diagnosis and explore treatments in collaboration with clinical Consortium partners. The project will be supervised by Dr. Jorge Mejias, principal investigator in computational neuroscience and leader of the Dutch component of the Consortium, and by Prof. Dr. Cyriel Pennartz, head of the Cognitive and Systems Neuroscience Group. You will closely collaborate with other Consortium members, particularly with the team of Prof. Viktor Jirsa at Aix-Marseille University, and will also benefit from interactions with local colleagues, including other theoretical, computational and experimental neuroscientists at the Cognitive and Systems Neuroscience Group. For more information and to apply, visit the following link: https://vacatures.uva.nl/UvA/job/PhD-position-in-Computational-Neuroscience/786924102/
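As a rough illustration of the kind of whole-brain modelling described above (a minimal sketch with assumed node dynamics and parameters, not the Consortium's actual software), the snippet below couples a few rate-based "areas" through a toy structural connectivity matrix:

```python
# Hypothetical sketch: coupling simple rate-based neural mass nodes through a
# structural connectivity matrix (assumed parameters; not the Consortium's code).
import numpy as np

def simulate_network(W, T=5.0, dt=1e-3, g=0.5, tau=0.02, seed=0):
    """Euler-integrate rate nodes r_i coupled by structural connectivity W."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    r = rng.random(n) * 0.1          # initial firing rates
    steps = int(T / dt)
    trace = np.empty((steps, n))
    for t in range(steps):
        inp = g * W @ r + 0.3        # long-range coupling plus constant drive
        drdt = (-r + np.tanh(inp)) / tau
        r = r + dt * drdt + 0.01 * np.sqrt(dt) * rng.standard_normal(n)
        trace[t] = r
    return trace

# toy structural connectivity for 4 "areas"
W = np.array([[0, 1, 0.5, 0],
              [1, 0, 0.2, 0.3],
              [0.5, 0.2, 0, 1],
              [0, 0.3, 1, 0]])
activity = simulate_network(W)
print(activity.shape)  # (5000, 4) time points x areas
```

In a virtual-brain-twin setting, the toy matrix W would be replaced by subject-specific structural connectivity, and the tanh node by a biophysically grounded neural mass model.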
Arcadia Science
A Bit About Us: We are Arcadia Science. Arcadia is a well-funded for-profit biology research and development company founded and led by scientists. Our mission is to give a community of researchers the freedom and tools to be adventurous, to discover, and to make scientific exploration financially self-sustaining in the life sciences. We are inspired by the spirit of exploration and aspire to evolve how science is done, who it attracts and rewards, and what it can achieve. Research @ Arcadia: At Arcadia, we are building an intramural research and development program that will encompass three areas: (1) emerging organismal biology, (2) enabling research technologies, and (3) translational development. Research in these areas will be carried out by independent scientists and by those working together toward shared goals. Projects will be collaborative in nature and pursue science that is more high-risk and exploratory than typical life science research programs. We will invest heavily in creative technologies that can invent new research tools or optimize workflows for emerging organismal systems. In addition to conducting research, Arcadia scientists will drive engagement with the broader scientific community in order to maximize the impact of our work and identify research areas and needs that Arcadia may be uniquely positioned to address.
Jim Magnuson
3-year Ph.D. project, funded by a la Caixa Foundation fellowship. Theme: Computational and neural bases of bilingualism. Goal: develop a model of bilingual development in the complementary learning systems framework. Direct link to position: https://finder.lacaixafellowships.org/finder?position=4739 Detailed Description: We seek a Ph.D. student with a strong background (and a master's degree) in a relevant domain (a cognitive, biological, or engineering field) and some experience with programming, data science, or computational modeling. The successful candidate will be involved in developing computational models and/or running behavioral and neuroimaging studies, collecting and analyzing data, and disseminating the results in scientific conferences (presentations/posters) and peer-reviewed journals. The selected candidate will develop advanced technical and analytical skills and will have the opportunity to develop original experiments under the supervisors’ guidance. Applicants should demonstrate a keen interest in the key areas of cognitive neuroscience that are relevant for the research, coupled with strong computational skills (e.g., Python, Matlab, R). Experience with neuroscience techniques (e.g., MEG, EEG, MRI) and with analysis of neuroimaging data is desirable but not essential. A committed motivation to learning computational modelling and advanced analysis tools is a must, as well as the ability to acquire new skills and knowledge, and to work both independently and as part of a multidisciplinary team. A good command of English (the working language of the BCBL) is required; knowledge of Spanish and/or Basque is an advantage but not required. The candidate will enrol as a PhD student at the University of the Basque Country (UPV/EHU) and is expected to complete the PhD programme within 36 months. Training in complementary skills will be provided during the fellowship, including communication and research dissemination, IT and programming skills, ethics and professional conduct. The BCBL also provides support with living and welfare issues.
Mai-Phuong Bo
The Stanford Cognitive and Systems Neuroscience Laboratory (scsnl.stanford.edu) invites applications for a postdoctoral fellowship in computational modeling of human cognitive, behavioral, and brain imaging data. The candidate will be involved in multidisciplinary projects to develop and implement novel neuro-cognitive computational frameworks, using multiple cutting-edge methods that may include computational cognitive modeling, Bayesian inference, dynamic brain circuit analysis, and deep neural networks. These projects will span areas including robust identification of cognitive and neurobiological signatures of psychiatric and neurological disorders, and neurodevelopmental trajectories. Clinical disorders under investigation include autism, ADHD, anxiety and mood disorders, learning disabilities, and schizophrenia. The candidate will have access to multiple large datasets and state-of-the-art computational resources, including HPCs and GPUs. Please include a CV and a statement of research interests and have three letters of reference emailed to Prof. Vinod Menon at scsnl.stanford+postdoc@gmail.com.
Klaus Wimmer
This postdoctoral position offers an exciting opportunity to combine computational modeling, psychophysics, and EEG to study the computational mechanisms underlying flexible evidence integration in perceptual decision making.
Bei Xiao
The RA is to pursue research projects of his/her own as well as provide support for research carried out in the Xiao lab. Possible duties include: building VR/AR experimental interfaces with Unity3D, Python coding for behavioral data analysis, collecting data for psychophysical experiments, and training machine learning models.
Marsa
The successful applicant will work on a multidisciplinary collaborative project aiming to determine the importance of cortical engram cells in memory formation and storage and probe the role of cortical memory engrams in the generation and retrieval of a sensory-based memory. The project as a whole combines computational modeling, electrophysiology, calcium imaging techniques, and molecular and behavioral experiments. First, the biophysical properties of engrams will be identified in a cortical area of interest, and their functional role will be unraveled in vivo. Then, computational modeling will be used to determine the role of engram cells during memory recall. This project is a collaboration between the Florey Institute of Neuroscience and Mental Health in Melbourne, Australia (Prof. L. Palmer), and the University of Dublin, Ireland (Prof. T. Ryan).
Grit Hein
The Translational Social Neuroscience Unit at the Julius-Maximilians-Universität Würzburg (JMU) in Würzburg, Germany is offering a 2-year 100% postdoc position in social neuroscience. The unit studies the psychological and neurobiological processes underlying social interactions and decisions. Current studies investigate drivers of human social behavior such as empathy, social norms, group membership, and egoism, as well as the social modulation of anxiety and pain processing. The unit uses neuroscientific methods (functional magnetic resonance imaging, electroencephalography) and psychophysiological measures (heart rate, skin conductance), combined with experimental paradigms from cognitive and social psychology and simulations of social interactions in virtual reality. The unit also studies social interactions in everyday life using smartphone-based surveys and mobile physiological sensors. The position is initially limited until September 30, 2025 with the option for extension.
Ján Antolík
The postdoctoral position is within the Computational Systems Neuroscience Group (CSNG) at Charles University, Prague, focusing on computational neuroscience and neuro-prosthetic system design. The project goals include developing a large-scale model of electrical stimulation in the primary visual cortex for neuro-prosthetic vision restoration, creating and refining models of the primary visual cortex and its electrical stimulation, simulating the impact of external stimulation on cortical activity, developing novel machine learning methods to link simulated cortical activity to expected visual perceptions, and developing stimulation protocols for neuro-prosthetic systems. This project is undertaken as a part of a larger consortium of Czech experimental and theoretical neuroscience teams.
Dr. Jiri Hammer
The postdoc will be involved in cognitive neuroscience research, specifically in intracranial EEG (iEEG) recordings. The projects include 'the interplay of movement and touch', which involves analysis of iEEG dynamics during reaching to tactile stimuli on the body, and 'from simple to natural and ecologically valid stimuli', which involves investigating brain responses measured by iEEG to stimuli ranging gradually from the simplest to the very complex. The postdoc will also have the opportunity to propose new ideas for research.
Lyle Muller
This position will involve collaboration between our laboratory and researchers with expertise in advanced methods of brain imaging (Mark Schnitzer, Stanford), neuroengineering (Duygu Kuzum, UCSD), theoretical neuroscience (Todd Coleman, Stanford), and neurophysiology of visual perception (John Reynolds, Salk Institute for Biological Studies). In collaboration with this multi-disciplinary team, this researcher will apply new signal processing techniques for multisite spatiotemporal data to understand cortical dynamics during visual perception. This project will also involve development of spiking network models to understand the mechanisms underlying observed activity patterns. The project may include intermittent travel between labs to present results and facilitate collaborative work.
Silvia Lopez-Guzman
The Unit on Computational Decision Neuroscience (CDN) at the National Institute of Mental Health is seeking a full-time Data Scientist/Data Analyst. The lab is focused on understanding the neural and computational bases of adaptive and maladaptive decision-making and their relationship to mental health. Current studies investigate how internal states lead to biases in decision-making and how this is exacerbated in mental health disorders. Our approach involves a combination of computational model-based tasks, questionnaires, biosensor data, fMRI, and intracranial recordings. The main models of interest come from neuroeconomics, reinforcement learning, Bayesian inference, signal detection, and information theory. The main tasks for this position include computational modeling of behavioral data from decision-making and other cognitive tasks, statistical analysis of task-based, clinical, physiological and neuroimaging data, as well as data visualization for scientific presentations, public communication, and academic manuscripts. The candidate is expected to demonstrate experience with best practices for the development of well-documented, reproducible programming pipelines for data analysis that facilitate sharing and collaboration and live up to our open-science philosophy, as well as to our data management and sharing commitments at NIH.
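To make the first of those tasks concrete, here is a minimal, hypothetical sketch of fitting a two-parameter Q-learning model to binary choice data by maximum likelihood; the task structure, parameter values, and optimizer settings are illustrative assumptions, not the CDN lab's actual pipeline:

```python
# Illustrative sketch (not CDN's pipeline): maximum-likelihood fit of a
# two-parameter Q-learning model to binary choice/reward data.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, choices, rewards):
    alpha, beta = params
    q = np.zeros(2)
    nll = 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * q) / np.exp(beta * q).sum()   # softmax choice rule
        nll -= np.log(p[c] + 1e-12)
        q[c] += alpha * (r - q[c])                      # delta-rule value update
    return nll

# simulate toy data from known parameters, then try to recover them
rng = np.random.default_rng(1)
true_alpha, true_beta = 0.3, 3.0
q = np.zeros(2); choices, rewards = [], []
for _ in range(500):
    p = np.exp(true_beta * q) / np.exp(true_beta * q).sum()
    c = rng.choice(2, p=p)
    r = rng.random() < (0.8 if c == 0 else 0.2)         # option 0 is better on average
    q[c] += true_alpha * (r - q[c])
    choices.append(c); rewards.append(float(r))

fit = minimize(neg_log_lik, x0=[0.5, 1.0], args=(choices, rewards),
               bounds=[(1e-3, 1.0), (1e-2, 20.0)])
print(fit.x)  # estimated (learning rate, inverse temperature)
```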
Vinita Samarasinghe
Doctoral Position in Computational Neuroscience. Are you curious about how the human brain stores memories? Have you wondered how we manage to navigate through space? Our dynamic research group uses diverse computational modeling approaches, including biological neural networks, cognitive modeling, and machine learning/artificial intelligence, to study learning and memory. Currently, we are actively seeking a talented graduate student to join our team, someone who will expand our computational modeling framework Cobel-Spike and use it to study how spiking neural networks can learn to navigate. This position is 65% at TV-L E13, starts as soon as possible, and is funded for 3 years.
Mingbo Cai
The primary focus of this position is to work on an exciting collaborative project of decoding spontaneous thoughts. The intended project focuses on understanding the contents and dynamics of spontaneous thoughts using fMRI decoding and natural tasks, their interaction with memory and emotion, and rumination in mental disorders. The candidate will have the opportunity to analyze a rich fMRI dataset of healthy and clinical participants during spontaneous thoughts, and conduct new experiments.
Prof. Angela Yu
Prof. Angela Yu recently moved from UCSD to TU Darmstadt as the Alexander von Humboldt AI Professor, and has a number of PhD and postdoc positions available in her growing “Computational Modeling of Intelligent Systems” research group. Applications are solicited from highly motivated and qualified candidates who are interested in interdisciplinary research at the intersection of natural and artificial intelligence. Prof. Yu’s group uses mathematically rigorous and algorithmically diverse tools to understand the nature of representation and computations that give rise to intelligent behavior. There is a fair amount of flexibility in the actual choice of project, as long as the project excites both the candidate and Prof. Yu. For example, Prof. Yu is currently interested in investigating scientific questions such as: How is socio-emotional intelligence similar to or different from cognitive intelligence? Is there a fundamental tradeoff, given the prevalence of autism among scientists and engineers? How can AI be taught socio-emotional intelligence? How are artificial intelligence (e.g. as demonstrated by large language models) and natural intelligence (e.g. as measured by IQ tests) similar or different in their underlying representation or computations? What roles do intrinsic motivations such as curiosity and computational efficiency play in intelligent systems? How can insights about artificial intelligence improve the understanding and augmentation of human intelligence? Are capacity limitations with respect to attention and working memory a feature or a bug in the brain? How can AI systems be enhanced by attention or WM? More broadly, Prof. Yu’s group employs and develops diverse machine learning and mathematical tools, e.g. Bayesian statistical modeling, control theory, reinforcement learning, artificial neural networks, and information theory, to explain various aspects of cognition important for intelligence: perception, attention, decision-making, learning, cognitive control, active sensing, economic behavior, and social interactions. Applicants who have experience with two or more of the technical areas, and/or one or more of the application areas, are highly encouraged to apply. As part of the Centre for Cognitive Science at TU Darmstadt, the Hessian AI Center, as well as the Computer Science Department, Prof. Yu’s group members are encouraged and expected to collaborate extensively with preeminent researchers in cognitive science and AI, both nearby and internationally. All positions will be based at TU Darmstadt, Germany. Starting dates for the positions are flexible. Salaries are commensurate with experience and expertise, and highly competitive with respect to U.S. and European standards. The working language in the group and within the larger academic community is English. Fluency in German is not required; the university provides free German lessons for interested scientific staff.
Dr Florent MEYNIEL
Learning and decision making are intertwined processes in many everyday situations. One example is when you decide where to have lunch: should you go to the nearby coffee shop or to the university cafeteria? Learning depends on choice, because you can learn which option you prefer by trying each option repeatedly, and decision making depends on learning, because you eventually want to select the option you have learned you like best. Uncertainty plays a key role in both learning and decision making, especially when the environment is not stationary. In the CEA-funded EXPLORE+ collaborative project, we are interested in characterizing the neural representations of uncertainty and value that emerge from learning and guide decisions. We follow a deep-phenotyping approach, attempting to characterize each subject with a large multimodal dataset. We collected data from 16 participants, each completing one behavioral session, two 7T fMRI sessions, and two MEG sessions. The large number of trials allows us to estimate and test different computational models of the decision and learning processes. The 7T MRI and MEG data provide access to the topographical organization of neural representations and their dynamics, respectively, to better understand learning and decision making. One postdoc is currently working on the fMRI data, and we are looking for another postdoc for the MEG dataset. Both postdocs will work together to perform analyses informed by both modalities. The EXPLORE+ project will continue with another previously funded project called BrainSync, which will collect 11.7 T fMRI data and intracranial recordings using the same task, providing an opportunity to extend the current work.
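As a toy illustration of uncertainty-guided learning in a non-stationary environment (a generic Kalman-filter learner with assumed noise parameters, not one of the EXPLORE+ models), the learning rate below is set by the learner's own uncertainty:

```python
# Hedged illustration: a Kalman-filter learner tracks a drifting reward value;
# its effective learning rate (the gain) is determined by its uncertainty.
import numpy as np

rng = np.random.default_rng(0)
T = 300
drift_sd, obs_sd = 0.05, 0.3
true_value = np.cumsum(drift_sd * rng.standard_normal(T))   # non-stationary environment
outcomes = true_value + obs_sd * rng.standard_normal(T)     # noisy observations

mu, var = 0.0, 1.0              # belief mean and belief uncertainty
est, gains = [], []
for y in outcomes:
    var += drift_sd ** 2                      # uncertainty grows with volatility
    k = var / (var + obs_sd ** 2)             # Kalman gain = effective learning rate
    mu += k * (y - mu)                        # prediction-error update
    var *= (1 - k)
    est.append(mu); gains.append(k)

mse = np.mean((np.array(est) - true_value) ** 2)
print(f"final gain {gains[-1]:.2f}, mean squared tracking error {mse:.3f}")
```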
Jie Mei
The Wiring, Neuromodeling and Brain Lab at IT:U Interdisciplinary Transformation University Austria is looking for two PhD students to work on neuromodulation-aware artificial intelligence. We are interested in (1) the role of individual neuromodulators (e.g., dopamine, serotonin, and acetylcholine) in initiating and implementing diverse biological and cognitive functions, (2) how competition and cooperation among neuromodulators enrich single neuromodulator computations, and (3) how multi-neuromodulator dynamics can be translated into learning rules for more flexible, robust, and adaptive learning in artificial neural networks (ANNs). Start date: Jan-Mar 2025. Apply by Nov 30, 2024: https://it-u.at/en/research/research-groups/computational-neuroscience/ For more information, please visit: https://majorjiemei.wixsite.com/wnblab If you have any questions, please contact Dr. Jie Mei (jie.mei@it-u.at).
Prof. Dr. Caspar Schwiedrzik
The Perception and Plasticity Group of Caspar Schwiedrzik at the DPZ is looking for an outstanding postdoc interested in studying the neural basis of high-dimensional category learning in vision. The project investigates neural mechanisms of category learning at the level of circuits and single cells, utilizing electrophysiology, functional magnetic resonance imaging, behavioral testing in humans and non-human primates, and computational modeling. It is funded by an ERC Consolidator Grant (Acronym DimLearn; “Flexible Dimensionality of Representational Spaces in Category Learning”). The postdoc’s project will focus on investigating the neural basis of visual category learning in macaque monkeys combining chronic multi-electrode electrophysiological recordings and electrical microstimulation. In addition, the postdoc will have the opportunity to cooperate with other lab members on parallel computational investigations using artificial neural networks as well as comparative research exploring the same questions in humans. The postdoc will play a key role in our research efforts in this area. The lab is located at Ruhr-University Bochum and the German Primate Center in Göttingen. At both locations, the lab is embedded into interdisciplinary research centers with international faculty and students pursuing cutting-edge research in cognitive and computational neuroscience. The main site for this part of the project will be Göttingen. The postdoc will have access to state-of-the-art electrophysiology, an imaging center with a dedicated 3T research scanner, and behavioral setups. The project will be conducted in close collaboration with the labs of Fabian Sinz, Alexander Gail, and Igor Kagan.
Lorenzo Fontolan
We are pleased to announce the opening of a PhD position at INMED (Aix-Marseille University) through the SCHADOC program, focused on the neural coding of social interactions and memory in the cortex of behaving mice. The project will investigate how social behaviors essential for cooperation, mating, and group dynamics are encoded in the brain, and how these processes are disrupted in neurodevelopmental disorders such as autism. This project uses longitudinal calcium imaging and population-level data analysis to study how cortical circuits encode social interactions in mice. Recordings from mPFC and S1 in wild-type and Neurod2 KO mice will be used to extract neural representations of social memory. The candidate will develop and apply computational models of neural dynamics and representational geometry to uncover how these codes evolve over time and are disrupted in social amnesia.
Anna Montagnini
A fully funded 3-year PhD position (EU MSCA-COFUND program) is available at Aix-Marseille University (France) for motivated students interested in the behavioral, neurophysiological and computational investigation of multistable visual perception in healthy and pathological populations. The project is strongly cross-disciplinary, including psychophysical and oculomotor experiments as well as advanced computational modeling. It will also involve an international mobility at the University of Edinburgh (UK), as well as a collaboration with the psychiatry department of Lille Hospital (France).
Tong (Tina) Liu, Ph.D.
A postdoc position is available in the Visual Perception and Plasticity (VPP) lab, led by Dr. Tina Liu, in the Department of Neurology at Georgetown University Medical Center. The VPP lab studies neuroplasticity in both healthy and clinical populations across the lifespan, with a focus on vision. The postdoctoral researcher will play a key role in studies that integrate psychophysics/visual behavior, eye tracking, functional and structural MRI, transcranial electrical stimulation, and computational modeling.
Prof. Angela Yu
Multiple PhD and postdoctoral positions are immediately available in Prof. Angela Yu's research group at TU Darmstadt. The group investigates the intersection of natural and artificial intelligence using mathematically rigorous approaches to understand the representations and computations underlying intelligent behavior. The research particularly addresses challenges of inferential uncertainty and opportunities of volitional control. The group employs diverse methodological tools including Bayesian statistical modeling, control theory, reinforcement learning, and information theory to develop theoretical frameworks explaining key aspects of cognition: perception, attention, decision-making, learning, cognitive control, active sensing, economic behavior, and social interactions.
Dr. Ján Antolík
The CSNG Lab at the Faculty of Mathematics and Physics at the Charles University is seeking a highly motivated Postdoctoral Researcher to join our team to work on a digital twin model of the visual system. Funded by the JUNIOR Post-Doc Fund, this position offers an exciting opportunity to conduct cutting-edge research at the intersection of systems neuroscience, computational modeling, and AI. The project involves developing novel modular, multi-layer recurrent neural network (RNN) architectures that directly mirror the architecture of the primary visual cortex. Our models will establish a one-to-one mapping between individual neurons at different stages of the visual pathway and their artificial counterparts. They will explicitly incorporate functionally specific lateral recurrent interactions, excitatory and inhibitory neuronal classes, complex single-neuron transfer functions with adaptive mechanisms, synaptic depression, and others. We will first train our new RNNs on synthetic data generated by a state-of-the-art biologically realistic recurrent spiking model of the primary visual cortex developed in our group. After establishing the proof-of-concept on the synthetic data, we will translate our models to publicly available mouse and macaque data, as well as additional data from our experimental collaborators.
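A minimal conceptual sketch of one ingredient mentioned above, namely separate excitatory and inhibitory unit classes with sign-constrained (Dale's law) recurrent weights, is shown below; the population sizes, weight scaling, and dynamics are assumptions for illustration, not the CSNG architecture:

```python
# Sketch of an E/I recurrent rate network with Dale's-law sign constraints
# (assumed sizes and scaling; not the lab's actual model).
import numpy as np

rng = np.random.default_rng(0)
n_e, n_i = 80, 20
n = n_e + n_i
signs = np.array([1.0] * n_e + [-1.0] * n_i)            # fixed sign per presynaptic cell type
W = np.abs(rng.standard_normal((n, n))) * 0.01 * signs   # column sign = presynaptic type

def step(r, stim, tau=0.02, dt=1e-3):
    """One Euler step of the rate dynamics with a rectifying transfer function."""
    inp = W @ r + stim
    return r + dt / tau * (-r + np.maximum(inp, 0.0))

r = np.zeros(n)
stim = np.zeros(n); stim[:n_e] = 1.0                     # drive excitatory units only
for _ in range(500):
    r = step(r, stim)
print(r[:5], r[n_e:n_e + 5])                             # sample excitatory and inhibitory rates
```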
Neurobiological constraints on learning: bug or feature?
Understanding how brains learn requires bridging evidence across scales—from behaviour and neural circuits to cells, synapses, and molecules. In our work, we use computational modelling and data analysis to explore how the physical properties of neurons and neural circuits constrain learning. These include limits imposed by brain wiring, energy availability, molecular noise, and the 3D structure of dendritic spines. In this talk I will describe one such project testing if wiring motifs from fly brain connectomes can improve performance of reservoir computers, a type of recurrent neural network. The hope is that these insights into brain learning will lead to improved learning algorithms for artificial systems.
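A compact sketch of the kind of comparison described, an echo state network (reservoir computer) whose fixed recurrent wiring matrix could be swapped for a structured, connectome-derived one, is given below; the random wiring, toy memory task, and ridge readout are assumptions, not the actual fly-connectome pipeline:

```python
# Reservoir computing sketch: a fixed recurrent wiring matrix W drives a leaky
# echo-state network; only a linear readout is trained. Swapping W for a
# structured (e.g., connectome-inspired) matrix lets one compare performance.
import numpy as np

def run_reservoir(W, W_in, u, leak=0.3):
    """Drive a leaky echo-state reservoir with input sequence u; return states."""
    n = W.shape[0]
    x = np.zeros(n)
    states = np.empty((len(u), n))
    for t, ut in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * ut)
        states[t] = x
    return states

rng = np.random.default_rng(0)
n = 200
W = rng.standard_normal((n, n)) / np.sqrt(n)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9
W_in = rng.standard_normal(n)

# toy task: reproduce the input delayed by 5 steps with a ridge-regression readout
u = rng.standard_normal(2000)
states = run_reservoir(W, W_in, u)
X, y = states[200:], np.roll(u, 5)[200:]          # discard warm-up period
w_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n), X.T @ y)
print("memory-task MSE:", np.mean((X @ w_out - y) ** 2))
```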
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
Contribution of computational models of reinforcement learning to neuroscience. Keywords: computational modeling, reward, learning, decision-making, conditioning, navigation, dopamine, basal ganglia, prefrontal cortex, hippocampus
Updating our models of the basal ganglia using advances in neuroanatomy and computational modeling
Face and voice perception as a tool for characterizing perceptual decisions and metacognitive abilities across the general population and psychosis spectrum
Humans constantly make perceptual decisions on human faces and voices. These regularly come with the challenge of receiving only uncertain sensory evidence, resulting from noisy input and noisy neural processes. Efficiently adapting one’s internal decision system including prior expectations and subsequent metacognitive assessments to these challenges is crucial in everyday life. However, the exact decision mechanisms and whether these represent modifiable states remain unknown in the general population and clinical patients with psychosis. Using data from a laboratory-based sample of healthy controls and patients with psychosis as well as a complementary, large online sample of healthy controls, I will demonstrate how a combination of perceptual face and voice recognition decision fidelity, metacognitive ratings, and Bayesian computational modelling may be used as indicators to differentiate between non-clinical and clinical states in the future.
Hidden nature of seizures
How seizures emerge from the abnormal dynamics of neural networks within the epileptogenic tissue remains an enigma. Are seizures random events, or do detectable changes in brain dynamics precede them? Are mechanisms of seizure emergence identical at the onset and later stages of epilepsy? Is the risk of seizure occurrence stable, or does it change over time? A myriad of questions remains to be answered before we understand the core principles governing seizure genesis. The last decade has brought unprecedented insights into the complex nature of seizure emergence. It is now believed that seizure onset represents the product of the interactions between the process of a transition to seizure, long-term fluctuations in seizure susceptibility, epileptogenesis, and disease progression. During the lecture, we will review the latest observations about mechanisms of ictogenesis operating at multiple temporal scales. We will show how these observations contribute to the formation of a comprehensive theory of seizure genesis, and challenge the traditional perspectives on ictogenesis. Finally, we will discuss how combining conventional approaches with computational modeling, modern techniques of in vivo imaging, and genetic manipulation opens prospects for the exploration of as-yet-hidden mechanisms of seizure genesis.
Building System Models of Brain-Like Visual Intelligence with Brain-Score
Research in the brain and cognitive sciences attempts to uncover the neural mechanisms underlying intelligent behavior in domains such as vision. Due to the complexities of brain processing, studies necessarily had to start with a narrow scope of experimental investigation and computational modeling. I argue that it is time for our field to take the next step: build system models that capture a range of visual intelligence behaviors along with the underlying neural mechanisms. To make progress on system models, we propose integrative benchmarking – integrating experimental results from many laboratories into suites of benchmarks that guide and constrain those models at multiple stages and scales. We showcase this approach by developing Brain-Score benchmark suites for neural (spike rates) and behavioral experiments in the primate visual ventral stream. By systematically evaluating a wide variety of model candidates, we not only identify models beginning to match a range of brain data (~50% explained variance), but also discover that models’ brain scores are predicted by their object categorization performance (up to 70% ImageNet accuracy). Using the integrative benchmarks, we develop improved state-of-the-art system models that more closely match shallow recurrent neuroanatomy and early visual processing, allowing them to predict primate temporal processing, become more robust, and require fewer supervised synaptic updates. Taken together, these integrative benchmarks and system models are first steps to modeling the complexities of brain processing in an entire domain of intelligence.
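The core "neural predictivity" idea behind such benchmarks can be sketched in a few lines: regress model features onto recorded responses and score cross-validated explained variance per neuron. The synthetic data and plain ridge mapping below are illustrative assumptions, not the Brain-Score package API:

```python
# Hedged sketch of neural predictivity scoring with synthetic data
# (not the Brain-Score library; a generic ridge mapping for illustration).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_images, n_features, n_neurons = 300, 128, 50
model_feats = rng.standard_normal((n_images, n_features))       # model activations per image
mapping = rng.standard_normal((n_features, n_neurons)) * 0.2
neural = model_feats @ mapping + rng.standard_normal((n_images, n_neurons))  # fake recordings

pred = cross_val_predict(Ridge(alpha=1.0), model_feats, neural, cv=5)
ev = 1 - np.var(neural - pred, axis=0) / np.var(neural, axis=0)  # explained variance per neuron
print(f"median explained variance: {np.median(ev):.2f}")
```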
Internally Organized Abstract Task Maps in the Mouse Medial Frontal Cortex
New tasks are often similar in structure to old ones. Animals that take advantage of such conserved or “abstract” task structures can master new tasks with minimal training. To understand the neural basis of this abstraction, we developed a novel behavioural paradigm for mice, the “ABCD” task, and recorded from their medial frontal neurons as they learned. Animals learned multiple tasks where they had to visit 4 rewarded locations on a spatial maze in sequence, which defined a sequence of four “task states” (ABCD). Tasks shared the same circular transition structure (… ABCDABCD …) but differed in the spatial arrangement of rewards. As well as improving across tasks, mice inferred that A followed D (i.e. completed the loop) on the very first trial of a new task. This “zero-shot inference” is only possible if animals had learned the abstract structure of the task. Across tasks, individual medial frontal cortex (mFC) neurons maintained their tuning to the phase of an animal’s trajectory between rewards but not their tuning to task states, even in the absence of spatial tuning. Intriguingly, groups of mFC neurons formed modules of coherently remapping neurons that maintained their tuning relationships across tasks. Such tuning relationships were expressed as replay/preplay during sleep, consistent with an internal organisation of activity into multiple, task-matched ring attractors. Remarkably, these modules were anchored to spatial locations: neurons were tuned to specific task space “distances” from a particular spatial location. These newly discovered “Spatially Anchored Task clocks” (SATs) suggest a novel algorithm for solving abstraction tasks. Using computational modelling, we show that SATs can perform zero-shot inference on new tasks in the absence of plasticity and guide optimal policy in the absence of continual planning. These findings provide novel insights into the frontal mechanisms mediating abstraction and flexible behaviour.
Successes and failures of current AI as a model of visual cognition
From Computation to Large-scale Neural Circuitry in Human Belief Updating
Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., without loss) integration of sensory information along purely feedforward sensory-motor pathways. Yet, natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a tradeoff between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG), across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency-band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation. Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
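One standard way to formalize non-linear accumulation in a changing environment is to discount the prior belief according to the hazard rate before adding each new log-likelihood ratio; the sketch below contrasts this with perfect linear integration. The hazard rate, evidence statistics, and update rule are assumptions in the spirit of normative change-point models, not necessarily the speaker's exact model:

```python
# Sketch: non-linear belief updating in a change-point environment vs. perfect
# linear accumulation (assumed parameters; illustrative only).
import numpy as np

def update_belief(L_prev, llr, H):
    """Discount the prior log-odds toward zero according to hazard rate H, then add evidence."""
    prior = (L_prev
             + np.log((1 - H) / H + np.exp(-L_prev))
             - np.log((1 - H) / H + np.exp(L_prev)))
    return prior + llr

rng = np.random.default_rng(0)
H, T = 0.05, 400
state = 1
L_perfect, L_adaptive = 0.0, 0.0
correct_perfect = correct_adaptive = 0
for t in range(T):
    if rng.random() < H:                        # hidden state flips with hazard rate H
        state *= -1
    llr = state * 1.0 + rng.standard_normal()   # noisy evidence sample for the current state
    L_perfect += llr                            # linear, lossless accumulation
    L_adaptive = update_belief(L_adaptive, llr, H)
    correct_perfect += (np.sign(L_perfect) == state)
    correct_adaptive += (np.sign(L_adaptive) == state)

print("accuracy, perfect accumulator:", correct_perfect / T)
print("accuracy, adaptive accumulator:", correct_adaptive / T)
```

Because the non-linear update bounds the accumulated belief, it recovers faster after hidden change-points, which is the stability-flexibility tradeoff the abstract refers to.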
Feedforward and feedback processes in visual recognition
Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions, providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.
How neural circuits organize and learn during development
To generate brain circuits that are both flexible and stable requires the coordination of powerful developmental mechanisms acting at different scales, including activity-dependent synaptic plasticity and changes in single neuron properties. During early development, the brain prepares to compute information efficiently and to generate behavior reliably, not through prior sensory experience but through patterned spontaneous activity. After the onset of sensory experience, ongoing activity continues to modify sensory circuits, and plays an important functional role in the mature brain. Using quantitative data analysis, experiment-driven theory and computational modeling, I will present a framework for how neural circuits are built and organized during early postnatal development into functional units, and how they are modified by intact and perturbed sensory-evoked activity. Inspired by experimental data from sensory cortex, I will then show how neural circuits use the resulting non-random connectivity to flexibly gate a network’s response, providing a mechanism for routing information.
Can I be bothered? Neural and computational mechanisms underlying the dynamics of effort processing (BACN Early-career Prize Lecture 2021)
From a workout at the gym to helping a colleague with their work, every day we make decisions about whether we are willing to exert effort to obtain some sort of benefit. Increases in how effortful actions and cognitive processes are perceived to be have been linked to clinically severe impairments of motivation, such as apathy and fatigue, across many neurological and psychiatric conditions. However, the vast majority of neuroscience research has focused on understanding the benefits for acting, the rewards, and not on the effort required. As a result, the computational and neural mechanisms underlying how effort is processed are poorly understood. How do we compute how effortful we perceive a task to be? How does this feed into our motivation and decisions of whether to act? How are such computations implemented in the brain? And how do they change in different environments? I will present a series of studies examining these questions using novel behavioural tasks, computational modelling, fMRI, pharmacological manipulations, and testing in a range of different populations. These studies highlight how the brain represents the costs of exerting effort, and the dynamic processes underlying how our sensitivity to effort changes as a function of our goals, traits, and socio-cognitive processes. This work provides new computational frameworks for understanding and examining impaired motivation across psychiatric and neurological conditions, as well as why all of us, sometimes, can’t be bothered.
Computational modelling of neurotransmitter release
Synaptic transmission provides the basis for neuronal communication. When an action-potential propagates through the axonal arbour, it activates voltage-gated Ca2+ channels located in the vicinity of release-ready synaptic vesicles docked at the presynaptic active zone. Ca2+ ions enter the presynaptic terminal and activate the vesicular Ca2+ sensor, thereby triggering neurotransmitter release. This whole process occurs on a timescale of a few milliseconds. In addition to fast, synchronous release, which keeps pace with action potentials, many synapses also exhibit delayed asynchronous release that persists for tens to hundreds of milliseconds. In this talk I will demonstrate how experimentally constrained computational modelling of underlying biological processes can complement laboratory studies (using electrophysiology and imaging techniques) and provide insights into the mechanisms of synaptic transmission.
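As a generic, textbook-style illustration of the modelling approach (not the speaker's specific model), the sketch below drives a cooperative Ca2+ sensor with an action-potential-like Ca2+ transient and releases vesicles from a depleting pool; all kinetic constants are assumed values:

```python
# Toy release model: a Hill-type Ca2+ sensor drives vesicle release from a
# depleting, slowly refilling pool (assumed constants; illustrative only).
import numpy as np

dt = 1e-5                          # 10 µs time step (s)
t = np.arange(0, 50e-3, dt)        # 50 ms of simulated time
ca_rest, ca_peak, tau_ca = 0.05, 10.0, 1e-3          # µM, µM, s
ca = ca_rest + ca_peak * np.exp(-np.maximum(t - 1e-3, 0) / tau_ca) * (t >= 1e-3)

n_hill, k_half, rate_max = 4, 5.0, 2000.0            # cooperativity, µM, 1/s
pool, tau_refill, released = 1.0, 0.5, []
for c in ca:
    p_rel = rate_max * c ** n_hill / (c ** n_hill + k_half ** n_hill)  # sensor activation
    rel = p_rel * pool * dt                                            # vesicles released this step
    pool += dt * (1.0 - pool) / tau_refill - rel                       # depletion and slow refill
    released.append(rel)

released = np.array(released)
print(f"total release: {released.sum():.3f} of the pool; "
      f"release peaks at {t[np.argmax(released)] * 1e3:.2f} ms")
```

Adding a second, lower-affinity but slower sensor to such a scheme is one common way models capture the asynchronous release component mentioned above.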
How do protein-RNA condensates form and contribute to disease?
In recent years, it has become clear that intrinsically disordered regions (IDRs) of RBPs, and the structure of RNAs, often contribute to the condensation of RNPs. To understand the transcriptomic features of such RNP condensates, we’ve used an improved individual nucleotide resolution CLIP protocol (iiCLIP), which produces highly sensitive and specific data, and thus enables quantitative comparisons of interactions across conditions (Lee et al., 2021). This showed how the IDR-dependent condensation properties of TDP-43 specify its RNA binding and regulatory repertoire (Hallegger et al., 2021). Moreover, we developed software for discovery and visualisation of RNA binding motifs that uncovered common binding patterns of RBPs on long multivalent RNA regions that are composed of dispersed motif clusters (Kuret et al, 2021). Finally, we used hybrid iCLIP (hiCLIP) to characterise the RNA structures mediating the assembly of Staufen RNPs across mammalian brain development, which demonstrated the roles of long-range RNA duplexes in the compaction of long 3’UTRs. I will present how the combined analysis of the characteristics of IDRs in RBPs, multivalent RNA regions and RNA structures is required to understand the formation and functions of RNP condensates, and how they change in diseases.
The Problem of Testimony
The talk will detail work drawing on behavioural results, formal analysis, and computational modelling with agent-based simulations to unpack the scale of the challenge humans face when trying to work out and factor in the reliability of their sources. In particular, it is shown how and why this task admits of no easy solution in the context of wider communication networks, and how this will affect the accuracy of our beliefs. The implications of this for the shift in the size and topology of our communication networks through the uncontrolled rise of social media are discussed.
Functional Divergence at the Mouse Bipolar Cell Terminal
Research in our lab focuses on the circuit mechanisms underlying sensory computation. We use the mouse retina as a model system because it allows us to stimulate the circuit precisely with its natural input, patterns of light, and record its natural output, the spike trains of retinal ganglion cells. We harness the power of genetic manipulations and detailed information about cell types to uncover new circuits and discover their role in visual processing. Our methods include electrophysiology, computational modeling, and circuit tracing using a variety of imaging techniques.
Deception, ExoNETs, SmushWare & Organic Data: Tech-facilitated neurorehabilitation & human-machine training
Making use of visual display technology and human-robotic interfaces, many researchers have illustrated various opportunities to distort visual and physical realities. We have had success with interventions such as error augmentation, sensory crossover, and negative viscosity. Judicious application of these techniques leads to training situations that enhance the learning process and can restore movement ability after neural injury. I will trace out clinical studies that have employed such technologies to improve health and function, as well as share some leading-edge insights that include deceiving the patient, moving the "smarts" of software into the hardware, and examining clinical effectiveness.
Hearing in an acoustically varied world
In order for animals to thrive in their complex environments, their sensory systems must form representations of objects that are invariant to changes in some dimensions of their physical cues. For example, we can recognize a friend’s speech in a forest, a small office, and a cathedral, even though the sound reaching our ears will be very different in these three environments. I will discuss our recent experiments into how neurons in auditory cortex can form stable representations of sounds in this acoustically varied world. We began by using a normative computational model of hearing to examine how the brain may recognize a sound source across rooms with different levels of reverberation. The model predicted that reverberations can be removed from the original sound by delaying the inhibitory component of spectrotemporal receptive fields in the presence of stronger reverberation. Our electrophysiological recordings then confirmed that neurons in ferret auditory cortex apply this algorithm to adapt to different room sizes. Our results demonstrate that this neural process is dynamic and adaptive. These studies provide new insights into how we can recognize auditory objects even in highly reverberant environments, and direct further research questions about how reverb adaptation is implemented in the cortical circuit.
The self-consistent nature of visual perception
Vision provides us with a holistic interpretation of the world that is, with very few exceptions, coherent and consistent across multiple levels of abstraction, from scene to objects to features. In this talk I will present results from past and ongoing work in my laboratory that investigates the role top-down signals play in establishing such coherent perceptual experience. Based on the results of several psychophysical experiments I will introduce a theory of “self-consistent inference” and show how it can account for human perceptual behavior. The talk will close with a discussion of how the theory can help us understand more cognitive processes.
NMC4 Short Talk: Two-Photon Imaging of Norepinephrine in the Prefrontal Cortex Shows that Norepinephrine Structures Cell Firing Through Local Release
Norepinephrine (NE) is a neuromodulator that is released from projections of the locus coeruleus via extra-synaptic vesicle exocytosis. Tonic fluctuations in NE are involved in brain states, such as sleep, arousal, and attention. Previously, NE in the PFC was thought to be a homogeneous field created by bulk release, but it remains unknown whether phasic (fast, short-term) fluctuations in NE can produce a spatially heterogeneous field, which could then structure cell firing at a fine spatial scale. To understand how spatiotemporal dynamics of norepinephrine (NE) release in the prefrontal cortex affect neuronal firing, we performed a novel in-vivo two-photon imaging experiment in layer 2/3 of the prefrontal cortex using a green fluorescent NE sensor and a red fluorescent Ca2+ sensor, which allowed us to simultaneously observe fine-scale neuronal and NE dynamics in the form of spatially localized fluorescence time series. Using generalized linear modeling, we found that the local NE field differs from the global NE field in transient periods of decorrelation, which are influenced by proximal NE release events. We used optical flow and pattern analysis to show that release and reuptake events can occur at the same location but at different times, and differential recruitment of release and reuptake sites over time is a potential mechanism for creating a heterogeneous NE field. Our generalized linear models predicting cellular dynamics show that the heterogeneous local NE field, and not the global field, drives cell firing dynamics. These results point to the importance of local, small-scale, phasic NE fluctuations for structuring cell firing. Prior research suggests that these phasic NE fluctuations in the PFC may play a role in attentional shifts, orienting to sensory stimuli in the environment, and in the selective gain of priority representations during stress (Mather, Clewett et al., 2016; Aston-Jones and Bloom, 1981).
NMC4 Short Talk: Multiscale and extended retrieval of associative memory structures in a cortical model of local-global inhibition balance
Inhibitory neurons take on many forms and functions. How this diversity contributes to memory function is not completely known. Previous formal studies indicate that, in associative memory networks, inhibition differentiated into local and global connectivity serves to rescale the level of retrieval of excitatory assemblies. However, such studies lack biological detail: they make no distinction between neuron types (excitatory and inhibitory), and they use unrealistic connection schemas and non-sparse assemblies. In this study, we present a rate-based cortical model where neurons are distinguished (as excitatory, local inhibitory, or global inhibitory), connected more realistically, and where memory items correspond to sparse excitatory assemblies. We use this model to study how local-global inhibition balance can alter memory retrieval in associative memory structures, including naturalistic and artificial structures. Experimental studies have reported inhibitory neurons and their sub-types uniquely respond to specific stimuli and can form sophisticated, joint excitatory-inhibitory assemblies. Our model suggests such joint assemblies, as well as a distribution and rebalancing of overall inhibition between two inhibitory sub-populations – one connected to excitatory assemblies locally and the other connected globally – can quadruple the range of retrieval across related memories. We identify a possible functional role for local-global inhibitory balance: in the context of choice or preference among relationships, it can permit and maintain a broader range of memory items when local inhibition is dominant and, conversely, consolidate and strengthen a smaller range of memory items when global inhibition is dominant. This model therefore highlights a biologically-plausible and behaviourally-useful function of inhibitory diversity in memory.
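A stripped-down sketch of the modelling setting, a sparse binary associative memory with a single global inhibition term, is given below; the learning rule, network size, and inhibition strength are assumptions for illustration and not the authors' actual model (which additionally distinguishes local inhibitory sub-populations):

```python
# Sparse associative memory with global inhibition (assumption-level sketch).
import numpy as np

rng = np.random.default_rng(0)
N, P, a = 500, 10, 0.1                      # neurons, stored patterns, sparseness
patterns = (rng.random((P, N)) < a).astype(float)
W = (patterns - a).T @ (patterns - a) / (N * a * (1 - a))   # covariance learning rule
np.fill_diagonal(W, 0.0)

def retrieve(cue, g_global=0.5, theta=0.3, steps=20):
    r = cue.copy()
    for _ in range(steps):
        h = W @ r - g_global * r.mean()     # recurrent drive minus global inhibition
        r = (h > theta).astype(float)       # threshold units
    return r

# cue with roughly half of pattern 0's active units deleted
cue = patterns[0] * (rng.random(N) < 0.5)
r = retrieve(cue)
overlap = (r @ patterns[0]) / patterns[0].sum()
print(f"overlap with stored pattern after retrieval: {overlap:.2f}, "
      f"active fraction: {r.mean():.2f}")
```

In the full model described above, raising or lowering the global term relative to assembly-specific (local) inhibition is what rescales how many related memory items can be retrieved at once.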
NMC4 Short Talk: Predictive coding is a consequence of energy efficiency in recurrent neural networks
Predictive coding represents a promising framework for understanding brain function, postulating that the brain continuously inhibits predictable sensory input, ensuring a preferential processing of surprising elements. A central aspect of this view on cortical computation is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modelling to demonstrate that such architectural hard-wiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency, a fundamental requirement of neural processing. When training recurrent neural networks to minimise their energy consumption while operating in predictive environments, the networks self-organise into prediction and error units with appropriate inhibitory and excitatory interconnections and learn to inhibit predictable sensory input. We demonstrate that prediction units can reliably be identified through biases in their median preactivation, pointing towards a fundamental property of prediction units in the predictive coding framework. Moving beyond the view of purely top-down driven predictions, we demonstrate via virtual lesioning experiments that networks perform predictions on two timescales: fast lateral predictions among sensory units and slower prediction cycles that integrate evidence over time. Our results, which replicate across two separate data sets, suggest that predictive coding can be interpreted as a natural consequence of energy efficiency. More generally, they raise the question of which other computational principles of brain function can be understood as a result of physical constraints posed by the brain, opening up a new area of bio-inspired, machine learning-powered neuroscience research.
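The training objective described here can be sketched as a next-step prediction loss plus an L2 "energy" penalty on unit activity; the network size, toy task, and penalty weight below are assumed for illustration and are not the authors' setup:

```python
# Sketch of "prediction + energy" training for a small recurrent network
# (assumed architecture and task; illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
rnn = nn.RNN(input_size=1, hidden_size=32, batch_first=True)
readout = nn.Linear(32, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-2)

def predictable_batch(batch=64, length=50):
    """Sine-wave sequences with random phase: a fully predictable environment."""
    phase = 2 * torch.pi * torch.rand(batch, 1, 1)
    t = torch.linspace(0, 4 * torch.pi, length).view(1, length, 1)
    return torch.sin(t + phase)

energy_weight = 1e-2
for step in range(500):
    x = predictable_batch()
    h, _ = rnn(x[:, :-1])                 # hidden activity over time
    pred = readout(h)                     # predict the next input sample
    pred_loss = ((pred - x[:, 1:]) ** 2).mean()
    energy = (h ** 2).mean()              # L2 "energy" proxy on unit activity
    loss = pred_loss + energy_weight * energy
    opt.zero_grad(); loss.backward(); opt.step()

print(f"prediction MSE: {pred_loss.item():.4f}, mean activity energy: {energy.item():.4f}")
```

The virtual lesioning analyses mentioned in the abstract would then amount to silencing subsets of hidden units and re-measuring the prediction loss.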
NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics
Recently, the field of computational neuroscience has seen an explosion of the use of trained recurrent network models (RNNs) to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events - spikes - that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain to account for complex temporal dependencies in the biological single neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties - as this was shown to accurately model the response of biological neurons - while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics that are driven by the mean signal received by the neural population, and dynamics driven by the variance of the input across neurons, with respective timescales that are in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo. Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases if used in an RNN to solve specific tasks.
NMC4 Short Talk: Different hypotheses on the role of the PFC in solving simple cognitive tasks
Low-dimensional population dynamics can be observed in neural activity recorded from the prefrontal cortex (PFC) of subjects performing simple cognitive tasks. Many studies have shown that recurrent neural networks (RNNs) trained on the same tasks can qualitatively reproduce these state-space trajectories, and have used them as models of how neuronal dynamics implement task computations. The PFC is also viewed as a conductor that organizes the communication between cortical areas and provides contextual information. It is therefore unclear what its role is in solving simple cognitive tasks. Do the low-dimensional trajectories observed in the PFC really correspond to the computations that it performs? Or do they indirectly reflect the computations occurring within the cortical areas projecting to the PFC? To address these questions, we modelled cortical areas with a modular RNN and equipped it with a PFC-like cognitive system. When trained on cognitive tasks, this multi-system brain model reproduces the low-dimensional population responses observed in neuronal activity as well as classical RNNs do. Qualitatively different mechanisms can emerge from the training process when varying details of the architecture such as the time constants. In particular, there is one class of models in which the dynamics of the cognitive system implement the task computations, and another in which the cognitive system is only necessary to provide contextual information about the task rule, since task performance is not impaired when the system is prevented from accessing the task inputs. These constitute two different hypotheses about the causal role of the PFC in solving simple cognitive tasks, which could motivate further experiments on the brain.
NMC4 Short Talk: Systematic exploration of neuron type differences in standard plasticity protocols employing a novel pathway based plasticity rule
Spike Timing Dependent Plasticity (STDP) is argued to modulate synaptic strength depending on the timing of pre- and postsynaptic spikes. Physiological experiments have identified a variety of temporal kernels: Hebbian, anti-Hebbian and symmetrical LTP/LTD. In this work we present a novel plasticity model, the Voltage-Dependent Pathway Model (VDP), which is able to replicate those distinct kernel types and intermediate versions with varying LTP/LTD ratios and symmetry features. In addition, unlike previous models it retains these characteristics across different neuron models, which allows for comparison of plasticity between different neuron types. The plastic updates depend on the relative strength and activation of separately modeled LTP and LTD pathways, which are modulated by glutamate release and postsynaptic voltage. We used the 15 neuron-type parametrizations of the GLIF5 model presented by Teeter et al. (2018) in combination with the VDP to simulate a range of standard plasticity protocols, including standard STDP experiments, frequency-dependency experiments and low-frequency stimulation protocols. Slight variations in kernel stability and frequency effects can be identified between the neuron types, suggesting that the neuron type may have an effect on the effective learning rule. This plasticity model provides a middle ground between biophysical and phenomenological models: it not only allows combination with more complex, biophysical neuron models, but is also computationally efficient enough to be used in network simulations. It therefore offers the possibility of exploring the functional role of the different kernel types and electrophysiological differences in heterogeneous networks in future work.
NMC4 Short Talk: Brain-inspired spiking neural network controller for a neurorobotic whisker system
It is common for animals to use self-generated movements to actively sense the surrounding environment. For instance, rodents rhythmically move their whiskers to explore the space close to their body. The mouse whisker system has become a standard model to study active sensing and sensorimotor integration through feedback loops. In this work, we developed a bio-inspired spiking neural network model of the sensorimotor peripheral whisker system, modelling trigeminal ganglion, trigeminal nuclei, facial nuclei, and central pattern generator neuronal populations. This network was embedded in a virtual mouse robot using the Neurorobotics Platform, a simulation platform offering a virtual environment in which to develop and test robots driven by brain-inspired controllers. Finally, the peripheral whisker system was connected to an adaptive cerebellar network controller. The whole system was able to drive active whisking with learning capability, matching neural correlates of behaviour experimentally recorded in mice.
NMC4 Short Talk: Sensory intermixing of mental imagery and perception
Several lines of research have demonstrated that internally generated sensory experience - such as during memory, dreaming and mental imagery - activates neural representations similar to those evoked by externally triggered perception. This overlap raises a fundamental challenge: how is the brain able to keep apart signals reflecting imagination and reality? In a series of online psychophysics experiments combined with computational modelling, we investigated to what extent imagination and perception are confused when the same content is simultaneously imagined and perceived. We found that simultaneous congruent mental imagery consistently led to an increase in perceptual presence responses, and that congruent perceptual presence responses were in turn associated with a more vivid imagery experience. Our findings are best explained by a simple signal detection model in which imagined and perceived signals are added together. Perceptual reality monitoring can then easily be implemented by evaluating whether this intermixed signal is strong or vivid enough to pass a 'reality threshold'. Our model suggests that, in contrast to self-generated sensory changes during movement, our brain does not discount self-generated sensory signals during mental imagery. This has profound implications for our understanding of reality monitoring and perception in general.
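The additive signal-detection idea can be sketched in a few lines. The parameter values and the name of the presence criterion below are illustrative assumptions, not the fitted model from the study.

```python
# Minimal signal-detection sketch (an assumption-laden illustration, not the
# authors' fitted model): imagined and perceived signals are summed, and a
# "presence" response is given when the combined signal exceeds a reality
# threshold.
import numpy as np

rng = np.random.default_rng(0)

def presence_rate(stimulus_strength, imagery_strength, threshold=1.0,
                  noise_sd=1.0, n_trials=100_000):
    perceived = stimulus_strength + noise_sd * rng.standard_normal(n_trials)
    combined = perceived + imagery_strength   # imagery adds to the same channel
    return (combined > threshold).mean()

# congruent imagery should raise "presence" responses even for weak stimuli
print(presence_rate(stimulus_strength=0.5, imagery_strength=0.0))
print(presence_rate(stimulus_strength=0.5, imagery_strength=0.5))
```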
NMC4 Short Talk: The complete connectome of an insect brain
Brains must integrate complex sensory information and compare it to past events to generate appropriate behavioral responses. The neural circuit basis of these computations is unclear and the underlying structure is unknown. Here, we mapped the comprehensive synaptic wiring diagram of the fruit fly larva brain, which contains 3,013 neurons and 544K synaptic sites. It is the most complete insect connectome to date: 1) Both brain hemispheres are reconstructed, allowing investigation of neural pathways that include contralateral axons, which we found in 37% of brain neurons. 2) All sensory neurons and descending neurons are reconstructed, allowing one to follow signals in an uninterrupted chain - from the sensory periphery, through the brain, to motor neurons in the nerve cord. We developed novel computational tools that allowed us to cluster the brain's neurons and investigate how information flows through it. We discovered that feedforward pathways from sensory to descending neurons are multilayered and highly multimodal. Robust feedback was observed at almost all levels of the brain, including descending neurons. We investigated how the brain hemispheres communicate with each other and with the nerve cord, leading to the identification of novel circuit motifs. This work provides the complete blueprint of a brain and a strong foundation for studying the structure-function relationship of neural circuits.
Neurocognitive mechanisms of proactive temporal attention: challenging oscillatory and cortico-centered models
To survive in a rapidly changing world, the brain predicts the future state of the world and proactively adjusts perception, attention and action. A key to efficient interaction is to predict and prepare not only for “where” and “what” things will happen, but also for “when”. I will present studies in healthy and neurological populations that investigated the cognitive architecture and neural basis of temporal anticipation. First, influential ‘entrainment’ models suggest that anticipation in rhythmic contexts, e.g. music or biological motion, uniquely relies on the alignment of attentional oscillations to external rhythms. Using computational modeling and EEG, I will show that cortical neural patterns previously associated with entrainment in fact overlap with interval-timing mechanisms that are used in aperiodic contexts. Second, temporal prediction and attention have commonly been associated with cortical circuits. Studying neurological populations with subcortical degeneration, I will present data that point to a double dissociation between rhythm- and interval-based prediction in the cerebellum and basal ganglia, respectively, and will demonstrate a role for the cerebellum in attentional control of perceptual sensitivity in time. Finally, using EEG in neurodegenerative patients, I will demonstrate that the cerebellum controls temporal adjustment of cortico-striatal neural dynamics, and use computational modeling to identify cerebellar-controlled neural parameters. Altogether, these findings reveal functional and neural context-specificity and subcortical contributions to temporal anticipation, revising our understanding of dynamic cognition.
NMC4 Short Talk: Hypothesis-neutral response-optimized models of higher-order visual cortex reveal strong semantic selectivity
Modeling neural responses to naturalistic stimuli has been instrumental in advancing our understanding of the visual system. Dominant computational modeling efforts in this direction have been deeply rooted in preconceived hypotheses. In contrast, hypothesis-neutral computational methodologies with minimal a priori assumptions, which bring neuroscience data directly to bear on the model development process, are likely to be much more flexible and effective in modeling and understanding tuning properties throughout the visual system. In this study, we develop a hypothesis-neutral approach and characterize response selectivity in the human visual cortex exhaustively and systematically via response-optimized deep neural network models. First, we leverage the unprecedented scale and quality of the recently released Natural Scenes Dataset to constrain parametrized neural models of higher-order visual systems and achieve novel predictive precision, in some cases significantly outperforming the predictive success of state-of-the-art task-optimized models. Next, we ask: what kinds of functional properties emerge spontaneously in these response-optimized models? We examine trained networks through structural (feature visualization) as well as functional analysis (feature verbalization) by running 'virtual' fMRI experiments on large-scale probe datasets. Strikingly, although the models are optimized from scratch solely for brain response prediction and receive no category-level supervision, the units in the optimized networks act as detectors for semantic concepts such as 'faces' or 'words', providing some of the strongest evidence to date for categorical selectivity in these visual areas. The observed selectivity in model neurons raises another question: are the category-selective units simply functioning as detectors for their preferred category, or are they a by-product of a non-category-specific visual processing mechanism? To investigate this, we create selective deprivations in the visual diet of these response-optimized networks and study semantic selectivity in the resulting 'deprived' networks, thereby also shedding light on the role of specific visual experiences in shaping neuronal tuning. Together with this new class of data-driven models and novel model interpretability techniques, our study illustrates that DNN models of visual cortex need not be conceived as obscure models with limited explanatory power, but rather as powerful, unifying tools for probing the nature of representations and computations in the brain.
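The "response-optimized" idea, a network trained from scratch to predict brain responses rather than to solve a category task, can be sketched as follows. The architecture, layer sizes, and variable names are placeholders, not the study's actual model.

```python
# Illustrative sketch only: a small convolutional network optimized from
# scratch to predict voxel responses to images (response optimization).
import torch
import torch.nn as nn

class ResponseOptimizedNet(nn.Module):
    def __init__(self, n_voxels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.readout = nn.Linear(64 * 4 * 4, n_voxels)  # voxel-wise linear readout

    def forward(self, images):
        z = self.features(images).flatten(1)
        return self.readout(z)

model = ResponseOptimizedNet(n_voxels=500)
images = torch.randn(16, 3, 128, 128)        # placeholder stimuli
responses = torch.randn(16, 500)             # placeholder voxel responses
loss = ((model(images) - responses) ** 2).mean()   # trained only on brain data
loss.backward()
```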
NMC4 Keynote: A network perspective on cognitive effort
Cognitive effort has long been an important explanatory factor in the study of human behavior in health and disease. Yet, the biophysical nature of cognitive effort remains far from understood. In this talk, I will offer a network perspective on cognitive effort. I will begin by canvassing a recent perspective that casts cognitive effort in the framework of network control theory, developed and frequently used in systems engineering. The theory describes how much energy is required to move the brain from one activity state to another, when activity is constrained to pass along physical pathways in a connectome. I will then turn to empirical studies that link this theoretical notion of energy with cognitive effort in a behaviorally demanding task, and with a metabolic notion of energy as accessible to FDG-PET imaging. Finally, I will ask how this structurally-constrained activity flow can provide us with insights about the brain’s non-equilibrium nature. Using a general tool for quantifying entropy production in macroscopic systems, I will provide evidence to suggest that states of marked cognitive effort are also states of greater entropy production. Collectively, the work I discuss offers a complementary view of cognitive effort as a dynamical process occurring atop a complex network.
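The control-theoretic notion of energy referenced above can be made concrete with a small sketch: the minimum input energy needed to steer a linear system x' = Ax + Bu (A standing in for the structural connectome, B selecting control nodes) from an initial to a target activity state, computed via the controllability Gramian. This is a generic textbook formulation with toy matrices, not the speaker's analysis pipeline.

```python
# Hedged sketch of minimum control energy for a linear network model.
import numpy as np
from scipy.linalg import expm

def min_control_energy(A, B, x0, xT, T=1.0, n_steps=2000):
    """Minimum-energy cost of steering x' = Ax + Bu from x0 to xT in time T."""
    n = A.shape[0]
    dt = T / n_steps
    E_dt = expm(A * dt)
    E = np.eye(n)
    W = np.zeros((n, n))
    for _ in range(n_steps):
        W += E @ B @ B.T @ E.T * dt        # controllability Gramian (Riemann sum)
        E = E @ E_dt
    drift = expm(A * T) @ x0               # state reached with zero input
    delta = xT - drift
    return float(delta @ np.linalg.solve(W, delta))

rng = np.random.default_rng(1)
n = 20
A = rng.standard_normal((n, n)) / np.sqrt(n) - 1.5 * np.eye(n)   # toy stable "connectome"
B = np.eye(n)                                                    # all nodes controllable
x0, xT = rng.standard_normal(n), rng.standard_normal(n)
print(min_control_energy(A, B, x0, xT))
```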
Networking—the key to success… especially in the brain
In our everyday lives, we form connections and build up social networks that allow us to function successfully as individuals and as a society. Our social networks tend to include well-connected individuals who link us to other groups of people that we might otherwise have limited access to. In addition, we are more likely to befriend individuals who a) live nearby and b) have mutual friends. Interestingly, neurons tend to do the same…until development is perturbed. Just like social networks, neuronal networks require highly connected hubs to elicit efficient communication at minimal cost (you can’t befriend everybody you meet, nor can every neuron wire with every other!). This talk will cover some of Alex’s work showing that microscopic (cellular scale) brain networks inferred from spontaneous activity show similar complex topology to that previously described in macroscopic human brain scans. The talk will also discuss what happens when neurodevelopment is disrupted in the case of a monogenic disorder called Rett Syndrome. This will include simulations of neuronal activity and the effects of manipulation of model parameters as well as what happens when we manipulate real developing networks using optogenetics. If functional development can be restored in atypical networks, this may have implications for treatment of neurodevelopmental disorders like Rett Syndrome.
Adaptive bottleneck to pallium for sequence memory, path integration and mixed selectivity representation
Spike-driven adaptation involves intracellular mechanisms that are initiated by neural firing and lead to a subsequent reduction of spiking rate followed by a recovery back to baseline. We report on long (>0.5 second) recovery times from adaptation in a thalamic-like structure in weakly electric fish. This adaptation process is shown via modeling and experiment to encode, in a spatially invariant manner, the time intervals between event encounters, e.g. with landmarks as the animal learns the location of food. These cells come in two varieties: ones that care only about the time since the last encounter, and others that care about the history of encounters. We discuss how the two populations can share the task of representing sequences of events, supporting path integration and converting from egocentric to allocentric representations. The heterogeneity of the population parameters enables the representation and Bayesian decoding of time sequences of events, which may be put to good use in path integration and hilus neuron function in the hippocampus. Finally, we discuss how all the cells of this gateway to the pallium exhibit mixed selectivity for social features of their environment. The data and computational modeling further reveal that, in contrast to a long-held belief, these gymnotiform fish are endowed with a corollary discharge, albeit only for social signalling.
The bounded rationality of probability distortion
In decision-making under risk (DMR), participants' choices are based on probability values systematically different from those that are objectively correct. Similar systematic distortions are found in tasks involving relative frequency judgments (JRF). These distortions limit performance in a wide variety of tasks, and an evident question is why we systematically fail in our use of probability and relative frequency information. We propose a Bounded Log-Odds Model (BLO) of probability and relative frequency distortion based on three assumptions: (1) log-odds: probability and relative frequency are mapped to an internal log-odds scale; (2) boundedness: the range of representations of probability and relative frequency is bounded, and the bounds change dynamically with task; and (3) variance compensation: the mapping compensates in part for uncertainty in probability and relative frequency values. We compared human performance in both DMR and JRF tasks to the predictions of the BLO model as well as eleven alternative models, each missing one or more of the underlying BLO assumptions (factorial model comparison). The BLO model and its assumptions proved superior to any of the alternatives. In a separate analysis, we found that BLO accounts for individual participants' data better than any previous model in the DMR literature. We also found that, subject to the boundedness limitation, participants' choice of distortion approximately maximized the mutual information between objective task-relevant values and internal values, a form of bounded rationality.
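The log-odds mapping at the core of such models can be illustrated with a short sketch of a linear-in-log-odds distortion; the full BLO model additionally includes dynamic bounds and variance compensation, which are omitted here, and the parameter values are illustrative assumptions.

```python
# Minimal linear-in-log-odds sketch of probability distortion.
import numpy as np

def log_odds(p):
    return np.log(p / (1 - p))

def distort(p, gamma=0.6, p0=0.4):
    # perceived probability: log-odds are compressed (gamma < 1) toward an
    # anchor p0, producing over-weighting of small and under-weighting of
    # large probabilities
    lo = gamma * log_odds(p) + (1 - gamma) * log_odds(p0)
    return 1 / (1 + np.exp(-lo))

p = np.array([0.01, 0.1, 0.4, 0.6, 0.9, 0.99])
print(np.round(distort(p), 3))
```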
Learning to see Stuff
Materials with complex appearances, like textiles and foodstuffs, pose challenges for conventional theories of vision. How does the brain learn to see properties of the world—like the glossiness of a surface—that cannot be measured by any other senses? Recent advances in unsupervised deep learning may help shed light on material perception. I will show how an unsupervised deep neural network trained on an artificial environment of surfaces that have different shapes, materials and lighting, spontaneously comes to encode those factors in its internal representations. Most strikingly, the model makes patterns of errors in its perception of material that follow, on an image-by-image basis, the patterns of errors made by human observers. Unsupervised deep learning may provide a coherent framework for how many perceptual dimensions form, in material perception and beyond.
Characterising the brain representations behind variations in real-world visual behaviour
Not all individuals are equally competent at recognizing the faces they interact with. Revealing how the brains of different individuals support variations in this ability is a crucial step towards understanding real-world human visual behaviour. In this talk, I will present findings from a large high-density EEG dataset (>100k trials of participants processing various stimulus categories) and computational approaches that aimed to characterise the brain representations behind the real-world proficiency of “super-recognizers” - individuals at the top of the face recognition ability spectrum. Using decoding analysis of time-resolved EEG patterns, we predicted with high precision the trial-by-trial activity of super-recognizer participants, and showed that evidence for face recognition ability variations is distributed across early, intermediate and late brain processing steps. Computational modeling of the underlying brain activity uncovered two representational signatures supporting higher face recognition ability: i) mid-level visual and ii) semantic computations. The two components were dissociable in brain processing time (the first around the N170, the second around the P600) and in the level of computation (the first emerging from mid-level layers of visual Convolutional Neural Networks, the second from a semantic model characterising sentence descriptions of images). I will conclude by presenting ongoing analyses from a well-known case of acquired prosopagnosia (PS) using similar computational modeling of high-density EEG activity.
Behavioral and neurobiological mechanisms of social cooperation
Human society operates on large-scale cooperation and shared norms of fairness. However, individual differences in cooperation and incentives to free-ride on others’ cooperation make large-scale cooperation fragile and can lead to reduced social welfare. Deciphering the neural codes representing potential rewards/costs for self and others is crucial for understanding social decision-making and cooperation. I will first talk about how we integrate computational modeling with functional magnetic resonance imaging to investigate the neural representation of social value and its modulation by oxytocin, a nine-amino-acid neuropeptide, in participants evaluating monetary allocations to self and other (self-other allocations). Then I will introduce our recent studies examining the neurobiological mechanisms underlying intergroup decision-making using hyper-scanning, and share how we alter intergroup decisions using psychological manipulations and pharmacological challenges. Finally, I will share our ongoing project revealing how individual cooperation spreads through human social networks. Our results help to better understand the neurocomputational mechanisms underlying interpersonal and intergroup decision-making.
Higher cognitive resources for efficient learning
A central issue in reinforcement learning (RL) is the ‘curse of dimensionality’, arising when the number of degrees of freedom is much larger than the number of training samples. In such circumstances, the learning process becomes too slow to be plausible. In the brain, higher cognitive functions (such as abstraction or metacognition) may be part of the solution by generating low-dimensional representations on which RL can operate. In this talk I will discuss a series of studies in which we used functional magnetic resonance imaging (fMRI) and computational modeling to investigate the neuro-computational basis of efficient RL. We found that people can learn remarkably complex task structures non-consciously, but also that, intriguingly, metacognition appears tightly coupled to this learning ability. Furthermore, when people use an explicit (conscious) policy to select relevant information, learning is accelerated by abstractions. At the neural level, prefrontal cortex subregions are differentially involved in separate aspects of learning: dorsolateral prefrontal cortex pairs with metacognitive processes, while ventromedial prefrontal cortex pairs with valuation and abstraction. I will discuss the implications of these findings, in particular new questions about the function of metacognition in adaptive behavior and its link with abstraction.
Learning under uncertainty in autism and anxiety
Optimally interacting with a changeable and uncertain world requires estimating and representing uncertainty. Psychiatric and neurodevelopmental conditions such as anxiety and autism are characterized by an altered response to uncertainty. I will review the evidence for these phenomena from computational modelling, and outline the planned experiments from our lab to add further weight to these ideas. If time allows, I will present results from a control sample in a novel task interrogating a particular type of uncertainty and their associated transdiagnostic psychiatric traits.
Capacitance clamp - artificial capacitance in biological neurons via dynamic clamp
A basic time scale in neural dynamics, from single cells to the network level, is the membrane time constant, set by a neuron’s input resistance and its capacitance. Interestingly, the membrane capacitance appears to be more dynamic than previously assumed, with implications for neural function and pathology. Indeed, altered membrane capacitance has been observed in reaction to physiological changes like neural swelling, but also in ageing and Alzheimer's disease. Importantly, according to theory, even small changes of the capacitance can affect neuronal signal processing, e.g. increase network synchronization or facilitate transmission of high frequencies. Experimentally, however, robust methods to modify the capacitance of a neuron have been missing. Here, we present the capacitance clamp - an electrophysiological method for capacitance control based on an unconventional application of the dynamic clamp. In its original form, dynamic clamp mimics additional synaptic or ionic conductances by injecting their respective currents. Whereas a conductance directly governs a current, the membrane capacitance determines how fast the voltage responds to a current. Accordingly, capacitance clamp mimics an altered capacitance by injecting a dynamic current that slows down or speeds up the voltage response. To compute the required dynamic current, the experimenter only has to specify the cell's original capacitance and the desired target capacitance. In particular, capacitance clamp requires no detailed model of the conductances present and can thus be applied in any excitable cell. To validate the capacitance clamp, we performed numerical simulations of the protocol and applied it to modify the capacitance of cultured neurons. First, we simulated capacitance clamp in conductance-based neuron models and analysed impedance and firing frequency to verify the altered capacitance. Second, in dentate gyrus granule cells from rats, we could reliably control the capacitance over a range of 75% to 200% of the original value and observed pronounced changes in the shape of the action potentials: increasing the capacitance reduced after-hyperpolarization amplitudes and slowed down repolarization. To conclude, we present a novel tool for electrophysiology: the capacitance clamp provides reliable control over the capacitance of a neuron and thereby opens a new way to study the temporal dynamics of excitable cells.
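The underlying idea can be illustrated with a conceptual simulation: in a model cell whose membrane current is known, injecting a current proportional to the mismatch between the actual and the target capacitance makes the voltage evolve as if the cell had the target capacitance. This is only a sketch of the principle; it is not the published online estimation scheme, which must infer the required current from the sampled voltage because the membrane current is unknown in a real recording.

```python
# Conceptual sketch: injecting I_clamp = (C_cell / C_target - 1) * (I_m + I_inj)
# into a passive model cell yields dV/dt = (I_m + I_inj) / C_target.
import numpy as np

def simulate(C_cell=100e-12, C_target=200e-12, g_L=10e-9, E_L=-70e-3,
             I_inj=100e-12, dt=0.01e-3, T=0.2):
    n = int(T / dt)
    V = np.full(n, E_L)
    for t in range(1, n):
        I_m = -g_L * (V[t - 1] - E_L)                      # leak current (known in the model)
        I_clamp = (C_cell / C_target - 1.0) * (I_m + I_inj)
        dVdt = (I_m + I_inj + I_clamp) / C_cell            # effectively (I_m + I_inj) / C_target
        V[t] = V[t - 1] + dt * dVdt
    return V

# doubling the target capacitance roughly doubles the membrane time constant
V_fast = simulate(C_target=100e-12)
V_slow = simulate(C_target=200e-12)
```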
A dynamical model of the visual cortex
In the past several years, I have been involved in building a biologically realistic model of the monkey visual cortex. Work on one of the input layers (4Ca) of the primary visual cortex (V1) is now nearly complete, and I would like to share some of what I have learned with the community. After a brief overview of the model and its capabilities, I would like to focus on three sets of results that represent three different aspects of the modeling. They are: (i) emergent E-I dynamics in local circuits; (ii) how visual cortical neurons acquire their ability to detect edges and directions of motion; and (iii) a view across the cortical surface: nonequilibrium steady states (in analogy with statistical mechanics) and beyond.
Models of Core Knowledge (Physics, Really)
Even young children seem to have an early understanding of the world around them, and of the people in it. Before children can reliably say "ball", "wall", or "Saul", they expect balls not to go through walls, and Saul to go right for a ball (if there's no wall). What is the formal conceptual structure underlying this commonsense reasoning about objects and agents? I will raise several possibilities for models underlying core intuitive physics, as a way of talking about models of core knowledge and intuitive theories more generally. In particular, I will present some recent ML work that tries to capture early expectations about object solidity, cohesion, and permanence, and that relies on a rough de-rendering approach.
A macaque connectome for simulating large-scale network dynamics in The VirtualBrain
TheVirtualBrain (TVB; thevirtualbrain.org) is a software platform for simulating whole-brain network dynamics. TVB models link biophysical parameters at the cellular level with systems-level functional neuroimaging signals. Data available from animal models can provide vital constraints for the linkage across spatial and temporal scales. I will describe the construction of a macaque cortical connectome as an initial step towards a comprehensive multi-scale macaque TVB model. I will also describe our process of validating the connectome and show an example simulation of macaque resting-state dynamics using TVB. This connectome opens the opportunity for the addition of other available data from the macaque, such as electrophysiological recordings and receptor distributions, to inform multi-scale models of brain dynamics. Future work will include extensions to neurological conditions and other nonhuman primate species.
How dendrites help solve biological and machine learning problems
Dendrites are thin processes that extend from the cell body of neurons, the main computing units of the brain. The role of dendrites in complex brain functions has been investigated for several decades, yet their direct involvement in key behaviors, such as sensory perception, has only recently been established. In my presentation I will discuss how computational modelling has helped us illuminate dendritic function. I will present the main findings of a number of projects in the lab dealing with dendritic nonlinearities in excitatory and inhibitory neurons and their consequences for neuronal tuning and memory formation, the role of dendrites in solving nonlinear problems in human neurons, and recent efforts to advance machine learning algorithms by adopting dendritic features.
A domain-general dynamic framework for social perception
Initial social perceptions are often thought to reflect direct “read outs” of facial features. Instead, we outline a perspective whereby initial perceptions emerge from an automatic yet gradual process of negotiation between the perceptual cues inherent to a person (e.g., facial cues) and top-down social cognitive processes harbored within perceivers. This perspective argues that perceivers’ social-conceptual knowledge in particular can have a fundamental structuring role in perceptions, and thus how we think about social groups, emotions, or personality traits helps determine how we visually perceive them in other people. Integrative evidence from real-time behavioral paradigms (e.g., mouse-tracking), multivariate fMRI, and computational modeling will be discussed. Together, this work shows that the way we use facial cues to categorize other people into social groups (e.g., gender, race), perceive their emotion (e.g., anger), or infer their personality (e.g., trustworthiness) are all fundamentally shaped by prior social-conceptual knowledge and stereotypical assumptions. We find that these top-down impacts on initial perceptions are driven by the interplay of higher-order prefrontal regions involved in top-down predictions and lower-level fusiform regions involved in face processing. We argue that the perception of social categories, emotions, and traits from faces can all be conceived as resulting from an integrated system relying on domain-general cognitive properties. In this system, both visual and social cognitive processes are in a close exchange, and initial social perceptions emerge in part out of the structure of social-conceptual knowledge.
Neural circuit parameter variability, robustness, and homeostasis
Neurons and neural circuits can produce stereotyped and reliable output activity on the basis of highly variable cellular, synaptic, and circuit properties. This is crucial for proper nervous system function throughout an animal’s life in the face of growth, perturbations, and molecular turnover. But how can reliable output arise from neurons and synapses whose parameters vary between individuals in a population, and within an individual over time? I will review how a combination of experimental and computational methods can be used to examine how neuron and network function depends on the underlying parameters, such as neuronal membrane conductances and synaptic strengths. Within the high-dimensional parameter space of a neural system, the subset of parameter combinations that produce biologically functional neuron or circuit activity is captured by the notion of a ‘solution space’. I will describe solution space structures determined from electrophysiology data, ion channel expression levels across populations of neurons and animals, and computational parameter space explorations. A key finding centers on experimental and computational evidence for parameter correlations that give structure to solution spaces. Computational modeling suggests that such parameter correlations can be beneficial for constraining neuron and circuit properties to functional regimes, while experimental results indicate that neural circuits may have evolved to implement some of these beneficial parameter correlations at the cellular level. Finally, I will review modeling work and experiments that seek to illuminate how neural systems can homeostatically navigate their parameter spaces to stably remain within their solution space and reliably produce functional output, or to return to their solution space after perturbations that temporarily disrupt proper neuron or network function.
Cortical networks for flexible decisions during spatial navigation
My lab seeks to understand how the mammalian brain performs the computations that underlie cognitive functions, including decision-making, short-term memory, and spatial navigation, at the level of the building blocks of the nervous system: cell types and neural populations organized into circuits. We have developed methods to measure, manipulate, and analyze neural circuits across various spatial and temporal scales, including technology for virtual reality, optical imaging, optogenetics, intracellular electrophysiology, molecular sensors, and computational modeling. I will present recent work that uses large-scale calcium imaging to reveal the functional organization of the mouse posterior cortex for flexible decision-making during spatial navigation in virtual reality. I will also discuss work that uses optogenetics and calcium imaging during a variety of decision-making tasks to highlight how cognitive experience and context greatly alter the cortical circuits necessary for navigation decisions.
Modelling affective biases in rodents: behavioural and computational approaches
My research focuses, broadly speaking, on how emotions impact decision making. Specifically, I am interested in affective biases, a phenomenon known to be important in depression. Using a rodent decision-making task combined with computational modelling, I have investigated how different antidepressant and pro-depressant manipulations that are known to alter mood in humans alter judgement bias, and provided insight into the decision processes that underlie these behaviours. I will also highlight how the combination of behaviour and modelling can provide a truly translational approach, enabling comparison and interpretation of the same cognitive processes between animal and human research.
Mice alternate between discrete strategies during perceptual decision-making
Classical models of perceptual decision-making assume that animals use a single, consistent strategy to integrate sensory evidence and form decisions during an experiment. In this talk, I aim to convince you that this common view is incorrect. I will show results from applying a latent variable framework, the “GLM-HMM”, to hundreds of thousands of trials of mouse choice data. Our analysis reveals that mice don’t lapse. Instead, mice switch back and forth between engaged and disengaged behavior within a single session, and each mode of behavior lasts tens to hundreds of trials.
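The GLM-HMM idea can be sketched by simulating choices from a Bernoulli GLM whose weights depend on a hidden state that switches slowly between "engaged" and "disengaged" modes. The weights, transition probabilities, and state labels below are illustrative assumptions, not fits to the mouse data.

```python
# Illustrative GLM-HMM simulation sketch.
import numpy as np

rng = np.random.default_rng(0)

# state 0: engaged (steep stimulus weight), state 1: disengaged (flat, biased)
weights = {0: np.array([4.0, 0.0]),      # [stimulus weight, bias]
           1: np.array([0.2, 0.8])}
P = np.array([[0.98, 0.02],              # sticky transition matrix -> long bouts
              [0.02, 0.98]])

def simulate(n_trials=1000):
    state, states, choices, stims = 0, [], [], []
    for _ in range(n_trials):
        s = rng.uniform(-1, 1)                       # signed stimulus strength
        logit = weights[state] @ np.array([s, 1.0])
        p_right = 1 / (1 + np.exp(-logit))
        choices.append(rng.random() < p_right)
        stims.append(s)
        states.append(state)
        state = rng.choice(2, p=P[state])            # hidden state evolves
    return np.array(stims), np.array(choices), np.array(states)

stims, choices, states = simulate()
```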
Theory-driven probabilistic modeling of language use: a case study on quantifiers, logic and typicality
Theoretical linguistics postulates abstract structures that successfully explain key aspects of language. However, the precise relation between abstract theoretical ideas and empirical data from language use is not always apparent. Here, we propose to empirically test abstract semantic theories through the lens of probabilistic pragmatic modelling. We consider the historically important case of quantity words (e.g., 'some', 'all'). Data from a large-scale production study seem to suggest that quantity words are understood via prototypes. But based on statistical and empirical model comparison, we show that a probabilistic pragmatic model that embeds a strict truth-conditional notion of meaning explains the data just as well as a model that encodes prototypes into the meaning of quantity words.
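As a rough illustration of what "probabilistic pragmatic model with strict truth-conditional meanings" can look like, here is a generic Rational Speech Act sketch for quantity words; the word set, state space, truth conditions, and rationality parameter are stand-in assumptions, not the paper's exact specification.

```python
# Generic Rational Speech Act sketch for quantity words.
import numpy as np

states = np.arange(11)                      # number of items (out of 10) with the property
words = ["none", "some", "most", "all"]
# strict truth conditions: rows = words, cols = states
truth = np.array([
    states == 0,                            # "none"
    states >= 1,                            # "some"
    states > 5,                             # "most"
    states == 10,                           # "all"
], dtype=float)

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

L0 = normalize(truth, axis=1)               # literal listener P(state | word)
alpha = 4.0                                 # speaker rationality
S1 = normalize(np.exp(alpha * np.log(L0 + 1e-12)).T, axis=1)   # speaker P(word | state)
L1 = normalize(S1.T, axis=1)                # pragmatic listener P(state | word)
print(np.round(L1[words.index("some")], 3)) # "some" is pragmatically shifted away from 10
```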
A multiscale approach to brain disorders
Exploration beyond bandits
Machine learning researchers frequently focus on human-level performance, in particular in games. However, in these applications human (or human-level) behavior is commonly reduced to a simple dot on a performance graph. Cognitive science, in particular theories of learning and decision making, could hold the key to unlocking what lies behind this dot, thereby yielding further insights into human cognition and the design principles of intelligent algorithms. However, cognitive experiments commonly focus on relatively simple paradigms such as restricted multi-armed bandit tasks. In this talk, I will argue that cognitive science can turn its lens to more complex scenarios to study exploration in real-world domains and online games. Using one large data set of online food delivery orders and data from many online games, I will show how current cognitive theories of learning and exploration can describe human behavior in the wild, but also how these tasks require us to expand our theoretical toolkit to describe a rich repertoire of real-world behaviors such as empowerment and fun.
How to simulate and analyze drift-diffusion models of timing and decision making
My talk will discuss the use of four simple Matlab functions to simulate drift-diffusion models of timing and to fit these models to empirical data. Feel free to examine the code and the relatively brief book chapter that explains it before the talk if you would like to learn more about computational/mathematical modeling.
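For readers who do not use Matlab, the basic simulation step can be sketched in Python; this is only an analogous illustration with placeholder parameters, not the speaker's functions.

```python
# Minimal drift-diffusion simulation sketch.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift=0.3, bound=1.0, noise=1.0, dt=0.001, max_t=5.0,
                 n_trials=5000):
    """Return choices (+1 upper / -1 lower bound) and response times (NaN if no bound hit)."""
    choices, rts = np.zeros(n_trials), np.full(n_trials, np.nan)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while t < max_t:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
            if abs(x) >= bound:
                choices[i], rts[i] = np.sign(x), t
                break
    return choices, rts

choices, rts = simulate_ddm()
print("P(upper) =", (choices == 1).mean(), " mean RT =", np.nanmean(rts))
```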
Uncertainty in learning and decision making
Uncertainty plays a critical role in reinforcement learning and decision making. However, exactly how subjective uncertainty influences behaviour remains unclear. Multi-armed bandits are a useful framework for gaining more insight into this. Paired with computational tools such as Kalman filters, they allow us to closely characterize the interplay between trial-by-trial value, uncertainty, learning, and choice. In this talk, I will present recent research in which we also measured participants' visual fixations on the options in a multi-armed bandit task. The estimated value of each option, and the uncertainty in these estimates, influenced what participants looked at in the period before making a choice as well as their subsequent choice, as did fixation itself. Uncertainty also determined how long participants looked at the obtained outcomes. Our findings clearly show the importance of uncertainty in learning and decision making.
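The Kalman-filter bandit machinery referred to above can be sketched briefly: each arm's value estimate and its uncertainty are updated trial by trial, and both can then feed into choice (and, in the study, gaze) models. The noise parameters and the particular choice rule below are placeholder assumptions.

```python
# Kalman-filter bandit sketch: per-arm posterior mean and variance.
import numpy as np

rng = np.random.default_rng(0)

n_arms, n_trials = 4, 200
true_means = rng.normal(0, 2, n_arms)
obs_var, innov_var = 1.0, 0.01                # observation noise, value drift

m = np.zeros(n_arms)                          # posterior mean per arm
v = np.full(n_arms, 100.0)                    # posterior variance per arm

for t in range(n_trials):
    # softmax over mean plus an uncertainty bonus (one common choice rule)
    score = m + 0.5 * np.sqrt(v)
    p = np.exp(score - score.max()); p /= p.sum()
    a = rng.choice(n_arms, p=p)
    r = true_means[a] + np.sqrt(obs_var) * rng.standard_normal()
    # Kalman update of the chosen arm; unchosen arms only accrue drift variance
    k = (v[a] + innov_var) / (v[a] + innov_var + obs_var)   # Kalman gain
    m[a] += k * (r - m[a])
    v[a] = (1 - k) * (v[a] + innov_var)
    v[np.arange(n_arms) != a] += innov_var
```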
Cognitive Psychometrics: Statistical Modeling of Individual Differences in Latent Processes
Many psychological theories assume that qualitatively different cognitive processes can result in identical responses. Multinomial processing tree (MPT) models allow researchers to disentangle latent cognitive processes based on observed response frequencies. Recently, MPT models have been extended to explicitly account for participant and item heterogeneity. These hierarchical Bayesian MPT models provide the opportunity to connect two traditionally isolated disciplines. Whereas cognitive psychology has often focused on the experimental validation of MPT model parameters on the group level, psychometrics provides the necessary concepts and tools for measuring differences in MPT parameters on the item or person level. Moreover, MPT parameters can be regressed on covariates to model latent processes as a function of personality traits or other person characteristics.
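To make the MPT idea concrete, here is a sketch using the classic two-high-threshold recognition model, a standard textbook MPT rather than a model from the talk: observed "old" response probabilities are decomposed into detection and guessing processes, and hierarchical extensions would place person- or item-level priors on these parameters.

```python
# Two-high-threshold MPT sketch: predictions and a simple likelihood.
import numpy as np

def two_ht_predictions(d, g):
    """P('old' | old item) and P('old' | new item) under the two-high-threshold model."""
    p_hit = d + (1 - d) * g          # detect old item, or fail to detect and guess "old"
    p_fa = (1 - d) * g               # new item: only a guess can yield "old"
    return p_hit, p_fa

def neg_log_lik(params, hits, misses, fas, crs):
    d, g = params
    p_hit, p_fa = two_ht_predictions(d, g)
    return -(hits * np.log(p_hit) + misses * np.log(1 - p_hit)
             + fas * np.log(p_fa) + crs * np.log(1 - p_fa))

# example response frequencies (hypothetical numbers)
print(neg_log_lik((0.6, 0.4), hits=80, misses=20, fas=25, crs=75))
```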
Generalization guided exploration
How do people learn in real-world environments where the space of possible actions can be vast or even infinite? The study of human learning has made rapid progress in recent decades, from discovering the neural substrate of reward prediction errors to building AI capable of mastering the game of Go. Yet this line of research has primarily focused on learning through repeated interactions with the same stimuli. How are humans able to rapidly adapt to novel situations and learn from such sparse examples? I propose a theory of how generalization guides human learning by making predictions about which unobserved options are most promising to explore. Inspired by Roger Shepard’s law of generalization, I show how a Bayesian function learning model provides a mechanism for generalizing limited experiences to a wide set of novel possibilities, based on the simple principle that similar actions produce similar outcomes. This model of generalization generates predictions about the expected reward and underlying uncertainty of unexplored options, both of which are vital components of how people actively explore the world. This model allows us to explain developmental differences in the explorative behavior of children, and suggests a general principle of learning across spatial, conceptual, and structured domains.
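The proposed mechanism can be sketched with Gaussian-process regression plus an upper-confidence-bound rule: observed rewards are generalized to unobserved options via similarity, and predicted reward and uncertainty jointly drive exploration. The kernel choice, parameters, and one-dimensional option space below are assumptions for illustration.

```python
# GP generalization + UCB exploration sketch.
import numpy as np

def rbf(x1, x2, length=1.0):
    # similarity kernel: "similar actions produce similar outcomes"
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / length ** 2)

def gp_posterior(x_obs, y_obs, x_grid, noise=0.1):
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_grid, x_obs)
    Kss = rbf(x_grid, x_grid)
    K_inv = np.linalg.inv(K)
    mean = Ks @ K_inv @ y_obs
    var = np.clip(np.diag(Kss - Ks @ K_inv @ Ks.T), 0, None)
    return mean, var

x_grid = np.linspace(0, 10, 101)               # the space of options
x_obs = np.array([2.0, 6.5])                   # a few sparse observations
y_obs = np.array([1.0, 3.0])
mean, var = gp_posterior(x_obs, y_obs, x_grid)
ucb = mean + 2.0 * np.sqrt(var)                # value plus uncertainty bonus
next_option = x_grid[np.argmax(ucb)]
```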
‘Optimistic’ and ‘pessimistic’ decision-making as an indicator of animal emotion and welfare
Reliable and validated measures of emotion in animals are of great importance; they are crucial for better understanding and developing treatments for human mood disorders, and they are necessary for ensuring good animal welfare. We have developed a novel measure of emotion in animals that is grounded in theory and psychological research – decision-making under ambiguity. Specifically, we consider that more ‘optimistic’ decisions about ambiguous stimuli reflect more positive emotional states, while the opposite is true for more ‘pessimistic’ decisions. In this talk, we will outline the background behind and implementation of this measure, review meta-analyses that have been conducted to validate it, and discuss how computational modelling has been used to further understand the cognitive processes underlying ‘optimistic’ and ‘pessimistic’ decision-making as an indicator of animal emotion and welfare.
Awakening: Predicting external stimulation to force transitions between different brain states
Cones with character: An in vivo circuit implementation of efficient coding
In this talk I will summarize some of our recent unpublished work on spectral coding in the larval zebrafish retina. Combining 2p imaging, hyperspectral stimulation, computational modeling and connectomics, we take a renewed look at the spectral tuning of cone photoreceptors in the live eye. We find that cones already optimally rotate natural colour space in a PCA-like fashion to disambiguate greyscale from "colour" information. We then follow this signal through the retinal layers and ultimately into the brain to explore the major spectral computations performed by the visual system at its consecutive stages. We find that, by and large, zebrafish colour vision can be broken into three major spectral zones: long-wavelength greyscale-like vision, short-wavelength prey capture circuits, and spectrally diverse mid-wavelength circuits which possibly support the bulk of "true colour vision" in this tetrachromatic vertebrate.
Computational modeling of neurovascular coupling at the gliovascular unit
COSYNE 2025
Cerebellum and emotions: A journey from evidence to computational modeling and simulation
FENS Forum 2024