
Computational Models

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with computational models across World Wide.
51 curated items · 35 Seminars · 16 Positions
Updated 2 days ago
51 results
Position

Jorge Almeida (Proaction Lab)

Proaction Lab (University of Coimbra)
Coimbra, Portugal
Dec 5, 2025

The Proaction Laboratory (Jorge Almeida’s Lab; proactionlab.fpce.uc.pt) at the University of Coimbra (www.uc.pt), Portugal, is looking for 3 motivated and bright Research Assistants to work on a prestigious ERC Starting Grant project (ContentMAP; https://cordis.europa.eu/project/id/802553) on the neural organization of object knowledge. In this project we are exploring how complex information is topographically organized in the brain using fMRI, state-of-the-art analytical techniques, computational approaches, and neuromodulation. We strongly and particularly encourage applications from women and from underrepresented groups in academia.

General requirements for the positions: 1. Candidates should have a BA and/or MA in Psychology, Cognitive Neuroscience, Computer Science, Computational Neuroscience, or any other related field, as long as their work relates to the specific profiles below. 2. They should already hold their diplomas (so that we can start the process of recognition in Portugal, which is a necessary step for hiring). 3. Interest in object recognition and neural representation. 4. Very good English (oral and written) communication skills are necessary.

Specific requirements for the positions: 1. Understanding of and experience with fMRI and data analysis, specifically with MVPA. 2. Strong programming skills (MATLAB, Python, etc.) are a requirement.

Salary and duration: The position will start as soon as possible and finish in January 2024. The salary is the standard for a PhD student in Portugal – about 1100 per month, tax free. Note that the cost of living in Portugal (and particularly in Coimbra) is low compared to major European and American cities.

Working conditions: The researcher will work directly with Jorge Almeida in Coimbra. The researcher will also be encouraged to develop her/his own projects and look for additional funding so that the stay can be extended. In fact, the expectation is that applicants start a PhD one year after starting their positions. We have access to two 3T MRI scanners with 32-channel coils, to tDCS with neuronavigation, and to a fully equipped psychophysics lab. We have EEG and eye-tracking on site. We also have access, through other collaborations, to a 7T scanner. Finally, the University of Coimbra is a 700-year-old university and has been selected as a UNESCO World Heritage site. Coimbra is one of the most lively university cities in the world, and it is a beautiful city with easy access to the beach and mountains.

How to apply: Applicants are encouraged to apply as soon as possible, as these positions will be closed as they are filled. Nevertheless, the deadline is May 15. Interested candidates should email Jorge Almeida with questions and applications. Please send an email (jorgealmeida@fpce.uc.pt) with the subject “Research assistant positions under ERC - ContentMAP” including: 1. A curriculum vitae with a list of publications; 2. Two reference letters; 3. A motivation letter with a short description of your experience in the field and how you fulfill the requirements (fit with the position).

Position

Dr. Panayiota Poirazi

Foundation for Research and Technology-Hellas (FORTH)
Heraklion, Crete, Greece
Dec 5, 2025

The Poirazi lab is recruiting two PhD students as part of the Marie Skłodowska-Curie Innovative Training Network “SmartNets”. The objective of SmartNets is to provide high-level training in the functioning of biological networks to a new generation of early-stage researchers, equipping them with the skills necessary for thriving careers in a burgeoning area that underpins innovative technological development across a range of diverse disciplines. This goal will be achieved by a unique combination of “hands-on” research training, non-academic placements (industrial and non-profit organisations), and courses and workshops on both scientific and complementary (so-called “soft”) skills, facilitated by the academic-non-academic composition of the consortium. SmartNets brings together neuroscientists, behavioural and cognitive scientists, physicists, computer scientists, and non-profit stakeholders in order to train the next generation of data scientists who will: 1) develop a fundamental understanding of the relationship between structure and function in biological networks and 2) translate this knowledge into novel technological solutions. ESRs will develop a unique interdisciplinary set of skills that will make them capable of analyzing networks at many levels and for many systems. Mobility rule: PhD students must not have resided or carried out their main activity (work, studies, etc.) in the country of the host for more than 12 months in the 3 years immediately before the recruitment date. Compulsory national service, short stays such as holidays, and time spent as part of a procedure for obtaining refugee status under the Geneva Convention are not taken into account. There are two available positions, corresponding to the two projects listed below: Project 1: Role of dendritic nonlinearities in V1 network properties after visual learning. Project 2: Role of dendritic nonlinearities in hippocampal network properties after contextual and spatial learning.
For more details on each project please visit: http://www.dendrites.gr/en/open-positions-967/marie-curie-etn-smartnets-11. Vacancy terms: full-time, fixed-term, starting from 1 April 2021 for 1 year (renewable for 2 more years). Living allowance: 34,800 € per annum (approximately 22,800 € net). Depending on family status, successful candidates will also receive a mobility allowance (600 € per month) and/or a family allowance (250 € per month), as provided by MSCA grants. Shortlisted applicants will be invited for an (online) interview.

Position · Computational Neuroscience

Dr. Jessica Ausborn

Drexel University College of Medicine
Philadelphia, PA
Dec 5, 2025

Dr. Jessica Ausborn’s group at Drexel University College of Medicine, in the Department of Neurobiology & Anatomy, has a postdoctoral position available for an exciting new research project involving computational models of sensorimotor integration based on neural and behavioral data in Drosophila. The interdisciplinary collaboration with the experimental group of Dr. Katie von Reyn (School of Biomedical Engineering) will involve a variety of computational techniques, including the development of biophysically detailed and more abstract mathematical models, together with machine learning and data science techniques, to identify and describe the algorithms computed in neuronal pathways that perform sensorimotor transformations. The Ausborn laboratory is part of an interdisciplinary group of Drexel’s Neuroengineering program that includes computational and experimental investigators. This collaborative, interdisciplinary environment enables us to probe biological systems in a way that would not be possible with either an exclusively experimental or computational approach. Applicants should forward a cover letter, curriculum vitae, statement of research interests, and contact information for three references to Jessica Ausborn (ja696@drexel.edu). Salary will be commensurate with experience based on NIH guidelines.

Position

Fatma Deniz

UC Berkeley
N/A
Dec 5, 2025

We are looking for highly motivated researchers to join our group in interdisciplinary projects that focus on the development of computational models to understand how linguistic information is represented in the human brain. Computational encoding models in combination with deep learning-based machine learning techniques will be developed, compared, and applied to identify linguistic representations in the brain. The projects are conducted in collaboration with UC Berkeley.
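
Encoding models of this kind are typically fit with regularized linear regression from stimulus features (e.g. derived from a language model) to voxel responses. The sketch below is a minimal ridge-regression illustration on synthetic data, not the group's actual pipeline; all names and dimensions are hypothetical.

```python
import numpy as np

def fit_encoding_model(stim_features, brain_resp, alpha=1.0):
    """Ridge-regression encoding model: predict each voxel's response
    as a linear combination of stimulus features.
    Returns the weight matrix (features x voxels)."""
    X, Y = stim_features, brain_resp
    n_feat = X.shape[1]
    # Closed-form ridge solution: (X'X + alpha*I)^-1 X'Y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

# Toy data: 200 timepoints, 10 features, 5 "voxels".
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
W_true = rng.normal(size=(10, 5))
Y = X @ W_true + 0.1 * rng.normal(size=(200, 5))

W_hat = fit_encoding_model(X, Y, alpha=0.1)
```

In practice the regularization strength would be chosen by cross-validation, and model quality assessed by prediction accuracy on held-out stimuli.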

Position

Fatma Deniz

Technische Universität Berlin
Technische Universität Berlin
Dec 5, 2025

We are looking for a highly motivated researcher to join our group in interdisciplinary projects that focus on the development of computational models to understand how linguistic information is represented in the human brain during language comprehension. Computational encoding models in combination with deep learning-based machine learning techniques will be developed, compared, and applied to identify linguistic representations in the brain across languages.

Position

Fatma Deniz

TU Berlin
TU Berlin
Dec 5, 2025

We are looking for a highly motivated researcher to join our group in interdisciplinary projects that focus on the development of computational models to understand how linguistic information is represented in the human brain during multi-modal language comprehension. Computational encoding models in combination with deep learning-based machine learning techniques will be developed, compared, and applied to identify linguistic representations in the brain. The projects are conducted in collaboration with UC Berkeley.

Position

N/A

N/A
N/A
Dec 5, 2025

We are announcing one or more 2-year postdoc positions in identification and analysis of lexical semantic change using computational models applied to diachronic texts. Our languages change over time. As a consequence, words may look the same, but have different meanings at different points in time, a phenomenon called lexical semantic change (LSC). To facilitate interpretation, search, and analysis of old texts, we build computational methods for automatic detection and characterization of LSC from large amounts of text. Our outputs will be used by the lexicographic R&D unit that compiles the Swedish Academy dictionaries, as well as by researchers from the humanities and social sciences that include textual analysis as a central methodological component. The Change is Key! program and the Towards Computational Lexical Semantic Change Detection research project offer a vibrant research environment for this exciting and rapidly growing cutting-edge research field in NLP. There is a unique opportunity to contribute to the field of LSC, but also to humanities and social sciences through our active collaboration with international researchers in historical linguistics, analytical sociology, gender studies, conceptual history, and literary studies.
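
A common baseline for detecting LSC compares a word's embedding trained on an older corpus with its embedding trained on a newer corpus (after aligning the two spaces, e.g. via orthogonal Procrustes) using cosine distance. A toy sketch with hypothetical, already-aligned vectors:

```python
import numpy as np

def semantic_change(vec_old, vec_new):
    """Cosine distance between a word's embeddings from two time
    periods; larger values suggest stronger lexical semantic change."""
    cos = np.dot(vec_old, vec_new) / (
        np.linalg.norm(vec_old) * np.linalg.norm(vec_new))
    return 1.0 - cos

# Hypothetical aligned embeddings for a stable and a shifted word.
stable_old, stable_new = np.array([1.0, 0.0, 0.2]), np.array([0.9, 0.1, 0.2])
shifted_old, shifted_new = np.array([1.0, 0.0, 0.2]), np.array([0.1, 1.0, -0.3])
```

Words whose usage is stable yield distances near 0, while words that shifted meaning yield markedly larger distances; a threshold or ranking over these scores then flags candidates for lexicographic review.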

Position

Boris Gutkin, Catherine Tallon-Baudry

Group for Neural Theory and LNC2, Ecole Normale Superieure
Paris, France
Dec 5, 2025

A three-year post-doctoral position in theoretical neuroscience is open to explore the mechanisms of interaction between interoceptive cardiac and exteroceptive tactile inputs at the cortical level. We aim to develop data-based computational models of cardiac and somatosensory cortical circuit dynamics. Building on these models, we will determine the conditions under which interactions between exteroceptive and interoceptive inputs occur and which underlying mechanisms (e.g., phase-resetting, gating, phasic arousal) best explain experimental data.

Position

Eugenio Piasini

International School for Advanced Studies (SISSA)
Trieste, Italy
Dec 5, 2025

Up to 6 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2024. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group hosts 7 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language and reading, tactile perception and learning, and neural computation. Our research is highly interdisciplinary; our approaches include behavioural, psychophysics, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. This year, one of the PhD scholarships is set aside for joint PhD projects across PhD programs within the Neuroscience department.

Position

Fatma Deniz

TU Berlin
Berlin, Germany
Dec 5, 2025

We are looking for a highly motivated researcher to join our group for an ERC-project that focuses on the development of computational models to understand how linguistic information is represented in the human brain. Computational encoding models in combination with large language models will be developed, compared, and applied to identify linguistic representations in the brain. We are a small yet dynamic international team, driven by motivation and collaboration within a supportive environment.

Position · Neuroscience

N/A

University of Chicago
Chicago
Dec 5, 2025

The Grossman Center for Quantitative Biology and Human Behavior at the University of Chicago seeks outstanding applicants for multiple postdoctoral positions in computational and theoretical neuroscience. We especially welcome applicants who develop mathematical approaches, computational models, and machine learning methods to study the brain at the circuit, systems, or cognitive level. The current Grossman Center faculty members to work with are the following. Brent Doiron’s lab investigates how the cellular and synaptic circuitry of neuronal circuits supports the complex dynamics and computations that are routinely observed in the brain. Jorge Jaramillo’s lab investigates how subcortical structures interact with cortical circuits to subserve cognitive processes such as memory, attention, and decision making. Ramon Nogueira’s lab investigates the geometry of representations as the computational support of cognitive processes like abstraction in noisy artificial and biological neural networks. Marcella Noorman’s lab investigates how properties of synapses, neurons, and circuits shape the neural dynamics that enable flexible and efficient computation. Samuel Muscinelli’s lab studies how the anatomy of brain circuits both governs learning and adapts to it, combining analytical theory, machine learning, and data analysis in close collaboration with experimentalists. Appointees will have access to state-of-the-art facilities and multiple opportunities for collaboration with exceptional experimental labs within the Neuroscience Institute, as well as other labs from the departments of Physics, Computer Science, and Statistics. The Grossman Center offers competitive postdoctoral salaries in the vibrant and international city of Chicago, and a rich intellectual environment that includes the Argonne National Laboratory and UChicago’s Data Science Institute.
The Neuroscience Institute is currently engaged in a major expansion that includes the incorporation of several new faculty members in the next few years.

Position

Mathew Diamond

SISSA
Trieste, Italy
Dec 5, 2025

Up to 2 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2024. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 6 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language, tactile perception and learning, and neural computation. Our research is highly interdisciplinary; our approaches include behavioural, psychophysics, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The application deadline is 27 August 2024. Please apply here (https://www.sissa.it/bandi/ammissione-ai-corsi-di-philosophiae-doctor-posizioni-cofinanziate-dal-fondo-sociale-europeo), and see the admission procedure page (https://phdcns.sissa.it/admission-procedure) for more information. Note that the positions available for the current admission round are those funded by the 'Fondo Sociale Europeo Plus', accessible through the first link above.

Seminar · Neuroscience

Decision and Behavior

Sam Gershman, Jonathan Pillow, Kenji Doya
Harvard University; Princeton University; Okinawa Institute of Science and Technology
Nov 28, 2024

This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus‐independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (Sidetrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
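
The policy compression idea can be sketched with a Blahut-Arimoto-style iteration: the capacity-limited optimal policy takes the form pi(a|s) ∝ p(a)·exp(beta·Q(s,a)), where p(a) is the policy's own marginal over actions. This is a toy illustration of the framework, not the speaker's actual code; all names are ours.

```python
import numpy as np

def compressed_policy(Q, p_s, beta, n_iter=200):
    """Iterate pi(a|s) ∝ p(a) * exp(beta * Q[s, a]) to a fixed point.
    Low beta -> heavily compressed, state-independent "default" actions;
    high beta -> near-deterministic, reward-maximizing actions."""
    n_s, n_a = Q.shape
    p_a = np.full(n_a, 1.0 / n_a)            # initial action marginal
    for _ in range(n_iter):
        logits = beta * Q + np.log(p_a)      # unnormalized log-policy
        pi = np.exp(logits - logits.max(axis=1, keepdims=True))
        pi /= pi.sum(axis=1, keepdims=True)
        p_a = p_s @ pi                       # update marginal p(a)
    return pi

# Toy task: each state has a different best action.
Q = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
p_s = np.full(3, 1.0 / 3.0)

pi_low = compressed_policy(Q, p_s, beta=0.1)    # heavily compressed
pi_high = compressed_policy(Q, p_s, beta=20.0)  # weakly compressed
```

At low beta the policy collapses toward the state-independent marginal p(a), reproducing the perseverative, stimulus-independent responding the framework predicts; at high beta it approaches the greedy reward-maximizing policy.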

Seminar · Neuroscience

Contribution of computational models of reinforcement learning to neuroscience
Keywords: computational modeling, reward, learning, decision-making, conditioning, navigation, dopamine, basal ganglia, prefrontal cortex, hippocampus

Mehdi Khamassi
Centre National de la Recherche Scientifique / Sorbonne University
Nov 7, 2024

Seminar · Psychology

Error Consistency between Humans and Machines as a function of presentation duration

Thomas Klein
Eberhard Karls Universität Tübingen
Jun 30, 2024

Within the last decade, Deep Artificial Neural Networks (DNNs) have emerged as powerful computer vision systems that match or exceed human performance on many benchmark tasks such as image classification. But whether current DNNs are suitable computational models of the human visual system remains an open question: While DNNs have proven to be capable of predicting neural activations in primate visual cortex, psychophysical experiments have shown behavioral differences between DNNs and human subjects, as quantified by error consistency. Error consistency is typically measured by briefly presenting natural or corrupted images to human subjects and asking them to perform an n-way classification task under time pressure. But for how long should stimuli ideally be presented to guarantee a fair comparison with DNNs? Here we investigate the influence of presentation time on error consistency, to test the hypothesis that higher-level processing drives behavioral differences. We systematically vary presentation times of backward-masked stimuli from 8.3ms to 266ms and measure human performance and reaction times on natural, lowpass-filtered and noisy images. Our experiment constitutes a fine-grained analysis of human image classification under both image corruptions and time pressure, showing that even drastically time-constrained humans who are exposed to the stimuli for only two frames, i.e. 16.6ms, can still solve our 8-way classification task with success rates well above chance. We also find that human-to-human error consistency is already stable at 16.6ms.
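
Error consistency is commonly quantified as a kappa-style statistic that corrects the observed trial-by-trial agreement (both observers jointly right or jointly wrong) for the agreement expected from the two accuracies alone. A minimal sketch (the function name is ours):

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Kappa-style error consistency between two observers, given
    per-trial correctness (1 = correct, 0 = error) for each."""
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    c_obs = np.mean(a == b)                     # observed agreement
    p_a, p_b = a.mean(), b.mean()
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)   # expected by chance
    return (c_obs - c_exp) / (1 - c_exp)

kappa_same = error_consistency([1, 1, 0, 0], [1, 1, 0, 0])  # identical errors
kappa_ind = error_consistency([1, 0, 1, 0], [1, 1, 0, 0])   # unrelated errors
```

A value of 1 means the observers make exactly the same errors, 0 means their errors overlap no more than accuracy alone predicts, and negative values indicate systematically disjoint errors. (The statistic is undefined when expected agreement is exactly 1.)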

Seminar · Neuroscience

Connectome-based models of neurodegenerative disease

Jacob Vogel
Lund University
Dec 4, 2023

Neurodegenerative diseases involve accumulation of aberrant proteins in the brain, leading to brain damage and progressive cognitive and behavioral dysfunction. Many gaps exist in our understanding of how these diseases initiate and how they progress through the brain. However, evidence has accumulated supporting the hypothesis that aberrant proteins can be transported using the brain’s intrinsic network architecture — in other words, using the brain’s natural communication pathways. This theory forms the basis of connectome-based computational models, which combine real human data and theoretical disease mechanisms to simulate the progression of neurodegenerative diseases through the brain. In this talk, I will first review work leading to the development of connectome-based models, and work from my lab and others that have used these models to test hypothetical modes of disease progression. Second, I will discuss the future and potential of connectome-based models to achieve clinically useful individual-level predictions, as well as to generate novel biological insights into disease progression. Along the way, I will highlight recent work by my lab and others that is already moving the needle toward these lofty goals.
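
One widely used formalization of connectome-based spread is the network diffusion model, in which a pathology vector x evolves as dx/dt = −beta·L·x, with L the graph Laplacian of the connectome. The toy sketch below uses a hypothetical 4-region connectome, not the speaker's actual model:

```python
import numpy as np

# Hypothetical symmetric 4-region connectome (edge weights).
W = np.array([[0.0, 2.0, 1.0, 0.0],
              [2.0, 0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0, 2.0],
              [0.0, 1.0, 2.0, 0.0]])
L = np.diag(W.sum(axis=1)) - W   # graph Laplacian

def simulate_spread(x0, beta, t):
    """Closed-form solution x(t) = exp(-beta*t*L) @ x0, computed via
    eigendecomposition (L is symmetric). Total pathology is conserved."""
    vals, vecs = np.linalg.eigh(L)
    return vecs @ (np.exp(-beta * vals * t) * (vecs.T @ x0))

x0 = np.array([1.0, 0.0, 0.0, 0.0])        # seed pathology in region 0
x_early = simulate_spread(x0, beta=0.5, t=1.0)
x_late = simulate_spread(x0, beta=0.5, t=100.0)
```

Early on, pathology concentrates near the seed region and leaks along the strongest connections; at long times it equilibrates across the network, so the informative signal is the transient spatiotemporal pattern compared against observed regional burdens.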

Seminar · Neuroscience

Modeling the Navigational Circuitry of the Fly

Larry Abbott
Columbia University
Nov 30, 2023

Navigation requires orienting oneself relative to landmarks in the environment, evaluating relevant sensory data, remembering goals, and converting all this information into motor commands that direct locomotion. I will present models, highly constrained by connectomic, physiological, and behavioral data, for how these functions are accomplished in the fly brain.

Seminar · Neuroscience

Computational models of spinal locomotor circuitry

Simon Danner
Drexel University, Philadelphia, USA
Jun 13, 2023

To effectively move in complex and changing environments, animals must control locomotor speed and gait, while precisely coordinating and adapting limb movements to the terrain. The underlying neuronal control is facilitated by circuits in the spinal cord, which integrate supraspinal commands and afferent feedback signals to produce coordinated rhythmic muscle activations necessary for stable locomotion. I will present a series of computational models investigating dynamics of central neuronal interactions as well as a neuromechanical model that integrates neuronal circuits with a model of the musculoskeletal system. These models closely reproduce speed-dependent gait expression and experimentally observed changes following manipulation of multiple classes of genetically-identified neuronal populations. I will discuss the utility of these models in providing experimentally testable predictions for future studies.

Seminar · Artificial Intelligence · Recording

Computational models and experimental methods for the human cornea

Anna Pandolfi
Politecnico di Milano
May 1, 2023

The eye is a multi-component biological system, where mechanics, optics, transport phenomena and chemical reactions are strictly interlaced, characterized by the typical bio-variability in sizes and material properties. The eye’s response to external actions is patient-specific and can be predicted only by a customized approach that accounts for the multiple physics involved and for the intrinsic microstructure of the tissues, developed with the aid of state-of-the-art computational biomechanics. Our activity in recent years has been devoted to the development of a comprehensive model of the cornea that aims at being entirely patient-specific. While the geometrical aspects are fully under control, given the sophisticated diagnostic machinery able to provide fully three-dimensional images of the eye, the major difficulties are related to the characterization of the tissues, which requires the setup of in-vivo tests to complement the well-documented results of in-vitro tests. The interpretation of in-vivo tests is very complex, since the entire structure of the eye is involved and the characterization of a single tissue is not trivial. The availability of micromechanical models constructed from detailed images of the eye represents an important support for the characterization of the corneal tissues, especially in the case of pathologic conditions. In this presentation I will provide an overview of the research developed in our group in terms of computational models and experimental approaches for the human cornea.

Seminar · Neuroscience · Recording

Neural circuits of visuospatial working memory

Albert Compte
IDIBAPS, Barcelona
May 10, 2022

One elementary brain function that underlies many of our cognitive behaviors is the ability to maintain parametric information briefly in mind, on the time scale of seconds, to span delays between sensory information and actions. This component of working memory is fragile and quickly degrades with delay length. Under the assumption that behavioral delay-dependencies mark core functions of the working memory system, our goal is to find a neural circuit model that represents their neural mechanisms and apply it to research on working memory deficits in neuropsychiatric disorders. We have constrained computational models of spatial working memory with delay-dependent behavioral effects and with neural recordings in the prefrontal cortex during visuospatial working memory. I will show that a simple bump attractor model with weak inhomogeneities and short-term plasticity mechanisms can link neural data with fine-grained behavioral output on a trial-by-trial basis and account for the main delay-dependent limitations of working memory: precision, cardinal repulsion biases, and serial dependence. I will finally present data from participants with neuropsychiatric disorders that suggest that serial dependence in working memory is specifically altered, and I will use the model to infer the possible neural mechanisms affected.

Seminar · Open Source · Recording

GeNN

James Knight
University of Sussex
Mar 22, 2022

Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are also gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks to address the challenge of efficient simulations. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library but, subsequently, we have added a Python interface and OpenCL backend. We will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it interacts with other Open Source frameworks such as Brian2GeNN and PyNN.
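
GeNN's own Python interface has its specific model-description API (see the GeNN documentation); purely as an illustration of the kind of model such simulators generate GPU code for, here is a minimal leaky integrate-and-fire population in plain NumPy. All parameter values are hypothetical, and this is not GeNN's API.

```python
import numpy as np

def simulate_lif(n=100, t_steps=1000, dt=1.0, tau=20.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
                 i_ext=16.0, rng=None):
    """Forward-Euler simulation of n unconnected LIF neurons with a
    noisy constant drive. Returns a (t_steps, n) boolean spike raster."""
    rng = np.random.default_rng(rng)
    v = np.full(n, v_rest)
    spikes = np.zeros((t_steps, n), dtype=bool)
    for t in range(t_steps):
        noise = rng.normal(0.0, 1.0, n)
        v += dt / tau * (v_rest - v + i_ext + noise)  # leaky integration
        fired = v >= v_thresh
        spikes[t] = fired
        v[fired] = v_reset                            # reset after spike
    return spikes

raster = simulate_lif(rng=0)
```

In GeNN, the update rule above would be expressed once as a neuron model, and the library would generate and compile the corresponding CUDA (or OpenCL/C++) kernel so that much larger populations run efficiently on a GPU.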

Seminar · Neuroscience

Why would we need Cognitive Science to develop better Collaborative Robots and AI Systems?

Dorothea Koert
Technical University of Darmstadt
Dec 14, 2021

While classical industrial robots are mostly designed for repetitive tasks, assistive robots will be challenged by a variety of different tasks in close contact with humans. Hereby, learning through the direct interaction with humans provides a potentially powerful tool for an assistive robot to acquire new skills and to incorporate prior human knowledge during the exploration of novel tasks. Moreover, an intuitive interactive teaching process may allow non-programming experts to contribute to robotic skill learning and may help to increase acceptance of robotic systems in shared workspaces and everyday life. In this talk, I will discuss recent research I did on interactive robot skill learning and the remaining challenges on the route to human-centered teaching of assistive robots. In particular, I will also discuss potential connections and overlap with cognitive science. The presented work covers learning a library of probabilistic movement primitives from human demonstrations, intention aware adaptation of learned skills in shared workspaces, and multi-channel interactive reinforcement learning for sequential tasks.

Seminar · Neuroscience

Nonlinear spatial integration in retinal bipolar cells shapes the encoding of artificial and natural stimuli

Helene Schreyer
Gollisch lab, University Medical Center Göttingen, Germany
Dec 8, 2021

Vision begins in the eye, and what the “retina tells the brain” is a major interest in visual neuroscience. To deduce what the retina encodes (“tells”), computational models are essential. The most important models in the retina currently aim to understand the responses of the retinal output neurons – the ganglion cells. Typically, these models make simplifying assumptions about the neurons in the retinal network upstream of ganglion cells. One important assumption is linear spatial integration. In this talk, I first define what it means for a neuron to be spatially linear or nonlinear and how we can experimentally measure these phenomena. Next, I introduce the neurons upstream of retinal ganglion cells, with a focus on bipolar cells, which are the connecting elements between the photoreceptors (input to the retinal network) and the ganglion cells (output). This pivotal position makes bipolar cells an interesting target for studying the assumption of linear spatial integration, yet, because they are buried in the middle of the retina, it is challenging to measure their neural activity. Here, I present bipolar cell data where I ask whether spatial linearity holds under artificial and natural visual stimuli. Through diverse analyses and computational models, I show that bipolar cells are more complex than previously thought and that they can already act as nonlinear processing elements at the level of their somatic membrane potential. Furthermore, through pharmacology and current measurements, I illustrate that the observed spatial nonlinearity arises at the excitatory inputs to bipolar cells. In the final part of my talk, I address the functional relevance of the nonlinearities in bipolar cells through combined recordings of bipolar and ganglion cells, and I show that the nonlinearities in bipolar cells provide high spatial sensitivity to downstream ganglion cells.
Overall, I demonstrate that simple linear assumptions do not always apply and more complex models are needed to describe what the retina “tells” the brain.
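
The linear-versus-nonlinear spatial integration distinction can be made concrete with a toy subunit model: a spatially linear cell sums a contrast-reversing split-field stimulus to zero in both phases, whereas a cell with rectified subunits responds in every phase (the classic frequency-doubled signature). This is an illustrative sketch only; all names and the stimulus are ours.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def linear_cell(stim):
    """Output of a cell that sums its inputs linearly over space,
    with rectification applied only at the output."""
    return relu(stim.sum(axis=-1))

def subunit_cell(stim):
    """Cell whose two spatial subunits are rectified *before* summation."""
    half = stim.shape[-1] // 2
    return relu(stim[..., :half].sum(-1)) + relu(stim[..., half:].sum(-1))

# Contrast-reversing split-field stimulus: one half +1, other -1, then flipped.
phase_a = np.array([1.0, 1.0, -1.0, -1.0])
phase_b = -phase_a

lin = linear_cell(phase_a) + linear_cell(phase_b)    # phases cancel
sub = subunit_cell(phase_a) + subunit_cell(phase_b)  # responds in both phases
```

Measuring responses to such stimuli at different spatial frequencies is one standard experimental test of whether a neuron integrates its inputs linearly over space.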

Seminar · Psychology

Computational Models of Fine-Detail and Categorical Information in Visual Working Memory: Unified or Separable Representations?

Timothy J Ricker
University of South Dakota
Nov 21, 2021

When we remember a stimulus, we rarely maintain a full-fidelity representation of the observed item. Our working memory instead maintains a mixture of the observed feature values and categorical/gist information. I will discuss evidence from computational models supporting a mix of categorical and fine-detail information in working memory. Having established the need for two memory formats in working memory, I will discuss whether categorical and fine-detail information for a stimulus are represented separately or as a single unified representation. Computational models of these two potential cognitive structures make differing predictions about the pattern of responses in visual working memory recall tests. The present study required participants to remember the orientation of stimuli for later reproduction. The pattern of responses is used to test the competing representational structures and to quantify the relative amounts of fine-detail and categorical information maintained. The effects of set size, encoding time, serial order, and response order on memory precision, categorical information, and guessing rates are also explored. (This is a 60 min talk).
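
Such mixtures are commonly formalized as mixture densities over recall errors, in the spirit of Zhang-and-Luck-style models: a fine-detail component centered on the target, a categorical component centered on the category prototype, and uniform guessing. The sketch below is a generic illustration with hypothetical parameters, not the study's actual model.

```python
import numpy as np

def vonmises_pdf(x, mu, kappa):
    """Von Mises density on the circle (np.i0 is the modified Bessel I0)."""
    return np.exp(kappa * np.cos(x - mu)) / (2 * np.pi * np.i0(kappa))

def response_pdf(x, target, category, p_fine, p_cat, kappa_fine, kappa_cat):
    """Three-component mixture over recall responses (angles in radians):
    fine-detail memory around the target, categorical memory around the
    category prototype, and uniform guessing for the remainder."""
    p_guess = 1.0 - p_fine - p_cat
    return (p_fine * vonmises_pdf(x, target, kappa_fine)
            + p_cat * vonmises_pdf(x, category, kappa_cat)
            + p_guess / (2 * np.pi))
```

Fitting the mixture weights and concentrations to participants' reproduction errors (e.g. by maximum likelihood) yields the relative amounts of fine-detail memory, categorical memory, and guessing, and the competing unified-versus-separable architectures predict different constraints on those fitted parameters.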

SeminarNeuroscienceRecording

Computational Models of Compulsivity

Frederike Petzschner
Brown University
Nov 10, 2021
SeminarArtificial Intelligence

Seeing things clearly: Image understanding through hard-attention and reasoning with structured knowledges

Jonathan Gerrand
University of the Witwatersrand
Nov 3, 2021

In this talk, Jonathan aims to frame the current challenges of explainability and understanding in ML-driven approaches to image processing, and their potential solution through explicit inference techniques.

SeminarNeuroscienceRecording

The Social Brain: From Models to Mental Health

Xiaosi Gu
Mount Sinai
Sep 16, 2021

Given the complex and dynamic nature of our social relationships, the human brain needs to quickly learn and adapt to new social situations. The breakdown of any of these computations could lead to social deficits, as observed in many psychiatric disorders. In this talk, I will present our recent neurocomputational and intracranial work that attempts to model both 1) how humans dynamically adapt beliefs about other people and 2) how individuals can exert influence over social others through model-based forward thinking. Lastly, I will present our findings of how impaired social computations might manifest in different disorders such as addiction, delusion, and autism. Taken together, these findings reveal the dynamic and proactive nature of human interactions as well as the clinical significance of these high-order social processes.

SeminarNeuroscienceRecording

An in-silico framework to study the cholinergic modulation of the neocortex

Cristina Colangelo
EPFL, Blue Brain Project
Jun 29, 2021

Neuromodulators control information processing in cortical microcircuits by regulating the cellular and synaptic physiology of neurons. Computational models and detailed simulations of neocortical microcircuitry offer a unifying framework to analyze the role of neuromodulators on network activity. In the present study, to get a deeper insight into the organization of the cortical neuropil for modeling purposes, we quantify the fiber length per cortical volume and the density of varicosities for the catecholaminergic, serotonergic and cholinergic systems using immunocytochemical staining and stereological techniques. The data obtained are integrated into a biologically detailed digital reconstruction of the rodent neocortex (Markram et al, 2015) in order to model the influence of modulatory systems on the activity of a neocortical column of the somatosensory cortex. Simulations of ascending modulation of network activity in our model predict the effects of increasing levels of neuromodulators on diverse neuron types and synapses and reveal a spectrum of activity states. Low levels of neuromodulation drive microcircuit activity into slow oscillations and network synchrony, whereas high neuromodulator concentrations govern fast oscillations and network asynchrony. The models and simulations thus provide a unifying in silico framework to study the role of neuromodulators in reconfiguring network activity.

SeminarNeuroscience

Towards a neurally mechanistic understanding of visual cognition

Kohitij Kar
Massachusetts Institute of Technology
Jun 13, 2021

I am interested in developing a neurally mechanistic understanding of how primate brains represent the world through their visual systems and how such representations enable a remarkable set of intelligent behaviors. In this talk, I will primarily highlight aspects of my current research that focuses on dissecting the brain circuits that support core object recognition behavior (primates’ ability to categorize objects within hundreds of milliseconds) in non-human primates. On the one hand, my work empirically examines how well computational models of the primate ventral visual pathways embed knowledge of the visual brain function (e.g., Bashivan*, Kar*, DiCarlo, Science, 2019). On the other hand, my work has led to various functional and architectural insights that help improve such brain models. For instance, we have exposed the necessity of recurrent computations in primate core object recognition (Kar et al., Nature Neuroscience, 2019), one that is strikingly missing from most feedforward artificial neural network models. Specifically, we have observed that the primate ventral stream requires fast recurrent processing via ventrolateral PFC for robust core object recognition (Kar and DiCarlo, Neuron, 2021). In addition, I am currently developing various chemogenetic strategies to causally target specific bidirectional neural circuits in the macaque brain during multiple object recognition tasks to further probe their relevance during this behavior. I plan to transform these data and insights into tangible progress in neuroscience via my collaborations with various computational groups and by building improved brain models of object recognition. I hope to end the talk with a brief glimpse of some of my planned future work!

SeminarPhysics of LifeRecording

Microorganism locomotion in viscoelastic fluids

Becca Thomases
University of California Davis
May 11, 2021

Many microorganisms and cells function in complex (non-Newtonian) fluids, which are mixtures of different materials and exhibit both viscous and elastic stresses. For example, mammalian sperm swim through cervical mucus on their journey through the female reproductive tract, and they must penetrate the viscoelastic gel outside the ovum to fertilize. In micro-scale swimming, the dynamics emerge from the coupled interactions between the complex rheology of the surrounding media and the passive and active body dynamics of the swimmer. We use computational models of swimmers in viscoelastic fluids to investigate and provide mechanistic explanations for emergent swimming behaviors. I will discuss how flexible filaments (such as flagella) can store energy from a viscoelastic fluid to gain stroke boosts due to fluid elasticity. I will also describe 3D simulations of model organisms such as C. reinhardtii and mammalian sperm, where we use experimentally measured stroke data to separate naturally coupled stroke and fluid effects. We explore why strokes that are adapted to Newtonian fluid environments might not do well in viscoelastic environments.

SeminarNeuroscienceRecording

Learning in pain: probabilistic inference and (mal)adaptive control

Flavia Mancini
Department of Engineering
Apr 19, 2021

Pain is a major clinical problem affecting 1 in 5 people in the world. There are unresolved questions that urgently require answers to treat pain effectively, a crucial one being how the feeling of pain arises from brain activity. Computational models of pain consider how the brain processes noxious information and allow mapping neural circuits and networks to cognition and behaviour. To date, they have generally assumed two largely independent processes: perceptual and/or predictive inference, typically modelled as an approximate Bayesian process, and action control, typically modelled as a reinforcement learning process. However, inference and control are intertwined in complex ways, challenging the clarity of this distinction. I will discuss how they may comprise a parallel hierarchical architecture that combines pain inference, information-seeking, and adaptive value-based control. Finally, I will discuss whether and how these learning processes might contribute to chronic pain.
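
The inference side mentioned above is often approximated with a Gaussian belief update, in which the learning rate (Kalman gain) shrinks as the belief about noxious-input intensity becomes more certain. A minimal sketch of that update, with illustrative numbers rather than any fitted model from the talk:

```python
import numpy as np

def bayes_update(mu, var, obs, obs_var):
    """One Gaussian (Kalman-style) belief update about stimulus intensity.
    The gain, i.e. the effective learning rate, falls as the belief sharpens."""
    gain = var / (var + obs_var)
    return mu + gain * (obs - mu), (1.0 - gain) * var

rng = np.random.default_rng(0)
true_intensity, obs_var = 2.0, 0.5
mu, var = 0.0, 4.0                     # vague prior belief
for _ in range(50):
    obs = true_intensity + rng.normal(0.0, np.sqrt(obs_var))
    mu, var = bayes_update(mu, var, obs, obs_var)
# mu converges toward the true intensity while the posterior variance shrinks
```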

SeminarNeuroscience

Choosing, fast and slow: Implications of prioritized-sampling models for understanding automaticity and control

Cendri Hutcherson
University of Toronto
Apr 14, 2021

The idea that behavior results from a dynamic interplay between automatic and controlled processing underlies much of decision science, but has also generated considerable controversy. In this talk, I will highlight behavioral and neural data showing how recently developed computational models of decision making can be used to shed new light on whether, when, and how decisions result from distinct processes operating at different timescales. Across diverse domains ranging from altruism to risky choice biases and self-regulation, our work suggests that a model of prioritized attentional sampling and evidence accumulation may provide an alternative explanation for many phenomena previously interpreted as supporting dual process models of choice. However, I also show how some features of the model might be taken as support for specific aspects of dual-process models, providing a way to reconcile conflicting accounts and generating new predictions and insights along the way.
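
The core intuition of prioritized-sampling accounts can be captured in a standard evidence-accumulation simulation: if one attribute is sampled immediately and another only after a delay, fast responses are dominated by the prioritized attribute without positing two separate systems. A toy sketch under assumed parameters (not the speaker's actual model):

```python
import numpy as np

def simulate_trial(v_prioritized, v_delayed, onset_delayed, rng,
                   threshold=1.0, dt=0.01, noise=0.05, t_max=20.0):
    """Diffusion-style accumulation in which a delayed attribute contributes
    drift only after it starts being sampled; returns (choice, response time)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < t_max:
        drift = v_prioritized + (v_delayed if t >= onset_delayed else 0.0)
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else -1), t

rng = np.random.default_rng(0)
# With only the prioritized attribute driving early evidence, the fast
# response follows that attribute's sign.
choice, rt = simulate_trial(v_prioritized=0.5, v_delayed=0.0,
                            onset_delayed=10.0, rng=rng)
```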

SeminarNeuroscienceRecording

Inferring brain-wide interactions using data-constrained recurrent neural network models

Matthew Perich
Rajan lab, Icahn School of Medicine at Mount Sinai
Mar 23, 2021

Behavior arises from the coordinated activity of numerous distinct brain regions. Modern experimental tools allow access to neural populations brain-wide, yet understanding such large-scale datasets necessitates scalable computational models to extract meaningful features of inter-region communication. In this talk, I will introduce Current-Based Decomposition (CURBD), an approach for inferring multi-region interactions using data-constrained recurrent neural network models. I will first show that CURBD accurately isolates inter-region currents in simulated networks with known dynamics. I will then apply CURBD to understand the brain-wide flow of information leading to behavioral state transitions in larval zebrafish. These examples will establish CURBD as a flexible, scalable framework to infer brain-wide interactions that are inaccessible from experimental measurements alone.
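
The decomposition at the heart of CURBD is simple once a data-constrained RNN is trained: the recurrent input to each target region is split exactly into currents contributed by each source region, via the corresponding block of the weight matrix. A sketch with stand-in weights and rates (the region sizes and random weights here are illustrative, not fitted to data):

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = {"A": 5, "B": 4, "C": 3}                     # hypothetical regions
n = sum(sizes.values())
J = rng.standard_normal((n, n)) / np.sqrt(n)         # stand-in for trained weights
r = np.tanh(rng.standard_normal(n))                  # firing rates at one time point

# index slices per region
slices, start = {}, 0
for name, size in sizes.items():
    slices[name] = slice(start, start + size)
    start += size

def curbd_currents(J, r, slices):
    """Split the recurrent input to each region into source-region currents:
    current from src into tgt = J[tgt, src] @ r[src]."""
    return {(tgt, src): J[ts, ss] @ r[ss]
            for tgt, ts in slices.items() for src, ss in slices.items()}

currents = curbd_currents(J, r, slices)
# The decomposition is exact: source currents sum to each region's total input.
total_into_A = sum(currents[("A", src)] for src in sizes)
```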

SeminarNeuroscienceRecording

The When, Where and What of visual memory formation

Brad Wyble
Pennsylvania State University
Feb 11, 2021

The eyes send a continuous stream of about two million nerve fibers to the brain, but only a fraction of this information is stored as visual memories. This talk will detail three neurocomputational models that attempt to understand how the visual system makes on-the-fly decisions about how to encode that information. First, the STST family of models (Bowman & Wyble 2007; Wyble, Potter, Bowman & Nieuwenstein 2011) proposes mechanisms for temporal segmentation of continuous input. The conclusion of this work is that the visual system has mechanisms for rapidly creating brief episodes of attention that highlight important moments in time, and also separates each episode from temporally adjacent neighbors to benefit learning. Next, the RAGNAROC model (Wyble et al. 2019) describes a decision process for determining the spatial focus (or foci) of attention in a spatiotopic field and the neural mechanisms that provide enhancement of targets and suppression of highly distracting information. This work highlights the importance of integrating behavioral and electrophysiological data to provide empirical constraints on a neurally plausible model of spatial attention. The model also highlights how a neural circuit can make decisions in a continuous space, rather than among discrete alternatives. Finally, the binding pool (Swan & Wyble 2014; Hedayati, O’Donnell, Wyble in Prep) provides a mechanism for selectively encoding specific attributes (i.e. color, shape, category) of a visual object to be stored in a consolidated memory representation. The binding pool is akin to a holographic memory system that superimposes selected latent representations corresponding to different attributes of a given object. Moreover, it can bind features into distinct objects by linking them to token placeholders.
Future work looks toward combining these models into a coherent framework for understanding the full measure of on-the-fly attentional mechanisms and how they improve learning.
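
The binding-by-token idea can be illustrated with a generic holographic (outer-product) binding scheme: bound object-feature pairs are superimposed in one shared store, and probing with an object's token recovers its feature. This is a minimal sketch of that general technique, not the binding pool's exact implementation; the dimensionality and vectors are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256                                  # pool dimensionality (hypothetical)

def rand_unit(rng, d):
    """Random unit vector used as a distributed code."""
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

# token placeholders for two objects, and their colour features
token_1, token_2 = rand_unit(rng, d), rand_unit(rng, d)
red, blue = rand_unit(rng, d), rand_unit(rng, d)

# store both object-feature bindings superimposed in one shared pool
pool = np.outer(token_1, red) + np.outer(token_2, blue)

# probe with object 1's token to retrieve the feature bound to it
recalled = token_1 @ pool
similarity = {"red": float(recalled @ red), "blue": float(recalled @ blue)}
# the recalled trace matches "red" far better than "blue", despite
# both bindings sharing the same storage
```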

SeminarNeuroscienceRecording

Global visual salience of competing stimuli

Alex Hernandez-Garcia
Université de Montréal
Dec 9, 2020

Current computational models of visual salience accurately predict the distribution of fixations on isolated visual stimuli. It is not known, however, whether the global salience of a stimulus, that is, its effectiveness in the competition for attention with other stimuli, is a function of the local salience or an independent measure. Further, do task and familiarity with the competing images influence eye movements? In this talk, I will present the analysis of a computational model of the global salience of natural images. We trained a machine learning algorithm to learn the direction of the first saccade of participants who freely observed pairs of images. The pairs balanced the combinations of new and already seen images, as well as task and task-free trials. The coefficients of the model provided a reliable measure of the likelihood of each image to attract the first fixation when seen next to another image, that is, their global salience. For example, images of close-up faces and images containing humans were consistently looked at first and were assigned higher global salience. Interestingly, we found that global salience cannot be explained by the feature-driven local salience of images, the influence of task and familiarity was rather small, and we reproduced the previously reported left-sided bias. This computational model of global salience allows us to analyse multiple other aspects of human visual perception of competing stimuli. In the talk, I will also present our latest results from analysing the saccadic reaction time as a function of the global salience of the pair of images.
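
A natural way to formalise per-image scores from pairwise first-saccade data is a Bradley-Terry-style logistic model with an additive left-side bias term: the probability of first fixating the left image depends on the score difference plus the bias. A hedged sketch on simulated data (the scores, bias value, and fitting procedure are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
true_salience = np.array([1.0, 0.0, -1.0])   # hypothetical scores for 3 images
true_bias = 0.3                              # left-sided viewing bias

def p_left(s_left, s_right, bias):
    """Probability that the first saccade goes to the left image."""
    return 1.0 / (1.0 + np.exp(-(s_left - s_right + bias)))

# simulate first saccades for random image pairs
trials = []
for _ in range(2000):
    i, j = rng.choice(3, size=2, replace=False)
    trials.append((i, j, rng.random() < p_left(true_salience[i],
                                               true_salience[j], true_bias)))

# fit scores and bias by gradient ascent on the Bernoulli log-likelihood
s_hat, b_hat, lr = np.zeros(3), 0.0, 0.5
for _ in range(200):
    grad_s, grad_b = np.zeros(3), 0.0
    for i, j, went_left in trials:
        err = went_left - p_left(s_hat[i], s_hat[j], b_hat)
        grad_s[i] += err
        grad_s[j] -= err
        grad_b += err
    s_hat += lr * grad_s / len(trials)
    b_hat += lr * grad_b / len(trials)
    s_hat -= s_hat.mean()                    # scores identified only up to a constant
```

The fitted coefficients recover the salience ordering of the images and a positive left-side bias, mirroring how model coefficients serve as the global-salience measure.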

SeminarPhysics of Life

“Models for Liquid-liquid Phase Separation of Intrinsically Disordered Proteins”

Wenwei Zheng
Arizona State University
Oct 19, 2020

Intrinsically disordered proteins (IDPs), which lack a well-defined folded structure, have recently been shown to be critical to forming membrane-less organelles via liquid-liquid phase separation (LLPS). Due to the flexible conformations of IDPs, it can be challenging to investigate IDPs with experimental techniques alone. Computational models can therefore provide complementary views on several aspects, including the fundamental physics underlying LLPS and the sequence determinants contributing to LLPS. In this presentation, I will start with our coarse-grained computational framework that can help generate sequence-dependent phase diagrams. The coarse-grained model further led to the development of a polymer model with empirical parameters to quickly predict LLPS of IDPs. Finally, I will show our preliminary efforts to address molecular interactions within LLPS of IDPs using all-atom explicit-solvent simulations.

SeminarNeuroscience

Computational models of neural development

Geoffrey J. Goodhill
The University of Queensland
Jul 20, 2020

Unlike even the most sophisticated current forms of artificial intelligence, developing biological organisms must build their neural hardware from scratch. Furthermore, they must start to evade predators and find food before this construction process is complete. I will discuss an interdisciplinary program of mathematical and experimental work which addresses some of the computational principles underlying neural development. This includes (i) how growing axons navigate to their targets by detecting and responding to molecular cues in their environment, (ii) the formation of maps in the visual cortex and how these are influenced by visual experience, and (iii) how patterns of neural activity in the zebrafish brain develop to facilitate precisely targeted hunting behaviour. Together this work contributes to our understanding of both normal neural development and the etiology of neurodevelopmental disorders.

SeminarNeuroscienceRecording

Computational Models of Large-Scale Brain Networks - Dynamics & Function

Jorge Mejias
University of Amsterdam
Apr 21, 2020

Theoretical and computational models of neural systems have traditionally focused on small neural circuits, given the lack of reliable data on large-scale brain structures. The situation has started to change in recent years, with novel recording technologies and large organized efforts to describe the brain at a larger scale. In this talk, Professor Mejias from the University of Amsterdam will review his recent work on developing anatomically constrained computational models of large-scale cortical networks of monkeys, and how this approach can help to answer important questions in large-scale neuroscience. He will focus on three main aspects: (i) the emergence of functional interactions in different frequency regimes, (ii) the role of balance for efficient large-scale communication, and (iii) new paradigms of brain function, such as working memory, in large-scale networks.