Topic spotlight · World Wide

Computational

Discover seminars, jobs, and research tagged with Computational across World Wide.
123 curated items: 60 Seminars, 40 ePosters, 19 Positions, 4 Conferences
Seminar · Neuroscience

Computational Mechanisms of Predictive Processing in Brains and Machines

Dr. Antonino Greco
Hertie Institute for Clinical Brain Research, Germany
Dec 9, 2025

Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive coding inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
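
For readers unfamiliar with the framework, here is a minimal sketch of a predictive-coding-style inference step (a toy linear model in the spirit of Rao and Ballard, not the speaker's models): a latent estimate is updated by gradient descent on the prediction error between the sensory input and the top-down prediction. All dimensions, the generative matrix W, and the step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear generative model: sensory input x is approximately W @ z.
n_x, n_z = 16, 4
W = rng.normal(size=(n_x, n_z))                # assumed known for this sketch
z_true = rng.normal(size=n_z)
x = W @ z_true + 0.05 * rng.normal(size=n_x)   # noisy sensory input

z = np.zeros(n_z)                              # internal estimate of the cause
eta = 0.5 / np.linalg.norm(W, ord=2) ** 2      # step size chosen for stable descent
for _ in range(300):
    eps = x - W @ z                            # prediction error: input minus prediction
    z += eta * W.T @ eps                       # update estimate to reduce squared error

print("residual prediction error:", np.linalg.norm(x - W @ z))
```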

Position

Professors Yale Cohen and Jennifer Groh

University of Pennsylvania
Philadelphia, USA
Dec 5, 2025

Yale Cohen (U. Penn; https://auditoryresearchlaboratory.weebly.com/) and Jennifer Groh (Duke U.; www.duke.edu/~jmgroh) seek a full-time post-doctoral scholar. Our labs study visual, auditory, and multisensory processing in the brain using neurophysiological and computational techniques. We have a newly funded NIH grant to study the contribution of corticofugal connectivity in non-human primate models of auditory perception. The work will take place at the Penn site. This will be a full-time, 12-month renewable appointment. Salary will be commensurate with experience and consistent with NIH NRSA stipends. To apply, send your CV along with contact information for two referees to: compneuro@sas.upenn.edu. For questions, please contact Yale Cohen (ycohen@pennmedicine.upenn.edu). Applications will be considered on a rolling basis, and we anticipate a summer 2022 start date. Penn is an Affirmative Action / Equal Opportunity Employer committed to providing employment opportunity without regard to an individual’s age, color, disability, gender, gender expression, gender identity, genetic information, national origin, race, religion, sex, sexual orientation, or veteran status.

Position · Computational Biology

Navin Pokala

New York Institute of Technology
New York City, USA
Dec 5, 2025

The Department of Biological and Chemical Sciences at New York Institute of Technology seeks outstanding applicants for a tenure-track position at the Assistant Professor level to develop a research program in the broadly defined fields of biostatistics, bioinformatics or computational biology that complements existing research programs and carries potential to establish external collaborations. The successful candidate will teach introductory and advanced courses in the biological sciences at the undergraduate level, notably Biostatistics. The Department has undergraduate programs in Biology, Chemistry, and Biotechnology at the New York City and Long Island (Old Westbury) campuses. New York Tech emphasizes interdisciplinary scholarship, research, and teaching. Department faculty research interests are diverse, including medicinal and organic chemistry, neuroscience, cell and molecular biology, genetics, biochemistry, microbiology, computational chemistry, and analytical chemistry. Faculty in the Department have ample opportunity to collaborate with faculty at New York Tech’s College of Engineering and Computer Sciences and College of Osteopathic Medicine.

Position · Computational Biology

Department of Biological and Chemical Sciences

New York Institute of Technology
New York City, NY
Dec 5, 2025

The Department of Biological and Chemical Sciences at New York Institute of Technology seeks outstanding applicants for a tenure-track position at the Assistant Professor level to develop a research program in the broadly defined fields of biostatistics, bioinformatics or computational biology that complements existing research programs and carries potential to establish external collaborations. The Assistant Professor will be responsible for our NY campus locations. The Department has undergraduate programs in Biology, Chemistry, and Biotechnology at the New York City and Long Island (Old Westbury) campuses. New York Tech emphasizes interdisciplinary scholarship, research, and teaching. Department faculty research interests are diverse, including medicinal and organic chemistry, neuroscience, cell and molecular biology, genetics, biochemistry, microbiology, computational chemistry, and analytical chemistry. Faculty in the Department have ample opportunity to collaborate with faculty at New York Tech’s College of Engineering and Computer Sciences and College of Osteopathic Medicine. The successful candidate will teach introductory and advanced lecture and laboratory courses in the biological sciences at the undergraduate level, notably Biostatistics. Interested candidates should apply here: https://internal-nyit.icims.com/jobs/2681/assistant-professor%2c-tenure-track%2c-computational-quantitative-biology/job?iis=Social+Networks&iieid=pt1645617395656e37e8

Position

Prof Richard Smith

Northwestern Medical School
Chicago, USA
Dec 5, 2025

The Smith lab is seeking team members to conduct exciting research in human neurodevelopment and models of neuronal activity in the prenatal brain. Interested applicants can expect to work in an environment that promotes autonomy and provides all the resources to develop and expand the several ongoing research projects of the lab. These include, but are not limited to, questions relating to human brain development, human disease modeling (using high-throughput approaches), and therapeutics. Current NIH-funded projects are examining ion flux and biophysical properties of developing cell types in the prenatal brain, specifically as it relates to childhood diseases. As a trainee you will have the opportunity to gain expertise in several state-of-the-art approaches widely used to interrogate important aspects of neurodevelopment, including human stem cell cerebral organoid models, single-cell sequencing (RNA/ATAC), high-content confocal microscopy/screening, a ferret model of cortex development, and hiPSC-derived neuronal models (excitatory, dopamine, inhibitory). Additional physiology approaches include 2-photon imaging, high-throughput electrophysiology, patch-clamp, and calcium/voltage imaging. Please visit our website for details about our research: www.rsmithlab.com

Position

Drs. David Brang and Zhongming Liu

University of Michigan
Ann Arbor, Michigan, United States
Dec 5, 2025

We are seeking a full-time post-doctoral research fellow to study computational and neuroscientific models of perception and cognition. The research fellow will be jointly supervised by Dr. David Brang (https://sites.lsa.umich.edu/brang-lab/) and Zhongming Liu (https://libi.engin.umich.edu). The goal of this collaboration is to build computational models of cognitive and perceptual processes using data combined from electrocorticography (ECoG) and fMRI. The successful applicant will also have freedom to conduct additional research based on their interests, using a variety of methods -- ECoG, fMRI, DTI, lesion mapping, and EEG. The ideal start date is from spring to fall 2021 and the position is expected to last for at least two years, with the possibility of extension for subsequent years. Interested applicants should email their CV, a cover letter describing their research interests and career goals, and contact information for 2-3 references to Drs. David Brang (djbrang@umich.edu) and Zhongming Liu (zmliu@umich.edu).

Position · Neuroscience

IMPRS for Brain & Behavior

research center caesar, Uni of Bonn, MPFI, FAU
Bonn, Germany, or Jupiter, FL, USA
Dec 5, 2025

Join our unique transatlantic PhD program in neuroscience! The International Max Planck Research School (IMPRS) for Brain and Behavior is a unique transatlantic collaboration between two Max Planck Neuroscience institutes – the Max Planck-associated research center caesar and the Max Planck Florida Institute for Neuroscience – and the partner universities, University of Bonn and Florida Atlantic University. It offers a completely funded international PhD program in neuroscience in either Bonn, Germany, or Jupiter, Florida. We offer an exciting opportunity to outstanding Bachelor's and/or Master's degree holders (or equivalent) from any field (life sciences, mathematics, physics, computer science, engineering, etc.) to be immersed in a stimulating environment that provides novel technologies to elucidate the function of brain circuits from molecules to animal behavior. The comprehensive and diverse expertise of the faculty in the exploration of brain-circuit function using advanced imaging and optogenetic techniques combined with comprehensive training in fundamental neurobiology will provide students with an exceptional level of knowledge to pursue a successful independent research career. Apply to Bonn, Germany by November 15, 2020 or to Florida, USA by December 1, 2020!

Position

Dr. Lei Zhang

University of Birmingham, Centre for Human Brain Health, Institute of Mental Health
University of Birmingham, UK
Dec 5, 2025

Dr. Lei Zhang is looking for two PhD students interested in the cognitive, computational, and neural basis of (social) learning and decision-making in health and disease. The newly opened ALP(E)N Lab (Adaptive Learning Psychology and Neuroscience Lab) addresses the fundamental question of the “adaptive brain” by studying the cognitive, computational, and neurobiological basis of (social) learning and decision-making in healthy individuals (across the lifespan), and in psychiatric disorders. The lab combines an array of approaches including neuroimaging, patient studies and computational modelling (particularly hierarchical Bayesian modelling) with behavioural paradigms inspired by learning theories. The lab is based at the Centre for Human Brain Health and Institute of Mental Health at the University of Birmingham, UK, with access to exceptional facilities including MRI, MEG, TMS, and fNIRS. Funding is available through two competitive schemes from the BBSRC and MRC that provide a stipend, fees (at UK rate) and a research allowance, amongst other benefits. International (i.e., outside the UK) applicants are welcome.

Position

Dr. Nicholas Hatsopoulos

Department of Organismal Biology & Anatomy, University of Chicago
University of Chicago, 1027 East 57th Street, Chicago, IL 60637
Dec 5, 2025

A postdoctoral position is available beginning in 2023 to help develop a brain-machine interface for dexterous control of a cortically-controlled robotic arm and hand. The approach involves creating 1) decoding strategies from electrical signals in motor cortex that enable the user to not only control the movements of the arm and hand but also the forces transmitted through the hand and 2) encoding models to convey tactile sensations to the user through intracortical microstimulation of somatosensory cortex.
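
The posting does not specify the decoding algorithm; purely as an illustration of the kind of decoder such projects build, here is a ridge-regression mapping from binned motor-cortical firing rates to continuous kinematic/force outputs, run on synthetic stand-in data (channel counts, output dimensions, and the regularization strength are all assumptions of this sketch).

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in data: 2000 time bins, 96 channels, 4 outputs
# (e.g., hand velocity and grasp force components -- purely illustrative).
n_bins, n_units, n_outputs = 2000, 96, 4
true_map = rng.normal(size=(n_units, n_outputs))
rates = rng.poisson(lam=5.0, size=(n_bins, n_units)).astype(float)
kinematics = rates @ true_map + rng.normal(scale=5.0, size=(n_bins, n_outputs))

X_tr, X_te, y_tr, y_te = train_test_split(rates, kinematics,
                                          test_size=0.25, random_state=0)
decoder = Ridge(alpha=10.0).fit(X_tr, y_tr)   # linear decoder, fit on training bins
print("held-out R^2:", decoder.score(X_te, y_te))
```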

Position · Neuroscience

N/A

New York University
New York University
Dec 5, 2025

New York University is seeking exceptional PhD candidates with strong quantitative training (e.g., physics, mathematics, engineering) coupled with a clear interest in scientific study of the brain. Doctoral programs are flexible, allowing students to pursue research across departmental boundaries. Admissions are handled separately by each department, and students interested in pursuing graduate studies should submit an application to the program that best fits their goals and interests.

Position · Neuroscience

N/A

Center for Neuroscience and Cell Biology of the University of Coimbra (CNC-UC)
Coimbra and Cantanhede, University of Coimbra
Dec 5, 2025

The postdoctoral researcher will conduct research activities in modelling and simulation of reward-modulated prosocial behavior and decision-making. The position is part of a larger effort to uncover the computational and mechanistic bases of prosociality and empathy at the behavioral and circuit levels. The role involves working at the interface between experimental data (animal behavior and electrophysiology) and theoretical modelling, with an emphasis on Multi-Agent Reinforcement Learning and neural population dynamics.

Position

N/A

New York University
New York University
Dec 5, 2025

New York University is home to a thriving interdisciplinary community of researchers using computational and theoretical approaches in neuroscience. We are interested in exceptional PhD candidates with strong quantitative training (e.g., physics, mathematics, engineering) coupled with a clear interest in scientific study of the brain. A listing of faculty, sorted by their primary departmental affiliation, is given below. Doctoral programs are flexible, allowing students to pursue research across departmental boundaries. Nevertheless, admissions are handled separately by each department, and students interested in pursuing graduate studies should submit an application to the program that best fits their goals and interests.

Position · Neuroscience

Anna Montagnini

Institut de Neurosciences de la Timone (INT,UMR7289)
Aix-Marseille University, France
Dec 5, 2025

A fully funded 3-year PhD position (EU MSCA-COFUND program) is available at Aix-Marseille University (France) for motivated students interested in the behavioral, neurophysiological and computational investigation of multistable visual perception in healthy and pathological populations. The project is strongly cross-disciplinary, including psychophysical and oculomotor experiments as well as advanced computational modeling. It will also involve an international mobility period at the University of Edinburgh (UK), as well as a collaboration with the psychiatry department of Lille Hospital (France).

Seminar · Open Source

Computational bio-imaging via inverse scattering

Shwetadwip Chowdhury
Assistant Professor, University of Texas at Austin
Nov 24, 2025

Optical imaging is a major research tool in the basic sciences, and is the only imaging modality that routinely enables non-ionizing imaging with subcellular spatial resolution and high imaging speeds. In biological imaging applications, however, optical imaging is limited by tissue scattering to short imaging depths. This restricts large-scale bio-imaging to the outer superficial layers of an organism, or to specific components isolated from within the organism and prepared in vitro.

Seminar · Neuroscience

Convergent large-scale network and local vulnerabilities underlie brain atrophy across Parkinson’s disease stages

Andrew Vo
Montreal Neurological Institute, McGill University
Nov 5, 2025
Seminar · Neuroscience

AutoMIND: Deep inverse models for revealing neural circuit invariances

Richard Gao
Goethe University
Oct 1, 2025
Seminar · Neuroscience

OpenNeuro FitLins GLM: An Accessible, Semi-Automated Pipeline for OpenNeuro Task fMRI Analysis

Michael Demidenko
Stanford University
Jul 31, 2025

In this talk, I will discuss the OpenNeuro Fitlins GLM package and provide an illustration of the analytic workflow. OpenNeuro FitLins GLM is a semi-automated pipeline that reduces barriers to analyzing task-based fMRI data from OpenNeuro's 600+ task datasets. Created for psychology, psychiatry and cognitive neuroscience researchers without extensive computational expertise, this tool automates what is largely a manual process of assembling in-house scripts for data retrieval, validation, quality control, statistical modeling and reporting that, in some cases, may require weeks of effort. The workflow abides by open-science practices, enhancing reproducibility, and incorporates community feedback for model improvement. The pipeline integrates BIDS-compliant datasets and fMRIPrep preprocessed derivatives, and dynamically creates BIDS Statistical Model specifications (with Fitlins) to perform common mass univariate [GLM] analyses. To enhance and standardize reporting, it generates comprehensive reports which include design matrices, statistical maps and COBIDAS-aligned reporting that is fully reproducible from the model specifications and derivatives. OpenNeuro Fitlins GLM has been tested on over 30 datasets spanning 50+ unique fMRI tasks (e.g., working memory, social processing, emotion regulation, decision-making, motor paradigms), reducing analysis times from weeks to hours when using high-performance computers, thereby enabling researchers to conduct robust single-study, meta- and mega-analyses of task fMRI data with significantly improved accessibility, standardized reporting and reproducibility.

Seminar · Neuroscience

Understanding reward-guided learning using large-scale datasets

Kim Stachenfeld
DeepMind, Columbia U
Jul 8, 2025

Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has been long thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will talk about recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process in order to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.
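
As a point of reference for the reward-prediction-error account mentioned here (this is not CogFunSearch or the songbird analysis), a minimal Rescorla-Wagner/softmax model of a two-armed bandit; the learning rate, inverse temperature, and reward probabilities are illustrative and would normally be fit to choice data.

```python
import numpy as np

def simulate_and_loglik(alpha=0.2, beta=3.0, n_trials=500, seed=0):
    """Simulate a 2-armed bandit agent learning from reward prediction errors,
    then return the log-likelihood of its own choices under the same model."""
    rng = np.random.default_rng(seed)
    p_reward = np.array([0.8, 0.2])    # illustrative reward probabilities
    q = np.zeros(2)                    # learned action values
    loglik = 0.0
    for _ in range(n_trials):
        p_choice = np.exp(beta * q) / np.exp(beta * q).sum()   # softmax policy
        a = rng.choice(2, p=p_choice)
        r = float(rng.random() < p_reward[a])
        rpe = r - q[a]                 # reward prediction error
        q[a] += alpha * rpe            # value update
        loglik += np.log(p_choice[a])
    return loglik

print("log-likelihood of simulated choices:", simulate_and_loglik())
```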

Seminar · Psychology

Digital Traces of Human Behaviour: From Political Mobilisation to Conspiracy Narratives

Lukasz Piwek
University of Bath & Cumulus Neuroscience Ltd
Jul 6, 2025

Digital platforms generate unprecedented traces of human behaviour, offering new methodological approaches to understanding collective action, polarisation, and social dynamics. Through analysis of millions of digital traces across multiple studies, we demonstrate how online behaviours predict offline action: Brexit-related tribal discourse responds to real-world events, machine learning models achieve 80% accuracy in predicting real-world protest attendance from digital signals, and social validation through "likes" emerges as a key driver of mobilization. Extending this approach to conspiracy narratives reveals how digital traces illuminate psychological mechanisms of belief and community formation. Longitudinal analysis of YouTube conspiracy content demonstrates how narratives systematically address existential, epistemic, and social needs, while examination of alt-tech platforms shows how emotions of anger, contempt, and disgust correlate with violence-legitimating discourse, with significant differences between narratives associated with offline violence versus peaceful communities. This work establishes digital traces as both methodological innovation and theoretical lens, demonstrating that computational social science can illuminate fundamental questions about polarisation, mobilisation, and collective behaviour across contexts from electoral politics to conspiracy communities.

Seminar · Neuroscience

Neurobiological constraints on learning: bug or feature?

Cian O’Donell
Ulster University
Jun 10, 2025

Understanding how brains learn requires bridging evidence across scales—from behaviour and neural circuits to cells, synapses, and molecules. In our work, we use computational modelling and data analysis to explore how the physical properties of neurons and neural circuits constrain learning. These include limits imposed by brain wiring, energy availability, molecular noise, and the 3D structure of dendritic spines. In this talk I will describe one such project testing if wiring motifs from fly brain connectomes can improve performance of reservoir computers, a type of recurrent neural network. The hope is that these insights into brain learning will lead to improved learning algorithms for artificial systems.
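
A minimal echo-state (reservoir) network sketch of the setup described here, with a random reservoir standing in for a connectome-derived wiring matrix; W_res could be replaced by a rescaled connectome motif matrix. Sizes, spectral radius, and the delayed-recall task are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n_res, n_steps, washout = 200, 2000, 200

# Reservoir wiring: random here; a connectome-derived motif matrix could be
# plugged in instead, rescaled to the same spectral radius.
W_res = rng.normal(size=(n_res, n_res)) / np.sqrt(n_res)
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # spectral radius 0.9
W_in = rng.normal(scale=0.5, size=n_res)

u = rng.uniform(-1, 1, n_steps)        # input signal
target = np.roll(u, 5)                 # task: recall the input from 5 steps ago

x = np.zeros(n_res)
states = np.zeros((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W_res @ x + W_in * u[t])
    states[t] = x

# Ridge-regression readout (the only trained part of the network).
X, y = states[washout:], target[washout:]
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("readout correlation:", np.corrcoef(pred, y)[0, 1])
```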

Seminar · Neuroscience

Neural mechanisms of optimal performance

Luca Mazzucato
University of Oregon
May 22, 2025

When we attend to a demanding task, our performance is poor at low arousal (when drowsy) or high arousal (when anxious), but we achieve optimal performance at intermediate arousal. This celebrated Yerkes-Dodson inverted-U law relating performance and arousal is colloquially referred to as being "in the zone." In this talk, I will elucidate the behavioral and neural mechanisms linking arousal and performance under the Yerkes-Dodson law in a mouse model. During decision-making tasks, mice express an array of discrete strategies, whereby the optimal strategy occurs at intermediate arousal, measured by pupil size, consistent with the inverted-U law. Population recordings from the auditory cortex (A1) further revealed that sound encoding is optimal at intermediate arousal. To explain the computational principle underlying this inverted-U law, we modeled the A1 circuit as a spiking network with excitatory/inhibitory clusters, based on the observed functional clusters in A1. Arousal induced a transition from a multi-attractor phase (low arousal) to a single-attractor phase (high arousal), and performance is optimized at the transition point. The model also predicts stimulus- and arousal-induced modulations of neural variability, which we confirmed in the data. Our theory suggests that a single unifying dynamical principle, phase transitions in metastable dynamics, underlies both the inverted-U law of optimal performance and state-dependent modulations of neural variability.
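
A toy illustration, far simpler than the clustered spiking network in the talk, of how increasing a control parameter (a stand-in for arousal) can move a system from a multi-attractor to a single-attractor regime: counting the real fixed points of a one-dimensional double-well system as the drive term grows.

```python
import numpy as np

def fixed_points(a):
    """Real fixed points of dx/dt = -x**3 + x + a (a toy 1-D attractor model)."""
    roots = np.roots([-1.0, 0.0, 1.0, a])
    return np.sort(roots[np.abs(roots.imag) < 1e-7].real)

# Sweep the drive term: small values give a bistable (multi-attractor) regime,
# large values leave a single attractor -- a cartoon of the arousal-induced
# transition described in the talk.
for a in [0.0, 0.2, 0.5, 1.0]:
    fps = fixed_points(a)
    print(f"a = {a:.1f}: {len(fps)} fixed point(s) at {np.round(fps, 2)}")
```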

Seminar · Neuroscience

Understanding reward-guided learning using large-scale datasets

Kim Stachenfeld
DeepMind, Columbia U
May 13, 2025

Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has been long thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will talk about recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process in order to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.

Seminar · Neuroscience

Simulating Thought Disorder: Fine-Tuning Llama-2 for Synthetic Speech in Schizophrenia

Alban Elias Voppel
McGill University
Apr 30, 2025
Seminar · Artificial Intelligence · Recording

Computational modelling of ocular pharmacokinetics

Arto Urtti
School of Pharmacy, University of Eastern Finland
Apr 21, 2025

Pharmacokinetics in the eye is an important factor for the success of ocular drug delivery and treatment. Pharmacokinetic features determine the feasible routes of drug administration, dosing levels and intervals, and they influence eventual drug responses. Several physical, biochemical, and flow-related barriers limit drug exposure of anterior and posterior ocular target tissues during treatment via local (topical, subconjunctival, intravitreal) or systemic (intravenous, peroral) administration. Mathematical models integrate the joint impact of various barriers on ocular pharmacokinetics (PK), thereby helping drug development. The models are useful in describing (top-down) and predicting (bottom-up) the pharmacokinetics of ocular drugs. This is useful also in the design and development of new drug molecules and drug delivery systems. Furthermore, the models can be used for interspecies translation and probing of disease effects on pharmacokinetics. In this lecture, ocular pharmacokinetics and current modelling methods (noncompartmental analyses, compartmental, physiologically based, and finite element models) are introduced. Future challenges are also highlighted (e.g. intra-tissue distribution, prediction of drug responses, active transport).
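
As the simplest possible example of the compartmental models mentioned here (a deliberate oversimplification, not a model from the lecture), a one-compartment description of intravitreal drug elimination:

```latex
V \frac{dC}{dt} = -\,\mathrm{CL}\, C(t)
\quad\Longrightarrow\quad
C(t) = C_0\, e^{-(\mathrm{CL}/V)\, t},
\qquad
t_{1/2} = \frac{\ln 2 \; V}{\mathrm{CL}},
```

where V is the vitreous volume, CL the ocular clearance, and C0 the concentration immediately after an intravitreal dose; physiologically based and finite-element models refine this picture with spatial and barrier-specific detail.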

Conference

COSYNE 2025

Montreal, Canada
Mar 27, 2025

The COSYNE 2025 conference was held in Montreal with post-conference workshops in Mont-Tremblant, continuing to provide a premier forum for computational and systems neuroscience. Attendees exchanged cutting-edge research in a single-track main meeting and in-depth specialized workshops, reflecting Cosyne’s mission to understand how neural systems function.

Seminar · Neuroscience

Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades

Andrej Bicanski
Max Planck Institute for Human Cognitive and Brain Sciences
Mar 11, 2025

How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime and what is the utility of the resultant neural representations? This talk will explore the role of the dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories, and the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets, MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.

Seminar · Neuroscience · Recording

Brain Emulation Challenge Workshop

Randal A. Koene
Co-Founder and Chief Science Officer, Carboncopies
Feb 21, 2025

The Brain Emulation Challenge workshop will tackle cutting-edge topics such as ground-truthing for validation, leveraging artificial datasets generated from virtual brain tissue, and the transformative potential of virtual brain platforms, for example as applied to the forthcoming Brain Emulation Challenge.

Seminar · Neuroscience

Memory formation in hippocampal microcircuit

Andreakos Nikolaos
Visiting Scientist, School of Computer Science, University of Lincoln; Scientific Associate, National and Kapodistrian University of Athens
Feb 6, 2025

The centre of memory is the medial temporal lobe (MTL) and especially the hippocampus. In our research, a more flexible brain-inspired computational microcircuit of the CA1 region of the mammalian hippocampus was upgraded and used to examine how information retrieval could be affected under different conditions. Six models (1-6) were created by modulating different excitatory and inhibitory pathways. The results showed that increasing the strength of the feedforward excitation was the most effective way to recall memories; in other words, it allows the system to access stored memories more accurately.

Seminar · Neuroscience

Predicting traveling waves: a new mathematical technique to link the structure of a network to the specific patterns of neural activity

Roberto Budzinski
Western University
Feb 5, 2025
Seminar · Neuroscience

Contentopic mapping and object dimensionality - a novel understanding of the organization of object knowledge

Jorge Almeida
University of Coimbra
Jan 27, 2025

Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to solve a complex and recursive environment with ease and proficiency. This challenging feat is dependent on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, by proposing that the organization of object knowledge follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will also put forth that this knowledge is topographically laid out in the cortical surface according to these object-related dimensions that code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data to support these hypotheses and present a principled way to explore the multidimensionality of object processing.

Seminar · Open Source · Recording

Towards open meta-research in neuroimaging

Kendra Oudyk
ORIGAMI - Neural data science - https://neurodatascience.github.io/
Dec 8, 2024

When meta-research (research on research) makes an observation or points out a problem (such as a flaw in methodology), the project should be repeated later to determine whether the problem remains. For this we need meta-research that is reproducible and updatable, or living meta-research. In this talk, we introduce the concept of living meta-research, examine prequels to this idea, and point towards standards and technologies that could assist researchers in doing living meta-research. We introduce technologies like natural language processing, which can help with automation of meta-research, which in turn will make the research easier to reproduce/update. Further, we showcase our open-source litmining ecosystem, which includes pubget (for downloading full-text journal articles), labelbuddy (for manually extracting information), and pubextract (for automatically extracting information). With these tools, you can simplify the tedious data collection and information extraction steps in meta-research, and then focus on analyzing the text. We will then describe some living meta-research projects to illustrate the use of these tools. For example, we’ll show how we used GPT along with our tools to extract information about study participants. Essentially, this talk will introduce you to the concept of meta-research, some tools for doing meta-research, and some examples. Particularly, we want you to take away the fact that there are many interesting open questions in meta-research, and you can easily learn the tools to answer them. Check out our tools at https://litmining.github.io/

Seminar · Neuroscience

Screen Savers : Protecting adolescent mental health in a digital world

Amy Orben
University of Cambridge UK
Dec 2, 2024

In our rapidly evolving digital world, there is increasing concern about the impact of digital technologies and social media on the mental health of young people. Policymakers and the public are nervous. Psychologists are facing mounting pressures to deliver evidence that can inform policies and practices to safeguard both young people and society at large. However, research progress is slow while technological change is accelerating. My talk will reflect on this, both as a question of psychological science and metascience. Digital companies have designed highly popular environments that differ in important ways from traditional offline spaces. By revisiting the foundations of psychology (e.g. development and cognition) and considering digital changes' impact on theories and findings, we gain deeper insights into questions such as the following. (1) How do digital environments exacerbate developmental vulnerabilities that predispose young people to mental health conditions? (2) How do digital designs interact with cognitive and learning processes, formalised through computational approaches such as reinforcement learning or Bayesian modelling? However, we also need to face deeper questions about what it means to do science about new technologies and the challenge of keeping pace with technological advancements. Therefore, I discuss the concept of ‘fast science’, where, during crises, scientists might lower their standards of evidence to come to conclusions more quickly. Might psychologists want to take this approach in the face of technological change and looming concerns? The talk concludes with a discussion of such strategies for 21st-century psychology research in the era of digitalization.

Seminar · Neuroscience

The Brain Prize winners' webinar

Larry Abbott, Haim Sompolinsky, Terry Sejnowski
Columbia University; Harvard University / Hebrew University; Salk Institute
Nov 29, 2024

This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.

Seminar · Neuroscience

Sensory cognition

SueYeon Chung, Srini Turaga
New York University; Janelia Research Campus
Nov 28, 2024

This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.

Seminar · Neuroscience

Decision and Behavior

Sam Gershman, Jonathan Pillow, Kenji Doya
Harvard University; Princeton University; Okinawa Institute of Science and Technology
Nov 28, 2024

This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus‐independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (Sidetrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
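
For reference, the capacity-limited objective in policy compression is usually stated as reward maximization subject to a mutual-information (channel-capacity) constraint; a standard formulation (details may differ from the talk) is:

```latex
\max_{\pi}\; \mathbb{E}_{\pi}\big[\, Q(s,a) \,\big]
\quad \text{s.t.} \quad I(S;A) \le C,
\qquad\text{with}\qquad
\pi^{*}(a \mid s) \;\propto\; P(a)\, \exp\!\big(\beta\, Q(s,a)\big),
```

where P(a) is the marginal (default) action distribution and β is the Lagrange multiplier set by the capacity C; small β yields stimulus-independent default actions and perseveration, while large β approaches unconstrained reward maximization.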

Seminar · Neuroscience

Learning and Memory

Nicolas Brunel, Ashok Litwin-Kumar, Julijana Gjorgieva
Duke University; Columbia University; Technical University Munich
Nov 28, 2024

This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.
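
As background for the plasticity rules discussed by Brunel, here is the textbook pair-based STDP window (rather than the calcium-based rule itself):

```latex
\Delta w \;=\;
\begin{cases}
A_{+}\, e^{-\Delta t/\tau_{+}}, & \Delta t > 0 \quad (\text{pre before post: potentiation}),\\[4pt]
-\,A_{-}\, e^{\,\Delta t/\tau_{-}}, & \Delta t < 0 \quad (\text{post before pre: depression}),
\end{cases}
\qquad \Delta t = t_{\text{post}} - t_{\text{pre}}.
```

Calcium-based rules of the kind discussed in the talk can generate such STDP-like windows as a special case, with potentiation and depression triggered when the postsynaptic calcium transient crosses different thresholds.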

Seminar · Neuroscience

Contribution of computational models of reinforcement learning to neuroscience (computational modeling, reward, learning, decision-making, conditioning, navigation, dopamine, basal ganglia, prefrontal cortex, hippocampus)

Mehdi Khamassi
Centre National de la Recherche Scientifique / Sorbonne University
Nov 7, 2024
Seminar · Neuroscience

Use case determines the validity of neural systems comparisons

Erin Grant
Gatsby Computational Neuroscience Unit & Sainsbury Wellcome Centre at University College London
Oct 15, 2024

Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems both at the level of behavior and of neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects—such as details of the architecture of a deep neural network—as well as methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
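
One concrete example of a comparison measure of the kind at issue here is linear centered kernel alignment (CKA); the talk does not commit to this particular choice, and the stimulus and unit counts below are arbitrary.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two response matrices (stimuli x units).
    Rows must correspond to the same stimuli in both systems."""
    X = X - X.mean(axis=0)                       # center each unit
    Y = Y - Y.mean(axis=0)
    hsic_xy = np.linalg.norm(Y.T @ X, 'fro') ** 2
    hsic_xx = np.linalg.norm(X.T @ X, 'fro')
    hsic_yy = np.linalg.norm(Y.T @ Y, 'fro')
    return hsic_xy / (hsic_xx * hsic_yy)

rng = np.random.default_rng(3)
A = rng.normal(size=(100, 50))                   # e.g. one system's responses
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))   # random rotation of the code
B = A @ Q
print(linear_cka(A, B))   # ~1.0: linear CKA is invariant to rotations
```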

Seminar · Neuroscience

Localisation of Seizure Onset Zone in Epilepsy Using Time Series Analysis of Intracranial Data

Hamid Karimi-Rouzbahani
The University of Queensland
Oct 10, 2024

There are over 30 million people with drug-resistant epilepsy worldwide. When neuroimaging and non-invasive neural recordings fail to localise seizure onset zones (SOZ), intracranial recordings become the best chance for localisation and seizure freedom in those patients. However, intracranial neural activities remain hard to discriminate visually across recording channels, which limits the success of intracranial visual investigations. In this presentation, I describe methods which quantify intracranial neural time series and combine them with explainable machine learning algorithms to localise the SOZ in the epileptic brain. I discuss the potential and limitations of our methods for localising the SOZ in epilepsy, providing insights for future research in this area.
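
The abstract does not name the specific features or classifier; as a generic sketch of the approach (interpretable per-channel time-series features fed to an explainable classifier), using synthetic signals and scikit-learn:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

def channel_features(sig):
    """Simple interpretable time-series features for one channel."""
    line_length = np.abs(np.diff(sig)).sum()
    return [sig.var(), line_length, np.abs(sig).max()]

# Synthetic stand-in: 40 non-SOZ channels and 10 "SOZ-like" channels carrying
# a burst of higher-amplitude oscillatory activity (purely illustrative).
n_samples = 5000
features, labels = [], []
for k in range(50):
    sig = rng.normal(size=n_samples)
    is_soz = k >= 40
    if is_soz:
        sig[2000:2500] += 3.0 * np.sin(np.linspace(0, 80 * np.pi, 500))
    features.append(channel_features(sig))
    labels.append(int(is_soz))

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features, labels)
print("feature importances (variance, line length, peak amplitude):",
      clf.feature_importances_)
```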

Conference

Bernstein Conference 2024

Goethe University, Frankfurt, Germany
Sep 29, 2024

Each year the Bernstein Network invites the international computational neuroscience community to the annual Bernstein Conference for intensive scientific exchange. Bernstein Conference 2024, held in Frankfurt am Main, featured discussions, keynote lectures, and poster sessions, and has established itself as one of the most renowned conferences worldwide in this field.

Seminar · Open Source

Optogenetic control of Nodal signaling patterns

Nathan Lord
Assistant Professor, Department of Computational and Systems Biology
Sep 19, 2024

Embryos issue instructions to their cells in the form of patterns of signaling activity. Within these patterns, the distribution of signaling in time and space directs the fate of embryonic cells. Tools to perturb developmental signaling with high resolution in space and time can help reveal how these patterns are decoded to make appropriate fate decisions. In this talk, I will present new optogenetic reagents and an experimental pipeline for creating designer Nodal signaling patterns in live zebrafish embryos. Our improved optoNodal reagents eliminate dark activity and improve response kinetics, without sacrificing dynamic range. We adapted an ultra-widefield microscopy platform for parallel light patterning in up to 36 embryos and demonstrated precise spatial control over Nodal signaling activity and downstream gene expression. Using this system, we demonstrate that patterned Nodal activation can initiate specification and internalization movements of endodermal precursors. Further, we used patterned illumination to generate synthetic signaling patterns in Nodal signaling mutants, rescuing several characteristic developmental defects. This study establishes an experimental toolkit for systematic exploration of Nodal signaling patterns in live embryos.

Seminar · Neuroscience · Recording

Prosocial Learning and Motivation across the Lifespan

Patricia Lockwood
University of Birmingham, UK
Sep 9, 2024

2024 BACN Early-Career Prize Lecture. Many of our decisions affect other people. Our choices can decelerate climate change, stop the spread of infectious diseases, and directly help or harm others. Prosocial behaviours – decisions that help others – could contribute to reducing the impact of these challenges, yet their computational and neural mechanisms remain poorly understood. I will present recent work that examines prosocial motivation, how willing we are to incur costs to help others, prosocial learning, how we learn from the outcomes of our choices when they affect other people, and prosocial preferences, our self-reports of helping others. Throughout the talk, I will outline the possible computational and neural bases of these behaviours, and how they may differ from young adulthood to old age.

Seminar · Psychology

Error Consistency between Humans and Machines as a function of presentation duration

Thomas Klein
Eberhard Karls Universität Tübingen
Jun 30, 2024

Within the last decade, Deep Artificial Neural Networks (DNNs) have emerged as powerful computer vision systems that match or exceed human performance on many benchmark tasks such as image classification. But whether current DNNs are suitable computational models of the human visual system remains an open question: While DNNs have proven to be capable of predicting neural activations in primate visual cortex, psychophysical experiments have shown behavioral differences between DNNs and human subjects, as quantified by error consistency. Error consistency is typically measured by briefly presenting natural or corrupted images to human subjects and asking them to perform an n-way classification task under time pressure. But for how long should stimuli ideally be presented to guarantee a fair comparison with DNNs? Here we investigate the influence of presentation time on error consistency, to test the hypothesis that higher-level processing drives behavioral differences. We systematically vary presentation times of backward-masked stimuli from 8.3ms to 266ms and measure human performance and reaction times on natural, lowpass-filtered and noisy images. Our experiment constitutes a fine-grained analysis of human image classification under both image corruptions and time pressure, showing that even drastically time-constrained humans who are exposed to the stimuli for only two frames, i.e. 16.6ms, can still solve our 8-way classification task with success rates way above chance. We also find that human-to-human error consistency is already stable at 16.6ms.
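
For reference, error consistency is commonly computed as a kappa-like statistic that compares observed trial-by-trial agreement against the agreement expected from the two observers' accuracies alone; a small sketch on made-up response vectors (the exact convention used in the study may differ):

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Kappa-style error consistency between two observers' trial-wise
    correctness vectors (1 = correct, 0 = error)."""
    correct_a = np.asarray(correct_a, dtype=float)
    correct_b = np.asarray(correct_b, dtype=float)
    c_obs = np.mean(correct_a == correct_b)            # observed agreement
    p_a, p_b = correct_a.mean(), correct_b.mean()
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)          # agreement expected by chance
    return (c_obs - c_exp) / (1 - c_exp)

rng = np.random.default_rng(5)
human = (rng.random(500) < 0.8).astype(int)            # ~80% correct observer
model = human.copy()
flip = rng.random(500) < 0.15                          # disagree on ~15% of trials
model[flip] = 1 - model[flip]
print("error consistency:", error_consistency(human, model))
```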

Seminar · Neuroscience

Updating our models of the basal ganglia using advances in neuroanatomy and computational modeling

Mac Shine
University of Sydney
May 28, 2024
Seminar · Neuroscience

Modelling the fruit fly brain and body

Srinivas Turaga
HHMI | Janelia
May 14, 2024

Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster. We now know the connectivity at single neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.

Seminar · Neuroscience · Recording

Predictive processing: a circuit approach to psychosis

Georg Keller
Friedrich Miescher Institute for Biomedical Research, Basel
Mar 13, 2024

Predictive processing is a computational framework that aims to explain how the brain processes sensory information by making predictions about the environment and minimizing prediction errors. It can also be used to explain some of the key symptoms of psychotic disorders such as schizophrenia. In my talk, I will provide an overview of our progress in this endeavor.

Seminar · Neuroscience · Recording

Reimagining the neuron as a controller: A novel model for Neuroscience and AI

Dmitri 'Mitya' Chklovskii
Flatiron Institute, Center for Computational Neuroscience
Feb 4, 2024

We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.

Seminar · Neuroscience

Neuromodulation of striatal D1 cells shapes BOLD fluctuations in anatomically connected thalamic and cortical regions

Marija Markicevic
Yale
Jan 17, 2024

Understanding how macroscale brain dynamics are shaped by microscale mechanisms is crucial in neuroscience. We investigate this relationship in animal models by directly manipulating cellular properties and measuring whole-brain responses using resting-state fMRI. Specifically, we explore the impact of chemogenetically neuromodulating D1 medium spiny neurons in the dorsomedial caudate putamen (CPdm) on BOLD dynamics within a striato-thalamo-cortical circuit in mice. Our findings indicate that CPdm neuromodulation alters BOLD dynamics in thalamic subregions projecting to the dorsomedial striatum, influencing both local and inter-regional connectivity in cortical areas. This study contributes to understanding structure–function relationships in shaping inter-regional communication between subcortical and cortical levels.

Seminar · Neuroscience · Recording

Tracking subjects' strategies in behavioural choice experiments at trial resolution

Mark Humphries
University of Nottingham
Dec 6, 2023

Psychology and neuroscience are increasingly looking to fine-grained analyses of decision-making behaviour, seeking to characterise not just the variation between subjects but also a subject's variability across time. When analysing the behaviour of each subject in a choice task, we ideally want to know not only when the subject has learnt the correct choice rule but also what the subject tried while learning. I introduce a simple but effective Bayesian approach to inferring the probability of different choice strategies at trial resolution. This can be used both for inferring when subjects learn, by tracking the probability of the strategy matching the target rule, and for inferring subjects use of exploratory strategies during learning. Applied to data from rodent and human decision tasks, we find learning occurs earlier and more often than estimated using classical approaches. Around both learning and changes in the rewarded rules the exploratory strategies of win-stay and lose-shift, often considered complementary, are consistently used independently. Indeed, we find the use of lose-shift is strong evidence that animals have latently learnt the salient features of a new rewarded rule. Our approach can be extended to any discrete choice strategy, and its low computational cost is ideally suited for real-time analysis and closed-loop control.
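
A minimal sketch of trial-resolution strategy tracking in the spirit described here (not the published implementation): a Beta posterior over the probability that choices are consistent with a given strategy, with an evidence-decay factor so that older trials are gradually discounted. The decay value and priors are assumptions of this sketch.

```python
import numpy as np

def track_strategy(choice_matches, gamma=0.9, alpha0=1.0, beta0=1.0):
    """Return the posterior-mean probability, per trial, that behaviour
    follows a strategy. choice_matches: 1 if the trial's choice is consistent
    with the strategy, 0 otherwise. gamma < 1 decays old evidence."""
    alpha, beta = alpha0, beta0
    p = []
    for x in choice_matches:
        alpha = gamma * alpha + x            # decayed count of consistent trials
        beta = gamma * beta + (1 - x)        # decayed count of inconsistent trials
        p.append(alpha / (alpha + beta))     # posterior mean P(strategy)
    return np.array(p)

# Example: synthetic subject that behaves randomly for 100 trials, then
# follows the target rule for the next 100 trials.
matches_rule = np.r_[np.random.default_rng(6).integers(0, 2, 100),
                     np.ones(100, dtype=int)]
print(track_strategy(matches_rule)[[0, 99, 199]])
```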

Seminar · Neuroscience

Connectome-based models of neurodegenerative disease

Jacob Vogel
Lund University
Dec 4, 2023

Neurodegenerative diseases involve accumulation of aberrant proteins in the brain, leading to brain damage and progressive cognitive and behavioral dysfunction. Many gaps exist in our understanding of how these diseases initiate and how they progress through the brain. However, evidence has accumulated supporting the hypothesis that aberrant proteins can be transported using the brain’s intrinsic network architecture — in other words, using the brain’s natural communication pathways. This theory forms the basis of connectome-based computational models, which combine real human data and theoretical disease mechanisms to simulate the progression of neurodegenerative diseases through the brain. In this talk, I will first review work leading to the development of connectome-based models, and work from my lab and others that have used these models to test hypothetical modes of disease progression. Second, I will discuss the future and potential of connectome-based models to achieve clinically useful individual-level predictions, as well as to generate novel biological insights into disease progression. Along the way, I will highlight recent work by my lab and others that is already moving the needle toward these lofty goals.
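
One widely used connectome-based model of the kind reviewed here is the network diffusion model, in which pathology spreads along connections according to the graph Laplacian; a small sketch on a toy connectome (real applications fit the rate constant and seed pattern to imaging data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy symmetric "connectome": 10 regions with random connection weights.
n = 10
C = rng.random((n, n))
C = np.triu(C, 1)
C = C + C.T
L = np.diag(C.sum(axis=1)) - C          # graph Laplacian of the connectome

x = np.zeros(n)
x[0] = 1.0                              # pathology seeded in region 0
beta, dt = 0.5, 0.01
for _ in range(500):
    x = x - dt * beta * (L @ x)         # Euler step of dx/dt = -beta * L @ x

print("pathology distribution after diffusion:", np.round(x, 3))
```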

Seminar · Neuroscience

Modeling the Navigational Circuitry of the Fly

Larry Abbott
Columbia University
Nov 30, 2023

Navigation requires orienting oneself relative to landmarks in the environment, evaluating relevant sensory data, remembering goals, and converting all this information into motor commands that direct locomotion. I will present models, highly constrained by connectomic, physiological and behavioral data, for how these functions are accomplished in the fly brain.

SeminarNeuroscience

Bio-realistic multiscale modeling of cortical circuits

Anton Arkhipov
Allen Institute
Nov 23, 2023

A central question in neuroscience is how the structure of brain circuits determines their activity and function. To explore this systematically, we developed a 230,000-neuron model of mouse primary visual cortex (area V1). The model integrates a broad array of experimental data, including the distribution and morpho-electric properties of the different neuron types in V1.

SeminarArtificial IntelligenceRecording

Mathematical and computational modelling of ocular hemodynamics: from theory to applications

Giovanna Guidoboni
University of Maine
Nov 13, 2023

Changes in ocular hemodynamics may be indicative of pathological conditions in the eye (e.g. glaucoma, age-related macular degeneration), but also elsewhere in the body (e.g. systemic hypertension, diabetes, neurodegenerative disorders). Thanks to its transparent fluids and structures that allow light to pass through, the eye offers a unique window on the circulation from large to small vessels, and from arteries to veins. Deciphering the causes that lead to changes in ocular hemodynamics in a specific individual could help prevent vision loss as well as aid in the diagnosis and management of diseases beyond the eye. In this talk, we will discuss how mathematical and computational modelling can help in this regard. We will focus on two main factors, namely blood pressure (BP), which drives the blood flow through the vessels, and intraocular pressure (IOP), which compresses the vessels and may impede the flow. Mechanism-driven models translate fundamental principles of physics and physiology into computable equations that allow for identification of cause-to-effect relationships among interplaying factors (e.g. BP, IOP, blood flow). While invaluable for causality, mechanism-driven models are often based on simplifying assumptions to make them tractable for analysis and simulation; however, this often brings into question their relevance beyond theoretical explorations. Data-driven models offer a natural remedy to these shortcomings. Data-driven methods may be supervised (based on labelled training data) or unsupervised (clustering and other data analytics), and they include models based on statistics, machine learning, deep learning and neural networks. Data-driven models naturally thrive on large datasets, making them scalable to a plethora of applications. While invaluable for scalability, data-driven models are often perceived as black boxes, as their outcomes are difficult to explain in terms of fundamental principles of physics and physiology, which limits the delivery of actionable insights. The combination of mechanism-driven and data-driven models allows us to harness the advantages of both: mechanism-driven models excel at interpretability but lack scalability, while data-driven models excel at scale but suffer in terms of generalizability and insights for hypothesis generation. This combined, integrative approach represents the pillar of the interdisciplinary approach to data science that will be discussed in this talk, with application to ocular hemodynamics and specific examples in glaucoma research.
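
As a flavour of the mechanism-driven side, the sketch below computes a common clinical approximation of mean ocular perfusion pressure from blood pressure and IOP and converts it to flow through a single lumped vascular resistance. It is a back-of-envelope illustration of how BP drives and IOP impedes ocular blood flow, not one of the models presented in the talk.

```python
def mean_ocular_perfusion_pressure(systolic_bp, diastolic_bp, iop):
    """Approximate MOPP (mmHg) as 2/3 * MAP - IOP.

    MAP is estimated from brachial pressures; the 2/3 factor is a common
    clinical approximation for the pressure drop between the brachial
    artery and the eye. All values in mmHg.
    """
    mean_arterial = diastolic_bp + (systolic_bp - diastolic_bp) / 3.0
    return (2.0 / 3.0) * mean_arterial - iop

def ocular_blood_flow(perfusion_pressure, resistance):
    """Ohm's-law analogy for a lumped vascular bed: flow = pressure / resistance."""
    return perfusion_pressure / resistance

mopp = mean_ocular_perfusion_pressure(systolic_bp=120, diastolic_bp=80, iop=15)
print(round(mopp, 1), round(ocular_blood_flow(mopp, resistance=25.0), 2))
```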

SeminarNeuroscienceRecording

Diffuse coupling in the brain - A temperature dial for computation

Eli Müller
The University of Sydney
Oct 5, 2023

The neurobiological mechanisms of arousal and anesthesia remain poorly understood. Recent evidence highlights the key role of interactions between the cerebral cortex and the diffusely projecting matrix thalamic nuclei. Here, we interrogate these processes in a whole-brain corticothalamic neural mass model endowed with targeted and diffusely projecting thalamocortical nuclei inferred from empirical data. This model captures key features seen in propofol anesthesia, including diminished network integration, lowered state diversity, impaired susceptibility to perturbation, and decreased corticocortical coherence. Collectively, these signatures reflect a suppression of information transfer across the cerebral cortex. We recover these signatures of conscious arousal by selectively stimulating the matrix thalamus, recapitulating empirical results in macaque, as well as wake-like information processing states that reflect the thalamic modulation of large-scale cortical attractor dynamics. Our results highlight the role of matrix thalamocortical projections in shaping many features of complex cortical dynamics to facilitate the unique communication states supporting conscious awareness.
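
The building block of such whole-brain models is the neural mass: coupled population-level equations for excitatory and inhibitory activity. Below is a generic Wilson-Cowan-style sketch with forward-Euler integration; the parameters and form are illustrative only and do not reproduce the corticothalamic model described above.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    return 1.0 / (1.0 + np.exp(-gain * x))

def simulate_neural_mass(T=2.0, dt=1e-3, tau=0.02,
                         w_ee=10.0, w_ei=12.0, w_ie=10.0, w_ii=3.0,
                         drive_e=1.0, drive_i=0.5):
    """Generic excitatory/inhibitory neural mass (Wilson-Cowan style).

    A single population-level unit of the kind that whole-brain models
    couple together; all parameter values here are illustrative.
    """
    n = int(T / dt)
    E, I = np.zeros(n), np.zeros(n)
    for t in range(n - 1):
        dE = (-E[t] + sigmoid(w_ee * E[t] - w_ei * I[t] + drive_e)) / tau
        dI = (-I[t] + sigmoid(w_ie * E[t] - w_ii * I[t] + drive_i)) / tau
        E[t + 1] = E[t] + dt * dE
        I[t + 1] = I[t] + dt * dI
    return E, I

E, I = simulate_neural_mass()
print(E[-1], I[-1])   # final excitatory and inhibitory population activity
```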

SeminarNeuroscience

Brain Connectivity Workshop

Ed Bullmore, Jianfeng Feng, Viktor Jirsa, Helen Mayberg, Pedro Valdes-Sosa
Sep 19, 2023

Founded in 2002, the Brain Connectivity Workshop (BCW) is an annual international meeting for in-depth discussions of all aspects of brain connectivity research. By bringing together experts in computational neuroscience, neuroscience methodology and experimental neuroscience, it aims to improve the understanding of the relationship between anatomical connectivity, brain dynamics and cognitive function. These workshops have a unique format, featuring only short presentations followed by intense discussion. This year’s workshop is co-organised by Wellcome, putting the spotlight on brain connectivity in mental health disorders. We look forward to having you join us for this exciting, thought-provoking and inclusive event.

SeminarNeuroscienceRecording

Social and non-social learning: Common, or specialised, mechanisms? (BACN Early Career Prize Lecture 2022)

Jennifer Cook
University of Birmingham, UK
Sep 11, 2023

The last decade has seen a burgeoning interest in studying the neural and computational mechanisms that underpin social learning (learning from others). Many findings support the view that learning from other people relies on the same 'domain-general' mechanisms that underpin learning from non-social stimuli. Despite this, the idea that humans possess social-specific learning mechanisms - adaptive specializations moulded by natural selection to cope with the pressures of group living - persists. In this talk I explore the persistence of this idea. First, I present dissociations between social and non-social learning - patterns of data which are difficult to explain under the domain-general thesis and which therefore support the idea that we have evolved special mechanisms for social learning. Subsequently, I argue that most studies that have dissociated social and non-social learning have employed paradigms in which social information comprises a secondary, additional source of information that can be used to supplement learning from non-social stimuli. Thus, in most extant paradigms, social and non-social learning differ both in terms of social nature (social or non-social) and status (primary or secondary). I conclude that status is an important driver of apparent differences between social and non-social learning. When we account for differences in status, we see that social and non-social learning share common (dopamine-mediated) mechanisms.
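
The 'domain-general' account is usually formalised by applying the same prediction-error update to both information sources. The toy sketch below runs a Rescorla-Wagner delta rule on a stream of directly experienced outcomes and on a stream of observed (social) outcomes; only the input differs, not the mechanism. It is an illustration of the shared-mechanism idea, not one of the specific models used in the studies discussed.

```python
import numpy as np

def delta_rule(outcomes, alpha=0.3, v0=0.5):
    """Rescorla-Wagner / delta-rule value learning.

    v <- v + alpha * (outcome - v); the same update can be driven by one's
    own outcomes or by outcomes observed for another agent.
    """
    v, values = v0, []
    for r in outcomes:
        v = v + alpha * (r - v)      # prediction error drives the update
        values.append(v)
    return np.array(values)

own_outcomes = [1, 1, 0, 1, 1, 1]        # rewards from the learner's own choices
observed_outcomes = [1, 0, 1, 1, 0, 1]   # rewards observed for a social partner
print(np.round(delta_rule(own_outcomes), 2))
print(np.round(delta_rule(observed_outcomes), 2))
```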

SeminarNeuroscience

Cognitive Computational Neuroscience 2023

Cate Hartley, Helen Barron, James McClelland, Tim Kietzmann, Leslie Kaelbling, Stanislas Dehaene
Aug 23, 2023

CCN is an annual conference that serves as a forum for cognitive science, neuroscience, and artificial intelligence researchers dedicated to understanding the computations that underlie complex behavior.

SeminarNeuroscienceRecording

Interacting spiral wave patterns underlie complex brain dynamics and are related to cognitive processing

Pulin Gong
The University of Sydney
Aug 10, 2023

The large-scale activity of the human brain exhibits rich and complex patterns, but the spatiotemporal dynamics of these patterns and their functional roles in cognition remain unclear. Here by characterizing moment-by-moment fluctuations of human cortical functional magnetic resonance imaging signals, we show that spiral-like, rotational wave patterns (brain spirals) are widespread during both resting and cognitive task states. These brain spirals propagate across the cortex while rotating around their phase singularity centres, giving rise to spatiotemporal activity dynamics with non-stationary features. The properties of these brain spirals, such as their rotational directions and locations, are task relevant and can be used to classify different cognitive tasks. We also demonstrate that multiple, interacting brain spirals are involved in coordinating the correlated activations and de-activations of distributed functional regions; this mechanism enables flexible reconfiguration of task-driven activity flow between bottom-up and top-down directions during cognitive processing. Our findings suggest that brain spirals organize complex spatiotemporal dynamics of the human brain and have functional correlates to cognitive processing.
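
A standard way to locate such spiral cores is to compute the winding number of the instantaneous phase around each small plaquette of the cortical grid: a phase that winds by ±2π marks a singularity. The sketch below implements this test on a synthetic phase map; it is a generic detector, not the authors' analysis pipeline.

```python
import numpy as np

def phase_singularities(phase):
    """Winding-number test for spiral cores in a 2-D phase map (radians).

    Returns an integer array (one entry per 2x2 plaquette) that is +1 or -1
    where the phase winds by +/- 2*pi around the plaquette, and 0 elsewhere.
    """
    def wrap(d):
        return (d + np.pi) % (2 * np.pi) - np.pi   # wrap differences to (-pi, pi]

    # Wrapped phase differences along the four edges of each plaquette
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])
    return np.round((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

# Synthetic spiral: phase equals the polar angle around the grid centre,
# offset by half a pixel so the core falls inside a plaquette
y, x = np.mgrid[-10:11, -10:11] - 0.5
print(np.argwhere(phase_singularities(np.arctan2(y, x)) != 0))   # -> [[10 10]]
```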

SeminarArtificial IntelligenceRecording

Computational and mathematical approaches to myopigenesis

C. Ross Ethier
Georgia Institute of Technology and Emory University
Jul 31, 2023

Myopia is predicted to affect 50% of all people worldwide by 2050, and is a risk factor for significant, potentially blinding ocular pathologies, such as retinal detachment and glaucoma. Thus, there is significant motivation to better understand the process of myopigenesis and to develop effective anti-myopigenic treatments. In nearly all cases of human myopia, scleral remodeling is an obligate step in the axial elongation that characterizes the condition. Here I will describe the development of a biomechanical assay based on transient unconfined compression of scleral samples. By treating the sclera as a poroelastic material, one can determine scleral biomechanical properties from extremely small samples, such as those obtained from the mouse eye. These properties provide proxy measures of scleral remodeling, and have allowed us to identify all-trans retinoic acid (atRA) as a myopigenic stimulus in mice. I will also describe nascent collaborative work on modeling the transport of atRA in the eye.

SeminarNeuroscience

Bernstein Student Workshop Series

Cátia Fortunato
Imperial College London
Jun 14, 2023

The Bernstein Student Workshop Series is an initiative of the student members of the Bernstein Network. It provides a unique opportunity to enhance the technical exchange on a peer-to-peer basis. The series is motivated by the idea of bridging the gap between theoretical and experimental neuroscience by bringing together methodological expertise in the network. Unlike conventional workshops, a talented junior scientist will first give a tutorial about a specific theoretical or experimental technique, and then give a talk about their own research to demonstrate how the technique helps to address neuroscience questions. The workshop series is designed to cover a wide range of theoretical and experimental techniques and to elucidate how different techniques can be applied to answer different types of neuroscience questions. Combining the technical tutorial and the research talk, the workshop series aims to promote knowledge sharing in the community and enhance in-depth discussions among students from diverse backgrounds.

SeminarNeuroscience

Computational models of spinal locomotor circuitry

Simon Danner
Drexel University, Philadelphia, USA
Jun 13, 2023

To effectively move in complex and changing environments, animals must control locomotor speed and gait, while precisely coordinating and adapting limb movements to the terrain. The underlying neuronal control is facilitated by circuits in the spinal cord, which integrate supraspinal commands and afferent feedback signals to produce coordinated rhythmic muscle activations necessary for stable locomotion. I will present a series of computational models investigating dynamics of central neuronal interactions as well as a neuromechanical model that integrates neuronal circuits with a model of the musculoskeletal system. These models closely reproduce speed-dependent gait expression and experimentally observed changes following manipulation of multiple classes of genetically-identified neuronal populations. I will discuss the utility of these models in providing experimentally testable predictions for future studies.
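
The canonical minimal building block of such circuits is the half-centre oscillator: two populations that inhibit each other and adapt, producing alternating flexor/extensor bursts. Below is a Matsuoka-style sketch with commonly used illustrative parameters; it is a generic half-centre, not one of the models presented in the talk.

```python
import numpy as np

def matsuoka_half_centre(T=10.0, dt=1e-3, tau=0.25, tau_adapt=0.5,
                         b_adapt=2.5, w_inhib=2.5, drive=1.0):
    """Two mutually inhibiting, self-adapting units (Matsuoka-style CPG).

    Returns the rectified outputs of the two units ('flexor'/'extensor'),
    which alternate rhythmically for parameters in this range.
    """
    n = int(T / dt)
    u = np.array([0.1, 0.0])     # membrane-like states (small asymmetry to start)
    v = np.zeros(2)              # adaptation variables
    y = np.zeros((n, 2))
    for t in range(n):
        out = np.maximum(u, 0.0)             # rectified firing-rate output
        y[t] = out
        du = (-u - b_adapt * v - w_inhib * out[::-1] + drive) / tau
        dv = (-v + out) / tau_adapt
        u = u + dt * du
        v = v + dt * dv
    return y

y = matsuoka_half_centre()
print(y[::1000].round(2))        # subsample to see the alternating burst pattern
```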

SeminarNeuroscience

The Geometry of Decision-Making

Iain Couzin
University of Konstanz, Germany
May 23, 2023

Running, swimming, or flying through the world, animals are constantly making decisions while on the move—decisions that allow them to choose where to eat, where to hide, and with whom to associate. Despite this, most studies have considered only the outcome of, and the time taken to make, decisions. Motion is, however, crucial in terms of how space is represented by organisms during spatial decision-making. Employing a range of new technologies, including automated tracking, computational reconstruction of sensory information, and immersive ‘holographic’ virtual reality (VR) for animals, in experiments with fruit flies, locusts and zebrafish (representing aerial, terrestrial and aquatic locomotion, respectively), I will demonstrate that this time-varying representation results in the emergence of new and fundamental geometric principles that considerably impact decision-making. Specifically, we find that the brain spontaneously reduces multi-choice decisions into a series of abrupt (‘critical’) binary decisions in space-time, a process that repeats until only one option—the one ultimately selected by the individual—remains. Due to the critical nature of these transitions (and the corresponding increase in ‘susceptibility’), even noisy brains are extremely sensitive to very small differences between remaining options (e.g., a very small difference in neuronal activity being in “favor” of one option) near these locations in space-time. This mechanism facilitates highly effective decision-making, and is shown to be robust both to the number of options available and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.

SeminarNeuroscience

The role of sub-population structure in computations through neural dynamics

Srdjan Ostojic
École normale supérieure
May 18, 2023

Neural computations are currently conceptualised using two separate approaches: sorting neurons into functional sub-populations or examining distributed collective dynamics. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from recurrent networks trained on neuroscience tasks, we show that the collective dynamics and sub-population structure play fundamentally complementary roles. Although various tasks can be implemented in networks with fully random population structure, we found that flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple sub-populations. Our analyses revealed that such a sub-population organisation enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics.
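
A minimal way to see how gain modulation can reshape collective dynamics in a structured network is a rank-one recurrent network, x' = -x + tanh(g·Jx) with J = m nᵀ / N. The sketch below (a generic low-rank RNN with correlated connectivity vectors, not the trained networks analysed in the study) shows the latent activity along the structured mode dying out at low gain and persisting at high gain.

```python
import numpy as np

def lowrank_rnn_latent(gain, N=500, T=50.0, dt=0.1, seed=0):
    """Rank-one RNN x' = -x + tanh(gain * J x), with J = m n^T / N.

    n is chosen to overlap with m, so the network has one structured latent
    mode whose self-excitation is controlled by the gain. Returns the final
    latent value kappa = n.x / N.
    """
    rng = np.random.default_rng(seed)
    m = rng.standard_normal(N)
    n = m + 0.5 * rng.standard_normal(N)      # correlated connectivity vectors
    x = 0.1 * m                               # small initial push along the mode
    for _ in range(int(T / dt)):
        kappa = n @ x / N
        x = x + dt * (-x + np.tanh(gain * m * kappa))
    return n @ x / N

for g in (0.5, 2.0):
    print(g, round(lowrank_rnn_latent(g), 3))  # latent decays at low gain, persists at high gain
```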

SeminarNeuroscience

Why are we consistently inconsistent? On the neural mechanisms of behavioural inconsistency

Tobias Hauser
Developmental Computational Psychiatry Lab, University of Tübingen
May 3, 2023
SeminarNeuroscience

Bernstein Student Workshop Series

Lílian de Sardenberg Schmid
Max Planck Institute for Biological Cybernetics
May 3, 2023

The Bernstein Student Workshop Series is an initiative of the student members of the Bernstein Network. It provides a unique opportunity to enhance the technical exchange on a peer-to-peer basis. The series is motivated by the idea of bridging the gap between theoretical and experimental neuroscience by bringing together methodological expertise in the network. Unlike conventional workshops, a talented junior scientist will first give a tutorial about a specific theoretical or experimental technique, and then give a talk about their own research to demonstrate how the technique helps to address neuroscience questions. The workshop series is designed to cover a wide range of theoretical and experimental techniques and to elucidate how different techniques can be applied to answer different types of neuroscience questions. Combining the technical tutorial and the research talk, the workshop series aims to promote knowledge sharing in the community and enhance in-depth discussions among students from diverse backgrounds.

SeminarNeuroscienceRecording

Signatures of criticality in efficient coding networks

Shervin Safavi
Dayan lab, MPI for Biological Cybernetics
May 2, 2023

The critical brain hypothesis states that the brain can benefit from operating close to a second-order phase transition. While it has been shown that several computational aspects of sensory information processing (e.g., sensitivity to input) are optimal in this regime, it is still unclear whether these computational benefits of criticality can be leveraged by neural systems performing behaviorally relevant computations. To address this question, we investigate signatures of criticality in networks optimized to perform efficient encoding. We consider a network of leaky integrate-and-fire neurons with synaptic transmission delays and input noise. Previously, it was shown that the performance of such networks varies non-monotonically with the noise amplitude. Interestingly, we find that in the vicinity of the optimal noise level for efficient coding, the network dynamics exhibits signatures of criticality, namely, the distribution of avalanche sizes follows a power law. When the noise amplitude is too low or too high for efficient coding, the network appears either super-critical or sub-critical, respectively. This result suggests that two influential, and previously disparate, theories of neural processing optimization—efficient coding and criticality—may be intimately related.
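
Operationally, such signatures are usually assessed by segmenting population activity into avalanches and asking whether their sizes follow a power law. The sketch below is a bare-bones version of that analysis: contiguous runs of active time bins define avalanches, and a Clauset-style maximum-likelihood estimator (continuous approximation) gives the exponent. It is a generic recipe, not the authors' analysis code.

```python
import numpy as np

def avalanche_sizes(spike_counts):
    """Sizes of avalanches, defined as runs of consecutive active time bins;
    size = total number of spikes in the run (a common operational choice)."""
    sizes, current = [], 0
    for c in spike_counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

def powerlaw_exponent(sizes, s_min=1):
    """Maximum-likelihood exponent of P(s) ~ s^-alpha for s >= s_min
    (Clauset-style estimator, continuous approximation)."""
    s = sizes[sizes >= s_min].astype(float)
    return 1.0 + len(s) / np.sum(np.log(s / s_min))

counts = np.array([0, 2, 1, 0, 0, 5, 0, 1, 1, 1, 0, 3, 0])
sizes = avalanche_sizes(counts)              # -> [3 5 3 3]
print(sizes, round(powerlaw_exponent(sizes), 2))
```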

SeminarArtificial IntelligenceRecording

Computational models and experimental methods for the human cornea

Anna Pandolfi
Politecnico di Milano
May 1, 2023

The eye is a multi-component biological system in which mechanics, optics, transport phenomena and chemical reactions are tightly interlaced, characterized by the typical biological variability in sizes and material properties. The eye’s response to external actions is patient-specific and can be predicted only with a customized approach that accounts for the multiple physics involved and for the intrinsic microstructure of the tissues, developed with the aid of state-of-the-art computational biomechanics. Our activity in recent years has been devoted to the development of a comprehensive model of the cornea that aims to be entirely patient-specific. While the geometrical aspects are fully under control, thanks to sophisticated diagnostic machinery able to provide fully three-dimensional images of the eye, the major difficulties lie in the characterization of the tissues, which requires in-vivo tests to complement the well-documented results of in-vitro tests. The interpretation of in-vivo tests is complex, since the entire structure of the eye is involved and characterizing a single tissue is not trivial. The availability of micromechanical models constructed from detailed images of the eye provides important support for the characterization of the corneal tissues, especially under pathologic conditions. In this presentation I will give an overview of the computational models and experimental approaches developed in our group for the human cornea.

SeminarNeuroscienceRecording

Estimating repetitive spatiotemporal patterns from resting-state brain activity data

Yusuke Takeda
Computational Brain Dynamics Team, RIKEN Center for Advanced Intelligence Project, Japan; Department of Computational Brain Imaging, ATR Neural Information Analysis Laboratories, Japan
Apr 27, 2023

Repetitive spatiotemporal patterns in resting-state brain activities have been widely observed in various species and regions, such as rat and cat visual cortices. Since they resemble the preceding brain activities during tasks, they are assumed to reflect past experiences embedded in neuronal circuits. Moreover, spatiotemporal patterns involving whole-brain activities may also reflect a process that integrates information distributed over the entire brain, such as motor and visual information. Therefore, revealing such patterns may elucidate how the information is integrated to generate consciousness. In this talk, I will introduce our proposed method to estimate repetitive spatiotemporal patterns from resting-state brain activity data and show the spatiotemporal patterns estimated from human resting-state magnetoencephalography (MEG) and electroencephalography (EEG) data. Our analyses suggest that the patterns involved whole-brain propagating activities that reflected a process to integrate the information distributed over frequencies and networks. I will also introduce our current attempt to reveal signal flows and their roles in the spatiotemporal patterns using a big dataset. - Takeda et al., Estimating repetitive spatiotemporal patterns from resting-state brain activity data. NeuroImage (2016); 133:251-65. - Takeda et al., Whole-brain propagating patterns in human resting-state brain activities. NeuroImage (2021); 245:118711.

SeminarNeuroscience

Bernstein Student Workshop Series

James Malkin
Apr 12, 2023

The Bernstein Student Workshop Series is an initiative of the student members of the Bernstein Network. It provides a unique opportunity to enhance the technical exchange on a peer-to-peer basis. The series is motivated by the idea of bridging the gap between theoretical and experimental neuroscience by bringing together methodological expertise in the network. Unlike conventional workshops, a talented junior scientist will first give a tutorial about a specific theoretical or experimental technique, and then give a talk about their own research to demonstrate how the technique helps to address neuroscience questions. The workshop series is designed to cover a wide range of theoretical and experimental techniques and to elucidate how different techniques can be applied to answer different types of neuroscience questions. Combining the technical tutorial and the research talk, the workshop series aims to promote knowledge sharing in the community and enhance in-depth discussions among students from diverse backgrounds.

SeminarNeuroscience

From spikes to factors: understanding large-scale neural computations

Mark M. Churchland
Columbia University, New York, USA
Apr 5, 2023

It is widely accepted that human cognition is the product of spiking neurons. Yet even for basic cognitive functions, such as the ability to make decisions or prepare and execute a voluntary movement, the gap between spikes and computation is vast. Only for very simple circuits and reflexes can one explain computations neuron-by-neuron and spike-by-spike. This approach becomes infeasible when neurons are numerous and the flow of information is recurrent. To understand computation, one thus requires appropriate abstractions. An increasingly common abstraction is the neural ‘factor’. Factors are central to many explanations in systems neuroscience. Factors provide a framework for describing computational mechanisms, and offer a bridge between data and concrete models. Yet there remains some discomfort with this abstraction, and with any attempt to provide mechanistic explanations above the level of spikes, neurons, cell types, and other comfortingly concrete entities. I will explain why, for many networks of spiking neurons, factors are not only a well-defined abstraction, but are critical to understanding computation mechanistically. Indeed, factors are as real as other abstractions we now accept: pressure, temperature, conductance, and even the action potential itself. I will use recent empirical results to illustrate how factor-based hypotheses have become essential to the forming and testing of scientific hypotheses. I will also show how embracing factor-level descriptions affords remarkable power when decoding neural activity for neural engineering purposes.
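
In practice, factors are usually extracted by fitting a latent-variable model to simultaneously recorded activity, so that a handful of shared variables summarise the population. The toy sketch below uses scikit-learn's FactorAnalysis on synthetic data purely as an illustration; laboratory pipelines typically use more specialised models (e.g. GPFA or LFADS), and the data here are made up.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic "population activity": 1000 time samples of 50 neurons driven
# by 3 shared latent factors plus private noise (all values made up).
rng = np.random.default_rng(0)
latents = rng.standard_normal((1000, 3))
loadings = rng.standard_normal((3, 50))
activity = latents @ loadings + 0.5 * rng.standard_normal((1000, 50))

fa = FactorAnalysis(n_components=3)
factors = fa.fit_transform(activity)     # (samples, 3) recovered factor trajectories
print(factors.shape)
```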

SeminarNeuroscience

Hallucinating mice, dopamine and immunity; towards mechanistic treatment targets for psychosis

Katharina Schmack
Francis Crick Institute, London
Mar 22, 2023

Hallucinations are a core symptom of psychotic disorders and have traditionally been difficult to study biologically. We developed a new behavioral computational approach to measure hallucination-like perception in humans and mice alike. Using targeted neural circuit manipulations, we identified a causal role for striatal dopamine in mediating hallucination-like perception. Building on this, we are currently investigating the neural and immunological upstream regulators of these dopaminergic circuits, with the goal of identifying new biological treatment targets for psychosis.

SeminarNeuroscience

Explaining an asymmetry in similarity and difference judgments

Nick Ichien
University of California, Los Angeles
Mar 22, 2023

Explicit similarity judgments tend to emphasize relational information more than do difference judgments. In this talk, I propose and test the hypothesis that this asymmetry arises because human reasoners represent the relation different as the negation of the relation same (i.e., as not-same). This proposal implies that processing difference is more cognitively demanding than processing similarity. Both for verbal comparisons between word pairs, and for visual comparisons between sets of geometric shapes, participants completed a triad task in which they selected which of two options was either more similar to or more different from a standard. On unambiguous trials, one option was unambiguously more similar to the standard, either by virtue of featural similarity or by virtue of relational similarity. On ambiguous trials, one option was more featurally similar (but less relationally similar) to the standard, whereas the other was more relationally similar (but less featurally similar). Given the higher cognitive complexity of assessing relational similarity, we predicted that detecting relational difference would be particularly demanding. We found that participants (1) had more difficulty accurately detecting relational difference than they did relational similarity on unambiguous trials, and (2) tended to emphasize relational information more when judging similarity than when judging difference on ambiguous trials. The latter finding was captured by a computational model of comparison that weights relational information more heavily for similarity than for difference judgments. These results provide convergent evidence for a representational asymmetry between the relations same and different.
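
The weighting idea behind the model can be written in a few lines: each comparison is scored as a weighted combination of featural and relational match, with the relational weight set higher when judging similarity than when judging difference. The weights below are hypothetical placeholders, not the values fitted in the study.

```python
def comparison_score(feat_match, rel_match, judging="similarity"):
    """Weighted comparison score: relational match counts more for similarity
    judgments than for difference judgments (hypothetical weights)."""
    w_rel = 0.7 if judging == "similarity" else 0.4
    return (1.0 - w_rel) * feat_match + w_rel * rel_match

# An option that matches the standard relationally but not featurally
print(round(comparison_score(feat_match=0.2, rel_match=0.8, judging="similarity"), 2))  # 0.62
print(round(comparison_score(feat_match=0.2, rel_match=0.8, judging="difference"), 2))  # 0.44
```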

Conference

COSYNE 2023

Montreal, Canada
Mar 9, 2023

The COSYNE 2023 conference provided an inclusive forum for exchanging experimental and theoretical approaches to problems in systems neuroscience, continuing the tradition of bringing together the computational neuroscience community. The main meeting was held in Montreal, followed by post-conference workshops in Mont-Tremblant, fostering intensive discussions and collaboration.

Conference

Neuromatch 5

Virtual (online)
Sep 27, 2022

Neuromatch 5 (Neuromatch Conference 2022) was a fully virtual conference focused on computational neuroscience broadly construed, including machine learning work with explicit biological links. After four successful Neuromatch conferences, the fifth edition consolidated proven innovations from past events, featuring a series of talks hosted on Crowdcast and flash talk sessions (pre-recorded videos) with dedicated discussion times on Reddit.

ePoster

Investigating hippocampal synaptic plasticity in Schizophrenia: a computational and experimental approach using MEA recordings

Sarah Hamdi Cherif, Candice Roux, Valentine Bouet, Jean-Marie Billard, Jérémie Gaidamour, Laure Buhry, Radu Ranta

Bernstein Conference 2024

ePoster

Computational analysis of optogenetic inhibition of a pyramidal CA1 neuron

Laila Weyn, Thomas Tarnaud, Xavier De Becker, Wout Joseph, Robrecht Raedt, Emmeric Tanghe

Bernstein Conference 2024

ePoster

Computational mechanisms of odor perception and representational drift in rodent olfactory systems

Alexander Roxin, Licheng Zou

Bernstein Conference 2024

ePoster

Computational implications of motor primitives for cortical motor learning

Natalie Schieferstein, Paul Züge, Raoul-Martin Memmesheimer

Bernstein Conference 2024

ePoster

A computationally efficient simplification of the Brunel-Wang NMDA model: Numerical approach and first results

Jan-Eirik Skaar, Nicolai Haug, Hans Ekkehard Plesser

Bernstein Conference 2024

ePoster

Computational modelling of dentate granule cells reveals Pareto optimal trade-off between pattern separation and energy efficiency (economy)

Martin Mittag, Alexander Bird, Hermann Cuntz, Peter Jedlicka

Bernstein Conference 2024

ePoster

Deep generative networks as a computational approach for global non-linear control modeling in the nematode C. elegans

Doris Voina, Steven Brunton, Jose Kutz

Bernstein Conference 2024

ePoster

Dendritic computation: A comprehensive review of current biological and computational developments

Tim Bax, Pascal Nieters

Bernstein Conference 2024

ePoster

Physiological Implementation of Synaptic Plasticity at Behavioral Timescales Supports Computational Properties of Place Cell Formation

Hsuan-Pei Huang, Han-Ying Wang, Ching-Tsuey Chen, Ching-Lung Hsu

Bernstein Conference 2024

ePoster

Cerebellum learns to drive cortical dynamics: a computational lesson

COSYNE 2022

ePoster

Computational principles of systems memory consolidation

COSYNE 2022

ePoster

Computational strategies and neural correlates of probabilistic reversal learning in mice

COSYNE 2022

ePoster

Subcortical modulation of cortical dynamics for motor planning: a computational framework

COSYNE 2022

ePoster

Computational and behavioral mechanisms underlying selecting, stopping, and switching of actions

Shan Zhong & Vasileios Christopoulos

COSYNE 2023

ePoster

Computational mechanisms underlying thalamic regulation of prefrontal signal-to-noise ratio in decision making

Zhe Chen, Xiaohan Zhang, Michael Halassa

COSYNE 2023

ePoster

Geometrical Features of Neural Trajectory as a Computational Motif for the Cue-Stimulus Integration of Pain

Jungwoo Kim, Suhwan Gim, Seng Bum Yoo, Choong-Wan Woo

COSYNE 2023

ePoster

Intracranial electrophysiological evidence for a novel neuro-computational mechanism of cognitive flexibility in humans

Xinyuan Yan, Seth Koneig, Becket Ebitz, Benjamin Hayden, David Darrow, Alexander Herman

COSYNE 2023

ePoster

Leveraging computational and animal models of vision to probe atypical emotion recognition in autism

Hamid Ramezanpour & Kohitij Kar

COSYNE 2023

ePoster

Non-stationary recurrent neural networks for reconstructing computational dynamics of rule learning

Max Ingo Thurm, Georgia Koppe, Eleonora Russo, Florian Bähner, Daniel Durstewitz

COSYNE 2023

ePoster

Synaptic-type-specific clustering optimizes the computational capabilities of balanced recurrent networks

Emmanouil Giannakakis, Anna Levina, Victor Buendia, Sina Khajehabdollahi

COSYNE 2023

ePoster

Bounds on the computational complexity of neurons due to dendritic morphology

Anamika Agrawal, Michael Buice

COSYNE 2025

ePoster

Biologically Realistic Computational Primitives of Neocortex Implemented on Neuromorphic Hardware Improve Vision Transformer Performance

Asim Iqbal, Hassan Mahmood, Greg Stuart, Gord Fishell, Suraj Honnuraiah

COSYNE 2025

ePoster

Composing computational primitives in recurrent neural networks

Arianna Di Bernardo, Cheng Tang, Mehrdad Jazayeri, Srdjan Ostojic

COSYNE 2025

ePoster

Computational benefits of normalization in a circuit model

Deying Song, Chengcheng Huang

COSYNE 2025

ePoster

A computational framework for decoding active sensing

Benjamin Cellini, Burak Boyacioglu, Stanley Stupski, Floris van Breugel

COSYNE 2025

ePoster

A computational map of flight control in Drosophila melanogaster

Serene Dhawan, Bradley Dickerson, Jasper Phelps, Wei-Chung Lee, John Tuthill

COSYNE 2025

ePoster

A computational model of cortico-basal ganglia circuits for deciding between reaching actions

Poune Mirzazadeh, David Thura, Andrea Green, Paul Cisek

COSYNE 2025

ePoster

A Computational Model of Visual Spatial Distortions in Human Amblyopia

Farzaneh Olianezhad, Jianzhong Jin, Sohrab Najafian, Akihito Maruya, Qasim Zaidi, Jose-Manuel Alonso

COSYNE 2025

ePoster

Computational modeling of neurovascular coupling at the gliovascular unit

Florian Dupeuble, Hugues Berry, Audrey Denizot

COSYNE 2025

ePoster

Computational specialization of cortical cell types

Kaiwen Sheng, Brendan Bicknell, Beverly Clark, Michael Hausser

COSYNE 2025

ePoster

Experimental and computational evidence of learned synaptic dynamics to enhance temporal processing

Jamie McDowell, Shanglin Zhou, Dean Buonomano

COSYNE 2025

ePoster

Interleaved regime promotes structural learning: behavioral and computational insights

Salma Elnagar, Nicholas Menghi, Francesco Silvestrin, Christian F. Doeller

COSYNE 2025

ePoster

Advanced metamodelling on the o2S2PARC computational neurosciences platform facilitates stimulation selectivity and power efficiency optimization and intelligent control

Werner Van Geit, Cédric Bujard, Mads Rystok Bisgaard, Pedro Crespo-Valero, Esra Neufeld, Niels Kuster

FENS Forum 2024

ePoster

Cerebellum and emotions: A journey from evidence to computational modeling and simulation

Dianela Andreina Osorio Becerra, Dimitri Rodarie, Alessio Marta, Claudia Casellato, Egidio D'Angelo

FENS Forum 2024

ePoster

Cognitive computational model reveals repetition bias in a sequential decision-making task

Eric Legler, Darío Cuevas Rivera, Sarah Schwöbel, Stefan Kiebel

FENS Forum 2024

ePoster

Comprehensive whole rat brain analysis: Expanding rat brain research with enhanced imaging and computational tools

Grace Houser, Jaspreet Kaur, Amaia Diego Ajenjo, Madelaine Bonfils, Salif Komi, Rune Berg

FENS Forum 2024

ePoster

Computational analysis of Alzheimer’s disease-associated missense SNPs to understand underlying molecular mechanisms and identify diagnostic biomarkers

Aziza Abugaliyeva, Saad Rasool

FENS Forum 2024

ePoster

A computational analysis of second-order conditioning in mice using DeepLabCut

Marc Canela Grimau, Julia Pinho, Jose Antonio González Parra, Arnau Busquets Garcia

FENS Forum 2024

ePoster

Computational model-based analysis of spatial navigation strategies under stress and uncertainty using place, distance, and border cells

Yanran Qiu, Shiqi Wang, Jiachuan Wang, Wenyuan Zhu, Yuchen Cheng, Beste Aydemir, Wulfram Gerstner, Carmen Sandi, Gediminas Luksys

FENS Forum 2024