Computational Approach
Screen Savers: Protecting adolescent mental health in a digital world
In our rapidly evolving digital world, there is increasing concern about the impact of digital technologies and social media on the mental health of young people. Policymakers and the public are nervous. Psychologists face mounting pressure to deliver evidence that can inform policies and practices to safeguard both young people and society at large. However, research progress is slow while technological change is accelerating. My talk will reflect on this, both as a question of psychological science and of metascience. Digital companies have designed highly popular environments that differ in important ways from traditional offline spaces. By revisiting the foundations of psychology (e.g. development and cognition) and considering how digital change affects existing theories and findings, we gain deeper insights into questions such as the following. (1) How do digital environments exacerbate developmental vulnerabilities that predispose young people to mental health conditions? (2) How do digital designs interact with cognitive and learning processes, formalised through computational approaches such as reinforcement learning or Bayesian modelling? However, we also need to face deeper questions about what it means to do science on new technologies and the challenge of keeping pace with technological advancement. I therefore discuss the concept of ‘fast science’, whereby, during crises, scientists might lower their standards of evidence to reach conclusions more quickly. Might psychologists want to take this approach in the face of technological change and looming concerns? The talk concludes with a discussion of such strategies for 21st-century psychology research in the era of digitalization.
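To make the second question concrete, the sketch below shows one way a reinforcement-learning account can be formalised: a simple delta-rule (Rescorla-Wagner) learner whose reward signal is hypothetical social feedback. The scenario, parameter values and variable names are illustrative assumptions, not the speaker's model.

```python
import numpy as np

# Minimal sketch: a delta-rule (Rescorla-Wagner) learner whose reward signal
# is hypothetical social feedback ("likes"). All names and parameters are
# illustrative assumptions, not the model described in the talk.
rng = np.random.default_rng(0)

alpha = 0.1            # learning rate
value = 0.0            # learned value of the "post a photo" action
values = []

for t in range(200):
    reward = rng.binomial(1, 0.7)       # assumed probability that a post gets a "like"
    value += alpha * (reward - value)   # prediction-error update
    values.append(value)

print(f"learned value after 200 trials: {values[-1]:.2f} (approaches 0.7)")
```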
Sensory cognition
This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.
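As a rough illustration of the manifold-geometry quantities mentioned above (the radius and dimensionality of an object manifold), the sketch below estimates both from a simulated population response. The data and the specific estimators (total variance for radius, participation ratio for dimensionality) are illustrative assumptions rather than Chung's exact formalism.

```python
import numpy as np

# Minimal sketch: estimate the radius and effective dimensionality of an
# "object manifold", i.e. the cloud of population responses to one object
# under nuisance variation. Synthetic data; illustrative only.
rng = np.random.default_rng(1)

n_neurons, n_samples = 100, 500
responses = rng.normal(size=(n_samples, n_neurons)) @ np.diag(
    np.linspace(2.0, 0.1, n_neurons))  # anisotropic cloud: a few dominant axes

centered = responses - responses.mean(axis=0)
cov = centered.T @ centered / n_samples
eigvals = np.linalg.eigvalsh(cov)[::-1]

radius = np.sqrt(eigvals.sum())                             # overall manifold extent
dimensionality = eigvals.sum() ** 2 / (eigvals ** 2).sum()  # participation ratio

print(f"manifold radius ~ {radius:.2f}, effective dimensionality ~ {dimensionality:.1f}")
```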
Learning and Memory
This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.
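For readers unfamiliar with the plasticity rules referenced above, the following is a minimal pair-based STDP sketch: pre-before-post spike pairs potentiate a synapse, post-before-pre pairs depress it. The amplitudes and time constants are illustrative assumptions, not the calcium-based model presented in the talk.

```python
import numpy as np

# Minimal pair-based STDP sketch (illustrative parameter values).
A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # time constants (ms)

def stdp_dw(delta_t):
    """Weight change for a single pre/post spike pair.

    delta_t = t_post - t_pre (ms): positive -> pre before post -> potentiation.
    """
    if delta_t > 0:
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+} ms -> dw = {stdp_dw(dt):+.4f}")
```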
Hallucinating mice, dopamine and immunity; towards mechanistic treatment targets for psychosis
Hallucinations are a core symptom of psychotic disorders and have traditionally been difficult to study biologically. We developed a new behavioral computational approach to measure hallucination-like perception in humans and mice alike. Using targeted neural circuit manipulations, we identified a causal role for striatal dopamine in mediating hallucination-like perception. Building on this, we are currently investigating the neural and immunological upstream regulators of these dopaminergic circuits, with the goal of identifying new biological treatment targets for psychosis.
Implications of Vector-space models of Relational Concepts
Vector-space models are used frequently to compare similarity and dimensionality among entity concepts. What happens when we apply these models to relational concepts? What is the evidence that such models do apply to relational concepts? If we use such a model, then one implication is that maximizing surface-feature variation should improve relational concept learning. For example, in STEM instruction, the effectiveness of teaching by analogy is often limited by students’ focus on superficial features of the source and target exemplars. However, in contrast to the prediction of the vector-space computational model, the strategy of progressive alignment (moving from perceptually similar to perceptually dissimilar targets) has been suggested to address this issue (Gentner & Hoyos, 2017), and human behavioral evidence has shown benefits from progressive alignment. Here I will present some preliminary data that support the computational approach. Participants were explicitly instructed to match stimuli based on relations while the perceptual similarity of the stimuli varied parametrically. We found that lower perceptual similarity reduced accurate relational matching. This finding demonstrates that perceptual similarity may interfere with relational judgements, but also hints at why progressive alignment may be effective. These are preliminary, exploratory data, and I hope to receive feedback on the framework and to start a discussion with the group on the utility of vector-space models for relational concepts in general.
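The sketch below illustrates, with toy vectors, the vector-space treatment of relational concepts under discussion: a relation is encoded as the difference between entity vectors and relations are compared by cosine similarity, so idiosyncratic surface features of the entities leak into the relation representation. The vectors and the difference-based encoding are illustrative assumptions, not the stimuli or model used in the study.

```python
import numpy as np

# Toy vector-space encoding of a relational concept: relation = difference of
# entity vectors; relations compared by cosine similarity. Illustrative only.
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(2)
dim = 50

# Entity vector = shared "relational role" component + idiosyncratic surface features
role_larger, role_smaller = rng.normal(size=dim), rng.normal(size=dim)

def entity(role, surface_scale):
    return role + surface_scale * rng.normal(size=dim)

# Two instances of the same relation with different amounts of surface variation
rel_low_surface  = entity(role_larger, 0.2) - entity(role_smaller, 0.2)
rel_high_surface = entity(role_larger, 2.0) - entity(role_smaller, 2.0)
rel_reference    = role_larger - role_smaller

print(f"similarity to reference relation, low surface variation:  {cosine(rel_reference, rel_low_surface):.2f}")
print(f"similarity to reference relation, high surface variation: {cosine(rel_reference, rel_high_surface):.2f}")
```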
Mouse visual cortex as a limited resource system that self-learns an ecologically-general representation
Studies of the mouse visual system have revealed a variety of visual brain areas in a roughly hierarchical arrangement, together with a multitude of behavioral capacities, ranging from stimulus-reward associations to goal-directed navigation and object-centric discriminations. However, an overall understanding of how mouse visual cortex is organized, and how this organization supports visual behaviors, is still lacking. Here, we take a computational approach to help address these questions, providing a high-fidelity quantitative model of mouse visual cortex. By analyzing the factors contributing to model fidelity, we identified key principles underlying the organization of mouse visual cortex. Structurally, we find that comparatively low resolution and shallow structure were both important for model fidelity. Functionally, we find that models trained with task-agnostic, unsupervised objective functions based on the concept of contrastive embeddings were substantially better than models trained with supervised objectives. Finally, the unsupervised objective builds a general-purpose visual representation that enables the system to achieve better transfer on out-of-distribution visual tasks, including scene understanding and reward-based navigation. Our results suggest that mouse visual cortex is a low-resolution, shallow network that makes best use of the mouse’s limited resources to create a lightweight, general-purpose visual system – in contrast to the deep, high-resolution, and more task-specific visual system of primates.
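As a generic illustration of the contrastive-embedding objectives referred to above, the sketch below implements an InfoNCE-style loss on synthetic embeddings; it is not the specific objective or architecture used in the mouse visual cortex models.

```python
import numpy as np

# Generic InfoNCE-style contrastive objective on synthetic embeddings
# (illustrative sketch, not the objective used in the study).
def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same images."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature            # similarity of every view-1 to every view-2
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives lie on the diagonal: view 1 of image i should match view 2 of image i
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(3)
batch, dim = 32, 64
base = rng.normal(size=(batch, dim))
loss_matched = info_nce(base + 0.1 * rng.normal(size=(batch, dim)),
                        base + 0.1 * rng.normal(size=(batch, dim)))
loss_random = info_nce(rng.normal(size=(batch, dim)), rng.normal(size=(batch, dim)))
print(f"loss for matched views: {loss_matched:.2f}  vs  random pairs: {loss_random:.2f}")
```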
Network science and network medicine: New strategies for understanding and treating the biological basis of mental ill-health
The last twenty years have witnessed extraordinarily rapid progress in basic neuroscience, including breakthrough technologies such as optogenetics, and the collection of unprecedented amounts of neuroimaging, genetic and other data relevant to neuroscience and mental health. However, the translation of this progress into improved understanding of brain function and dysfunction has been comparatively slow. As a result, the development of therapeutics for mental health has stagnated too. One central challenge has been to extract meaning from these large, complex, multivariate datasets, which requires a shift towards systems-level mathematical and computational approaches. A second challenge has been reconciling different scales of investigation, from genes and molecules to cells, circuits, tissue, whole-brain, and ultimately behaviour. In this talk I will describe several strands of work using mathematical, statistical, and bioinformatic methods to bridge these gaps. Topics will include: using artificial neural networks to link the organization of large-scale brain connectivity to cognitive function; using multivariate statistical methods to link disease-related changes in brain networks to the underlying biological processes; and using network-based approaches to move from genetic insights towards drug discovery. Finally, I will discuss how simple organisms such as C. elegans can serve to inspire, test, and validate new methods and insights in network neuroscience.
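To give a flavour of the network-based approaches mentioned above, the sketch below computes standard graph measures (clustering, path length, hub centrality) on a synthetic small-world graph standing in for a brain connectivity network; the graph and its parameters are purely illustrative.

```python
import networkx as nx

# Network-science style summary measures on a synthetic "brain" graph
# (a connected small-world graph as a stand-in for real connectivity data).
G = nx.connected_watts_strogatz_graph(n=90, k=6, p=0.1, seed=4)  # ~90 "brain regions"

clustering = nx.average_clustering(G)
path_length = nx.average_shortest_path_length(G)
hubs = sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1])[:5]

print(f"average clustering: {clustering:.2f}")
print(f"characteristic path length: {path_length:.2f}")
print("top-5 hub regions (by betweenness):", [node for node, _ in hubs])
```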
Why do we need a formal ontology of cognition, and what should it look like?
In my talk I will discuss the concept of a cognitive ontology, which defines the parts of the mind that psychologists and neuroscientists aim to study. I will discuss how ontologies have traditionally been defined, and then consider ways in which they might be reconsidered in the context of computational approaches to cognition.
Characterising the brain representations behind variations in real-world visual behaviour
Not all individuals are equally competent at recognizing the faces they interact with. Revealing how the brains of different individuals support variations in this ability is a crucial step towards understanding real-world human visual behaviour. In this talk, I will present findings from a large high-density EEG dataset (>100k trials of participants processing various stimulus categories) and computational approaches aimed at characterising the brain representations behind the real-world proficiency of “super-recognizers”—individuals at the top of the face recognition ability spectrum. Using decoding analyses of time-resolved EEG patterns, we predicted with high precision the trial-by-trial activity of super-recognizer participants, and showed that evidence for variations in face recognition ability is distributed across early, intermediate and late brain processing steps. Computational modeling of the underlying brain activity uncovered two representational signatures supporting higher face recognition ability: i) mid-level visual and ii) semantic computations. The two components were dissociable in processing time (the former around the N170, the latter around the P600) and in level of computation (the former emerging from mid-level layers of visual convolutional neural networks, the latter from a semantic model characterising sentence descriptions of images). I will conclude by presenting ongoing analyses of a well-known case of acquired prosopagnosia (PS) using similar computational modeling of high-density EEG activity.
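The sketch below illustrates the general logic of the time-resolved decoding described above: a classifier is fit independently at each time point of the epoch and its cross-validated accuracy traced over time. The data are synthetic, and the dimensions, classifier and injected effect are illustrative assumptions, not the parameters of the actual study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Time-resolved decoding of synthetic EEG epochs: one classifier per time bin.
rng = np.random.default_rng(5)
n_trials, n_channels, n_times = 200, 64, 50
labels = rng.integers(0, 2, size=n_trials)          # e.g. face vs. non-face
eeg = rng.normal(size=(n_trials, n_channels, n_times))
eeg[:, :10, 20:35] += labels[:, None, None] * 0.5   # inject a class signal in a mid-latency window

accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, eeg[:, :, t], labels, cv=5).mean()

print(f"peak decoding accuracy {accuracy.max():.2f} at time bin {accuracy.argmax()}")
```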
The 2021 Annual Bioengineering Lecture + Bioinspired Guidance, Navigation and Control Symposium
Join the Department of Bioengineering on the 26th May at 9:00am for The 2021 Annual Bioengineering Lecture + Bioinspired Guidance, Navigation and Control Symposium. This year’s lecture speaker will be distinguished bioengineer and neuroscientist Professor Mandyam V. Srinivasan AM FRS, from the University of Queensland. Professor Srinivasan studies visual systems, particularly those of bees and birds. His research has revealed how flying insects negotiate narrow gaps, regulate the height and speed of flight, estimate distance flown, and orchestrate smooth landings. Apart from enhancing fundamental knowledge, these findings are leading to novel, biologically inspired approaches to the design of guidance systems for unmanned aerial vehicles with applications in the areas of surveillance, security and planetary exploration. Following Professor Srinivasan’s lecture will be the Bioinspired GNC Mini Symposium with guest speakers from Google Deepmind, Imperial College London, the University of Würzburg and the University of Konstanz giving talks on their research into autonomous robot navigation, neural mechanisms of compass orientation in insects and computational approaches to motor control.
A Changing View of Vision: From Molecules to Behavior in Zebrafish
All sensory perception and every coordinated movement, as well as feelings, memories and motivation, arise from the bustling activity of many millions of interconnected cells in the brain. The ultimate function of this elaborate network is to generate behavior. We use zebrafish as our experimental model, employing a diverse array of molecular, genetic, optical, connectomic, behavioral and computational approaches. The goal of our research is to understand how neuronal circuits integrate sensory inputs and internal state and convert this information into behavioral responses.
Modelling affective biases in rodents: behavioural and computational approaches
My research focuses, broadly speaking, on how emotions impact decision making. Specifically, I am interested in affective biases, a phenomenon known to be important in depression. Using a rodent decision-making task combined with computational modelling, I have investigated how different antidepressant and pro-depressant manipulations that are known to alter mood in humans alter judgement bias, and provided insight into the decision processes that underlie these behaviours. I will also highlight how the combination of behaviour and modelling can provide a truly translational approach, enabling comparison and interpretation of the same cognitive processes between animal and human research.
Theoretical and computational approaches to neuroscience with complex models in high dimensions across multiple timescales: from perception to motor control and learning
Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment-by-moment collective dynamics of the brain instantiate learning and cognition. However, efficiently extracting such a conceptual understanding from large, high-dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high-dimensional statistics and deep learning can aid us in this process. In particular, we will discuss: how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning; how to trade off very different experimental resources, such as numbers of recorded neurons and trials, to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; algorithmic approaches for simplifying deep network models of perception; and optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
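As a bare-bones illustration of the tensor component analysis mentioned above, the sketch below fits a CP decomposition to a synthetic neurons x time x trials tensor with alternating least squares; it omits the non-negativity constraints, cross-validation and time warping of the full approach, and the data and rank are made-up assumptions.

```python
import numpy as np

# Tensor component analysis sketch: CP decomposition of a synthetic
# neurons x time x trials tensor via alternating least squares.
rng = np.random.default_rng(6)
n_neurons, n_time, n_trials, rank = 30, 40, 25, 3

# Ground-truth low-rank structure plus a little noise
A0, B0, C0 = (rng.random((n, rank)) for n in (n_neurons, n_time, n_trials))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.normal(size=(n_neurons, n_time, n_trials))

def khatri_rao(U, V):
    # Column-wise Kronecker product: result[u*V.shape[0] + v, r] = U[u, r] * V[v, r]
    return (U[:, None, :] * V[None, :, :]).reshape(U.shape[0] * V.shape[0], -1)

A, B, C = (rng.random((n, rank)) for n in (n_neurons, n_time, n_trials))
for _ in range(100):  # alternating least-squares updates of the three factor matrices
    A = X.reshape(n_neurons, -1) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
    B = X.transpose(1, 0, 2).reshape(n_time, -1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
    C = X.transpose(2, 0, 1).reshape(n_trials, -1) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))

X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(f"relative reconstruction error: {np.linalg.norm(X - X_hat) / np.linalg.norm(X):.3f}")
```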
Machine reasoning in histopathologic image analysis
Deep learning is an emerging computational approach, inspired by the human brain’s neural connectivity, that has transformed machine-based image analysis. Using histopathology as a model of an expert-level pattern recognition exercise, we explore the ability of humans to teach machines to learn and mimic image recognition and decision making. Moreover, these models also allow exploration of the ability of computers to independently learn salient histological patterns and complex ontological relationships that parallel biological and expert knowledge, without the need for explicit direction or supervision. Deciphering the overlap between human and unsupervised machine reasoning may aid in eliminating biases and improving automation and accountability for artificial intelligence-assisted vision tasks and decision-making.
Deep generative networks as a computational approach for global non-linear control modeling in the nematode C. elegans
Bernstein Conference 2024
A neurocomputational approach for effort-based decision-making: Comparing self and environmental motivation
FENS Forum 2024