design
Organization of thalamic networks and mechanisms of dysfunction in schizophrenia and autism
Thalamic networks, at the core of thalamocortical and thalamosubcortical communication, underlie perception, attention, memory, emotion, and the sleep-wake cycle, and are disrupted in mental disorders, including schizophrenia and autism. However, the underlying mechanisms of pathology are unknown. I will present novel evidence on key organizational principles and on structural and molecular features of thalamocortical networks, as well as on critical thalamic pathway interactions that are likely affected in these disorders. These data can facilitate modeling of typical and abnormal brain function and provide a foundation for understanding the heterogeneous disruption of these networks in the sleep disturbances, attention deficits, and cognitive and affective impairments seen in schizophrenia and autism, with important implications for the design of targeted therapeutic interventions.
The Systems Vision Science Summer School & Symposium, August 11 – 22, 2025, Tuebingen, Germany
Applications are invited for the third edition of our Systems Vision Science (SVS) summer school, held since 2023 and designed for everyone interested in gaining a systems-level understanding of biological vision. We plan a coherent, graduate-level syllabus on the integration of experimental data with theory and models, featuring lectures, guided exercises, and discussion sessions. The summer school will end with a Systems Vision Science symposium on frontier topics on August 20-22, with additional invited and contributed presentations and posters. A call for contributions and participation in the symposium will be sent out in spring 2025. All summer school participants are invited to attend and are welcome to submit contributions to the symposium.
OpenNeuro FitLins GLM: An Accessible, Semi-Automated Pipeline for OpenNeuro Task fMRI Analysis
In this talk, I will discuss the OpenNeuro FitLins GLM package and provide an illustration of the analytic workflow. OpenNeuro FitLins GLM is a semi-automated pipeline that reduces barriers to analyzing task-based fMRI data from OpenNeuro's 600+ task datasets. Created for psychology, psychiatry, and cognitive neuroscience researchers without extensive computational expertise, the tool automates what is otherwise a largely manual process, built on collections of in-house scripts for data retrieval, validation, quality control, statistical modeling, and reporting, that can require weeks of effort. The workflow follows open-science practices, enhances reproducibility, and incorporates community feedback for model improvement. The pipeline integrates BIDS-compliant datasets and fMRIPrep-preprocessed derivatives, and dynamically creates BIDS Statistical Model specifications (run with FitLins) to perform common mass-univariate GLM analyses. To enhance and standardize reporting, it generates comprehensive reports that include design matrices, statistical maps, and COBIDAS-aligned documentation, fully reproducible from the model specifications and derivatives. OpenNeuro FitLins GLM has been tested on over 30 datasets spanning 50+ unique fMRI tasks (e.g., working memory, social processing, emotion regulation, decision-making, motor paradigms), reducing analysis times from weeks to hours on high-performance computing systems, thereby enabling researchers to conduct robust single-study, meta- and mega-analyses of task fMRI data with significantly improved accessibility, standardized reporting, and reproducibility.
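For readers unfamiliar with the moving parts named above, the following is a minimal sketch of a BIDS Stats Models-style specification of the kind such a pipeline could hand to FitLins. The field layout follows the public BIDS Stats Models convention, but the model, task, condition, and contrast names are hypothetical placeholders, not actual output of the package.

```python
# Minimal sketch of a BIDS Stats Models-style specification that a pipeline
# like this could pass to FitLins. Task, condition, and contrast names are
# hypothetical placeholders; consult the BIDS Stats Models spec for details.
import json

model = {
    "Name": "example_stopsignal_model",          # hypothetical model name
    "BIDSModelVersion": "1.0.0",
    "Input": {"task": ["stopsignal"]},           # hypothetical task label
    "Nodes": [
        {
            "Level": "Run",
            "Name": "run_level",
            "Model": {"X": ["go", "stop", 1]},   # condition regressors + intercept
            "Contrasts": [
                {
                    "Name": "stop_gt_go",
                    "ConditionList": ["stop", "go"],
                    "Weights": [1, -1],
                    "Test": "t",
                }
            ],
        },
        {
            "Level": "Subject",
            "Name": "subject_level",
            "Model": {"X": [1]},
            "DummyContrasts": {"Test": "t"},
        },
    ],
}

print(json.dumps(model, indent=2))               # would be saved as a model-*_smdl.json file
```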
Relating circuit dynamics to computation: robustness and dimension-specific computation in cortical dynamics
Neural dynamics represent the hard-to-interpret substrate of circuit computations. Advances in large-scale recordings have highlighted the sheer spatiotemporal complexity of circuit dynamics within and across circuits, portraying in detail the difficulty of interpreting such dynamics and relating them to computation. Indeed, even in extremely simplified experimental conditions, one observes high-dimensional temporal dynamics in the relevant circuits. This complexity can potentially be addressed by the notion that not all changes in population activity have equal meaning, i.e., a small change in the evolution of activity along a particular dimension may have a bigger effect on a given computation than a large change in another. We term such conditions dimension-specific computation. Considering motor preparatory activity in a delayed-response task, we used neural recordings performed simultaneously with optogenetic perturbations to probe circuit dynamics. First, we revealed a remarkable robustness in the detailed evolution of certain dimensions of the population activity, beyond what was thought to be the case experimentally and theoretically. Second, the robust dimension in activity space carries nearly all of the decodable behavioral information, whereas other, non-robust dimensions carry nearly none, as if the circuit were set up to make informative dimensions stiff, i.e., resistant to perturbations, leaving uninformative dimensions sloppy, i.e., sensitive to perturbations. Third, we show that this robustness can be achieved by a modular organization of circuitry, whereby modules whose dynamics normally evolve independently can correct each other's dynamics when an individual module is perturbed, a common design feature in robust systems engineering. Finally, I will present recent work extending this framework to understanding the neural dynamics underlying the preparation of speech.
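To make the idea of an informative dimension concrete, here is an illustrative sketch, not the authors' analysis, that projects simulated population activity onto a coding direction (the difference of condition means) and decodes the behavioral choice from that single dimension; all data and variable names are invented for the example.

```python
# Illustrative sketch: decode a binary choice from the projection of population
# activity onto a single "coding direction" (difference of condition means).
# Simulated data only; this is not the analysis pipeline from the talk.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50
choice = rng.integers(0, 2, n_trials)                        # 0 = left, 1 = right (hypothetical)
signal = np.outer(choice - 0.5, rng.normal(size=n_neurons))  # choice-dependent pattern
activity = signal + 0.5 * rng.normal(size=(n_trials, n_neurons))

# Coding direction: difference of the two condition means, unit-normalized.
cd = activity[choice == 1].mean(0) - activity[choice == 0].mean(0)
cd /= np.linalg.norm(cd)

projection = activity @ cd                                   # the 1-D "informative" dimension
decoded = (projection > projection.mean()).astype(int)
print("decoding accuracy along the coding direction:", (decoded == choice).mean())
```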
Enhancing Real-World Event Memory
Memory is essential for shaping how we interpret the world, plan for the future, and understand ourselves, yet effective cognitive interventions for real-world episodic memory loss remain scarce. This talk introduces HippoCamera, a smartphone-based intervention inspired by how the brain supports memory, designed to enhance real-world episodic recollection by replaying high-fidelity autobiographical cues. It will showcase how our approach improves memory, mood, and hippocampal activity while uncovering links between memory distinctiveness, well-being, and the perception of time.
Screen Savers: Protecting adolescent mental health in a digital world
In our rapidly evolving digital world, there is increasing concern about the impact of digital technologies and social media on the mental health of young people. Policymakers and the public are nervous. Psychologists are facing mounting pressure to deliver evidence that can inform policies and practices to safeguard both young people and society at large. However, research progress is slow while technological change is accelerating. My talk will reflect on this, both as a question of psychological science and as a question of metascience. Digital companies have designed highly popular environments that differ in important ways from traditional offline spaces. By revisiting the foundations of psychology (e.g., development and cognition) and considering how digital change affects established theories and findings, we gain deeper insights into questions such as the following. (1) How do digital environments exacerbate developmental vulnerabilities that predispose young people to mental health conditions? (2) How do digital designs interact with cognitive and learning processes, formalised through computational approaches such as reinforcement learning or Bayesian modelling? However, we also need to face deeper questions about what it means to do science about new technologies and the challenge of keeping pace with technological advancement. I therefore discuss the concept of 'fast science', where, during crises, scientists might lower their standards of evidence to reach conclusions more quickly. Might psychologists want to take this approach in the face of technological change and looming concerns? The talk concludes with a discussion of such strategies for 21st-century psychology research in the era of digitalization.
Using Adversarial Collaboration to Harness Collective Intelligence
There are many mysteries in the universe. One of the most significant, often considered the final frontier in science, is understanding how our subjective experience, or consciousness, emerges from the collective action of neurons in biological systems. While substantial progress has been made over the past decades, a unified and widely accepted explanation of the neural mechanisms underpinning consciousness remains elusive. The field is rife with theories that frequently provide contradictory explanations of the phenomenon. To accelerate progress, we have adopted a new model of science: adversarial collaboration in team science. Our goal is to test theories of consciousness in an adversarial setting. Adversarial collaboration offers a unique way to bolster creativity and rigor in scientific research by merging the expertise of teams with diverse viewpoints. Ideally, we aim to harness collective intelligence, embracing various perspectives, to expedite the uncovering of scientific truths. In this talk, I will highlight the effectiveness (and challenges) of this approach using selected case studies, showcasing its potential to counter biases, challenge traditional viewpoints, and foster innovative thought. Through the joint design of experiments, teams incorporate a competitive aspect, ensuring comprehensive exploration of problems. This method underscores the importance of structured conflict and diversity in propelling scientific advancement and innovation.
Neuronal population interactions between brain areas
Most brain functions involve interactions among multiple, distinct areas or nuclei. Yet our understanding of how populations of neurons in interconnected brain areas communicate is in its infancy. Using a population approach, we found that interactions between early visual cortical areas (V1 and V2) occur through a low-dimensional bottleneck, termed a communication subspace. In this talk, I will focus on the statistical methods we have developed for studying interactions between brain areas. First, I will describe Delayed Latents Across Groups (DLAG), designed to disentangle concurrent, bi-directional (i.e., feedforward and feedback) interactions between areas. Second, I will describe an extension of DLAG applicable to three or more areas, and demonstrate its utility for studying simultaneous Neuropixels recordings in areas V1, V2, and V3. Our results provide a framework for understanding how neuronal population activity is gated and selectively routed across brain areas.
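As a rough illustration of the communication-subspace idea, and not of the DLAG method itself, the sketch below fits a rank-constrained linear map from a simulated source population to a simulated target population and shows that a few dimensions capture most of the predictable variance; the data and ranks are made up.

```python
# Illustrative sketch of a low-dimensional "communication subspace":
# predict area-B activity from area-A activity through a rank-constrained map.
# Simulated data; this is plain reduced-rank regression, not the DLAG method.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_a, n_b, true_rank = 500, 40, 30, 3

latents = rng.normal(size=(n_trials, true_rank))          # shared low-dimensional signal
A = latents @ rng.normal(size=(true_rank, n_a)) + 0.5 * rng.normal(size=(n_trials, n_a))
B = latents @ rng.normal(size=(true_rank, n_b)) + 0.5 * rng.normal(size=(n_trials, n_b))

# Full least-squares map from A to B, then truncate its SVD to a given rank.
W_full, *_ = np.linalg.lstsq(A, B, rcond=None)
U, s, Vt = np.linalg.svd(W_full, full_matrices=False)

for rank in (1, 3, 10):
    W_r = U[:, :rank] * s[:rank] @ Vt[:rank]
    r2 = 1 - np.sum((B - A @ W_r) ** 2) / np.sum((B - B.mean(0)) ** 2)
    print(f"rank {rank:2d}: R^2 = {r2:.3f}")
```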
Current and future trends in neuroimaging
With the advent of several different fMRI analysis tools and packages outside of the established ones (i.e., SPM, AFNI, and FSL), today's researcher may wonder what the best practices are for fMRI analysis. This talk will discuss some of the recent trends in neuroimaging, including design optimization and power analysis, standardized analysis pipelines such as fMRIPrep, and an overview of current recommendations for how to present neuroimaging results. Along the way we will discuss the balance between Type I and Type II errors with different correction mechanisms (e.g., Threshold-Free Cluster Enhancement and Equitable Thresholding and Clustering), as well as considerations for working with large open-access databases.
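As a toy illustration of the Type I/Type II trade-off discussed here, rather than of any specific correction method such as TFCE or ETAC, the sketch below simulates many independent voxel-wise tests and compares uncorrected and Bonferroni-corrected thresholds; all numbers are arbitrary.

```python
# Toy simulation of the Type I / Type II trade-off across many voxel-wise
# one-sample t-tests. Uncorrected thresholds inflate false positives;
# Bonferroni controls them at the cost of power. Numbers are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_voxels, n_subjects, n_signal = 10_000, 20, 500
effect = np.zeros(n_voxels)
effect[:n_signal] = 0.8                          # true effect in "signal" voxels

data = effect + rng.normal(size=(n_subjects, n_voxels))
p = stats.ttest_1samp(data, popmean=0.0).pvalue

for label, alpha in [("uncorrected p < .05", 0.05),
                     ("Bonferroni", 0.05 / n_voxels)]:
    sig = p < alpha
    fp_rate = sig[n_signal:].mean()              # Type I error rate in null voxels
    power = sig[:n_signal].mean()                # sensitivity in signal voxels
    print(f"{label:20s} false-positive rate {fp_rate:.4f}  power {power:.2f}")
```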
Doubting the neurofeedback double-blind: do participants have residual awareness of experimental purposes in neurofeedback studies?
Neurofeedback provides a feedback display linked with ongoing brain activity and thus allows self-regulation of neural activity in specific brain regions associated with certain cognitive functions; it is considered a promising tool for clinical interventions. Recent reviews of neurofeedback have stressed the importance of applying a "double-blind" experimental design in which, critically, the patient is unaware of the neurofeedback treatment condition. An important question then becomes: is a double-blind even possible? Or are subjects aware of the purposes of the neurofeedback experiment? This question is related to the issue of how we assess awareness, or the absence of awareness, of certain information in human subjects. Fortunately, methods have been developed that employ neurofeedback implicitly, where the subject is claimed to have no awareness of the experimental purposes while performing the neurofeedback. Implicit neurofeedback is intriguing and controversial because it runs counter to the first neurofeedback study, which showed a link between awareness of being in a certain brain state and control of the neurofeedback-derived brain activity. Claiming that humans are unaware of a specific type of mental content is a notoriously difficult endeavor. For instance, phenomena long held to be wholly unconscious, such as dreams or subliminal perception, have been overturned by more sensitive measures showing that degrees of awareness can be detected. In this talk, I will critically examine the claim that we can know for certain that a neurofeedback experiment was performed in an unconscious manner. I will present evidence that in certain neurofeedback experiments, such as manipulations of attention, participants display residual degrees of awareness of the experimental contingencies used to alter their cognition.
Bernstein Student Workshop Series
The Bernstein Student Workshop Series is an initiative of the student members of the Bernstein Network. It provides a unique opportunity to enhance technical exchange on a peer-to-peer basis. The series is motivated by the idea of bridging the gap between theoretical and experimental neuroscience by bringing together methodological expertise within the network. Unlike in conventional workshops, a talented junior scientist first gives a tutorial on a specific theoretical or experimental technique and then gives a talk about their own research to demonstrate how the technique helps to address neuroscience questions. The workshop series is designed to cover a wide range of theoretical and experimental techniques and to elucidate how different techniques can be applied to answer different types of neuroscience questions. By combining the technical tutorial and the research talk, the workshop series aims to promote knowledge sharing in the community and enhance in-depth discussions among students from diverse backgrounds.
Walk the talk: concrete actions to promote diversity in neuroscience in Latin America
Building upon the webinar "What are the main barriers to succeed in brain sciences in Latin America?" (February 2021) and the paper "Addressing the opportunity gap in the Latin American neuroscience community" (Silva, A., Iyer, K., Cirulli, F. et al. Nat Neurosci August 2022), this ALBA-IBRO Webinar is the next chapter in our journey towards fostering inclusivity and diversity in neuroscience in Latin America. The webinar is designed to go beyond theoretical discussions and provide tangible solutions. We will showcase 3-4 best practice case studies, shining a spotlight on real-life actions and campaigns implemented at the institutional level, be it within government bodies, universities, or other organisations. Our goal is to empower neuroscientists across Latin America by equipping them with practical knowledge they can apply in their own institutions and countries.
Dynamic endocrine modulation of the nervous system
Sex hormones are powerful neuromodulators of learning and memory. In rodents and nonhuman primates, estrogen and progesterone influence the central nervous system across a range of spatiotemporal scales. Yet their influence on the structural and functional architecture of the human brain is largely unknown. Here, I highlight findings from a series of dense-sampling neuroimaging studies from my laboratory designed to probe the dynamic interplay between the nervous and endocrine systems. Individuals underwent brain imaging and venipuncture every 12-24 hours for 30 consecutive days. These procedures were carried out under freely cycling conditions and again under a pharmacological regimen that chronically suppresses sex hormone production. First, resting-state fMRI evidence suggests that transient increases in estrogen drive robust increases in functional connectivity across the brain. Time-lagged methods from dynamical systems analysis further reveal that these transient changes in estrogen enhance within-network integration (i.e., global efficiency) in several large-scale brain networks, particularly the Default Mode and Dorsal Attention Networks. Next, using high-resolution hippocampal subfield imaging, we found that intrinsic hormone fluctuations and exogenous hormone manipulations can rapidly and dynamically shape medial temporal lobe morphology. Together, these findings suggest that neuroendocrine factors influence the brain over both short and protracted timescales.
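For readers unfamiliar with the global-efficiency measure mentioned above, here is a generic sketch, not the study's actual pipeline, that thresholds a simulated functional-connectivity matrix and computes global efficiency with networkx; the threshold and data are arbitrary.

```python
# Generic sketch: global efficiency of a thresholded functional-connectivity
# graph. Simulated correlation matrix; not the study's actual pipeline.
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
n_regions = 30
ts = rng.normal(size=(200, n_regions))          # fake regional time series
fc = np.corrcoef(ts, rowvar=False)              # functional connectivity matrix

threshold = 0.1                                 # arbitrary edge threshold
adj = (np.abs(fc) > threshold) & ~np.eye(n_regions, dtype=bool)
G = nx.from_numpy_array(adj.astype(int))

print("global efficiency:", nx.global_efficiency(G))
```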
Relations and Predictions in Brains and Machines
Humans and animals learn and plan with flexibility and efficiency well beyond that of modern Machine Learning methods. This is hypothesized to owe in part to the ability of animals to build structured representations of their environments, and modulate these representations to rapidly adapt to new settings. In the first part of this talk, I will discuss theoretical work describing how learned representations in hippocampus enable rapid adaptation to new goals by learning predictive representations, while entorhinal cortex compresses these predictive representations with spectral methods that support smooth generalization among related states. I will also cover recent work extending this account, in which we show how the predictive model can be adapted to the probabilistic setting to describe a broader array of generalization results in humans and animals, and how entorhinal representations can be modulated to support sample generation optimized for different behavioral states. In the second part of the talk, I will overview some of the ways in which we have combined many of the same mathematical concepts with state-of-the-art deep learning methods to improve efficiency and performance in machine learning applications like physical simulation, relational reasoning, and design.
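As a compact illustration of the predictive-representation idea referenced here, the sketch below computes the textbook successor representation M = (I - gamma*T)^-1 for a random walk on a ring and inspects its spectrum, whose smooth low-frequency eigenvectors are the kind of compressed basis attributed to entorhinal cortex; this is a standard construction, not the speaker's specific model.

```python
# Textbook sketch of a predictive (successor) representation on a 1-D ring:
# M = (I - gamma * T)^-1 for a random-walk transition matrix T, followed by a
# spectral decomposition. Illustrative only; not the speaker's specific model.
import numpy as np

n_states, gamma = 20, 0.95
T = np.zeros((n_states, n_states))
for s in range(n_states):                        # unbiased random walk on a ring
    T[s, (s - 1) % n_states] = 0.5
    T[s, (s + 1) % n_states] = 0.5

M = np.linalg.inv(np.eye(n_states) - gamma * T)  # successor representation

# Low-frequency eigenvectors of M give smooth, place/grid-like basis functions.
eigvals, eigvecs = np.linalg.eig(M)
order = np.argsort(-eigvals.real)
print("top eigenvalues:", np.round(eigvals.real[order][:4], 2))
print("second eigenvector (smooth over the ring):",
      np.round(eigvecs[:, order[1]].real, 2))
```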
Analogical Reasoning and Generalization for Interactive Task Learning in Physical Machines
Humans are natural teachers; learning through instruction is one of the most fundamental ways that we learn. Interactive Task Learning (ITL) is an emerging research agenda that studies the design of complex intelligent robots that can acquire new knowledge through natural human teacher-robot learner interactions. ITL methods are particularly useful for designing intelligent robots whose behavior can be adapted by the humans collaborating with them. In this talk, I will summarize our recent findings on the structure that human instruction naturally has and motivate an intelligent system design that can exploit that structure. The system – AILEEN – is being developed using the Common Model of Cognition. Architectures that implement the Common Model of Cognition – Soar, ACT-R, and Sigma – have a prominent place in research on cognitive modeling as well as on designing complex intelligent agents. However, they miss a critical piece of intelligent behavior: analogical reasoning and generalization. I will introduce a new memory – concept memory – that integrates with a Common Model of Cognition architecture and supports ITL.
How Children Design by Analogy: The Role of Spatial Thinking
Analogical reasoning is a common reasoning tool for learning and problem-solving. Existing research has extensively studied children’s reasoning when comparing, or choosing from ready-made analogies. Relatively less is known about how children come up with analogies in authentic learning environments. Design education provides a suitable context to investigate how children generate analogies for creative learning purposes. Meanwhile, the frequent use of visual analogies in design provides an additional opportunity to understand the role of spatial reasoning in design-by-analogy. Spatial reasoning is one of the most studied human cognitive factors and is critical to the learning of science, technology, engineering, arts, and mathematics (STEAM). There is growing interest in exploring the interplay between analogical reasoning and spatial reasoning. In this talk, I will share qualitative findings from a case study, where a class of 11-to-12-year-olds in the Netherlands participated in a biomimicry design project. These findings illustrate (1) practical ways to support children’s analogical reasoning in the ideation process and (2) the potential role of spatial reasoning as seen in children mapping form-function relationships in nature analogically and adaptively to those in human designs.
A specialized role for entorhinal attractor dynamics in combining path integration and landmarks during navigation
During navigation, animals estimate their position using path integration and landmarks. In a series of two studies, we used virtual reality and electrophysiology to dissect how these inputs combine to generate the brain’s spatial representations. In the first study (Campbell et al., 2018), we focused on the medial entorhinal cortex (MEC) and its set of navigationally-relevant cell types, including grid cells, border cells, and speed cells. We discovered that attractor dynamics could explain an array of initially puzzling MEC responses to virtual reality manipulations. This theoretical framework successfully predicted both MEC grid cell responses to additional virtual reality manipulations, as well as mouse behavior in a virtual path integration task. In the second study (Campbell*, Attinger* et al., 2021), we asked whether these principles generalize to other navigationally-relevant brain regions. We used Neuropixels probes to record thousands of neurons from MEC, primary visual cortex (V1), and retrosplenial cortex (RSC). In contrast to the prevailing view that “everything is everywhere all at once,” we identified a unique population of MEC neurons, overlapping with grid cells, that became active with striking spatial periodicity while head-fixed mice ran on a treadmill in darkness. These neurons exhibited unique cue-integration properties compared to other MEC, V1, or RSC neurons: they remapped more readily in response to conflicts between path integration and landmarks; they coded position prospectively as opposed to retrospectively; they upweighted path integration relative to landmarks in conditions of low visual contrast; and as a population, they exhibited a lower-dimensional activity structure. Based on these results, our current view is that MEC attractor dynamics play a privileged role in resolving conflicts between path integration and landmarks during navigation. Future work should include carefully designed causal manipulations to rigorously test this idea, and expand the theoretical framework to incorporate notions of uncertainty and optimality.
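For intuition about the computational problem posed above, here is a toy, algorithm-level sketch, not the attractor-network models from these studies, in which a position estimate driven by noisy self-motion drifts over time and occasional landmark observations partially correct it; the weights and noise levels are invented.

```python
# Toy sketch of combining path integration with landmarks: a position estimate
# integrates noisy self-motion and drifts; occasional landmark observations
# pull it partway back toward the true position. Parameters are arbitrary and
# this is not the attractor-network model from the studies described above.
import numpy as np

rng = np.random.default_rng(4)
steps, dt, speed = 2000, 0.01, 1.0
w_landmark = 0.5                                  # strength of landmark correction
true_pos, est = 0.0, 0.0

for t in range(steps):
    true_pos += speed * dt
    est += speed * dt + rng.normal(scale=0.02)    # noisy path integration
    if t % 200 == 199:                            # occasional landmark encounter
        landmark_obs = true_pos + rng.normal(scale=0.05)
        est += w_landmark * (landmark_obs - est)  # partial correction toward landmark

print("final path-integration error (a.u.):", round(est - true_pos, 3))
```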
Children-Agent Interaction For Assessment and Rehabilitation: From Linguistic Skills To Mental Well-being
Socially Assistive Robots (SARs) have shown great potential to help children in therapeutic and healthcare contexts. SARs have been used for companionship, learning enhancement, social and communication skills rehabilitation for children with special needs (e.g., autism), and mood improvement. Robots can be used as novel tools to assess and rehabilitate children’s communication skills and mental well-being by providing affordable and accessible therapeutic and mental health services. In this talk, I will present the various studies I have conducted during my PhD and at the Cambridge Affective Intelligence and Robotics Lab to explore how robots can help assess and rehabilitate children’s communication skills and mental well-being. More specifically, I will provide both quantitative and qualitative results and findings from (i) an exploratory study with children with autism and global developmental disorders to investigate the use of intelligent personal assistants in therapy; (ii) an empirical study involving children with and without language disorders interacting with a physical robot, a virtual agent, and a human counterpart to assess their linguistic skills; (iii) an 8-week longitudinal study involving children with autism and language disorders who interacted either with a physical or a virtual robot to rehabilitate their linguistic skills; and (iv) an empirical study to aid the assessment of mental well-being in children. These findings can inform and help the child-robot interaction community design and develop new adaptive robots to help assess and rehabilitate linguistic skills and mental well-being in children.
When to stop immune checkpoint inhibitor for malignant melanoma? Challenges in emulating target trials
Observational data have become a popular source of evidence for causal effects when no randomized controlled trial exists, or to supplement the information such trials provide. In practice, a wide range of designs and analytical choices exist, and one recent approach relies on the target trial emulation framework. This framework is particularly well suited to mimicking what could be obtained in a specific randomized controlled trial while avoiding time-related selection biases. In this abstract, we present how this framework could be useful for emulating trials in malignant melanoma, and the challenges faced when planning such a study using longitudinal observational data from a cohort study. More specifically, two questions are envisaged: the duration of immune checkpoint inhibitor treatment, and trials comparing treatment strategies for BRAF V600-mutant patients (targeted therapy as 1st line followed by immunotherapy as 2nd line, vs. immunotherapy as 1st line followed by targeted therapy as 2nd line). Using data from 1027 participants in the MELBASE cohort, we detail the results of emulating a trial in which immune checkpoint inhibitors would be stopped at 6 months vs. continued, in patients in response or with stable disease.
Meta-learning functional plasticity rules in neural networks
Synaptic plasticity is known to be a key player in the brain's life-long learning abilities. However, due to experimental limitations, the nature of the local changes at individual synapses and their link with emerging network-level computations remain unclear. I will present a numerical, meta-learning approach to deduce plasticity rules from neuronal activity data and/or prior knowledge about the network's computation. I will first show how to recover known rules, given a human-designed loss function in rate networks, or directly from data, using an adversarial approach. Then I will present how to scale up this approach to recurrent spiking networks using simulation-based inference.
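As a cartoon of what "deducing a plasticity rule from activity data" can mean, and not the gradient-based, spiking-network approach of the talk, the sketch below parameterizes a local update rule, generates a target weight trajectory with a plain Hebbian rule, and recovers the rule's coefficients by random-search meta-optimization; all quantities are synthetic.

```python
# Cartoon of meta-learning a functional plasticity rule: parameterize a local
# update dw = lr * (a*pre*post + b*post + c*pre), generate an "observed" weight
# trajectory with a Hebbian ground truth (a=1, b=c=0), and recover the
# coefficients by random search. Synthetic and illustrative only; the talk's
# approach uses gradient-based meta-learning and spiking networks.
import numpy as np

rng = np.random.default_rng(6)
T, lr = 200, 0.01
pre = rng.normal(size=T)                          # presynaptic activity trace
post = 0.5 * pre + 0.1 * rng.normal(size=T)       # correlated postsynaptic trace

def weight_trajectory(params):
    a, b, c = params
    dw = lr * (a * pre * post + b * post + c * pre)   # candidate local rule
    return np.cumsum(dw)                              # weight over time

target = weight_trajectory((1.0, 0.0, 0.0))       # "data" from the true rule

best, best_loss = None, np.inf
for _ in range(5000):                             # random-search meta-optimization
    cand = rng.uniform(-1.5, 1.5, size=3)
    loss = np.mean((weight_trajectory(cand) - target) ** 2)
    if loss < best_loss:
        best, best_loss = cand, loss

print("recovered (a, b, c); true value (1, 0, 0):", np.round(best, 2))
```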
Maths, AI and Neuroscience Meeting Stockholm
To understand brain function and develop artificial general intelligence, it has become abundantly clear that there must be close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning now provide much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent.
Experimental Neuroscience Bootcamp
This course provides a fundamental foundation in the modern techniques of experimental neuroscience. It introduces the essentials of sensors, motor control, microcontrollers, programming, data analysis, and machine learning by guiding students through the “hands on” construction of an increasingly capable robot. In parallel, related concepts in neuroscience are introduced as nature’s solution to the challenges students encounter while designing and building their own intelligent system.
Self-direction in daily stress management: the solution for mental health issues
In this lecture, Yvette Roke and Jamie Hoefakker will discuss the positive and negative effects of daily stress on mental health. They will also highlight which characteristics are likely to cause more stress-related issues, and why recovery time is very important. They will give an understanding of autism spectrum disorder (ASD) in relation to daily stress, and they will discuss SAM (the Stress Autism Mate), an app developed and investigated (in a SCED design) in co-creation with their patients with ASD.
Learning by Analogy in Mathematics
Analogies between old and new concepts are common during classroom instruction. While previous studies of transfer focus on how features of initial learning guide later transfer to new problem solving, less is known about how to best support analogical transfer from previous learning while children are engaged in new learning episodes. Such research may have important implications for teaching and learning in mathematics, which often includes analogies between old and new information. Some existing research promotes supporting learners' explicit connections across old and new information within an analogy. In this talk, I will present evidence that instructors can invite implicit analogical reasoning through warm-up activities designed to activate relevant prior knowledge. Warm-up activities "close the transfer space" between old and new learning without additional direct instruction.
Beyond Biologically Plausible Spiking Networks for Neuromorphic Computing
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
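To convey the flavour of event-based, activity-sparse recurrent computation, and not the actual EGRU equations, here is a rough numpy sketch of a GRU-like cell that only "emits" a unit's state to the rest of the network when it crosses a threshold, so most unit-steps stay silent; all sizes and weights are arbitrary.

```python
# Rough sketch of the idea behind an event-based, activity-sparse recurrent
# unit: a simplified GRU-like cell whose hidden state is only communicated
# (an "event") when it exceeds a threshold. Illustrates the concept only;
# this is not the EGRU formulation from the work described above.
import numpy as np

rng = np.random.default_rng(7)
d_in, d_hid, T, thresh = 8, 32, 100, 0.5

Wz, Uz = rng.normal(0, 0.3, (d_hid, d_in)), rng.normal(0, 0.3, (d_hid, d_hid))
Wh, Uh = rng.normal(0, 0.3, (d_hid, d_in)), rng.normal(0, 0.3, (d_hid, d_hid))
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

c = np.zeros(d_hid)                               # internal state
events = 0
for t in range(T):
    x = rng.normal(size=d_in)
    out = np.where(c > thresh, c, 0.0)            # event: emit state only above threshold
    z = sigmoid(Wz @ x + Uz @ out)                # update gate driven by sparse events
    h_tilde = np.tanh(Wh @ x + Uh @ out)
    c = (1 - z) * c + z * h_tilde                 # leaky integration of candidate state
    events += np.count_nonzero(out)

print("fraction of unit-steps emitting an event:", round(events / (T * d_hid), 3))
```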
Algorithm-Hardware Co-design for Efficient and Robust Spiking Neural Networks
Real-world scene perception and search from foveal to peripheral vision
A high-resolution central fovea is a prominent design feature of human vision. But how important is the fovea for information processing and gaze guidance in everyday visual-cognitive tasks? Following on from classic findings for sentence reading, I will present key results from a series of eye-tracking experiments in which observers had to search for a target object within static or dynamic images of real-world scenes. Gaze-contingent scotomas were used to selectively deny information processing in the fovea, parafovea, or periphery. Overall, the results suggest that foveal vision is less important and peripheral vision is more important for scene perception and search than previously thought. The importance of foveal vision was found to depend on the specific requirements of the task. Moreover, the data support a central-peripheral dichotomy in which peripheral vision selects and central vision recognizes.
Designing the BEARS (Both Ears) Virtual Reality Training Package to Improve Spatial Hearing in Young People with Bilateral Cochlear Implants
Results: The main areas modified based on participatory feedback were the variety of immersive scenarios, to cover a range of ages and interests; the number of levels of complexity, to ensure that small improvements could be measured; the feedback and reward schemes, to ensure positive reinforcement; and specific provision for participants with balance issues, who had difficulties when using head-mounted displays. The effectiveness of the finalised BEARS suite will be evaluated in a large-scale clinical trial. We have also added login options for other members of the family and, based on patient feedback, improved the accompanying reward schemes. Conclusions: Through participatory design we have developed a training package (BEARS) for young people with bilateral cochlear implants. The training games are appropriate for use by the study population and ultimately should lead to patients taking control of their own management, reducing reliance on outpatient-based rehabilitation programmes. Virtual reality training provides a more relevant and engaging approach to rehabilitation for young people.
Setting network states via the dynamics of action potential generation
To understand neural computation and the dynamics in the brain, we usually focus on the connectivity among neurons. In contrast, the properties of single neurons are often thought to be negligible, at least as far as the activity of networks is concerned. In this talk, I will contradict this notion and demonstrate how the biophysics of action-potential generation can have a decisive impact on network behaviour. Our recent theoretical work shows that, among regularly firing neurons, the somewhat unattended homoclinic type (characterized by a spike onset via a saddle homoclinic orbit bifurcation) particularly stands out: First, spikes of this type foster specific network states - synchronization in inhibitory and splayed-out/frustrated states in excitatory networks. Second, homoclinic spikes can easily be induced by changes in a variety of physiological parameters (like temperature, extracellular potassium, or dendritic morphology). As a consequence, such parameter changes can even induce switches in network states, solely based on a modification of cellular voltage dynamics. I will provide first experimental evidence and discuss functional consequences of homoclinic spikes for the design of efficient pattern-generating motor circuits in insects as well as for mammalian pathologies like febrile seizures. Our analysis predicts an interesting role for homoclinic action potentials as an integral part of brain dynamics in both health and disease.
General purpose event-based architectures for deep learning
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
The Secret Bayesian Life of Ring Attractor Networks
Efficient navigation requires animals to track their position, velocity and heading direction (HD). Some animals’ behavior suggests that they also track uncertainties about these navigational variables, and make strategic use of these uncertainties, in line with a Bayesian computation. Ring-attractor networks have been proposed to estimate and track these navigational variables, for instance in the HD system of the fruit fly Drosophila. However, such networks are not designed to incorporate a notion of uncertainty, and therefore seem unsuited to implement dynamic Bayesian inference. Here, we close this gap by showing that specifically tuned ring-attractor networks can track both a HD estimate and its associated uncertainty, thereby approximating a circular Kalman filter. We identified the network motifs required to integrate angular velocity observations, e.g., through self-initiated turns, and absolute HD observations, e.g., visual landmark inputs, according to their respective reliabilities, and show that these network motifs are present in the connectome of the Drosophila HD system. Specifically, our network encodes uncertainty in the amplitude of a localized bump of neural activity, thereby generalizing standard ring attractor models. In contrast to such standard attractors, however, proper Bayesian inference requires the network dynamics to operate in a regime away from the attractor state. More generally, we show that near-Bayesian integration is inherent in generic ring attractor networks, and that their amplitude dynamics can account for close-to-optimal reliability weighting of external evidence for a wide range of network parameters. This only holds, however, if their connection strengths allow the network to sufficiently deviate from the attractor state. Overall, our work offers a novel interpretation of ring attractor networks as implementing dynamic Bayesian integrators. We further provide a principled theoretical foundation for the suggestion that the Drosophila HD system may implement Bayesian HD tracking via ring attractor dynamics.
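At the algorithmic level, the computation attributed to the ring attractor here resembles a circular filter over heading. Below is a minimal sketch of such a filter, not the network model itself: it integrates angular velocity while inflating uncertainty, then fuses occasional noisy landmark cues weighted by their reliability using the von Mises product rule; the uncertainty-growth step is a crude approximation and all parameters are arbitrary.

```python
# Algorithm-level sketch of circular (von Mises) filtering for heading:
# predict by adding angular velocity and inflating uncertainty, then update
# with a noisy landmark cue weighted by its reliability (concentration kappa).
# This illustrates the computation discussed in the talk, not the ring
# attractor network itself; parameters and the drift step are arbitrary.
import numpy as np

rng = np.random.default_rng(5)
true_hd, mu, kappa = 0.0, 0.0, 10.0          # heading, estimate, estimate reliability
kappa_obs, kappa_drift = 5.0, 50.0           # cue reliability, per-step diffusion

for t in range(500):
    omega = 0.02                              # angular velocity (rad/step)
    true_hd = (true_hd + omega) % (2 * np.pi)

    # Prediction: integrate angular velocity, lose certainty to drift.
    mu = (mu + omega) % (2 * np.pi)
    kappa = 1.0 / (1.0 / kappa + 1.0 / kappa_drift)

    # Update with a noisy landmark observation every 50 steps.
    if t % 50 == 0:
        obs = true_hd + rng.vonmises(0.0, kappa_obs)
        # Product of two von Mises densities: add their "certainty vectors".
        C = kappa * np.cos(mu) + kappa_obs * np.cos(obs)
        S = kappa * np.sin(mu) + kappa_obs * np.sin(obs)
        mu, kappa = np.arctan2(S, C) % (2 * np.pi), np.hypot(C, S)

err = np.angle(np.exp(1j * (mu - true_hd)))
print("final heading error (rad):", round(float(err), 3))
```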
Learning with less labels for medical image segmentation
Accurate segmentation of medical images is a key step in developing Computer-Aided Diagnosis (CAD) and automating various clinical tasks such as image-guided interventions. The success of state-of-the-art methods for medical image segmentation is heavily reliant upon the availability of a sizable amount of labelled data. If the required quantity of labelled data for learning cannot be reached, the technology turns out to be fragile. The principle of consensus tells us that as humans, when we are uncertain how to act in a situation, we tend to look to others to determine how to respond. In this webinar, Dr Mehrtash Harandi will show how to model the principle of consensus to learn to segment medical data with limited labelled data. In doing so, we design multiple segmentation models that collaborate with each other to learn from labelled and unlabelled data collectively.
Exploration-Based Approach for Computationally Supported Design-by-Analogy
Engineering designers practice design-by-analogy (DbA) during concept generation to retrieve knowledge from external sources or memory as inspiration to solve design problems. DbA is a tool for innovation that involves retrieving analogies from a source domain and transferring the knowledge to a target domain. While DbA produces innovative results, designers often come up with analogies by themselves or through serendipitous, random encounters. Computational support systems for searching analogies have been developed to facilitate DbA in systematic design practice. However, many systems have focused on a query-based approach, in which a designer inputs a keyword or a query function and is returned a set of algorithmically determined stimuli. In this presentation, a new analogical retrieval process that leverages a visual interaction technique is introduced. It enables designers to explore a space of analogies, rather than be constrained by what’s retrieved by a query-based algorithm. With an exploration-based DbA tool, designers have the potential to uncover more useful and unexpected inspiration for innovative design solutions.
Pynapple: a light-weight python package for neural data analysis - webinar + tutorial
In systems neuroscience, datasets are multimodal and include data-streams of various origins: multichannel electrophysiology, 1- or 2-p calcium imaging, behavior, etc. Often, the exact nature of data streams are unique to each lab, if not each project. Analyzing these datasets in an efficient and open way is crucial for collaboration and reproducibility. In this combined webinar and tutorial, Adrien Peyrache and Guillaume Viejo will present Pynapple, a Python-based data analysis pipeline for systems neuroscience. Designed for flexibility and versatility, Pynapple allows users to perform cross-modal neural data analysis via a common programming approach which facilitates easy sharing of both analysis code and data.
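As a flavour of the programming approach described above, here is a minimal usage sketch built on pynapple's core objects (Ts, Tsd, TsGroup, IntervalSet); the data are random placeholders and the exact signatures should be checked against the current pynapple documentation.

```python
# Minimal usage sketch of pynapple's core objects, assuming the Ts/Tsd/
# TsGroup/IntervalSet API described in the package documentation (check
# current docs for exact signatures). Data here are random placeholders.
import numpy as np
import pynapple as nap

rng = np.random.default_rng(8)

# Two fake spike trains and one behavioral variable on a common time axis.
spikes = nap.TsGroup({
    0: nap.Ts(t=np.sort(rng.uniform(0, 100, 500))),
    1: nap.Ts(t=np.sort(rng.uniform(0, 100, 300))),
})
position = nap.Tsd(t=np.arange(0, 100, 0.1), d=rng.normal(size=1000))

# Restrict both data streams to the same epoch of interest.
epoch = nap.IntervalSet(start=20, end=60)
print(spikes.restrict(epoch))
print(position.restrict(epoch))
```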
Where do problem spaces come from? On metaphors and representational change
The challenges of problem solving do not lie exclusively in how to perform heuristic search; they begin with how we understand a given task: how we cognitively represent the task domain and its components can determine how quickly someone is able to progress towards a solution, whether advanced strategies can be discovered, or even whether a solution is found at all. While this challenge of constructing and changing representations was acknowledged early on in problem-solving research, it has for the most part been sidestepped by focussing on simple, well-defined problems whose representation is almost fully determined by the task instructions. Thus, the established theory of problem solving as heuristic search in problem spaces has little to say about it. In this talk, I will present a study designed to explore this issue, in which finding and refining an adequate problem representation is the main challenge. In this exploratory case study, we investigated how pairs of participants acquaint themselves with a complex spatial transformation task in the domain of iterated mental paper folding over the course of several days. Participants have to understand the geometry of the edges that arises when a sheet of paper is repeatedly folded mentally in alternating directions, without the use of external aids. Faced with the difficulty of handling increasingly complex folds in light of limited cognitive capacity, participants are forced to look for ways to represent folds more efficiently. In a qualitative analysis of video recordings of the participants' behaviour, the development of their conceptualisation of the task domain was traced over the course of the study, focussing especially on their use of gesture and the spontaneous occurrence and use of metaphors in the construction of new representations. Based on these observations, I will conclude the talk with several theoretical speculations regarding the roles of metaphor and cognitive capacity in representational change.
Canonical neural networks perform active inference
The free-energy principle and active inference have received significant attention in the fields of neuroscience and machine learning. However, it remains to be established whether active inference is an apt explanation for any given neural network that actively exchanges with its environment. To address this issue, we show that a class of canonical neural networks of rate-coding models implicitly performs variational Bayesian inference under a well-known form of partially observed Markov decision process model (Isomura, Shimazaki, Friston, Commun Biol, 2022). Based on the proposed theory, we demonstrate that canonical neural networks—featuring delayed modulation of Hebbian plasticity—can perform planning and adaptive behavioural control in a Bayes-optimal manner, through postdiction of their previous decisions. This scheme enables us to estimate the implicit priors under which the agent's neural network operates and to identify a specific form of the generative model. The proposed equivalence is crucial for rendering brain activity explainable, to better understand basic neuropsychology and psychiatric disorders. Moreover, this notion can dramatically reduce the complexity of designing self-learning neuromorphic hardware to perform various types of tasks.
PET imaging in brain diseases
Talk 1. PET-based biomarkers of treatment efficacy in temporal lobe epilepsy. A critical aspect of drug development involves identifying robust biomarkers of treatment response for use as surrogate endpoints in clinical trials. However, these biomarkers also have the capacity to inform mechanisms of disease pathogenesis and therapeutic efficacy. In this webinar, Dr Bianca Jupp will report on a series of studies using the GABAA PET ligand [18F]-Flumazenil to establish biomarkers of treatment response to a novel therapeutic for temporal lobe epilepsy, identifying affinity at this receptor as a key predictor of treatment outcome. Dr Bianca Jupp is a Research Fellow in the Department of Neuroscience, Monash University and Lead PET/CT Scientist at the Alfred Research Alliance–Monash Biomedical Imaging facility. Her research focuses on neuroimaging and its capacity to inform the neurobiology underlying neurological and neuropsychiatric disorders.
Talk 2. The development of a PET radiotracer for reparative microglia. Imaging of neuroinflammation is currently hindered by the technical limitations associated with TSPO imaging. In this webinar, Dr Lucy Vivash will discuss the development of PET radiotracers that specifically image reparative microglia by targeting the receptor kinase MerTK. This includes medicinal chemistry design and testing, radiochemistry, and in vitro and in vivo testing of lead tracers. Dr Lucy Vivash is a Research Fellow in the Department of Neuroscience, Monash University. Her research focuses on the preclinical development and clinical translation of novel PET radiotracers for the imaging of neurodegenerative diseases.
How communication networks promote cross-cultural similarities: The case of category formation
Individuals vary widely in how they categorize novel phenomena. This individual variation has led canonical theories in cognitive and social science to suggest that communication in large social networks leads populations to construct divergent category systems. Yet, anthropological data indicates that large, independent societies consistently arrive at similar categories across a range of topics. How is it possible for diverse populations, consisting of individuals with significant variation in how they view the world, to independently construct similar categories? Through a series of online experiments, I show how large communication networks within cultures can promote the formation of similar categories across cultures. For this investigation, I designed an online “Grouping Game” to observe how people construct categories in both small and large populations when tasked with grouping together the same novel and ambiguous images. I replicated this design for English-speaking subjects in the U.S. and Mandarin-speaking subjects in China. In both cultures, solitary individuals and small social groups produced highly divergent category systems. Yet, large social groups separately and consistently arrived at highly similar categories both within and across cultures. These findings are accurately predicted by a simple mathematical model of critical mass dynamics. Altogether, I show how large communication networks can filter lexical diversity among individuals to produce replicable society-level patterns, yielding unexpected implications for cultural evolution. In particular, I discuss how participants in both cultures readily harnessed analogies when categorizing novel stimuli, and I examine the role of communication networks in promoting cross-cultural similarities in analogy-making as the key engine of category formation.
Co-Design of Analog Neuromorphic Systems and Cortical Motifs with Local Dendritic Learning Rules
Bernstein Conference 2024
Set-based Fitness Comparisons - Could Neuroscientists Benefit from Engineering Studies on Conceptual Design?
Bernstein Conference 2024
Variability in Self-Organizing Networks of Neurons: Between Chance and Design
Bernstein Conference 2024
What should a neuron aim for? Designing local objective functions based on information theory
Bernstein Conference 2024
Exploiting color space geometry for visual stimulus design across animals
COSYNE 2022
Optimization of error distributions as a design principle for neural representations
COSYNE 2022
Design principles for memory storage and recall in noisy intracellular networks
COSYNE 2025
Design and development of nanoliposomes based on soy lecithin for the delivery of molecules to the CNS as a strategy for the treatment of neurodegenerative diseases
FENS Forum 2024
Designing a third place in urban underground space
FENS Forum 2024
Designing a transmodal technology to feel sound through touch: The multichannel vibrotactile gloves
FENS Forum 2024
Design, synthesis and pharmacological evaluation of pyrazole/tacrine derivatives as potential acetylcholinesterase inhibitors
FENS Forum 2024
Differences in signaling preferences of designer receptors exclusively activated by designer drugs (DREADDs) compared to wild-type receptors
FENS Forum 2024
From action observation to brain-to-brain social interaction: An EEG µ rhythms scalable design
FENS Forum 2024
Investigating design parameters for improved tissue integration in brain-computer-interface technology
FENS Forum 2024
Microglial depletion strategies as a promising tool to design potential therapies for Alzheimer’s disease
FENS Forum 2024
Modulation of brain activity by environmental design: A study using EEG and virtual reality
FENS Forum 2024
Recording cortico-hypothalamic projection neuron activity in mice living in an automated naturalistic open-design maze
FENS Forum 2024