Topic spotlight
Topic · World Wide

numb

Discover seminars, jobs, and research tagged with "numb" across World Wide.
66 curated items · 60 Seminars · 6 ePosters
Updated 12 months ago
66 results
Seminar · Neuroscience

Gene regulatory mechanisms of neocortex development and evolution

Mareike Albert
Center for Regenerative Therapies, Dresden University of Technology, Germany
Dec 11, 2024

The neocortex is considered to be the seat of higher cognitive functions in humans. During its evolution, most notably in humans, the neocortex has undergone considerable expansion, which is reflected by an increase in the number of neurons. Neocortical neurons are generated during development by neural stem and progenitor cells. Epigenetic mechanisms play a pivotal role in orchestrating the behaviour of stem cells during development. We are interested in the mechanisms that regulate gene expression in neural stem cells, which have implications for our understanding of neocortex development and evolution, neural stem cell regulation and neurodevelopmental disorders.

Seminar · Neuroscience

Brain-Wide Compositionality and Learning Dynamics in Biological Agents

Kanaka Rajan
Harvard Medical School
Nov 12, 2024

Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.

Seminar · Psychology

How to tell if someone is hiding something from you? An overview of the scientific basis of deception and concealed information detection

Kristina Suchotzki
Philipps-Universität Marburg
May 26, 2024

In my talk I will give an overview of recent research on deception and concealed information detection. I will start with a short introduction on the problems and shortcomings of traditional deception detection tools and why those still prevail in many recent approaches (e.g., in AI-based deception detection). I want to argue for the importance of more fundamental deception research and give some examples of insights gained from it. In the second part of the talk, I will introduce the Concealed Information Test (CIT), a promising paradigm for research and applied contexts to investigate whether someone actually recognizes information that they do not want to reveal. The CIT is based on solid scientific theory and produces large effect sizes in laboratory studies with a number of different measures (e.g., behavioral, psychophysiological, and neural measures). I will highlight some challenges a forensic application of the CIT still faces and how scientific research could assist in overcoming them.

Seminar · Neuroscience

Modeling human brain development and disease: the role of primary cilia

Christina Kyrousi
Medical School, National and Kapodistrian University of Athens, Athens, Greece
Apr 23, 2024

Neurodevelopmental disorders (NDDs) impose a global burden, affecting an increasing number of individuals. While some causative genes have been identified, understanding of the human-specific mechanisms involved in these disorders remains limited. Traditional gene-driven approaches for modeling brain diseases have failed to capture the diverse and convergent mechanisms at play. Centrosomes and cilia act as intermediaries between environmental and intrinsic signals, regulating cellular behavior. Mutations or dosage variations disrupting their function have been linked to brain formation deficits, highlighting their importance, yet their precise contributions remain largely unknown. Hence, we aim to investigate whether the centrosome/cilia axis is crucial for brain development and serves as a hub for human-specific mechanisms disrupted in NDDs. To address this, we first demonstrated species-specific and cell-type-specific differences in cilia gene expression during mouse and human corticogenesis. Then, to dissect their role, we induced their ectopic overexpression or silencing in the developing mouse cortex or in human brain organoids. Our findings suggest that manipulating cilia genes alters both the number and the position of NPCs and neurons in the developing cortex. Interestingly, primary cilium morphology is disrupted, with changes in cilia length, orientation and number that lead to disruption of the apical belt and altered delamination profiles during development. Our results give insight into the role of primary cilia in human cortical development and address fundamental questions regarding the diversity and convergence of gene function in development and disease manifestation. This work has the potential to uncover novel pharmacological targets, facilitate personalized medicine, and improve the lives of individuals affected by NDDs through targeted cilia-based therapies.

Seminar · Neuroscience

How are the epileptogenesis clocks ticking?

Cristina Reschke
RCSI
Apr 9, 2024

The epileptogenesis process is associated with large-scale changes in gene expression, which contribute to the remodelling of brain networks, permanently altering excitability. About 80% of protein-coding genes are under the influence of circadian rhythms. These are 24-hour endogenous rhythms that determine a large number of daily changes in physiology and behavior in our bodies. In the brain, the master clock regulates a large number of pathways that are important during epileptogenesis and established epilepsy, such as neurotransmission, synaptic homeostasis, inflammation, and the blood-brain barrier, among others. In-depth mapping of the molecular basis of circadian timing in the brain is key for a complete understanding of the cellular and molecular events connecting genes to phenotypes.

Seminar · Neuroscience · Recording

Distinctive features of experiential time: Duration, speed and event density

Marianna Lamprou Kokolaki
Université Paris-Saclay
Mar 26, 2024

William James’s use of “time in passing” and “stream of thoughts” may be two sides of the same coin that emerge from the brain segmenting the continuous flow of information into discrete events. Starting from that idea, we investigated how the content of a realistic scene impacts two distinct temporal experiences: felt duration and the speed of the passage of time. I will present the results of an online study in which we used a well-established experimental paradigm, the temporal bisection task, which we extended to passage-of-time judgments. In this study, 164 participants classified seconds-long videos of naturalistic scenes as short or long (duration), or slow or fast (passage of time). Videos contained a varying number and type of events. We found that a large number of events lengthened subjective duration and accelerated the felt passage of time. Surprisingly, participants were also faster at estimating their felt passage of time compared to duration. The perception of duration heavily depended on objective duration, whereas the felt passage of time scaled with the rate of change. Altogether, our results support a possible dissociation of the mechanisms underlying the two temporal experiences.

Seminar · Neuroscience

Maintaining Plasticity in Neural Networks

Clare Lyle
DeepMind
Mar 12, 2024

Nonstationarity presents a variety of challenges for machine learning systems. One surprising pathology which can arise in nonstationary learning problems is plasticity loss, whereby making progress on new learning objectives becomes more difficult as training progresses. Networks which are unable to adapt in response to changes in their environment experience plateaus or even declines in performance in highly non-stationary domains such as reinforcement learning, where the learner must quickly adapt to new information even after hundreds of millions of optimization steps. The loss of plasticity manifests in a cluster of related empirical phenomena which have been identified by a number of recent works, including the primacy bias, implicit under-parameterization, rank collapse, and capacity loss. While this phenomenon is widely observed, it is still not fully understood. This talk will present exciting recent results which shed light on the mechanisms driving the loss of plasticity in a variety of learning problems and survey methods to maintain network plasticity in non-stationary tasks, with a particular focus on deep reinforcement learning.

Seminar · Neuroscience · Recording

The Role of Spatial and Contextual Relations of real world objects in Interval Timing

Rania Tachmatzidou
Panteion University
Jan 28, 2024

In the real world, object arrangement follows a number of rules. Some of the rules pertain to the spatial relations between objects and scenes (i.e., syntactic rules) and others to the contextual relations (i.e., semantic rules). Research has shown that violation of semantic rules influences interval timing, with the duration of scenes containing such violations being overestimated compared to scenes with no violations. However, no study has yet investigated whether both semantic and syntactic violations affect timing in the same way. Furthermore, it is unclear whether the effect of scene violations on timing is due to attentional or other cognitive accounts. Using an oddball paradigm and real-world scenes with or without semantic and syntactic violations, we conducted two experiments on whether time dilation would be obtained in the presence of any type of scene violation and on the role of attention in any such effect. Our results from Experiment 1 showed that time dilation indeed occurred in the presence of syntactic violations, while time compression was observed for semantic violations. In Experiment 2, we further investigated whether these estimations were driven by attentional accounts by utilizing a contrast manipulation of the target objects. The results showed that increased contrast led to duration overestimation for both semantic and syntactic oddballs. Together, our results indicate that scene violations differentially affect timing due to differences in violation processing and, moreover, their effect on timing seems to be sensitive to attentional manipulations such as target contrast.

Seminar · Neuroscience

Degrees of Consciousness

Andrew Y. Lee
University of Toronto
Dec 18, 2023

In the science of consciousness, it’s often assumed that some creatures (or mental states) are more conscious than others. But a number of philosophers have argued that the notion of degrees of consciousness is conceptually confused. I'll (1) argue that the most prominent objections to degrees of consciousness are unsustainable, and (2) develop an analysis of degrees of consciousness. On my view, whether consciousness comes in degrees ultimately depends on which theory of consciousness turns out to be correct. But I'll also argue that most theories of consciousness entail that consciousness comes in degrees.

Seminar · Neuroscience

Astrocyte reprogramming / activation and brain homeostasis

Dimitra Thomaidou
Department of Neurobiology, Hellenic Pasteur Institute, Athens, Greece
Dec 12, 2023

Astrocytes are multifunctional glial cells, implicated in neurogenesis and synaptogenesis, supporting and fine-tuning neuronal activity and maintaining brain homeostasis by controlling blood-brain barrier permeability. In recent years, a number of studies have shown that astrocytes can also be converted into neurons if they are forced to express neurogenic transcription factors or miRNAs. Direct astrocytic reprogramming to induced neurons (iNs) is a powerful approach for manipulating cell fate, as it takes advantage of the intrinsic neural stem cell (NSC) potential of brain-resident reactive astrocytes. Accordingly, astrocytic cell fate conversion to iNs has been well established in vitro and in vivo using combinations of transcription factors (TFs) or chemical cocktails. Changes in the expression of lineage-specific TFs are accompanied by changes in the expression of miRNAs, which post-transcriptionally modulate large numbers of neurogenesis-promoting factors and have therefore been introduced, supplementary or alternatively to TFs, to instruct direct neuronal reprogramming. The neurogenic miRNA miR-124 has been employed in direct reprogramming protocols supplementary to neurogenic TFs and other miRNAs to enhance direct neurogenic conversion by suppressing multiple non-neuronal targets. In our group, we aimed to investigate whether miR-124 is sufficient on its own to drive direct reprogramming of astrocytes to iNs both in vitro and in vivo, and to elucidate its independent mechanism of reprogramming action. Our in vitro data indicate that miR-124 is a potent driver of the reprogramming switch of astrocytes towards an immature neuronal fate. Elucidation of the molecular pathways triggered by miR-124 through RNA-seq analysis revealed that miR-124 is sufficient to instruct reprogramming of cortical astrocytes to immature iNs in vitro by down-regulating genes with important regulatory roles in astrocytic function. Among these, the RNA-binding protein Zfp36l1, implicated in ARE-mediated mRNA decay, was found to be a direct target of miR-124; Zfp36l1 in turn targets neuronal-specific proteins participating in cortical development, which become de-repressed in miR-124-iNs. Furthermore, miR-124 is able to guide direct neuronal reprogramming of reactive astrocytes to iNs of cortical identity following cortical trauma, a novel finding confirming its robust reprogramming action within the cortical microenvironment under neuroinflammatory conditions. In parallel to their reprogramming properties, astrocytes also participate in the maintenance of blood-brain barrier integrity, which ensures the physiological functioning of the central nervous system and is affected in several neurodegenerative diseases, contributing to their pathology. To study in real time the dynamic physical interactions of astrocytes with brain vasculature under homeostatic and pathological conditions, we performed two-photon intravital brain imaging in a mouse model of systemic neuroinflammation, known to trigger astrogliosis and microgliosis and to evoke changes in astrocytic contact with brain vasculature. Our in vivo findings indicate that following neuroinflammation the endfeet of activated perivascular astrocytes lose their close proximity and physiological cross-talk with the vasculature; however, this is compensated, at least in part, by the cross-talk of astrocytes with activated microglia, safeguarding blood vessel coverage and the maintenance of blood-brain barrier integrity.

Seminar · Neuroscience

Dyslexias in words and numbers

Naama Friedmann
Tel Aviv University
Nov 13, 2023
Seminar · Neuroscience · Recording

Brain network communication: concepts, models and applications

Caio Seguin
Indiana University
Aug 23, 2023

Understanding communication and information processing in nervous systems is a central goal of neuroscience. Over the past two decades, advances in connectomics and network neuroscience have opened new avenues for investigating polysynaptic communication in complex brain networks. Recent work has brought into question the mainstay assumption that connectome signalling occurs exclusively via shortest paths, resulting in a sprawling constellation of alternative network communication models. This Review surveys the latest developments in models of brain network communication. We begin by drawing a conceptual link between the mathematics of graph theory and biological aspects of neural signalling such as transmission delays and metabolic cost. We organize key network communication models and measures into a taxonomy, aimed at helping researchers navigate the growing number of concepts and methods in the literature. The taxonomy highlights the pros, cons and interpretations of different conceptualizations of connectome signalling. We showcase the utility of network communication models as a flexible, interpretable and tractable framework to study brain function by reviewing prominent applications in basic, cognitive and clinical neurosciences. Finally, we provide recommendations to guide the future development, application and validation of network communication models.
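
As a concrete illustration of the baseline assumption the review questions, here is a minimal sketch (my own, not taken from the paper) of shortest-path communication on a weighted connectome: global efficiency computed with networkx, where connection weights are converted to path lengths. The toy random graph and the weight-to-length rule are assumptions for demonstration only.

```python
import itertools
import networkx as nx

def global_efficiency_weighted(G, weight="weight"):
    """Mean inverse shortest-path length over node pairs, with edge length = 1 / weight."""
    H = G.copy()
    for u, v, data in H.edges(data=True):
        data["length"] = 1.0 / data[weight]        # stronger connections act as shorter paths
    n = H.number_of_nodes()
    total = 0.0
    for i, j in itertools.combinations(H.nodes, 2):
        try:
            total += 1.0 / nx.shortest_path_length(H, i, j, weight="length")
        except nx.NetworkXNoPath:
            pass                                   # unreachable pairs contribute zero
    return 2.0 * total / (n * (n - 1))

# Toy stand-in for a connectome: a random graph with uniform connection weights.
G = nx.gnp_random_graph(30, 0.2, seed=1)
nx.set_edge_attributes(G, 0.5, "weight")
print(round(global_efficiency_weighted(G), 3))
```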

Seminar · Neuroscience

Why spikes?

Romain Brette
Institut de la Vision
May 30, 2023

On a fast timescale, neurons mostly interact by short, stereotypical electrical impulses or spikes. Why? A common answer is that spikes are useful for long-distance communication, to avoid alterations while traveling along axons. But as it turns out, spikes are seen in many places outside neurons: in the heart, in muscles, in plants and even in protists. From these examples, it appears that action potentials mediate some form of coordinated action, a timed event. From this perspective, spikes should not be seen simply as noisy implementations of underlying continuous signals (a sort of analog-to-digital conversion), but rather as events or actions. I will give a number of examples of functional spike-based interactions in living systems.

Seminar · Neuroscience

The Geometry of Decision-Making

Iain Couzin
University of Konstanz, Germany
May 23, 2023

Running, swimming, or flying through the world, animals are constantly making decisions while on the move—decisions that allow them to choose where to eat, where to hide, and with whom to associate. Despite this, most studies have considered only the outcome of, and the time taken to make, decisions. Motion is, however, crucial in terms of how space is represented by organisms during spatial decision-making. Employing a range of new technologies, including automated tracking, computational reconstruction of sensory information, and immersive ‘holographic’ virtual reality (VR) for animals, in experiments with fruit flies, locusts and zebrafish (representing aerial, terrestrial and aquatic locomotion, respectively), I will demonstrate that this time-varying representation results in the emergence of new and fundamental geometric principles that considerably impact decision-making. Specifically, we find that the brain spontaneously reduces multi-choice decisions into a series of abrupt (‘critical’) binary decisions in space-time, a process that repeats until only one option—the one ultimately selected by the individual—remains. Due to the critical nature of these transitions (and the corresponding increase in ‘susceptibility’), even noisy brains are extremely sensitive to very small differences between remaining options (e.g., a very small difference in neuronal activity being in “favor” of one option) near these locations in space-time. This mechanism facilitates highly effective decision-making, and is shown to be robust both to the number of options available and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.

Seminar · Neuroscience · Recording

A sense without sensors: how non-temporal stimulus features influence the perception and the neural representation of time

Domenica Bueti
SISSA, Trieste (Italy)
Apr 18, 2023

Any sensory experience of the world, from the touch of a caress to the smile on our friend’s face, is embedded in time and is often associated with the perception of the flow of it. The perception of time is therefore a peculiar sensory experience built without dedicated sensors. How the perception of time and the content of a sensory experience interact to give rise to this unique percept is unclear. A few pieces of empirical evidence show the existence of this interaction; for example, the speed of a moving object or the number of items displayed on a computer screen can bias the perceived duration of those objects. However, to what extent the coding of time is embedded within the coding of the stimulus itself, is sustained by the activity of the same or distinct neural populations, and is subserved by similar or distinct neural mechanisms is far from clear. Addressing these puzzles is a way to gain insight into the mechanism(s) through which the brain represents the passage of time. In my talk I will present behavioral and neuroimaging studies showing how concurrent changes in visual stimulus duration, speed, visual contrast and numerosity shape and modulate brain and pupil responses and, in the case of numerosity and time, influence the topographic organization of these features along the cortical visual hierarchy.

Seminar · Neuroscience · Recording

Fragile minds in a scary world: trauma and post traumatic stress in very young children

Tim Dalgleish
MRC Cognition and Brain Sciences Unit, University of Cambridge
Mar 13, 2023

Post-traumatic stress disorder (PTSD) is a prevalent and disabling condition that affects large numbers of children and adolescents worldwide. Until recently, we have understood little about the nature of PTSD reactions in our youngest children (aged under 8 years old). This talk describes our work over the last 15 years with this very young age group. It covers why we need a markedly different PTSD diagnosis for very young children, data on prevalence under this new diagnostic algorithm, and the development of a psychological intervention and its evaluation in a clinical trial.

Seminar · Neuroscience · Recording

Cognitive supports for analogical reasoning in rational number understanding

Shuyuan Yu
Carleton University
Mar 2, 2023

In cognitive development, learning more than the input provides is a central challenge. This challenge is especially evident in learning the meaning of numbers. Integers – and the quantities they denote – are potentially infinite, as are the fractional values between every integer. Yet children’s experiences of numbers are necessarily finite. Analogy is a powerful learning mechanism that allows children to learn novel, abstract concepts from only limited input. However, retrieving the proper analogy requires cognitive supports. In this talk, I propose and examine number lines as a mathematical schema of the number system that facilitates both the development of rational number understanding and analogical reasoning. To examine these hypotheses, I will present a series of educational intervention studies with third- to fifth-graders. Results showed that a short, unsupervised intervention of spatial alignment between integers and fractions on number lines produced broad and durable gains in understanding fraction magnitudes. Additionally, training on conceptual knowledge of fractions – that fractions denote magnitude and can be placed on number lines – facilitates explicit analogical reasoning. Together, these studies indicate that analogies can play an important role in rational number learning with the help of number lines as schemas. These studies also shed light on helpful practices for STEM education curricula and instruction.

Seminar · Neuroscience · Recording

Silences, Spikes and Bursts: Three-Part Knot of the Neural Code

Richard Naud
University of Ottawa
Feb 28, 2023

When a neuron breaks silence, it can emit action potentials in a number of patterns. Some responses are so sudden and intense that electrophysiologists felt the need to single them out, labeling action potentials emitted at a particularly high frequency with a metonym – bursts. Is there more to bursts than a figure of speech? After all, sudden bouts of high-frequency firing are expected to occur whenever inputs surge. In this talk, I will discuss the implications of seeing the neural code as having three syllables: silences, spikes and bursts. In particular, I will describe recent theoretical and experimental results that implicate bursting in the implementation of top-down attention and the coordination of learning.

Seminar · Psychology

Automated generation of face stimuli: Alignment, features and face spaces

Carl Gaspar
Zayed University (UAE)
Jan 31, 2023

I describe a well-tested Python module that does automated alignment and warping of face images, and some advantages over existing solutions. An additional tool I’ve developed does automated extraction of facial features, which can be used in a number of interesting ways. I illustrate the value of wavelet-based features with a brief description of two recent studies: perceptual in-painting, and the robustness of the whole-part advantage across a large stimulus set. Finally, I discuss the suitability of various deep learning models for generating stimuli to study perceptual face spaces. I believe those interested in the forensic aspects of face perception may find this talk useful.
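
The speaker's module itself is not shown here, but as a rough sketch of the generic alignment step it describes, the following assumes eye landmarks are already available and uses a scikit-image similarity transform; the template coordinates, output size, and the random stand-in image are my assumptions, not part of the talk.

```python
import numpy as np
from skimage import transform

TEMPLATE_EYES = np.array([[60.0, 80.0], [140.0, 80.0]])   # canonical (x, y) eye positions

def align_face(image, eye_xy):
    """Warp `image` so its (left, right) eye coordinates land on TEMPLATE_EYES."""
    tform = transform.SimilarityTransform()
    tform.estimate(np.asarray(eye_xy, dtype=float), TEMPLATE_EYES)
    # warp() expects the output-to-input mapping, hence the inverse transform
    return transform.warp(image, tform.inverse, output_shape=(200, 200))

face = np.random.rand(240, 320)                           # stand-in for a grayscale face photo
aligned = align_face(face, [(130.0, 110.0), (190.0, 105.0)])
```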

Seminar · Neuroscience · Recording

Dynamics of cortical circuits: underlying mechanisms and computational implications

Alessandro Sanzeni
Bocconi University, Milano
Jan 24, 2023

A signature feature of cortical circuits is the irregularity of neuronal firing, which manifests itself in the high temporal variability of spiking and the broad distribution of rates. Theoretical works have shown that this feature emerges dynamically in network models if coupling between cells is strong, i.e. if the mean number of synapses per neuron K is large and synaptic efficacy is of order 1/√K. However, the degree to which these models capture the mechanisms underlying neuronal firing in cortical circuits is not fully understood. Results have been derived using neuron models with current-based synapses, i.e. neglecting the dependence of synaptic current on the membrane potential, and an understanding of how irregular firing emerges in models with conductance-based synapses is still lacking. Moreover, at odds with the nonlinear responses to multiple stimuli observed in cortex, network models with strongly coupled cells respond linearly to inputs. In this talk, I will discuss the emergence of irregular firing and nonlinear response in networks of leaky integrate-and-fire neurons. First, I will show that, when synapses are conductance-based, irregular firing emerges if synaptic efficacy is of order 1/log(K) and, unlike in current-based models, persists even under the large heterogeneity of connections which has been reported experimentally. I will then describe an analysis of neural responses as a function of coupling strength and show that, while a linear input-output relation is ubiquitous at strong coupling, nonlinear responses are prominent at moderate coupling. I will conclude by discussing experimental evidence of moderate coupling and loose balance in the mouse cortex.
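
To make the strong-coupling convention concrete, here is a small self-contained sketch (my own illustration, not the speaker's code): a single current-based leaky integrate-and-fire neuron driven by K excitatory and K inhibitory Poisson inputs with efficacies of order 1/√K, so the mean input largely cancels and spikes are triggered by fluctuations, yielding an ISI coefficient of variation near one. All parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 1000                         # synapses per neuron
J = 1.0 / np.sqrt(K)             # synaptic efficacy ~ 1/sqrt(K), in units of threshold
nu_e = nu_i = 20.0               # presynaptic excitatory / inhibitory rates (Hz)
tau, dt, T = 0.020, 1e-4, 50.0   # membrane time constant, time step, duration (s)

v, last_spike, isis = 0.0, 0.0, []
for step in range(int(T / dt)):
    ne = rng.poisson(K * nu_e * dt)        # excitatory input spikes in this bin
    ni = rng.poisson(K * nu_i * dt)        # inhibitory input spikes in this bin
    v += -v * dt / tau + J * (ne - ni)     # leaky integration, delta synapses, balanced E-I
    if v >= 1.0:                           # threshold crossing: spike and reset
        t = step * dt
        isis.append(t - last_spike)
        last_spike, v = t, 0.0

isis = np.array(isis)
print(f"rate ~ {len(isis) / T:.1f} Hz, CV(ISI) ~ {isis.std() / isis.mean():.2f}")
```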

Seminar · Psychology

Adaptation via innovation in the animal kingdom

Kata Horváth
Eötvös Loránd University & Lund University
Nov 23, 2022

Over the course of evolution, the human race has achieved a number of remarkable innovations that have enabled us to adapt to and benefit from the environment ever more effectively. The ongoing environmental threats and health disasters of our world have now made it crucial to understand the cognitive mechanisms behind innovative behaviours. In my talk, I will present two research projects with examples of innovation-based behavioural adaptation from the taxonomic kingdom of animals, serving as a comparative psychological model for mapping the evolution of innovation. The first project focuses on the challenge of overcoming physical disability. In this study, we investigated an injured kea (Nestor notabilis) that exhibits efficient, intentional, and innovative tool-use behaviour to compensate for its disability, providing evidence for innovation-based adaptation to a physical disability in a non-human species. The second project focuses on the evolution of fire use from a cognitive perspective. Fire has been one of the most dominant ecological forces in human evolution; however, it is still unknown what capabilities and environmental factors could have led to the emergence of fire use. In the core study of this project, we investigated a captive population of Japanese macaques (Macaca fuscata) that has been regularly exposed to campfires during the cold winter months for over 60 years. Our results suggest that macaques are able to take advantage of the positive effects of fire while avoiding the dangers of flames and hot ashes, and exhibit calm behaviour around the bonfire. In addition, I will present a research proposal targeting the foraging behaviour of predatory birds in parts of Australia frequently affected by bushfires. Anecdotal reports suggest that some birds use burning sticks to spread the flames, a behaviour that has not yet been scientifically observed and evaluated. In summary, the two projects explore innovative behaviours across three different species groups, three different habitats, and three different ecological drivers, providing insights into the cognitive and behavioural mechanisms of adaptation through innovation.

Seminar · Neuroscience · Recording

Training Dynamic Spiking Neural Network via Forward Propagation Through Time

B. Yin
CWI
Nov 9, 2022

With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance competitive with standard recurrent neural networks. Still, these learning algorithms are limited to small networks of simple spiking neurons and modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models, and are incompatible with online learning. Taking inspiration from the concept of Liquid Time-Constants (LTCs), we introduce a novel class of spiking neurons, the Liquid Time-Constant Spiking Neuron (LTC-SN), resulting in functionality similar to the gating operation in LSTMs. We integrate these neurons in SNNs that are trained with FPTT and demonstrate that LTC-SNNs trained in this way outperform various SNNs trained with BPTT on long sequences while enabling online learning and drastically reducing memory complexity. We show this for several classical benchmarks that can easily be varied in sequence length, like the Add Task and the DVS-gesture benchmark. We also show how FPTT-trained LTC-SNNs can be applied to large convolutional SNNs, where we demonstrate a new state of the art for online learning in SNNs on a number of standard benchmarks (S-MNIST, R-MNIST, DVS-GESTURE), and also show that large feedforward SNNs can be trained successfully in an online manner to performance near (Fashion-MNIST, DVS-CIFAR10) or exceeding (PS-MNIST, R-MNIST) the state of the art obtained with offline BPTT. Finally, the training and memory efficiency of FPTT enables us to directly train SNNs in an end-to-end manner at network sizes and complexity that were previously infeasible: we demonstrate this by training, in an end-to-end fashion, the first deep and performant spiking neural network for object localization and recognition. Taken together, our contributions enable, for the first time, training large-scale, complex spiking neural network architectures online and on long temporal sequences.

Seminar · Neuroscience · Recording

Nonlinear computations in spiking neural networks through multiplicative synapses

M. Nardin
IST Austria
Nov 8, 2022

The brain efficiently performs nonlinear computations through its intricate networks of spiking neurons, but how this is done remains elusive. While recurrent spiking networks implementing linear computations can be directly derived and easily understood (e.g., in the spike coding network (SCN) framework), the connectivity required for nonlinear computations can be harder to interpret, as they require additional non-linearities (e.g., dendritic or synaptic) weighted through supervised training. Here we extend the SCN framework to directly implement any polynomial dynamical system. This results in networks requiring multiplicative synapses, which we term the multiplicative spike coding network (mSCN). We demonstrate how the required connectivity for several nonlinear dynamical systems can be directly derived and implemented in mSCNs, without training. We also show how to precisely implement higher-order polynomials with coupled networks that use only pair-wise multiplicative synapses, and we provide expected numbers of connections for each synapse type. Overall, our work provides an alternative method for implementing nonlinear computations in spiking neural networks, while keeping all the attractive features of standard SCNs such as robustness, irregular and sparse firing, and interpretable connectivity. Finally, we discuss the biological plausibility of mSCNs, and how the high accuracy and robustness of the approach may be of interest for neuromorphic computing.

Seminar · Neuroscience

Signal in the Noise: models of inter-trial and inter-subject neural variability

Alex Williams
NYU/Flatiron
Nov 3, 2022

The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
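
For context, the baseline the talk argues is incomplete looks roughly like the following sketch (mine, with toy random data) of vanilla representational similarity analysis across two subjects: trial-averaging before computing the dissimilarity matrices is exactly the step that discards the trial-by-trial noise structure discussed above.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """responses: (conditions, trials, neurons) -> condition-pair dissimilarity vector."""
    mean_resp = responses.mean(axis=1)         # trial averaging discards trial-to-trial noise
    return pdist(mean_resp, metric="correlation")

rng = np.random.default_rng(0)
subj_a = rng.normal(size=(8, 40, 100))         # 8 conditions, 40 trials, 100 neurons
subj_b = rng.normal(size=(8, 40, 120))         # a different neuron count is fine for RSA
rho, _ = spearmanr(rdm(subj_a), rdm(subj_b))
print(f"across-subject RDM similarity: {rho:.2f}")
```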

Seminar · Neuroscience

Lifelong Learning AI via neuro inspired solutions

Hava Siegelmann
University of Massachusetts Amherst
Oct 26, 2022

AI embedded in real systems, such as in satellites, robots and other autonomous devices, must make fast, safe decisions even when the environment changes, or under limitations on the available power; to do so, such systems must be adaptive in real time. To date, edge computing has no real adaptivity – rather the AI must be trained in advance, typically on a large dataset with much computational power needed; once fielded, the AI is frozen: It is unable to use its experience to operate if the environment proves outside its training or to improve its expertise; and worse, since datasets cannot cover all possible real-world situations, systems with such frozen intelligent control are likely to fail. Lifelong Learning is the cutting edge of artificial intelligence - encompassing computational methods that allow systems to learn in runtime and incorporate learning for application in new, unanticipated situations. Until recently, this sort of computation has been found exclusively in nature; thus, Lifelong Learning looks to nature, and in particular neuroscience, for its underlying principles and mechanisms and then translates them to this new technology. Our presentation will introduce a number of state-of-the-art approaches to achieve AI adaptive learning, including from DARPA’s L2M program and subsequent developments. Many environments are affected by temporal changes, such as the time of day, week, season, etc. A way to create adaptive systems which are both small and robust is by making them aware of time and able to comprehend temporal patterns in the environment. We will describe our current research in temporal AI, while also considering power constraints.

Seminar · Neuroscience · Recording

The multimodal number sense: spanning space, time, sensory modality, and action

David Burr
University of Florence
Oct 19, 2022

Humans and other animals can rapidly estimate the number of items in a scene, flashes or tones in a sequence, and motor actions. Adaptation techniques provide clear evidence in humans for the existence of specialized numerosity mechanisms that make up the number sense. This sense of number is truly general, encoding the numerosity of both spatial arrays and sequential sets, in vision and audition, and interacting strongly with action. The adaptation (cross-sensory and cross-format) acts on sensory mechanisms rather than decisional processes, pointing to a truly general sense.

Seminar · Neuroscience · Recording

Learning Relational Rules from Rewards

Guillermo Puebla
University of Bristol
Oct 12, 2022

Humans perceive the world in terms of objects and relations between them. In fact, for any given pair of objects, there is a myriad of relations that apply to them. How does the cognitive system learn which relations are useful to characterize the task at hand? And how can it use these representations to build a relational policy to interact effectively with the environment? In this paper we propose that this problem can be understood through the lens of a sub-field of symbolic machine learning called relational reinforcement learning (RRL). To demonstrate the potential of our approach, we build a simple model of relational policy learning based on a function approximator developed in RRL. We trained and tested our model in three Atari games that required considering an increasing number of potential relations: Breakout, Pong and Demon Attack. In each game, our model was able to select adequate relational representations and build a relational policy incrementally. We discuss the relationship between our model and models of relational and analogical reasoning, as well as its limitations and future directions of research.

Seminar · Neuroscience · Recording

Designing the BEARS (Both Ears) Virtual Reality Training Package to Improve Spatial Hearing in Young People with Bilateral Cochlear Implant

Deborah Vickers
Clinical Neurosciences
Oct 10, 2022

Results: The main areas modified on the basis of participatory feedback were the variety of immersive scenarios, to cover a range of ages and interests; the number of levels of complexity, to ensure small improvements were measured; the feedback and reward schemes, to ensure positive reinforcement; and specific provision for participants with balance issues, who had difficulties when using head-mounted displays. The effectiveness of the finalised BEARS suite will be evaluated in a large-scale clinical trial. We have also added login options for other members of the family and, based on patient feedback, improved the accompanying reward schemes. Conclusions: Through participatory design we have developed a training package (BEARS) for young people with bilateral cochlear implants. The training games are appropriate for use by the study population and should ultimately lead to patients taking control of their own management, reducing the reliance upon outpatient-based rehabilitation programmes. Virtual reality training provides a more relevant and engaging approach to rehabilitation for young people.

Seminar · Neuroscience · Recording

Analogy and Spatial Cognition: How and Why they matter for STEM learning

David Uttal
Northwestern University
Sep 21, 2022

"Space is the universal donor for relations" (Gentner, 2014). This quote is the foundation of my talk. I will explore how and why visual representations and analogies are related. I will also explore how considering the relation between analogy and spatial reasoning can shed light on why and how spatial thinking is correlated with learning in STEM fields. For example, I will consider children’s number sense and learning of the number line from the perspective of analogical reasoning.

Seminar · Neuroscience · Recording

A model of colour appearance based on efficient coding of natural images

Jolyon Troscianko
University of Exeter
Jul 17, 2022

An object’s colour, brightness and pattern are all influenced by its surroundings, and a number of visual phenomena and “illusions” have been discovered that highlight these often dramatic effects. Explanations for these phenomena range from low-level neural mechanisms to high-level processes that incorporate contextual information or prior knowledge. Importantly, few of these phenomena can currently be accounted for when measuring an object’s perceived colour. Here we ask to what extent colour appearance is predicted by a model based on the principle of coding efficiency. The model assumes that the image is encoded by noisy spatio-chromatic filters at one octave separations, which are either circularly symmetrical or oriented. Each spatial band’s lower threshold is set by the contrast sensitivity function, and the dynamic range of the band is a fixed multiple of this threshold, above which the response saturates. Filter outputs are then reweighted to give equal power in each channel for natural images. We demonstrate that the model fits human behavioural performance in psychophysics experiments, and also primate retinal ganglion responses. Next we systematically test the model’s ability to qualitatively predict over 35 brightness and colour phenomena, with almost complete success. This implies that contrary to high-level processing explanations, much of colour appearance is potentially attributable to simple mechanisms evolved for efficient coding of natural images, and is a basis for modelling the vision of humans and other animals.
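
As a rough sketch of the ingredients described above (my own, not the authors' implementation), the following decomposes an image into octave-spaced bands and applies a per-band threshold with saturation a fixed multiple above it; the difference-of-Gaussians filters, the placeholder thresholds standing in for the contrast sensitivity function, and the random stand-in image are all assumptions.

```python
import numpy as np
from scipy import ndimage

def band_decompose(img, n_bands=5):
    """Octave-spaced band-pass responses via differences of Gaussian blurs."""
    sigmas = [2.0 ** k for k in range(n_bands + 1)]
    blurs = [ndimage.gaussian_filter(img, s) for s in sigmas]
    return [blurs[k] - blurs[k + 1] for k in range(n_bands)]

def saturating_response(band, threshold, dynamic_range=8.0):
    """Zero below threshold, linear in between, saturating at threshold * dynamic_range."""
    mag = np.abs(band)
    out = np.clip((mag - threshold) / (threshold * (dynamic_range - 1.0)), 0.0, 1.0)
    return np.sign(band) * out

img = np.random.rand(128, 128)                                # stand-in for one image channel
bands = band_decompose(img)
thresholds = [0.01 * (k + 1) for k in range(len(bands))]      # placeholder CSF-like thresholds
responses = [saturating_response(b, t) for b, t in zip(bands, thresholds)]
```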

Seminar · Psychology

Do we measure what we think we are measuring?

Dario Alejandro Gordillo Lopez
EPFL
Jul 13, 2022

Tests used in the empirical sciences are often (implicitly) assumed to be representative of a target mechanism, in the sense that similar tests should lead to similar results. In this talk, using resting-state electroencephalography (EEG) as an example, I will argue that this assumption does not necessarily hold true. Typically, EEG studies are conducted by selecting one analysis method thought to be representative of the research question asked. Using multiple methods, we extracted a variety of features from a single resting-state EEG dataset and conducted correlational and case-control analyses. We found that many EEG features revealed a significant effect in the case-control analyses. Similarly, EEG features correlated significantly with cognitive tasks. However, when we compared these features pairwise, we did not find strong correlations. A number of explanations for these results will be discussed.

Seminar · Neuroscience · Recording

Analogical retrieval across disparate task domains

Shir Dekel
The University of Sydney
Jul 13, 2022

Previous experiments have shown that a comparison of two written narratives highlights their shared relational structure, which in turn facilitates the retrieval of analogous narratives from the past (e.g., Gentner, Loewenstein, Thompson, & Forbus, 2009). However, analogical retrieval occurs across domains that appear more conceptually distant than merely different narratives, and the deepest analogies use matches in higher-order relational structure. The present study investigated whether comparison can facilitate analogical retrieval of higher-order relations across written narratives and abstract symbolic problems. Participants read stories which became retrieval targets after a delay, cued by either analogous stories or letter-strings. In Experiment 1 we replicated Gentner et al. who used narrative retrieval cues, and also found preliminary evidence for retrieval between narrative and symbolic domains. In Experiment 2 we found clear evidence that a comparison of analogous letter-string problems facilitated the retrieval of source stories with analogous higher-order relations. Experiment 3 replicated the retrieval results of Experiment 2 but with a longer delay between encoding and recall, and a greater number of distractor source stories. These experiments offer support for the schema induction account of analogical retrieval (Gentner et al., 2009) and show that the schemas abstracted from comparison of narratives can be transferred to non-semantic symbolic domains.

Seminar · Neuroscience

The role of astroglia-neuron interactions in generation and spread of seizures

Emre Yaksi
Kavli Institute for Systems Neuroscience, Norwegian University of Science and technology
Jul 5, 2022

Astroglia-neuron interactions are involved in multiple processes, regulating the development, excitability and connectivity of neural circuits. An accumulating body of evidence highlights a direct connection between aberrant astroglial genetics and physiology in various forms of epilepsy. Using zebrafish seizure models, we showed that neurons and astroglia follow different spatiotemporal dynamics during transitions from pre-ictal to ictal activity. We observed that during the pre-ictal period neurons exhibit local synchrony and low levels of activity, whereas astroglia exhibit global synchrony and high-level calcium signals that are anti-correlated with neural activity. In contrast, generalized seizures are marked by a massive release of astroglial glutamate as well as a drastic increase in astroglial and neuronal activity and synchrony across the entire brain. Knocking out astroglial glutamate transporters leads to recurrent spontaneous generalized seizures accompanied by massive astroglial glutamate release. We are currently using a combination of genetic and pharmacological approaches to perturb astroglial glutamate signalling and astroglial gap junctions to further investigate their role in the generation and spreading of epileptic seizures across the brain.

Seminar · Neuroscience · Recording

How Children Discover Mathematical Structure through Relational Mapping

Kelly Mix
University of Maryland
Jun 29, 2022

A core question in human development is how we bring meaning to conventional symbols. This question is deeply connected to understanding how children learn mathematics—a symbol system with unique vocabularies, syntaxes, and written forms. In this talk, I will present findings from a program of research focused on children’s acquisition of place value symbols (i.e., multidigit number meanings). The base-10 symbol system presents a variety of obstacles to children, particularly in English. Children who cannot overcome these obstacles face years of struggle as they progress through the mathematics curriculum of the upper elementary and middle school grades. Through a combination of longitudinal, cross-sectional, and pretest-training-posttest approaches, I aim to illuminate relational learning mechanisms by which children sometimes succeed in mastering the place value system, as well as instructional techniques we might use to help those who do not.

Seminar · Neuroscience

From Computation to Large-scale Neural Circuitry in Human Belief Updating

Tobias Donner
University Medical Center Hamburg-Eppendorf
Jun 28, 2022

Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state, and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., without loss) integration of sensory information along purely feedforward sensory-motor pathways. Yet, natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a tradeoff between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG), across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency-band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation. Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
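
As a concrete example of the non-linear accumulation the abstract refers to, here is a small sketch (my own, in the spirit of standard normative change-point models rather than necessarily the speaker's exact model): between samples, the accumulated log-likelihood ratio is pulled back toward zero by an amount set by an assumed hazard rate, trading stability against flexibility; as the hazard rate approaches zero the rule approaches perfect linear accumulation.

```python
import numpy as np

def accumulate(llrs, hazard):
    """Accumulate log-likelihood ratios with a hazard-rate (change-point) correction."""
    L, trace = 0.0, []
    k = (1.0 - hazard) / hazard
    for llr in llrs:
        prior = L + np.log(k + np.exp(-L)) - np.log(k + np.exp(L))   # leak toward zero
        L = prior + llr
        trace.append(L)
    return np.array(trace)

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.5, scale=1.0, size=50)   # toy evidence favouring one state
print(accumulate(samples, hazard=0.1)[-5:])         # belief saturates near log((1-H)/H)
```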

Seminar · Neuroscience · Recording

What the fly’s eye tells the fly’s brain…and beyond

Gwyneth Card
Janelia Research Campus, HHMI
May 31, 2022

Fly Escape Behaviors: Flexible and Modular
We have identified a set of escape maneuvers performed by a fly when confronted by a looming object. These escape responses can be divided into distinct behavioral modules. Some of the modules are very stereotyped, as when the fly rapidly extends its middle legs to jump off the ground. Other modules are more complex and require the fly to combine information about both the location of the threat and its own body posture. In response to an approaching object, a fly chooses some varying subset of these behaviors to perform. We would like to understand the neural process by which a fly chooses when to perform a given escape behavior. Beyond an appealing set of behaviors, this system has two other distinct advantages for probing neural circuitry. First, the fly will perform escape behaviors even when tethered such that its head is fixed and neural activity can be imaged or monitored using electrophysiology. Second, using Drosophila as an experimental animal makes available a rich suite of genetic tools to activate, silence, or image small numbers of cells potentially involved in the behaviors.

Neural Circuits for Escape
Until recently, visually induced escape responses have been considered a hardwired reflex in Drosophila. White-eyed flies with deficient visual pigment will perform a stereotyped middle-leg jump in response to a light-off stimulus, and this reflexive response is known to be coordinated by the well-studied giant fiber (GF) pathway. The GFs are a pair of electrically connected, large-diameter interneurons that traverse the cervical connective. A single GF spike results in a stereotyped pattern of muscle potentials on both sides of the body that extends the fly's middle pair of legs and starts the flight motor. Recently, we have found that a fly escaping a looming object displays many more behaviors than just leg extension. Most of these behaviors could not possibly be coordinated by the known anatomy of the GF pathway. Response to a looming threat thus appears to involve activation of numerous different neural pathways, which the fly may decide if and when to employ. Our goal is to identify the descending pathways involved in coordinating these escape behaviors as well as the central brain circuits, if any, that govern their activation.

Automated Single-Fly Screening
We have developed a new kind of high-throughput genetic screen to automatically capture fly escape sequences and quantify individual behaviors. We use this system to perform a high-throughput genetic silencing screen to identify cell types of interest. Automation permits analysis at the level of individual fly movements, while retaining the capacity to screen through thousands of GAL4 promoter lines. Single-fly behavioral analysis is essential to detect more subtle changes in behavior during the silencing screen, and thus to identify more specific components of the contributing circuits than previously possible when screening populations of flies. Our goal is to identify candidate neurons involved in coordination and choice of escape behaviors.

Measuring Neural Activity During Behavior
We use whole-cell patch-clamp electrophysiology to determine the functional roles of any identified candidate neurons. Flies perform escape behaviors even when their head and thorax are immobilized for physiological recording. This allows us to link a neuron's responses directly to an action.

Seminar · Neuroscience · Recording

The Standard Model of the Retina

Markus Meister
Caltech
May 24, 2022

The science of the retina has reached an interesting stage of completion. There exists now a consensus standard model of this neural system - at least in the minds of many researchers - that serves as a baseline against which to evaluate new claims. The standard model links phenomena from molecular biophysics, cell biology, neuroanatomy, synaptic physiology, circuit function, and visual psychophysics. It is further supported by a normative theory explaining what the purpose is of processing visual information this way. Most new reports of retinal phenomena fit squarely within the standard model, and major revisions seem increasingly unlikely. Given that our understanding of other brain circuits with comparable complexity is much more rudimentary, it is worth considering an example of what success looks like. In this talk I will summarize what I think are the ingredients that led to this mature understanding of the retina. Equally important, a number of practices and concepts that are currently en vogue in neuroscience were not needed or were indeed counterproductive. I look forward to debating how these lessons might extend to other areas of brain research.

Seminar · Physics of Life · Recording

The Equation of State of a Tissue

Vikrant Yadav
Yale University
May 22, 2022

An equation of state is something you hear about in introductory thermodynamics, for example the ideal gas equation. The ideal gas equation relates the pressure, volume, and number of particles of a gas to its temperature, uniquely defining its state. This description is possible in physics when the system under investigation is in equilibrium or near equilibrium. In biology, a tissue is modeled as a fluid composed of cells. These cells are constantly interacting with each other through mechanical and chemical signaling, driving them far from equilibrium. Can an equation of state exist for such a messy interacting system? In this talk, I show that the presence of strong cell-cell interaction in tissues gives rise to a novel non-equilibrium, size-dependent surface tension, something unheard of for classical fluids. This surface tension, in turn, modifies the packing of cells inside the tissue, generating a size-dependent density and pressure. Finally, we show that a combination of these non-equilibrium pressures and densities can yield an equation of state for biological tissues arbitrarily far from equilibrium. In the end, I discuss how this new paradigm of size-dependent biological properties gives rise to novel modes of cellular motion in tissues.
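
For reference, the equilibrium benchmark invoked above is the ideal gas law, which ties pressure P, volume V, particle number N, and temperature T into a single state relation (with k_B the Boltzmann constant):

    P V = N k_B T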

SeminarNeuroscience

In pursuit of a universal, biomimetic iBCI decoder: Exploring the manifold representations of action in the motor cortex

Lee Miller
Northwestern University
May 19, 2022

My group pioneered the development of a novel intracortical brain computer interface (iBCI) that decodes muscle activity (EMG) from signals recorded in the motor cortex of animals. We use these synthetic EMG signals to control Functional Electrical Stimulation (FES), which causes the muscles to contract and thereby restores rudimentary voluntary control of the paralyzed limb. In the past few years, there has been much interest in the fact that information from the millions of neurons active during movement can be reduced to a small number of “latent” signals in a low-dimensional manifold computed from the multi-neuron recordings. These signals can be used to provide a stable prediction of the animal’s behavior over periods of many months, and they may also provide the means to implement methods of transfer learning across individuals, an application that could be of particular importance for paralyzed human users. We have begun to examine the representation within this latent space of a broad range of behaviors, including well-learned, stereotyped movements in the lab, and more natural movements in the animal’s home cage, meant to better represent a person’s daily activities. We intend to develop an FES-based iBCI that will restore voluntary movement across a broad range of motor tasks without need for intermittent recalibration. However, the nonlinearities and context dependence within this low-dimensional manifold present significant challenges.

SeminarNeuroscienceRecording

Exploring mechanisms of human brain expansion in cerebral organoids

Madeline Lancaster
MRC Laboratory of Molecular Biology, Cambridge
May 16, 2022

The human brain sets us apart as a species, with its size being one of its most striking features. Brain size is largely determined during development, as vast numbers of neurons and supportive glia are generated. In an effort to better understand the events that determine the human brain’s cellular makeup, and its size, we use a human model system in a dish, called cerebral organoids. These 3D tissues are generated from pluripotent stem cells through neural differentiation within a supportive 3D microenvironment, yielding organoids with the same tissue architecture as the early human fetal brain. Such organoids are allowing us to tackle questions previously impossible with more traditional approaches. Indeed, our recent findings provide insight into the regulation of brain size and neuron number across ape species, identifying key stages of early neural stem cell expansion that set up a larger starting cell number to enable the production of increased numbers of neurons. We are also investigating the role of extrinsic regulators in determining the numbers and types of neurons produced in the human cerebral cortex. Overall, our findings point to key, human-specific aspects of brain development and function that have important implications for neurological disease.

SeminarNeuroscienceRecording

A draft connectome for ganglion cell types of the mouse retina

David Berson
Brown University
May 15, 2022

The visual system of the brain is highly parallel in its architecture. This is clearly evident in the outputs of the retina, which arise from neurons called ganglion cells. Work in our lab has shown that mammalian retinas contain more than a dozen distinct types of ganglion cells. Each type appears to filter the retinal image in a unique way and to relay this processed signal to a specific set of targets in the brain. My students and I are working to understand the meaning of this parallel organization through electrophysiological and anatomical studies. We record from light-responsive ganglion cells in vitro using the whole-cell patch method. This allows us to correlate directly the visual response properties, intrinsic electrical behavior, synaptic pharmacology, dendritic morphology and axonal projections of single neurons. Other methods used in the lab include neuroanatomical tracing techniques, single-unit recording and immunohistochemistry. We seek to specify the total number of ganglion cell types, the distinguishing characteristics of each type, and the intraretinal mechanisms (structural, electrical, and synaptic) that shape their stimulus selectivities. Recent work in the lab has identified a bizarre new ganglion cell type that is also a photoreceptor, capable of responding to light even when it is synaptically uncoupled from conventional (rod and cone) photoreceptors. These ganglion cells appear to play a key role in resetting the biological clock. It is just this sort of link, between a specific cell type and a well-defined behavioral or perceptual function, that we seek to establish for the full range of ganglion cell types. My research concerns the structural and functional organization of retinal ganglion cells, the output cells of the retina whose axons make up the optic nerve. Ganglion cells exhibit great diversity both in their morphology and in their responses to light stimuli. On this basis, they are divisible into a large number of types (>15). Each ganglion-cell type appears to send its outputs to a specific set of central visual nuclei. This suggests that ganglion cell heterogeneity has evolved to provide each visual center in the brain with pre-processed representations of the visual scene tailored to its specific functional requirements. Though the outline of this story has been appreciated for some time, it has received little systematic exploration. My laboratory is addressing in parallel three sets of related questions: 1) How many types of ganglion cells are there in a typical mammalian retina and what are their structural and functional characteristics? 2) What combination of synaptic networks and intrinsic membrane properties are responsible for the characteristic light responses of individual types? 3) What do the functional specializations of individual classes contribute to perceptual function or to visually mediated behavior? To pursue these questions, we label retinal ganglion cells by retrograde transport from the brain; analyze in vitro their light responses, intrinsic membrane properties and synaptic pharmacology using the whole-cell patch clamp method; and reveal their morphology with intracellular dyes. Recently, we have discovered a novel ganglion cell in rat retina that is intrinsically photosensitive. These ganglion cells exhibit robust light responses even when all influences from classical photoreceptors (rods and cones) are blocked, either by applying pharmacological agents or by dissociating the ganglion cell from the retina. 
These photosensitive ganglion cells seem likely to serve as photoreceptors for the photic synchronization of circadian rhythms, the mechanism that allows us to overcome jet lag. They project to the circadian pacemaker of the brain, the suprachiasmatic nucleus of the hypothalamus. Their temporal kinetics, threshold, dynamic range, and spectral tuning all match known properties of the synchronization or "entrainment" mechanism. These photosensitive ganglion cells innervate various other brain targets, such as the midbrain pupillary control center, and apparently contribute to a host of behavioral responses to ambient lighting conditions. These findings help to explain why circadian and pupillary light responses persist in mammals, including humans, with profound disruption of rod and cone function. Ongoing experiments are designed to elucidate the phototransduction mechanism, including the identity of the photopigment and the nature of downstream signaling pathways. In other studies, we seek to provide a more detailed characterization of the photic responsiveness and both morphological and functional evidence concerning possible interactions with conventional rod- and cone-driven retinal circuits. These studies are of potential value in understanding and designing appropriate therapies for jet lag, the negative consequences of shift work, and seasonal affective disorder.

SeminarNeuroscience

How are nervous systems remodeled in complex metazoans?

Marc Freeman
Oregon Health & Science University, Portland OR, USA
May 11, 2022

Early in development the nervous system is constructed with far too many neurons that make an excessive number of synaptic connections.  Later, a wave of neuronal remodeling radically reshapes nervous system wiring and cell numbers through the selective elimination of excess synapses, axons and dendrites, and even whole neurons.  This remodeling is widespread across the nervous system, extensive in terms of how much individual brain regions can change (e.g. in some cases 50% of neurons integrated into a brain circuit are eliminated), and thought to be essential for optimizing nervous system function.  Perturbations of neuronal remodeling are thought to underlie devastating neurodevelopmental disorders including autism spectrum disorder, schizophrenia, and epilepsy.  This seminar will discuss our efforts to use the relatively simple nervous system of Drosophila to understand the mechanistic basis by which cells, or parts of cells, are specified for removal and eliminated from the nervous system.

SeminarPhysics of Life

Emergence of homochirality in large molecular systems

David Lacoste
ESPCI
Apr 21, 2022

The question of the origin of homochirality of living matter, or the dominance of one handedness for all molecules of life across the entire biosphere, is a long-standing puzzle in research on the Origin of Life. In the 1950s, Frank proposed a mechanism to explain homochirality based on the properties of a simple autocatalytic network containing only a few chemical species. Following this work, chemists struggled to find experimental realizations of this model, possibly due to a lack of proper methods to identify autocatalysis [1]. In any case, a model based on a few chemical species seems rather limited, because the prebiotic earth is likely to have consisted of complex ‘soups’ of chemicals. To include this aspect of the problem, we recently proposed a mechanism based on certain features of large out-of-equilibrium chemical networks [2]. We showed that a phase transition towards a homochiral state is likely to occur as the number of chiral species in the system becomes large or as the amount of free energy injected into the system increases. Through an analysis of large chemical databases, we showed that there is no need for very large molecules for chiral species to dominate over achiral ones; it already happens when molecules contain about 10 heavy atoms. We also analyzed the various conventions used to measure chirality and discussed the relative chiral signs adopted by different groups of molecules [3]. We then proposed a generalization of Frank’s model for large chemical networks, which we characterized using random matrix theory. This analysis includes sparse networks, suggesting that the emergence of homochirality is a robust and generic transition. References: [1] A. Blokhuis, D. Lacoste, and P. Nghe, PNAS (2020), 117, 25230. [2] G. Laurent, D. Lacoste, and P. Gaspard, PNAS (2021) 118 (3) e2012741118. [3] G. Laurent, D. Lacoste, and P. Gaspard, Proc. R. Soc. A 478:20210590 (2022).
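
For orientation, Frank's original mechanism is usually written as a pair of rate equations in which each enantiomer, L and D, autocatalytically reproduces from an achiral substrate A while the two enantiomers mutually annihilate; the schematic form below is a standard textbook rendering, not the specific network generalization analysed in the talk:

\[
\frac{d[L]}{dt} = k_a [A][L] - k_i [L][D], \qquad \frac{d[D]}{dt} = k_a [A][D] - k_i [L][D]
\]

In this model the racemic state \([L] = [D]\) is unstable: any small excess of one enantiomer is amplified, which is the essence of the symmetry-breaking transition discussed above.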

SeminarNeuroscienceRecording

Sensing in Insect Wings

Ali Weber
University of Washington, USA
Apr 18, 2022

Ali Weber (University of Washington, USA) uses the hawkmoth as a model system to investigate how information from a small number of mechanoreceptors on the wings is used in flight control. She employs a combination of experimental and computational techniques to study how these sensors respond during flight and how one might optimally array a set of such sensors to best provide feedback for flight control.

SeminarNeuroscienceRecording

Network resonance: a framework for dissecting feedback and frequency filtering mechanisms in neuronal systems

Horacio Rotstein
New Jersey Institute of Technology
Apr 12, 2022

Resonance is defined as a maximal amplification of the response of a system to periodic inputs in a limited, intermediate input frequency band. Resonance may serve to optimize inter-neuronal communication, and has been observed at multiple levels of neuronal organization including membrane potential fluctuations, single neuron spiking, postsynaptic potentials, and neuronal networks. However, it is unknown how resonance observed at one level of neuronal organization (e.g., network) depends on the properties of the constituent building blocks, and whether, and if so how, it affects the resonant and oscillatory properties at levels upstream. One difficulty is the absence of a conceptual framework that facilitates the interrogation of resonant neuronal circuits and organizes the mechanistic investigation of network resonance in terms of the circuit components, across levels of organization. We address these issues by discussing a number of representative case studies. The dynamic mechanisms responsible for the generation of resonance involve disparate processes, including negative feedback effects, history dependence, spiking discretization combined with subthreshold passive dynamics, combinations of these, and resonance inheritance from lower levels of organization. The band-pass filters associated with the observed resonances are generated by primarily nonlinear interactions of low- and high-pass filters. We identify these filters (and their interactions) and argue that they are the constitutive building blocks of a resonance framework. Finally, we discuss alternative frameworks and show that different types of models (e.g., spiking neural networks and rate models) can exhibit the same type of resonance through qualitatively different mechanisms.
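
As a toy illustration of the band-pass idea described above (not one of the talk's case studies), the sketch below drives a two-variable linear system, loosely standing in for a leaky membrane with a slow negative-feedback current, with sinusoids of different frequencies and locates the frequency of maximal amplification. All variable names and parameter values are illustrative assumptions.

```python
import numpy as np

# Toy linear "resonator": fast voltage v (low-pass, leaky membrane) plus a slow
# negative-feedback variable w (acts as a high-pass element). Their interaction
# yields a band-pass response: amplification peaks at an intermediate frequency.
#   tau_v * dv/dt = -v - g*w + I(t)
#   tau_w * dw/dt =  v - w

def response_amplitude(freq_hz, tau_v=10.0, tau_w=100.0, g=1.0, dt=0.1, t_max=5000.0):
    """Steady-state amplitude of v (time in ms) for a unit-amplitude sinusoidal drive."""
    t = np.arange(0.0, t_max, dt)                        # time in ms
    drive = np.sin(2.0 * np.pi * freq_hz * t / 1000.0)   # freq in Hz
    v, w = 0.0, 0.0
    v_trace = np.empty_like(t)
    for i, I in enumerate(drive):
        v += dt * (-v - g * w + I) / tau_v
        w += dt * (v - w) / tau_w
        v_trace[i] = v
    steady = v_trace[len(t) // 2:]                       # discard the initial transient
    return np.ptp(steady) / 2.0

freqs = np.linspace(0.5, 20.0, 40)
amps = np.array([response_amplitude(f) for f in freqs])
print(f"peak amplification at ~{freqs[np.argmax(amps)]:.1f} Hz")
```

With these placeholder time constants the amplitude curve rises from low frequencies, peaks in the low-Hz range, and falls again at high frequencies: a band-pass profile built from the interaction of a low-pass membrane and a high-pass feedback term, in the spirit of the framework described above.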

SeminarNeuroscienceRecording

Genetic-based brain machine interfaces for visual restoration

Serge Picaud
Institute Vision Paris
Apr 12, 2022

Visual restoration is certainly the greatest challenge for brain-machine interfaces, given the high pixel count and high refresh rate it requires. In recent years, we have brought retinal prostheses and optogenetic therapy to successful clinical trials. Concerning visual restoration at the cortical level, prostheses have shown efficacy for limited periods of time and limited pixel numbers. We are investigating the potential of sonogenetics to develop a non-contact brain-machine interface allowing long-lasting activation of the visual cortex. The presentation will introduce our genetic-based brain-machine interfaces for visual restoration at the retinal and cortical levels.

SeminarNeuroscience

ISYNC: International SynAGE Conference on Healthy Ageing

Prof. Dr. Ulman Lindenberger, Prof. Dr. Carlos Dotti, Prof. Dr. Patrick Verstreken, Prof. Dr. James H. Cole, ...
Mar 28, 2022

The SynAGE committee members are thrilled to host ISYNC, the International SynAGE conference on healthy ageing, on 28-30 March 2022 in Magdeburg, Germany. This conference has been organised entirely by young scientists of the SynAGE research training group RTG 2413 (www.synage.de) and represents a unique occasion for researchers from all over the world to come together and join talks and sessions with us and our guests. A constantly updated list of our speakers can be found on the conference webpage: www.isync-md.de. During the conference, attendees will have access to a range of symposia covering topics from glia, biomarkers and immune responses in ageing and neurodegeneration to brain integrity and cognitive function in health and disease. Moreover, the conference will offer social events especially for young researchers and the opportunity to network together in the beautiful and striking location where our conference will take place: the Johanniskirche. The event will be happening in person, but due to the current pandemic situation and restrictions we are planning the conference as a hybrid event with plenty of technical support to ensure that every participant can follow the talks and take part in the scientific discussions. Registration for our ISYNC conference is free of charge. However, the number of people attending the conference in person is restricted to 100; afterwards, registrations will be accepted for virtual attendance only. Registration is open until 15.02.2022. Especially for PhD and MD students: check our available Travel Grants, Poster Prize and SynAGE Award Dinner: https://www.isync-md.de/index.php/phd-md-specials/ If you need any further information, don’t hesitate to contact us via email: contact@synage.de. We are looking forward to meeting you in 2022 in Magdeburg to discuss our research and ideas and to celebrate science together. Your ISYNC organization Committee

SeminarNeuroscience

Mapping Individual Trajectories of Structural and Cognitive Decline in Mild Cognitive Impairment

Shreya Rajagopal
Psychology, University of Michigan
Mar 24, 2022

The US has an aging population: for the first time in US history, the number of older adults is projected to outnumber the number of children by 2034. Combined with the fact that the prevalence of Alzheimer's Disease increases exponentially with age, this makes for a worrying outlook. Mild cognitive impairment (MCI) is an intermediate stage of cognitive decline between being cognitively normal and having full-blown dementia, with roughly one in three people with MCI progressing to dementia of the Alzheimer's Type (DAT). While there is no known way to reverse symptoms once they begin, early prediction of the disease can help stall its progression and help with early financial planning. While grey matter volume loss in the hippocampus and entorhinal cortex (EC) is a characteristic biomarker of DAT, little is known about the rates of decrease of these volumes within individuals in the MCI state across time. We used longitudinal growth curve models to map individual trajectories of volume loss in subjects with MCI. We then asked whether these rates of volume decrease could predict progression to DAT while subjects were still in the MCI stage. Finally, we evaluated whether these rates of hippocampal and EC volume loss were correlated with individual rates of decline in episodic memory, visuospatial ability, and executive function.
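
As a minimal sketch of what a longitudinal growth-curve analysis of this kind can look like, the snippet below fits a mixed-effects model with a random intercept and random slope per subject using statsmodels; the file name and column names (subject, years_from_baseline, hippocampal_volume) are hypothetical placeholders, not the study's actual data or variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject per visit.
# Placeholder columns: subject, years_from_baseline, hippocampal_volume
df = pd.read_csv("mci_longitudinal.csv")  # hypothetical file

# Linear growth-curve (mixed-effects) model: each subject gets a random
# intercept and a random slope over time, so the fitted random slopes
# estimate individual rates of hippocampal volume loss.
model = smf.mixedlm(
    "hippocampal_volume ~ years_from_baseline",
    data=df,
    groups=df["subject"],
    re_formula="~years_from_baseline",
)
result = model.fit()
print(result.summary())

# Subject-specific slope = group-average (fixed) slope + that subject's random slope.
subject_slopes = {
    subj: result.fe_params["years_from_baseline"] + re["years_from_baseline"]
    for subj, re in result.random_effects.items()
}
```

The per-subject slopes recovered this way are the individual rates of volume loss that could then be related to later conversion to DAT or to rates of cognitive decline.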

SeminarNeuroscienceRecording

Parametric control of flexible timing through low-dimensional neural manifolds

Manuel Beiran
Center for Theoretical Neuroscience, Columbia University & Rajan lab, Icahn School of Medicine at Mount Sinai
Mar 8, 2022

Biological brains possess an exceptional ability to infer relevant behavioral responses to a wide range of stimuli from only a few examples. This capacity to generalize beyond the training set has proven particularly challenging to realize in artificial systems. How neural processes enable this capacity to extrapolate to novel stimuli is a fundamental open question. A prominent but underexplored hypothesis suggests that generalization is facilitated by a low-dimensional organization of collective neural activity, yet evidence for the underlying neural mechanisms remains wanting. Combining network modeling, theory and neural data analysis, we tested this hypothesis in the framework of flexible timing tasks, which rely on the interplay between inputs and recurrent dynamics. We first trained recurrent neural networks on a set of timing tasks while minimizing the dimensionality of neural activity by imposing low-rank constraints on the connectivity, and compared the performance and generalization capabilities with networks trained without any constraint. We then examined the trained networks, characterized the dynamical mechanisms underlying the computations, and verified their predictions in neural recordings. Our key finding is that low-dimensional dynamics strongly increase the ability to extrapolate to inputs outside of the range used in training. Critically, this capacity to generalize relies on controlling the low-dimensional dynamics by a parametric contextual input. We found that this parametric control of extrapolation was based on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds in activity space while preserving their geometry. Comparisons with neural recordings in the dorsomedial frontal cortex of macaque monkeys performing flexible timing tasks confirmed the geometric and dynamical signatures of this mechanism. Altogether, our results tie together a number of previous experimental findings and suggest that the low-dimensional organization of neural dynamics plays a central role in generalizable behaviors.
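
A minimal sketch of the low-rank connectivity constraint mentioned above: the recurrent weight matrix is parameterised as an outer product of a few column vectors, so that trained dynamics are confined to a low-dimensional subspace, and a tonic contextual input can then shift activity along the resulting manifold. The class name, shapes, and hyperparameters below are illustrative assumptions, not those of the study.

```python
import torch
import torch.nn as nn

class LowRankRNN(nn.Module):
    """Rate RNN whose recurrent matrix is constrained to rank R: J = m n^T / N."""

    def __init__(self, n_units=512, rank=2, n_inputs=3, n_outputs=1, dt=0.02, tau=0.1):
        super().__init__()
        self.m = nn.Parameter(torch.randn(n_units, rank))   # left connectivity vectors
        self.n = nn.Parameter(torch.randn(n_units, rank))   # right connectivity vectors
        self.w_in = nn.Parameter(torch.randn(n_units, n_inputs) / n_inputs)
        self.w_out = nn.Parameter(torch.randn(n_outputs, n_units) / n_units)
        self.alpha = dt / tau
        self.n_units = n_units

    def forward(self, inputs):
        # inputs: (batch, time, n_inputs) -> readout: (batch, time, n_outputs)
        batch, T, _ = inputs.shape
        x = torch.zeros(batch, self.n_units, device=inputs.device)
        J = self.m @ self.n.T / self.n_units             # rank-R recurrent connectivity
        outputs = []
        for t in range(T):
            r = torch.tanh(x)
            x = (1 - self.alpha) * x + self.alpha * (r @ J.T + inputs[:, t] @ self.w_in.T)
            outputs.append(r @ self.w_out.T)
        return torch.stack(outputs, dim=1)

# A tonic contextual signal (e.g., one constant input channel) can be appended to
# `inputs` to parametrically control the dynamics along the low-rank manifold.
```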

SeminarNeuroscienceRecording

Turning spikes to space: The storage capacity of tempotrons with plastic synaptic dynamics

Robert Guetig
Charité – Universitätsmedizin Berlin & BIH
Mar 8, 2022

Neurons in the brain communicate through action potentials (spikes) that are transmitted through chemical synapses. Throughout the last decades, the question of how networks of spiking neurons represent and process information has remained an important challenge. Some progress has resulted from a recent family of supervised learning rules (tempotrons) for models of spiking neurons. However, these studies have viewed synaptic transmission as static and characterized synaptic efficacies as scalar quantities that change only on the slow time scales of learning across trials but remain fixed on the fast time scales of information processing within a trial. By contrast, signal transduction at chemical synapses in the brain results from complex molecular interactions between multiple biochemical processes whose dynamics result in substantial short-term plasticity of most connections. Here we study the computational capabilities of spiking neurons whose synapses are dynamic and plastic, such that each individual synapse can learn its own dynamics. We derive tempotron learning rules for current-based leaky integrate-and-fire neurons with different types of dynamic synapses. Introducing ordinal synapses, whose efficacies depend only on the order of input spikes, we establish an upper capacity bound for spiking neurons with dynamic synapses. We compare this bound to independent synapses, static synapses, and the well-established phenomenological Tsodyks-Markram model. We show that synaptic dynamics in principle allow the storage capacity of spiking neurons to scale with the number of input spikes, and that this increase in capacity can be traded for greater robustness to input noise, such as spike-time jitter. Our work highlights the feasibility of a novel computational paradigm for spiking neural circuits with plastic synaptic dynamics: rather than being determined by the fixed number of afferents, the dimensionality of a neuron's decision space can be scaled flexibly through the number of input spikes emitted by its input layer.
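
For readers unfamiliar with the baseline model, here is a minimal sketch of a classical tempotron with static synapses, the setting that the learning rules above generalise to dynamic synapses. Kernel time constants, threshold, and learning rate are illustrative assumptions.

```python
import numpy as np

def voltage_trace(spike_times, weights, t, tau_m=15.0, tau_s=5.0):
    """Subthreshold voltage of a current-based LIF tempotron (reset ignored):
    each input spike adds a weighted double-exponential PSP kernel."""
    V = np.zeros_like(t)
    for w, times in zip(weights, spike_times):
        for s in times:
            V += w * (np.exp(-(t - s) / tau_m) - np.exp(-(t - s) / tau_s)) * (t >= s)
    return V

def tempotron_step(spike_times, weights, label, t, theta=1.0, lr=0.01,
                   tau_m=15.0, tau_s=5.0):
    """One error-driven update: if the binary decision (did V cross theta?) is wrong,
    shift each weight by the PSP it contributed at the time of the voltage maximum."""
    V = voltage_trace(spike_times, weights, t, tau_m, tau_s)
    if (V.max() >= theta) == bool(label):
        return weights                                    # correct trial: no change
    t_max = t[np.argmax(V)]
    sign = 1.0 if label else -1.0                         # miss -> potentiate, false alarm -> depress
    new_w = weights.copy()
    for i, times in enumerate(spike_times):
        psp = sum(np.exp(-(t_max - s) / tau_m) - np.exp(-(t_max - s) / tau_s)
                  for s in times if s <= t_max)
        new_w[i] += sign * lr * psp
    return new_w

# Toy usage: 10 afferents, each firing one random spike in a 100 ms window.
rng = np.random.default_rng(1)
t = np.arange(0.0, 100.0, 0.1)
spikes = [list(rng.uniform(0, 100, size=1)) for _ in range(10)]
w = rng.normal(0, 0.1, size=10)
w = tempotron_step(spikes, w, label=1, t=t)
```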

SeminarPhysics of LifeRecording

4D Chromosome Organization: Combining Polymer Physics, Knot Theory and High Performance Computing

Anna Lappala
Harvard University
Mar 6, 2022

Self-organization is a universal concept spanning numerous disciplines including mathematics, physics and biology. Chromosomes are self-organizing polymers that fold into orderly, hierarchical and yet dynamic structures. In the past decade, advances in experimental biology have provided a means to reveal information about chromosome connectivity, allowing us to use this experimental information directly to generate 3D models of individual genes, chromosomes and even genomes. In this talk I will present a novel data-driven modeling approach and discuss a number of possibilities that this method holds. I will discuss a detailed study of the time-evolution of X chromosome inactivation, highlighting both global and local properties of chromosomes that result in topology-driven dynamical arrest, and I will present and characterize a novel type of motion we discovered in knots that may have applications to nanoscale materials and machines.

SeminarPhysics of LifeRecording

Exact coherent structures and transition to turbulence in a confined active nematic

Caleb Wagner
University of Nebraska-Lincoln
Feb 27, 2022

Active matter describes a class of systems that are maintained far from equilibrium by driving forces acting on the constituent particles. Here I will focus on confined active nematics, which exhibit especially rich flow behavior, ranging from structured patterns in space and time to disordered turbulent flows. To understand this behavior, I will take a deterministic dynamical systems approach, beginning with the hydrodynamic equations for the active nematic. This approach reveals that the infinite-dimensional phase space of all possible flow configurations is populated by Exact Coherent Structures (ECS), which are exact solutions of the hydrodynamic equations with distinct and regular spatiotemporal structure; examples include unstable equilibria, periodic orbits, and traveling waves. The ECS are connected by dynamical pathways called invariant manifolds. The main hypothesis in this approach is that turbulence corresponds to a trajectory meandering in the phase space, transitioning between ECS by traveling on the invariant manifolds. Similar approaches have been successful in characterizing high Reynolds number turbulence of passive fluids. Here, I will present the first systematic study of active nematic ECS and their invariant manifolds and discuss their role in characterizing the phenomenon of active turbulence.

SeminarPsychology

Leadership Support and Workplace Psychosocial Stressors

Leslie B. Hammer
Portland State University
Feb 22, 2022

Research evidence indicates that psychosocial stressors such as work-life stress serve as negative occupational exposures, relating to poor health behaviors including smoking, poor food choices, low levels of exercise, and even decreased sleep time, as well as to a number of chronic health outcomes. The association between work-life stress and adverse health behaviors and chronic health outcomes suggests that Occupational Health Psychology (OHP) interventions such as leadership support trainings may be helpful in mitigating the effects of work-life stress and improving health, consistent with the Total Worker Health approach. This presentation will review workplace psychosocial stressors and leadership training approaches to reduce stress and improve health, highlighting a randomized controlled trial, the Military Employee Sleep and Health study.

SeminarNeuroscienceRecording

Robustness in spiking networks: a geometric perspective

Christian Machens
Champalimaud Center, Lisboa
Feb 15, 2022

Neural systems are remarkably robust against various perturbations, a phenomenon that still requires a clear explanation. Here, we graphically illustrate how neural networks can become robust. We study spiking networks that generate low-dimensional representations, and we show that the neurons’ subthreshold voltages are confined to a convex region in a lower-dimensional voltage subspace, which we call a ‘bounding box.’ Any changes in network parameters (such as number of neurons, dimensionality of inputs, firing thresholds, synaptic weights, or transmission delays) can all be understood as deformations of this bounding box. Using these insights, we show that functionality is preserved as long as perturbations do not destroy the integrity of the bounding box. We suggest that the principles underlying robustness in these networks—low-dimensional representations, heterogeneity of tuning, and precise negative feedback—may be key to understanding the robustness of neural systems at the circuit level.
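
In the spike-coding framework this line of work builds on, the bounding box admits a compact description; the notation below (decoder D with columns d_i, readout error e, thresholds T_i) is a standard rendering offered for orientation and may differ in detail from the talk's formulation:

\[
\hat{x} = D\,r, \qquad V_i = d_i^{\top}(x - \hat{x}), \qquad \text{neuron } i \text{ spikes when } V_i > T_i ,
\]

so that between spikes the readout error \(e = x - \hat{x}\) is confined to the convex region

\[
\mathcal{B} = \{\, e : d_i^{\top} e \le T_i \ \text{for all } i \,\},
\]

and changes in neuron counts, weights, thresholds, or delays act by deforming \(\mathcal{B}\), as described above.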

SeminarNeuroscience

Diversification of cortical inhibitory circuits & Molecular programs orchestrating the wiring of inhibitory circuitries

Beatriz Rico and Professor Oscar Marin
MRC Centre for Neurodevelopmental Disorders Centre for Developmental Neurobiology , King’s College London, UK
Feb 2, 2022

GABAergic interneurons play crucial roles in the regulation of neural activity in the cerebral cortex. In this Dual Lecture, Prof Oscar Marín and Prof Beatriz Rico will discuss several aspects of the formation of inhibitory circuits in the mammalian cerebral cortex. Prof. Marín will provide an overview of the mechanisms regulating the generation of the remarkable diversity of GABAergic interneurons and their ultimate numbers. Prof. Rico will describe the molecular logic through which specific pyramidal cell-interneuron circuits are established in the cerebral cortex, and how alterations in some of these connectivity motifs might be linked to disease. Our web pages for reference: https://devneuro.org.uk/marinlab/ & https://devneuro.org.uk/rico/default

SeminarNeuroscienceRecording

What happens to our ability to perceive multisensory information as we age?

Fiona Newell
Trinity College Dublin
Jan 12, 2022

Our ability to perceive the world around us can be affected by a number of factors including the nature of the external information, prior experience of the environment, and the integrity of the underlying perceptual system. A particular challenge for the brain is to maintain a coherent perception from information encoded by the peripheral sensory organs, whose function is affected by typical developmental changes across the lifespan. Yet how the brain adapts to the maturation of the senses, as well as to experiential changes in the multisensory environment, is poorly understood. Over the past few years, we have used a range of multisensory tasks to investigate the role of ageing in the brain’s ability to merge sensory inputs. In particular, we have embedded an audio-visual task based on the sound-induced flash illusion (SIFI) into a large-scale, longitudinal study of ageing. Our findings support the idea that the temporal binding window (TBW) is modulated by age and reveal important individual differences in this TBW that may have clinical implications. However, our investigations also suggest the TBW is experience-dependent, with evidence for both long- and short-term behavioural plasticity. An overview of these findings, including recent evidence on how multisensory integration may be associated with higher-order functions, will be discussed.

SeminarNeuroscienceRecording

Human memory: mathematical models and experiments

Misha Tsodyks
Weizmann Institute, Institute for Advanced Study
Jan 4, 2022

I will present my recent work on mathematical modeling of human memory. I will argue that memory recall of random lists of items is governed by a universal algorithm, resulting in an analytical relation between the number of items in memory and the number of items that can be successfully recalled. The retention of items in memory, on the other hand, is not universal and differs for different types of items being remembered; in particular, retention curves for words and sketches are different even when sketches are made to carry information only about the object being drawn. I will discuss putative reasons for these observations and introduce a phenomenological model predicting retention curves.

SeminarNeuroscience

JAK/STAT regulation of the transcriptomic response during epileptogenesis

Amy Brooks-Kayal
Children's Hospital Colorado / UC Davis
Dec 14, 2021

Temporal lobe epilepsy (TLE) is a progressive disorder mediated by pathological changes in molecular cascades and neural circuit remodeling in the hippocampus, resulting in increased susceptibility to spontaneous seizures and cognitive dysfunction. Targeting these cascades could prevent or reverse symptom progression and has the potential to provide viable disease-modifying treatments that could reduce the portion of TLE patients (>30%) not responsive to current medical therapies. Changes in GABA(A) receptor subunit expression have been implicated in the pathogenesis of TLE, and the Janus Kinase/Signal Transducer and Activator of Transcription (JAK/STAT) pathway has been shown to be a key regulator of these changes. The JAK/STAT pathway is known to be involved in inflammation and immunity, and to be critical for neuronal functions such as synaptic plasticity and synaptogenesis. Our laboratories have shown that a STAT3 inhibitor, WP1066, could greatly reduce the number of spontaneous recurrent seizures (SRS) in an animal model of pilocarpine-induced status epilepticus (SE). This suggests promise for JAK/STAT inhibitors as disease-modifying therapies; however, the potential adverse effects of systemic or global CNS pathway inhibition limit their use. Development of more targeted therapeutics will require a detailed understanding of JAK/STAT-induced epileptogenic responses in different cell types. To this end, we have developed a new transgenic line where dimer-dependent STAT3 signaling is functionally knocked out (fKO) by tamoxifen-induced Cre expression specifically in forebrain excitatory neurons (eNs) via the Calcium/Calmodulin Dependent Protein Kinase II alpha (CamK2a) promoter. Most recently, we have demonstrated that STAT3 KO in excitatory neurons (eNSTAT3fKO) markedly reduces the progression of epilepsy (SRS frequency) in the intrahippocampal kainate (IHKA) TLE model and protects mice from kainic acid (KA)-induced memory deficits as assessed by Contextual Fear Conditioning. Using data from bulk hippocampal tissue RNA-sequencing, we further discovered a transcriptomic signature for the IHKA model that contains a substantial number of genes, particularly in synaptic plasticity and inflammatory gene networks, that are down-regulated after KA-induced SE in wild-type but not eNSTAT3fKO mice. Finally, we will review data from other models of brain injury that lead to epilepsy, such as TBI, that implicate activation of the JAK/STAT pathway in epilepsy development.

SeminarNeuroscienceRecording

Inferring informational structures in neural recordings of drosophila with epsilon-machines

Roberto Muñoz
Monash University
Dec 9, 2021

Measuring the degree of consciousness an organism possesses has remained a longstanding challenge in neuroscience. In part, this is due to the difficulty of finding the appropriate mathematical tools for describing such a subjective phenomenon. Current methods relate the level of consciousness to the complexity of neural activity, i.e., using the information contained in a stream of recorded signals they can tell whether the subject might be awake, asleep, or anaesthetised. Usually, the signals stemming from a complex system are correlated in time: the behaviour of the future depends on the patterns in the neural activity of the past. However, these past-future relationships remain either hidden from, or not taken into account by, current measures of consciousness. These past-future correlations are likely to contain more information and thus can reveal a richer understanding of the behaviour of complex systems like a brain. Our work employs the “epsilon-machines” framework to account for the time correlations in neural recordings. In a nutshell, epsilon-machines reveal how much of the past neural activity is needed in order to accurately predict how the activity in the future will behave, and this is summarised in a single number called “statistical complexity”. If a lot of past neural activity is required to predict future behaviour, can we then say that the brain was more “awake” at the time of recording? Furthermore, if we read the recordings in reverse, does the difference between forward and reverse-time statistical complexity allow us to quantify the level of time asymmetry in the brain? Neuroscience predicts that there should be a degree of time asymmetry in the brain; however, this has never been measured. To test this, we used neural recordings measured from the brains of fruit flies and inferred the epsilon-machines. We found that the nature of the past and future correlations of neural activity in the brain changes drastically depending on whether the fly was awake or anaesthetised. Not only does our study find that wakeful and anaesthetised fly brains are distinguished by how statistically complex they are, but also that the amount of correlation in wakeful fly brains was much more sensitive to whether the neural recordings were read forwards vs. backwards in time, compared to anaesthetised brains. In other words, wakeful fly brains were more complex, and more time-asymmetric, than anaesthetised ones.
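
As a rough, purely illustrative sketch of the quantity involved (not the inference procedure used in the study), the snippet below groups fixed-length histories of a binarised signal by their empirical next-symbol distributions and reports the entropy of the resulting state occupation as a crude stand-in for statistical complexity. The history length, merge tolerance, and placeholder data are all illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def approx_statistical_complexity(symbols, k=4, tol=0.05):
    """Crude epsilon-machine-style estimate for a binary sequence.

    1. Tabulate every length-k history and its empirical next-symbol distribution.
    2. Greedily merge histories whose distributions differ by less than `tol`
       (a stand-in for grouping histories into causal states).
    3. Return the Shannon entropy (bits) of the state occupation probabilities.
    """
    counts = defaultdict(lambda: np.zeros(2))
    for i in range(len(symbols) - k):
        history = tuple(symbols[i:i + k])
        counts[history][symbols[i + k]] += 1

    states = []  # each entry: [occupation count, P(next symbol = 1)]
    for c in counts.values():
        p1 = c[1] / c.sum()
        for state in states:
            if abs(p1 - state[1]) < tol:                  # close enough: merge
                total = state[0] + c.sum()
                state[1] = (state[1] * state[0] + p1 * c.sum()) / total
                state[0] = total
                break
        else:
            states.append([c.sum(), p1])                  # open a new candidate state

    occupation = np.array([s[0] for s in states], dtype=float)
    occupation /= occupation.sum()
    return float(-np.sum(occupation * np.log2(occupation)))

# Placeholder data standing in for a binarised neural recording.
rng = np.random.default_rng(0)
signal = (rng.random(10_000) > 0.5).astype(int)
print(approx_statistical_complexity(signal))
```

Running the same estimate on the reversed sequence and comparing the two values gives a (very rough) analogue of the forward- versus reverse-time comparison described above.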

SeminarNeuroscienceRecording

NMC4 Short Talk: Synchronization in the Connectome: Metastable oscillatory modes emerge from interactions in the brain spacetime network

Francesca Castaldo
University College London
Nov 30, 2021

The brain exhibits a rich repertoire of oscillatory patterns organized in space, time and frequency. However, despite ever more detailed characterizations of spectrally-resolved network patterns, the principles governing oscillatory activity at the system level remain unclear. Here, we propose that the transient emergence of spatially organized brain rhythms is a signature of weakly stable synchronization between subsets of brain areas, naturally occurring at reduced collective frequencies due to the presence of time delays. To test this mechanism, we build a reduced network model representing interactions between local neuronal populations (with damped oscillatory responses at 40 Hz) coupled through the human neuroanatomical network. Following theoretical predictions, weakly stable cluster synchronization drives a rich repertoire of short-lived (or metastable) oscillatory modes, whose frequency depends inversely on the number of units, the strength of coupling and the propagation times. Despite the significant degree of reduction, we find a range of model parameters where the frequencies of collective oscillations fall in the range of typical brain rhythms, leading to an optimal fit of the power spectra of magnetoencephalographic signals from 89 healthy individuals. These findings provide a mechanistic scenario for the spontaneous emergence of frequency-specific long-range phase-coupling observed in magneto- and electroencephalographic signals, as signatures of resonant modes emerging in the space-time structure of the Connectome, reinforcing the importance of incorporating realistic time delays in network models of oscillatory brain activity.
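
Models of this kind (damped oscillatory nodes coupled through the structural connectome with conduction delays) are often written as delay-coupled Stuart-Landau (Hopf) units; the generic form below is offered for orientation only and is an assumption about the model family, not necessarily the exact equations of the study. Here \(C_{jk}\) is the structural coupling between areas \(j\) and \(k\), \(\tau_{jk}\) the conduction delay, \(\omega_0 \approx 2\pi \cdot 40\,\mathrm{Hz}\) the local natural frequency, \(a < 0\) sets the damping, and \(K\) is a global coupling strength:

\[
\dot{z}_j(t) = z_j(t)\left(a + i\,\omega_0 - |z_j(t)|^2\right) + K \sum_{k} C_{jk}\left[z_k(t - \tau_{jk}) - z_j(t)\right].
\]

With delays present, synchronized clusters settle at collective frequencies below \(\omega_0\), which is how local 40 Hz oscillations can generate the slower, brain-rhythm-range collective modes described above.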

ePoster

Efficient Coding of Natural Movies Predicts the Optimal Number of Receptive Field Mosaics

COSYNE 2022

ePoster

The effective number of shared dimensions between neural populations

Hamza Giaffar, Camille Rullan, Mikio Aoi

COSYNE 2023

ePoster

Hierarchical Working Memory and a New Magic Number

Weishun Zhong, Mikhail Katkov, Misha Tsodyks

COSYNE 2025

ePoster

Analysis of anxiety-related/social behaviour and neural circuitry abnormalities in ligand of Numb protein X (LNX) knockout mice

Laura Cioccarelli, Joan Lenihan, Leah Erwin, Paul Young

FENS Forum 2024

ePoster

Is bigger always more? – Investigating developmental changes in non-symbolic number comparison

Judit Pekar, Annette Kinder

FENS Forum 2024

ePoster

Neuronal identity and numbers in the development of neocortical activity

Ioana Genescu, Laura Mòdol-Vidal, Yannick Bollmann, Stéphane Bugeon, Yan to Ling, Zhiyao Zhou, Fursham Hamid, Kenneth Harris, Oscar Marín

FENS Forum 2024