“Brain theory, what is it or what should it be?”
In the neurosciences the need for some 'overarching' theory is sometimes expressed, but it is not always obvious what is meant by this. One can perhaps agree that in modern science observation and experimentation are normally complemented by 'theory', i.e. the development of theoretical concepts that help guide and evaluate experiments and measurements. A deeper discussion of 'brain theory' will require the clarification of some further distinctions, in particular: theory vs. model, and brain research (and its theory) vs. neuroscience. Other questions are: Does a theory require mathematics? Or even differential equations? Today it is often taken for granted that the whole universe, including everything in it, for example humans, animals, and plants, can be adequately treated by physics, and that theoretical physics is therefore the overarching theory. Even if this is the case, it has turned out that in some particular parts of physics (the historical example is thermodynamics) it may be useful to simplify the theory by introducing additional theoretical concepts that can in principle be 'reduced' to more complex descriptions on the 'microscopic' level of basic physical particles and forces. In this sense, brain theory may be regarded as part of theoretical neuroscience, which is inside biophysics and therefore inside physics, or theoretical physics. Still, in neuroscience and brain research, additional concepts are typically used to describe results and help guide experimentation that are 'outside' physics, beginning with neurons and synapses, names of brain parts and areas, up to concepts like 'learning', 'motivation', 'attention'. Certainly, we do not yet have one theory that includes all these concepts, so 'brain theory' is still in a 'pre-Newtonian' state. However, it may still be useful to understand in general the relations between a larger theory and its 'parts', or between microscopic and macroscopic theories, or between theories at different 'levels' of description. This is what I plan to do.
Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors
The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.
Feedback-induced dispositional changes in risk preferences
Contrary to the original normative decision-making standpoint, empirical studies have repeatedly reported that risk preferences are affected by the disclosure of choice outcomes (feedback). Although no consensus has yet emerged regarding the properties and mechanisms of this effect, a widespread and intuitive hypothesis is that repeated feedback affects risk preferences by means of a learning effect, which alters the representation of subjective probabilities. Here, we ran a series of seven experiments (N = 538), tailored to decipher the effects of feedback on risk preferences. Our results indicate that the presence of feedback consistently increases risk-taking, even when the risky option is economically less advantageous. Crucially, risk-taking increases just after the instructions, before participants experience any feedback. These results challenge the learning account, and advocate for a dispositional effect, induced by the mere anticipation of feedback information. Epistemic curiosity and regret avoidance may drive this effect in partial and complete feedback conditions, respectively.
Influence of the context of administration on the antidepressant-like effects of the psychedelic 5-MeO-DMT
Psychedelics like psilocybin have shown rapid and long-lasting efficacy on depressive and anxiety symptoms. Other psychedelics with shorter half-lives, such as DMT and 5-MeO-DMT, have also shown promising preliminary outcomes in major depression, making them interesting candidates for clinical practice. Despite several promising clinical studies, the influence of the context on therapeutic responses or adverse effects remains poorly documented. To address this, we conducted preclinical studies evaluating the psychopharmacological profile of 5-MeO-DMT in contexts previously validated in mice as either pleasant (positive setting) or aversive (negative setting). Healthy C57BL/6J male mice received a single intraperitoneal (i.p.) injection of 5-MeO-DMT at doses of 0.5, 5, and 10 mg/kg, with assessments at 2 hours, 24 hours, and one week post-administration. In a corticosterone (CORT) mouse model of depression, 5-MeO-DMT was administered in different settings, and behavioral tests mimicking core symptoms of depression and anxiety were conducted. In CORT-exposed mice, an acute dose of 0.5 mg/kg administered in a neutral setting produced antidepressant-like effects at 24 hours, as observed by reduced immobility time in the Tail Suspension Test (TST). In a positive setting, the drug also reduced latency to first immobility and total immobility time in the TST. However, these beneficial effects were negated in a negative setting, where 5-MeO-DMT failed to produce antidepressant-like effects and instead elicited an anxiogenic response in the Elevated Plus Maze (EPM). Our results indicate a strong influence of setting on the psychopharmacological profile of 5-MeO-DMT. Future experiments will examine cortical markers of pre- and post-synaptic density to correlate neuroplasticity changes with the behavioral effects of 5-MeO-DMT in different settings.
Homeostatic Neural Responses to Photic Stimulation
This talk presents findings from open and closed-loop neural stimulation experiments using EEG. Fixed-frequency (10 Hz) stimulation revealed cross-cortical alpha power suppression post-stimulation, modulated by the difference between the individual's alpha frequency and the stimulation frequency. Closed-loop stimulation demonstrated phase-dependent effects: trough stimulation enhanced lower alpha activity, while peak stimulation suppressed high alpha to beta activity. These findings provide evidence for homeostatic mechanisms in the brain's response to photic stimulation, with implications for neuromodulation applications.
There’s more to timing than time: P-centers, beat bins and groove in musical microrhythm
How does the dynamic shape of a sound affect its perceived microtiming? In the TIME project, we studied basic aspects of musical microrhythm, exploring both stimulus features and the participants’ enculturated expertise via perception experiments, observational studies of how musicians produce particular microrhythms, and ethnographic studies of musicians’ descriptions of microrhythm. Collectively, we show that altering the microstructure of a sound (“what” the sound is) changes its perceived temporal location (“when” it occurs). Specifically, there are systematic effects of core acoustic factors (duration, attack) on perceived timing. Microrhythmic features in longer and more complex sounds can also give rise to different perceptions of the same sound. Our results shed light on conflicting results regarding the effect of microtiming on the “grooviness” of a rhythm.
Deepfake Detection in Super-Recognizers and Police Officers
Using videos from the Deepfake Detection Challenge (cf. Groh et al., 2021), we investigated human deepfake detection performance (DDP) in two unique observer groups: Super-Recognizers (SRs) and "normal" officers from within the 18K members of the Berlin Police. SRs were identified either via previously proposed lab-based procedures (Ramon, 2021) or the only existing tool for SR identification involving increasingly challenging, authentic forensic material: beSure® (Berlin Test For Super-Recognizer Identification; Ramon & Rjosk, 2022). Across two experiments we examined DDP in participants who judged single videos and pairs of videos in a 2AFC decision setting. We explored speed-accuracy trade-offs in DDP and compared DDP between lab-identified SRs and non-SRs, as well as police officers whose face identity processing skills had been extensively tested using challenging material. In this talk I will discuss our surprising findings and argue that further work is needed to determine whether face identity processing is related to DDP or not.
The Role of Spatial and Contextual Relations of real world objects in Interval Timing
In the real world, object arrangement follows a number of rules. Some of these rules pertain to the spatial relations between objects and scenes (i.e., syntactic rules) and others to the contextual relations (i.e., semantic rules). Research has shown that violation of semantic rules influences interval timing, with the duration of scenes containing such violations being overestimated compared to scenes with no violations. However, no study has yet investigated whether both semantic and syntactic violations affect timing in the same way. Furthermore, it is unclear whether the effect of scene violations on timing is due to attentional or other cognitive accounts. Using an oddball paradigm and real-world scenes with or without semantic and syntactic violations, we conducted two experiments examining whether time dilation would be obtained in the presence of either type of scene violation and the role of attention in any such effect. Our results from Experiment 1 showed that time dilation indeed occurred in the presence of syntactic violations, while time compression was observed for semantic violations. In Experiment 2, we further investigated whether these estimations were driven by attentional accounts, by utilizing a contrast manipulation of the target objects. The results showed that an increased contrast led to duration overestimation for both semantic and syntactic oddballs. Together, our results indicate that scene violations differentially affect timing due to violation processing differences and, moreover, their effect on timing seems to be sensitive to attentional manipulations such as target contrast.
Using Adversarial Collaboration to Harness Collective Intelligence
There are many mysteries in the universe. One of the most significant, often considered the final frontier in science, is understanding how our subjective experience, or consciousness, emerges from the collective action of neurons in biological systems. While substantial progress has been made over the past decades, a unified and widely accepted explanation of the neural mechanisms underpinning consciousness remains elusive. The field is rife with theories that frequently provide contradictory explanations of the phenomenon. To accelerate progress, we have adopted a new model of science: adversarial collaboration in team science. Our goal is to test theories of consciousness in an adversarial setting. Adversarial collaboration offers a unique way to bolster creativity and rigor in scientific research by merging the expertise of teams with diverse viewpoints. Ideally, we aim to harness collective intelligence, embracing various perspectives, to expedite the uncovering of scientific truths. In this talk, I will highlight the effectiveness (and challenges) of this approach using selected case studies, showcasing its potential to counter biases, challenge traditional viewpoints, and foster innovative thought. Through the joint design of experiments, teams incorporate a competitive aspect, ensuring comprehensive exploration of problems. This method underscores the importance of structured conflict and diversity in propelling scientific advancement and innovation.
Piecing together the puzzle of emotional consciousness
Conscious emotional experiences are very rich in their nature, and can encompass anything ranging from the most intense panic when facing immediate threat, to the overwhelming love felt when meeting your newborn. It is then no surprise that capturing all aspects of emotional consciousness, such as intensity, valence, and bodily responses, into one theory has become the topic of much debate. Key questions in the field concern how we can actually measure emotions and which type of experiments can help us distill the neural correlates of emotional consciousness. In this talk I will give a brief overview of theories of emotional consciousness and where they disagree, after which I will dive into the evidence proposed to support these theories. Along the way I will discuss to what extent studying emotional consciousness is ‘special’ and will suggest several tools and experimental contrasts we have at our disposal to further our understanding of this intriguing topic.
Tracking subjects' strategies in behavioural choice experiments at trial resolution
Psychology and neuroscience are increasingly looking to fine-grained analyses of decision-making behaviour, seeking to characterise not just the variation between subjects but also a subject's variability across time. When analysing the behaviour of each subject in a choice task, we ideally want to know not only when the subject has learnt the correct choice rule but also what the subject tried while learning. I introduce a simple but effective Bayesian approach to inferring the probability of different choice strategies at trial resolution. This can be used both for inferring when subjects learn, by tracking the probability of the strategy matching the target rule, and for inferring subjects' use of exploratory strategies during learning. Applied to data from rodent and human decision tasks, we find learning occurs earlier and more often than estimated using classical approaches. Around both learning and changes in the rewarded rules, the exploratory strategies of win-stay and lose-shift, often considered complementary, are consistently used independently. Indeed, we find the use of lose-shift is strong evidence that animals have latently learnt the salient features of a new rewarded rule. Our approach can be extended to any discrete choice strategy, and its low computational cost is ideally suited for real-time analysis and closed-loop control.
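As a rough illustration of the kind of trial-resolution inference described above, here is a minimal sketch using a Beta-Bernoulli model with exponential evidence decay; the decay factor, prior, and function names are illustrative assumptions, not the speaker's exact algorithm.

```python
import numpy as np

def track_strategy(choices, strategy_predictions, gamma=0.9, prior=(1.0, 1.0)):
    """Trial-by-trial probability that behaviour follows a given strategy.

    choices              : observed choices (e.g. 0/1 per trial)
    strategy_predictions : the choice the strategy would make on each trial
    gamma                : evidence-decay factor (< 1 discounts older trials)
    Returns the posterior mean P(strategy consistent) after every trial
    (Beta-Bernoulli with exponential forgetting -- an illustrative stand-in).
    """
    alpha, beta = prior
    p = np.zeros(len(choices))
    for t, (c, s) in enumerate(zip(choices, strategy_predictions)):
        x = float(c == s)               # 1 if this trial is consistent with the strategy
        alpha = gamma * alpha + x       # decayed count of consistent trials
        beta = gamma * beta + (1 - x)   # decayed count of inconsistent trials
        p[t] = alpha / (alpha + beta)   # posterior mean
    return p

# toy example: random exploration for 50 trials, then the correct rule is followed
rng = np.random.default_rng(0)
rule = np.ones(100, dtype=int)                      # strategy "always choose 1"
choices = np.concatenate([rng.integers(0, 2, 50), np.ones(50, dtype=int)])
print(np.round(track_strategy(choices, rule)[[10, 49, 60, 99]], 2))
```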
Self as Processes (BACN Mid-career Prize Lecture 2023)
An understanding of the self helps explain not only human thoughts, feelings, and attitudes but also many aspects of everyday behaviour. This talk focuses on one viewpoint - self as processes. This viewpoint emphasizes the dynamics of the self, which best connect with the development of the self over time and with its realist orientation. We are combining psychological experiments and data mining to comprehend the stability and adaptability of the self across various populations. In this talk, I draw on evidence from experimental psychology, cognitive neuroscience, and machine learning approaches to demonstrate why and how self-association affects cognition and how it is modulated by various social experiences and situational factors.
Doubting the neurofeedback double-blind: do participants have residual awareness of experimental purposes in neurofeedback studies?
Neurofeedback provides a feedback display which is linked with on-going brain activity and thus allows self-regulation of neural activity in specific brain regions associated with certain cognitive functions; it is considered a promising tool for clinical interventions. Recent reviews of neurofeedback have stressed the importance of applying a “double-blind” experimental design in which, critically, the patient is unaware of the neurofeedback treatment condition. An important question then becomes: is double-blind even possible? Or are subjects aware of the purposes of the neurofeedback experiment? This question is related to the issue of how we assess awareness, or the absence of awareness, of certain information in human subjects. Fortunately, methods have been developed which employ neurofeedback implicitly, where the subject is claimed to have no awareness of experimental purposes when performing the neurofeedback. Implicit neurofeedback is intriguing and controversial because it runs counter to the first neurofeedback study, which showed a link between awareness of being in a certain brain state and control of the neurofeedback-derived brain activity. Claiming that humans are unaware of a specific type of mental content is a notoriously difficult endeavor. For instance, what were long held to be wholly unconscious phenomena, such as dreams or subliminal perception, have been overturned by more sensitive measures which show that degrees of awareness can be detected. In this talk, I will critically examine the claim that we can know for certain that a neurofeedback experiment was performed in an unconscious manner. I will present evidence that in certain neurofeedback experiments, such as manipulations of attention, participants display residual degrees of awareness of the experimental contingencies used to alter their cognition.
Studies on the role of relevance appraisal in affect elicitation
A fundamental question in the affective sciences is how the human mind decides whether, and at what intensity, to elicit an affective response. Appraisal theories assume that preceding the affective response, there is an evaluation stage in which dimensions of an event are appraised. Common to most appraisal theories is the assumption that the evaluation phase involves the assessment of the stimulus’ relevance to the perceiver’s well-being. In this talk, I first discuss conceptual and methodological challenges in investigating relevance appraisal. Next, I present two lines of experiments that ask how the human mind uses information about objective and subjective probabilities in the decision about the intensity of the emotional response, and how these decisions are affected by the valence of the event. The potential contribution of the results to appraisal theory is discussed.
Learning to Express Reward Prediction Error-like Dopaminergic Activity Requires Plastic Representations of Time
The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) reinforcement learning. The TD framework predicts that some neuronal elements should represent the reward prediction error (RPE), which means they signal the difference between the expected future rewards and the actual rewards. The prominence of the TD theory arises from the observation that firing properties of dopaminergic neurons in the ventral tegmental area appear similar to those of RPE model-neurons in TD learning. Previous implementations of TD learning assume a fixed temporal basis for each stimulus that might eventually predict a reward. Here we show that such a fixed temporal basis is implausible and that certain predictions of TD learning are inconsistent with experiments. We propose instead an alternative theoretical framework, coined FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, feature specific representations of time are learned, allowing for neural representations of stimuli to adjust their timing and relation to rewards in an online manner. In FLEX dopamine acts as an instructive signal which helps build temporal models of the environment. FLEX is a general theoretical framework that has many possible biophysical implementations. In order to show that FLEX is a feasible approach, we present a specific biophysically plausible model which implements the principles of FLEX. We show that this implementation can account for various reinforcement learning paradigms, and that its results and predictions are consistent with a preponderance of both existing and reanalyzed experimental data.
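For reference, the fixed-temporal-basis assumption that FLEX challenges can be made concrete with a minimal TD(0) sketch in which every post-cue time step has its own basis element; all parameters below are illustrative.

```python
import numpy as np

T, reward_t = 20, 15        # time steps per trial; cue at t = 0, reward at t = 15
alpha, gamma = 0.1, 0.98
w = np.zeros(T + 1)         # one value weight per post-cue time step (the fixed basis)

for trial in range(500):
    for t in range(T):
        r = 1.0 if t == reward_t else 0.0
        delta = r + gamma * w[t + 1] - w[t]   # reward prediction error (RPE)
        w[t] += alpha * delta                 # update the fixed-basis value estimate

# With training, value back-propagates from the reward toward the cue and the RPE at
# the (now predicted) reward time shrinks -- the basic TD behaviour that, with an
# unpredicted cue, yields the classic dopaminergic RPE signature.
print(np.round(w, 2))
```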
Internal representation of musical rhythm: transformation from sound to periodic beat
When listening to music, humans readily perceive and move along with a periodic beat. Critically, perception of a periodic beat is commonly elicited by rhythmic stimuli with physical features arranged in a way that is not strictly periodic. Hence, beat perception must capitalize on mechanisms that transform stimulus features into a temporally recurrent format with emphasized beat periodicity. Here, I will present a line of work that aims to clarify the nature and neural basis of this transformation. In these studies, electrophysiological activity was recorded as participants listened to rhythms known to induce perception of a consistent beat across healthy Western adults. The results show that the human brain selectively emphasizes beat representation when it is not acoustically prominent in the stimulus, and this transformation (i) can be captured non-invasively using surface EEG in adult participants, (ii) is already in place in 5- to 6-month-old infants, and (iii) cannot be fully explained by subcortical auditory nonlinearities. Moreover, as revealed by human intracerebral recordings, a prominent beat representation emerges already in the primary auditory cortex. Finally, electrophysiological recordings from the auditory cortex of a rhesus monkey show a significant enhancement of beat periodicities in this area, similar to humans. Taken together, these findings indicate an early, general auditory cortical stage of processing by which rhythmic inputs are rendered more temporally recurrent than they are in reality. Already present in non-human primates and human infants, this "periodized" default format could then be shaped by higher-level associative sensory-motor areas and guide movement in individuals with strongly coupled auditory and motor systems. Together, this highlights the multiplicity of neural processes supporting coordinated musical behaviors widely observed across human cultures.
The Effects of Movement Parameters on Time Perception
Mobile organisms must be capable of deciding both where and when to move in order to keep up with a changing environment; therefore, a strong sense of time is necessary, otherwise we would fail in many of our movement goals. Despite this intrinsic link between movement and timing, only recently has research begun to investigate the interaction. Two primary effects have been observed: movements biasing time estimates (i.e., affecting accuracy) and making time estimates more precise. The goal of this presentation is to review this literature, discuss a Bayesian cue combination framework to explain these effects, and discuss the experiments I have conducted to test the framework. The experiments herein include: a motor timing task comparing the effects of movement vs non-movement with and without feedback (Exp. 1A & 1B), a transcranial magnetic stimulation (TMS) study on the role of the supplementary motor area (SMA) in transforming temporal information (Exp. 2), and a perceptual timing task investigating the effect of noisy movement on time perception with both visual and auditory modalities (Exp. 3A & 3B). Together, the results of these studies support the Bayesian cue combination framework, in that: movement improves the precision of time perception not only in perceptual timing tasks but also in motor timing tasks (Exp. 1A & 1B), stimulating the SMA appears to disrupt the transformation of temporal information (Exp. 2), and when movement becomes unreliable or noisy there is no longer an improvement in the precision of time perception (Exp. 3A & 3B). Although there is support for the proposed framework, more studies (i.e., fMRI, TMS, EEG, etc.) need to be conducted in order to better understand where and how this may be instantiated in the brain; however, this work provides a starting point for better understanding the intrinsic connection between time and movement.
The Geometry of Decision-Making
Running, swimming, or flying through the world, animals are constantly making decisions while on the move—decisions that allow them to choose where to eat, where to hide, and with whom to associate. Despite this, most studies have considered only the outcome of, and time taken to make, decisions. Motion is, however, crucial in terms of how space is represented by organisms during spatial decision-making. Employing a range of new technologies, including automated tracking, computational reconstruction of sensory information, and immersive ‘holographic’ virtual reality (VR) for animals, in experiments with fruit flies, locusts and zebrafish (representing aerial, terrestrial and aquatic locomotion, respectively), I will demonstrate that this time-varying representation results in the emergence of new and fundamental geometric principles that considerably impact decision-making. Specifically, we find that the brain spontaneously reduces multi-choice decisions into a series of abrupt (‘critical’) binary decisions in space-time, a process that repeats until only one option—the one ultimately selected by the individual—remains. Due to the critical nature of these transitions (and the corresponding increase in ‘susceptibility’) even noisy brains are extremely sensitive to very small differences between remaining options (e.g., a very small difference in neuronal activity being in “favor” of one option) near these locations in space-time. This mechanism facilitates highly effective decision-making, and is shown to be robust both to the number of options available, and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.
Epigenomic (re)programming of the brain and behavior by ovarian hormones
Rhythmic changes in sex hormone levels across the ovarian cycle exert powerful effects on the brain and behavior, and confer female-specific risks for neuropsychiatric conditions. In this talk, Dr. Kundakovic will discuss the role of fluctuating ovarian hormones as a critical biological factor contributing to the increased depression and anxiety risk in women. Cycling ovarian hormones drive brain and behavioral plasticity in both humans and rodents, and the talk will focus on animal studies in Dr. Kundakovic’s lab that are revealing the molecular and receptor mechanisms that underlie this female-specific brain dynamic. She will highlight the lab’s discovery of sex hormone-driven epigenetic mechanisms, namely chromatin accessibility and 3D genome changes, that dynamically regulate neuronal gene expression and brain plasticity but may also prime the (epi)genome for psychopathology. She will then describe functional studies, including hormone replacement experiments and the overexpression of an estrous cycle stage-dependent transcription factor, which provide the causal link(s) between hormone-driven chromatin dynamics and sex-specific anxiety behavior. Dr. Kundakovic will also highlight an unconventional role that chromatin dynamics may have in regulating neuronal function across the ovarian cycle, including in sex hormone-driven X chromosome plasticity and hormonally-induced epigenetic priming. In summary, these studies provide a molecular framework to understand ovarian hormone-driven brain plasticity and increased female risk for anxiety and depression, opening new avenues for sex- and gender-informed treatments for brain disorders.
Nature over Nurture: Functional neuronal circuits emerge in the absence of developmental activity
During development, the complex neuronal circuitry of the brain arises from limited information contained in the genome. After the genetic code instructs the birth of neurons, the emergence of brain regions, and the formation of axon tracts, it is believed that neuronal activity plays a critical role in shaping circuits for behavior. Current AI technologies are modeled after the same principle: connections in an initial weight matrix are pruned and strengthened by activity-dependent signals until the network can sufficiently generalize a set of inputs into outputs. Here, we challenge these learning-dominated assumptions by quantifying the contribution of neuronal activity to the development of visually guided swimming behavior in larval zebrafish. Intriguingly, dark-rearing zebrafish revealed that visual experience has no effect on the emergence of the optomotor response (OMR). We then raised animals under conditions where neuronal activity was pharmacologically silenced from organogenesis onward using the sodium-channel blocker tricaine. Strikingly, after washout of the anesthetic, animals performed swim bouts and responded to visual stimuli with 75% accuracy in the OMR paradigm. After shorter periods of silenced activity OMR performance stayed above 90% accuracy, calling into question the importance and impact of classical critical periods for visual development. Detailed quantification of the emergence of functional circuit properties by brain-wide imaging experiments confirmed that neuronal circuits came ‘online’ fully tuned and without the requirement for activity-dependent plasticity. Thus, contrary to what you learned on your mother's knee, complex sensory guided behaviors can be wired up innately by activity-independent developmental mechanisms.
Verb metaphors are processed as analogies
Metaphor is a pervasive phenomenon in language and cognition. To date, the vast majority of psycholinguistic research on metaphor has focused on noun-noun metaphors of the form An X is a Y (e.g., My job is a jail). Yet there is evidence that verb metaphor (e.g., I sailed through my exams) is more common. Despite this, comparatively little work has examined how verb metaphors are processed. In this talk, I will propose a novel account for verb metaphor comprehension: verb metaphors are understood in the same way that analogies are—as comparisons processed via structure-mapping. I will discuss the predictions that arise from applying the analogical framework to verb metaphor and present a series of experiments showing that verb metaphoric extension is consistent with those predictions.
Orientation selectivity in rodent V1: theory vs experiments
Neurons in the primary visual cortex (V1) of rodents are selective to the orientation of the stimulus, as in other mammals such as cats and monkeys. However, in contrast with those species, their neurons display a very different type of spatial organization. Instead of orientation maps they are organized in a “salt and pepper” pattern, where adjacent neurons have completely different preferred orientations. This structure has motivated both experimental and theoretical research with the objective of determining which aspects of the connectivity patterns and intrinsic neuronal responses can explain the observed behavior. These analyses also have to take into account that the neurons of the thalamus that send their outputs to the cortex have more complex responses in rodents than in higher mammals, displaying, for instance, a significant degree of orientation selectivity. In this talk we present work showing that a random feed-forward connectivity pattern, in which the probability of having a connection between a cortical neuron and a thalamic neuron depends only on the relative distance between them, is enough to explain several aspects of the complex phenomenology found in these systems. Moreover, this approach allows us to evaluate analytically the statistical structure of the thalamic input to the cortex. We find that V1 neurons are orientation selective but the preferred orientation depends on the spatial frequency of the stimulus. We disentangle the effect of the non-circular thalamic receptive fields, finding that they control the selectivity of the time-averaged thalamic input, but not the selectivity of the time-locked component. We also compare with experiments that use reverse correlation techniques, showing that ON and OFF components of the aggregate thalamic input are spatially segregated in the cortex.
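A toy numerical illustration of the core idea, distance-dependent random pooling of thalamic inputs yielding orientation-tuned cortical drive, is sketched below; it treats each thalamic cell as a point sampler and is not the analytical treatment presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.arange(-20, 21)                        # thalamic RF centres on a square grid (a.u.)
X, Y = np.meshgrid(grid, grid)
centres = np.c_[X.ravel(), Y.ravel()].astype(float)

# distance-dependent probability of a thalamocortical connection onto one cortical cell
sigma_conn = 5.0
p_conn = 0.2 * np.exp(-np.sum(centres**2, axis=1) / (2 * sigma_conn**2))
connected = rng.random(len(centres)) < p_conn

def thalamic_response(theta, sf, phase, pos):
    """Grating response of each thalamic cell, treated as a point sampler for simplicity
    (a circular centre-surround RF would mainly rescale this, not orient it)."""
    kx, ky = sf * np.cos(theta), sf * np.sin(theta)
    return np.cos(kx * pos[:, 0] + ky * pos[:, 1] + phase)

# modulation of the summed (time-locked) thalamic drive across grating orientations
thetas = np.linspace(0, np.pi, 12, endpoint=False)
sf = 2 * np.pi / 15.0                            # spatial frequency (wavelength of 15 grid units)
tuning = []
for th in thetas:
    phases = np.linspace(0, 2 * np.pi, 16, endpoint=False)
    resp = [np.sum(thalamic_response(th, sf, ph, centres)[connected]) for ph in phases]
    tuning.append(np.max(resp) - np.min(resp))   # phase modulation of the aggregate input
tuning = np.array(tuning)
osi = np.abs(np.sum(tuning * np.exp(2j * thetas))) / np.sum(tuning)
print(f"orientation selectivity of the randomly pooled input: {osi:.2f}")
```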
Mechanisms of relational structure mapping across analogy tasks
Following the seminal structure-mapping theory by Dedre Gentner, the process of mapping the corresponding structures of relations defining two analogs has been understood as a key component of analogy making. However, in recent years, and not without merit, semantic, pragmatic, and perceptual aspects of analogy mapping have attracted the primary attention of analogy researchers. For almost a decade, our team has been re-focusing on relational structure mapping, investigating its potential mechanisms across various analogy tasks, both abstract (semantically-lean) and more concrete (semantically-rich), using diverse methods (behavioral, correlational, eye-tracking, EEG). I will present an overview of our main findings. They suggest that structure mapping (1) consists of an incremental construction of the ultimate mental representation, (2) strongly depends on working memory resources and reasoning ability, (3) even if as little as a single trivial relation needs to be represented mentally. The effective mapping (4) is related to the slowest brain rhythm – the delta band (around 2-3 Hz) – suggesting its highly integrative nature. Finally, we have developed a new task – Graph Mapping – which involves pure mapping of two explicit relational structures. This task allows for precise investigation and manipulation of the mapping process in experiments, and is one of the best proxies of individual differences in reasoning ability. Structure mapping is as crucial to analogy as Gentner advocated, and perhaps it is crucial to cognition in general.
Cortical seizure mechanisms: insights from calcium, glutamate and GABA imaging
Focal neocortical epilepsy is associated with intermittent brief population discharges (interictal spikes), which resemble the sentinel spikes that often occur at the onset of seizures. Why interictal spikes self-terminate whilst seizures persist and propagate is incompletely understood, but is likely to relate to the intermittent collapse of feed-forward GABAergic inhibition. Inhibition could fail through multiple mechanisms, including (i) an attenuation or even reversal of the driving force for chloride in postsynaptic neurons because of intense activation of GABAA receptors, (ii) an elevation of potassium secondary to chloride influx leading to depolarization of neurons, or (iii) insufficient GABA release from interneurons. I shall describe the results of experiments using fluorescence imaging of calcium, glutamate or GABA in awake rodent models of neocortical epileptiform activity. Interictal spikes were accompanied by brief glutamate transients which were maximal at the initiation site and rapidly propagated centrifugally. GABA transients lasted longer than glutamate transients and were maximal ~1.5 mm from the focus. Prior to seizure initiation GABA transients were attenuated, whilst glutamate transients increased, consistent with a progressive failure of local inhibitory restraint. As seizures increased in frequency, there was a gradual increase in the spatial extent of the glutamate transients associated with interictal spikes. Neurotransmitter imaging thus reveals a progressive collapse of an annulus of feed-forward GABA release, allowing runaway recruitment of excitatory neurons as a fundamental mechanism underlying the escape of seizures from local inhibitory restraint.
Extracting computational mechanisms from neural data using low-rank RNNs
An influential theory in systems neuroscience suggests that brain function can be understood through low-dimensional dynamics [Vyas et al 2020]. However, a challenge in this framework is that a single computational task may involve a range of dynamic processes. To understand which processes are at play in the brain, it is important to use data on neural activity to constrain models. In this study, we present a method for extracting low-dimensional dynamics from data using low-rank recurrent neural networks (lrRNNs), a highly expressive and understandable type of model [Mastrogiuseppe & Ostojic 2018, Dubreuil, Valente et al. 2022]. We first test our approach using synthetic data created from full-rank RNNs that have been trained on various brain tasks. We find that lrRNNs fitted to neural activity allow us to identify the collective computational processes and make new predictions for inactivations in the original RNNs. We then apply our method to data recorded from the prefrontal cortex of primates during a context-dependent decision-making task. Our approach enables us to assign computational roles to the different latent variables and provides a mechanistic model of the recorded dynamics, which can be used to perform in silico experiments like inactivations and provide testable predictions.
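A minimal rank-1 simulation of the model class is sketched below to make the setting concrete; the parameters are arbitrary, and the actual lrRNN approach fits the connectivity vectors and input weights to recorded activity rather than drawing them at random.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, tau = 500, 0.1, 1.0                 # units, integration step, time constant
m = rng.normal(size=N)                     # left connectivity vector
n = rng.normal(size=N)                     # right connectivity vector
J = np.outer(m, n) / N                     # rank-1 recurrent connectivity
w_in = rng.normal(size=N)                  # input weights

x = np.zeros(N)
kappa = []                                 # latent variable along the rank-1 direction
for t in range(2000):
    u = 1.0 if 500 <= t < 700 else 0.0     # brief input pulse
    r = np.tanh(x)
    x = x + dt / tau * (-x + J @ r + w_in * u)
    kappa.append(n @ r / N)

# The dynamics live on a low-dimensional manifold spanned by m (plus the input direction);
# fitting m, n and w_in to recorded activity is the core of the lrRNN approach.
print(f"peak latent activation: {max(kappa):.3f}")
```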
Maths, AI and Neuroscience Meeting Stockholm
It has become abundantly clear that to understand brain function and to develop artificial general intelligence there should be close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent.
Motor contribution to auditory temporal predictions
Temporal predictions are fundamental instruments for facilitating sensory selection, allowing humans to exploit regularities in the world. Recent evidence indicates that the motor system instantiates predictive timing mechanisms, helping to synchronize temporal fluctuations of attention with the timing of events in a task-relevant stream, thus facilitating sensory selection. Accordingly, in the auditory domain auditory-motor interactions are observed during perception of speech and music, two temporally structured sensory streams. I will present a behavioral and neurophysiological account for this theory and will detail the parameters governing the emergence of this auditory-motor coupling, through a set of behavioral and magnetoencephalography (MEG) experiments.
Can a single neuron solve MNIST? Neural computation of machine learning tasks emerges from the interaction of dendritic properties
Physiological experiments have highlighted how the dendrites of biological neurons can nonlinearly process distributed synaptic inputs. However, it is unclear how qualitative aspects of a dendritic tree, such as its branched morphology, its repetition of presynaptic inputs, voltage-gated ion channels, electrical properties and complex synapses, determine neural computation beyond this apparent nonlinearity. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network and it has been shown that such an architecture could be computationally strong, we do not know if that computational strength is preserved under these qualitative biological constraints. Here we simulate multi-layer neural network models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by most of these constraints and may synergistically benefit from all of them combined. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks through the emergent capabilities afforded by their properties.
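To make the "dendritic tree as a multi-layer network" picture concrete, here is a minimal forward-pass sketch in which each branch unit sees only a local patch of the input and branch outputs are summed at the soma; weights are random and untrained, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "single neuron" modelled as a sparse, tree-structured two-layer network:
# each branch unit receives a small local patch of the input (a synaptic cluster on
# one dendritic subtree), and branch outputs are summed nonlinearly at the soma.
n_pixels, patch, n_branches = 28 * 28, 49, 16
masks = np.zeros((n_branches, n_pixels))
for b in range(n_branches):
    idx = rng.choice(n_pixels, size=patch, replace=False)   # inputs routed to branch b
    masks[b, idx] = 1.0

W_branch = rng.normal(scale=0.1, size=(n_branches, n_pixels)) * masks  # sparse dendritic weights
w_soma = rng.normal(scale=0.1, size=n_branches)                        # branch-to-soma weights

def neuron_output(x):
    branch_act = np.tanh(W_branch @ x)     # nonlinear dendritic subunits
    return np.tanh(w_soma @ branch_act)    # somatic integration

x = rng.random(n_pixels)                   # stand-in for one flattened MNIST image
print(f"somatic output: {neuron_output(x):.3f}")
```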
The role of population structure in computations through neural dynamics
Neural computations are currently investigated using two separate approaches: sorting neurons into functional subpopulations or examining the low-dimensional dynamics of collective activity. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from networks trained on neuroscience tasks, here we show that the dimensionality of the dynamics and subpopulation structure play fundamentally complementary roles. Although various tasks can be implemented by increasing the dimensionality in networks with fully random population structure, flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple subpopulations. Our analyses revealed that such a subpopulation structure enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics. Our results lead to task-specific predictions for the structure of neural selectivity, for inactivation experiments and for the implication of different neurons in multi-tasking.
Pitch and Time Interact in Auditory Perception
Research into pitch perception and time perception has typically treated the two as independent processes. However, previous studies of music and speech perception have suggested that pitch and timing information may be processed in an integrated manner, such that the pitch of an auditory stimulus can influence a person’s perception, expectation, and memory of its duration and tempo. Typically, higher-pitched sounds are perceived as faster and longer in duration than lower-pitched sounds with identical timing. We conducted a series of experiments to better understand the limits of this pitch-time integrality. Across several experiments, we tested whether the higher-equals-faster illusion generalizes across the broader frequency range of human hearing by asking participants to compare the tempo of a repeating tone played in one of six octaves to a metronomic standard. When participants heard tones from all six octaves, we consistently found an inverted U-shaped effect of the tone’s pitch height, such that perceived tempo peaked between A4 (440 Hz) and A5 (880 Hz) and decreased at lower and higher octaves. However, we found that the decrease in perceived tempo at extremely high octaves could be abolished by exposing participants to high-pitched tones only, suggesting that pitch-induced timing biases are context sensitive. We additionally tested how the timing of an auditory stimulus influences the perception of its pitch, using a pitch discrimination task in which probe tones occurred early, late, or on the beat within a rhythmic context. Probe timing strongly biased participants to rate later tones as lower in pitch than earlier tones. Together, these results suggest that pitch and time exert a bidirectional influence on one another, providing evidence for integrated processing of pitch and timing information in auditory perception. Identifying the mechanisms behind this pitch-time interaction will be critical for integrating current models of pitch and tempo processing.
Trial by trial predictions of subjective time from human brain activity
Our perception of time isn’t like a clock; it varies depending on other aspects of experience, such as what we see and hear in that moment. However, in everyday life, the properties of these simple features can change frequently, presenting a challenge to understanding real-world time perception based on simple lab experiments. We developed a computational model of human time perception based on tracking changes in neural activity across brain regions involved in sensory processing, using fMRI. By measuring changes in brain activity patterns across these regions, our approach accommodates the different and changing feature combinations present in natural scenarios, such as walking on a busy street. Our model reproduces people’s duration reports for natural videos (up to almost half a minute long) and, most importantly, predicts whether a person reports a scene as relatively shorter or longer, capturing the biases in time perception that reflect how natural experience of time deviates from clock time.
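A toy sketch of the general idea, accumulating salient changes in activity patterns as a proxy for duration, is given below; the threshold, decay, and synthetic inputs are illustrative stand-ins, not the fitted fMRI-based model described in the talk.

```python
import numpy as np

def accumulated_salient_change(features, threshold=8.0, decay=0.99):
    """Toy duration estimator: count moments where the change in an activity pattern
    exceeds an adapting threshold. An illustrative stand-in for the fMRI-based model."""
    count, thr = 0, threshold
    for prev, cur in zip(features[:-1], features[1:]):
        change = np.linalg.norm(cur - prev)   # frame-to-frame change in the activity pattern
        if change > thr:
            count += 1                        # a salient change is accumulated
            thr = threshold                   # the threshold resets after an event
        else:
            thr *= decay                      # and slowly adapts otherwise
    return count                              # mapped (e.g. by regression) onto reported seconds

rng = np.random.default_rng(0)
busy = rng.normal(scale=1.2, size=(300, 50))  # rapidly changing activity: "feels longer"
calm = rng.normal(scale=0.4, size=(300, 50))  # slowly changing activity: "feels shorter"
print(accumulated_salient_change(busy), accumulated_salient_change(calm))
```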
Real-world scene perception and search from foveal to peripheral vision
A high-resolution central fovea is a prominent design feature of human vision. But how important is the fovea for information processing and gaze guidance in everyday visual-cognitive tasks? Following on from classic findings for sentence reading, I will present key results from a series of eye-tracking experiments in which observers had to search for a target object within static or dynamic images of real-world scenes. Gaze-contingent scotomas were used to selectively deny information processing in the fovea, parafovea, or periphery. Overall, the results suggest that foveal vision is less important and peripheral vision is more important for scene perception and search than previously thought. The importance of foveal vision was found to depend on the specific requirements of the task. Moreover, the data support a central-peripheral dichotomy in which peripheral vision selects and central vision recognizes.
Building System Models of Brain-Like Visual Intelligence with Brain-Score
Research in the brain and cognitive sciences attempts to uncover the neural mechanisms underlying intelligent behavior in domains such as vision. Due to the complexities of brain processing, studies necessarily had to start with a narrow scope of experimental investigation and computational modeling. I argue that it is time for our field to take the next step: build system models that capture a range of visual intelligence behaviors along with the underlying neural mechanisms. To make progress on system models, we propose integrative benchmarking – integrating experimental results from many laboratories into suites of benchmarks that guide and constrain those models at multiple stages and scales. We showcase this approach by developing Brain-Score benchmark suites for neural (spike rates) and behavioral experiments in the primate visual ventral stream. By systematically evaluating a wide variety of model candidates, we not only identify models beginning to match a range of brain data (~50% explained variance), but also discover that models’ brain scores are predicted by their object categorization performance (up to 70% ImageNet accuracy). Using the integrative benchmarks, we develop improved state-of-the-art system models that more closely match shallow recurrent neuroanatomy and early visual processing to predict primate temporal processing and become more robust, and require fewer supervised synaptic updates. Taken together, these integrative benchmarks and system models are first steps to modeling the complexities of brain processing in an entire domain of intelligence.
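A simplified stand-in for a single neural benchmark of this kind is sketched below: cross-validated regression from model activations to recorded responses, scored by explained variance. The real Brain-Score suites additionally include noise ceilings, multiple regions, and behavioral benchmarks.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def neural_benchmark_score(model_features, neural_responses, n_splits=5):
    """Cross-validated explained variance of recorded responses predicted from model
    activations -- a simplified, un-ceilinged stand-in for a neural benchmark."""
    scores = []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in kf.split(model_features):
        reg = Ridge(alpha=1.0).fit(model_features[train], neural_responses[train])
        pred = reg.predict(model_features[test])
        resid = np.var(neural_responses[test] - pred, axis=0)
        total = np.var(neural_responses[test], axis=0)
        scores.append(np.mean(1.0 - resid / total))   # explained variance per unit, averaged
    return float(np.mean(scores))

# toy data: 200 stimuli, 512 model features, 50 recorded units
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 512))
units = feats @ rng.normal(scale=0.05, size=(512, 50)) + rng.normal(scale=0.5, size=(200, 50))
print(f"benchmark score: {neural_benchmark_score(feats, units):.2f}")
```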
Time as its own representation? Exploring a link between timing of cognition and time perception
The way we represent and perceive time has crucial implications for studying temporality in conscious experience. Contrasting positions posit that temporal information is separately abstracted out like any other perceptual property, or that time is represented through representations having temporal properties themselves. To add to this debate, we investigated alterations in felt time in conditions where only conscious visual experience is altered while a bistable figure remains physically unchanged. In this talk, I will discuss two studies that we have done in relation to answering this question. In study 1, we investigated whether perceptual switches in fixed intervals altered felt time. In three experiments we showed that a break in visual experience (via a perceptual switch) also leads to a break in felt time. In study 2, we are currently looking at figure-ground perception in ambiguous displays. Here, in experiment 1 we show that differences in flicker frequencies on ambiguous regions can induce figure-ground segregation. To see if a reverse complementarity exists for felt time, we ask participants to view ambiguous regions as figure/ground and show that they have different temporal resolutions for the same region based on whether it is seen as figure or background. Overall, the two studies provide evidence for temporal mirroring and isomorphism in visual experience, arguing for a link between the timing of experience and time perception.
Learning static and dynamic mappings with local self-supervised plasticity
Animals exhibit remarkable learning capabilities with little direct supervision. Likewise, self-supervised learning is an emergent paradigm in artificial intelligence, closing the performance gap to supervised learning. In the context of biology, self-supervised learning corresponds to a setting where one sense or specific stimulus may serve as a supervisory signal for another. After learning, the latter can be used to predict the former. On the implementation level, it has been demonstrated that such predictive learning can occur at the single-neuron level, in compartmentalized neurons that separate and associate information from different streams. We demonstrate the power of such self-supervised learning over unsupervised (Hebb-like) learning rules, which depend heavily on stimulus statistics, in two examples. First, in the context of animal navigation, predictive learning can associate internal self-motion information, always available to the animal, with external visual landmark information, leading to accurate path integration in the dark. We focus on the well-characterized fly head direction system and show that our setting learns a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and in which the network remaps to integrate with different gains. Second, we show that incorporating global gating by reward prediction errors allows the same setting to learn conditioning at the neuronal level with mixed selectivity. At its core, conditioning entails associating a neural activity pattern induced by an unconditioned stimulus (US) with the pattern arising in response to a conditioned stimulus (CS). Solving the generic problem of pattern-to-pattern associations naturally leads to emergent cognitive phenomena like blocking, overshadowing, saliency effects, extinction, interstimulus interval effects, etc. Surprisingly, we find that the same network offers a reductionist mechanism for causal inference by resolving the post hoc, ergo propter hoc fallacy.
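A minimal delta-rule sketch of compartmentalized predictive learning is given below: a dendritic compartment learns, with a purely local update, to predict a somatic signal driven by another stream. The linear setup and variable names are illustrative, not the model presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_steps, eta = 50, 5000, 0.01
w_dend = np.zeros(n_in)                 # plastic dendritic weights (stream to be learned)
proj = rng.normal(size=n_in)            # fixed mapping generating the "teaching" somatic signal

for t in range(n_steps):
    x = rng.normal(size=n_in)           # e.g. visual-landmark input arriving at the dendrite
    soma = proj @ x                     # e.g. somatic activity driven by self-motion input
    dend = w_dend @ x                   # dendritic prediction of the somatic signal
    w_dend += eta * (soma - dend) * x   # local, self-supervised (predictive) update

# after learning, the dendritic stream alone predicts the somatic signal
print(f"correlation between learned and target weights: {np.corrcoef(proj, w_dend)[0, 1]:.3f}")
```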
A model of colour appearance based on efficient coding of natural images
An object’s colour, brightness and pattern are all influenced by its surroundings, and a number of visual phenomena and “illusions” have been discovered that highlight these often dramatic effects. Explanations for these phenomena range from low-level neural mechanisms to high-level processes that incorporate contextual information or prior knowledge. Importantly, few of these phenomena can currently be accounted for when measuring an object’s perceived colour. Here we ask to what extent colour appearance is predicted by a model based on the principle of coding efficiency. The model assumes that the image is encoded by noisy spatio-chromatic filters at one octave separations, which are either circularly symmetrical or oriented. Each spatial band’s lower threshold is set by the contrast sensitivity function, and the dynamic range of the band is a fixed multiple of this threshold, above which the response saturates. Filter outputs are then reweighted to give equal power in each channel for natural images. We demonstrate that the model fits human behavioural performance in psychophysics experiments, and also primate retinal ganglion responses. Next we systematically test the model’s ability to qualitatively predict over 35 brightness and colour phenomena, with almost complete success. This implies that contrary to high-level processing explanations, much of colour appearance is potentially attributable to simple mechanisms evolved for efficient coding of natural images, and is a basis for modelling the vision of humans and other animals.
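A schematic sketch of the encoding pipeline for a single channel is given below; the band construction, threshold values, and dynamic range are illustrative placeholders rather than the fitted model, and the equal-power reweighting step is only indicated in a comment.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def encode_channel(img, thresholds=(0.05, 0.03, 0.02, 0.01), dyn_range=5.0):
    """Sketch of the banded, thresholded, saturating code for one luminance/chromatic
    channel. Band construction, thresholds and dynamic range are placeholders."""
    responses, low = [], img.astype(float)
    for thr in thresholds:
        coarser = gaussian_filter(low, sigma=2.0)           # next (coarser) octave
        band = low - coarser                                # band-pass filter output
        drive = (np.abs(band) - thr) / (thr * dyn_range)    # lower threshold set by the CSF;
        r = np.sign(band) * np.clip(drive, 0.0, 1.0)        # response saturates a fixed
        responses.append(r)                                 # multiple above that threshold
        low = coarser                                       # recurse on the low-pass residual
    # a final reweighting step (omitted here) would equalise power across bands for natural images
    return responses

img = np.random.default_rng(0).random((128, 128))           # stand-in for one opponent channel
print([band.shape for band in encode_channel(img)])
```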
Binocular combination of light
The brain combines signals across the eyes. This process is well-characterized for the perceptual anatomical pathway through V1 that primarily codes contrast, where interocular normalization ensures that responses are approximately equal for monocular and binocular stimulation. But we have much less understanding of how luminance is combined binocularly, both in the cortex and in subcortical structures that govern pupil diameter. Here I will describe the results of experiments using a novel combined EEG and pupillometry paradigm to simultaneously index binocular combination of luminance flicker in parallel pathways. The results show evidence of a more linear process than for spatial contrast, that may reflect different operational constraints in distinct anatomical pathways.
Analogical retrieval across disparate task domains
Previous experiments have shown that a comparison of two written narratives highlights their shared relational structure, which in turn facilitates the retrieval of analogous narratives from the past (e.g., Gentner, Loewenstein, Thompson, & Forbus, 2009). However, analogical retrieval occurs across domains that appear more conceptually distant than merely different narratives, and the deepest analogies use matches in higher-order relational structure. The present study investigated whether comparison can facilitate analogical retrieval of higher-order relations across written narratives and abstract symbolic problems. Participants read stories which became retrieval targets after a delay, cued by either analogous stories or letter-strings. In Experiment 1 we replicated Gentner et al. who used narrative retrieval cues, and also found preliminary evidence for retrieval between narrative and symbolic domains. In Experiment 2 we found clear evidence that a comparison of analogous letter-string problems facilitated the retrieval of source stories with analogous higher-order relations. Experiment 3 replicated the retrieval results of Experiment 2 but with a longer delay between encoding and recall, and a greater number of distractor source stories. These experiments offer support for the schema induction account of analogical retrieval (Gentner et al., 2009) and show that the schemas abstracted from comparison of narratives can be transferred to non-semantic symbolic domains.
Flexible codes and loci of visual working memory
Neural correlates of visual working memory have been found in early visual, parietal, and prefrontal regions. These findings have spurred fruitful debate over how and where in the brain memories might be represented. Here, I will present data from multiple experiments to demonstrate how a focus on behavioral requirements can unveil a more comprehensive understanding of the visual working memory system. Specifically, items in working memory must be maintained in a highly robust manner, resilient to interference. At the same time, storage mechanisms must preserve a high degree of flexibility in case behavioral goals change. I will explore several examples in which visual memory representations are shown to undergo transformations, and even to shift their cortical locus alongside their coding format, depending on the specifics of the task.
A Game Theoretical Framework for Quantifying Causes in Neural Networks
Which nodes in a brain network causally influence one another, and how do such interactions utilize the underlying structural connectivity? One of the fundamental goals of neuroscience is to pinpoint such causal relations. Conventionally, these relationships are established by manipulating a node while tracking changes in another node. A causal role is then assigned to the first node if the intervention leads to a significant change in the state of the tracked node. In this presentation, I use a series of intuitive thought experiments to demonstrate the methodological shortcomings of the current ‘causation via manipulation’ framework. A node might causally influence another node, but by how much, and through which mechanistic interactions? Establishing a causal relationship, however reliable, therefore does not by itself provide a proper causal understanding of the system, because there is often a wide range of causal influences that need to be adequately decomposed. To do so, I introduce a game-theoretical framework called Multi-perturbation Shapley value Analysis (MSA). I then present our work in which we applied MSA to an Echo State Network (ESN), quantified how strongly its nodes influence each other, and compared these measures with the underlying synaptic strengths. We found that: (1) even though the network itself was sparse, every node could causally influence other nodes, so a mere elucidation of causal relationships did not provide any useful information; and (2) full knowledge of the structural connectome did not provide a complete causal picture of the system either, since nodes frequently influenced each other indirectly, that is, via intermediate nodes. Our results show that merely elucidating causal contributions in complex networks such as the brain is not sufficient to draw mechanistic conclusions, and that quantifying causal interactions requires a systematic and extensive manipulation framework. The framework put forward here benefits from employing neural network models and, in turn, provides explainability for them.
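The core idea behind MSA can be illustrated with a short sketch (not the authors' code): estimate each node's causal contribution by averaging its marginal effect on a performance measure over randomly sampled lesion orders. Here `performance(active_nodes)` is an assumed interface that stands in for re-running the network (e.g. an echo state network) with all other nodes silenced.

```python
import numpy as np

def shapley_estimates(n_nodes, performance, n_permutations=200, seed=0):
    """Monte Carlo Shapley values: average marginal contribution of each node
    over random orders in which nodes are 'un-lesioned' one at a time."""
    rng = np.random.default_rng(seed)
    contributions = np.zeros(n_nodes)
    for _ in range(n_permutations):
        order = rng.permutation(n_nodes)
        active = set()
        previous = performance(frozenset(active))
        for node in order:
            active.add(node)
            current = performance(frozenset(active))
            contributions[node] += current - previous
            previous = current
    return contributions / n_permutations

# Toy usage: every node adds 1, and nodes 0 and 1 interact super-additively.
toy = lambda active: len(active) + (2.0 if {0, 1} <= active else 0.0)
print(shapley_estimates(4, toy, n_permutations=500))  # roughly [2, 2, 1, 1]
```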
An investigation of perceptual biases in spiking recurrent neural networks trained to discriminate time intervals
Magnitude estimation and stimulus discrimination tasks are affected by perceptual biases that cause the stimulus parameter to be perceived as shifted toward the mean of its distribution. These biases have been extensively studied in psychophysics and, more recently and to a lesser extent, with neural activity recordings. New computational techniques allow us to train spiking recurrent neural networks on the tasks used in these experiments, providing another valuable tool for investigating the network mechanisms responsible for the biases and for modeling behavior. As an example, in this talk I will consider networks trained to discriminate the durations of temporal intervals. The trained networks exhibited the contraction bias even though they were trained with a stimulus sequence without temporal correlations. The neural activity during the delay period carried information about the stimuli of the current and previous trials, and this was one of the mechanisms giving rise to the contraction bias. The population activity described trajectories in a low-dimensional space, and their relative locations depended on the prior distribution. The results can be modeled with an ideal observer that, during the delay period, observes a combination of the current and previous stimuli. Finally, I will describe how the neural trajectories in state space encode an estimate of the interval duration. The approach could be applied to other cognitive tasks.
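A minimal numerical sketch of the ideal-observer account described above (with an assumed mixing weight and noise level, not parameters from the talk): when the remembered interval is a noisy mixture of the current and previous stimuli, estimates are on average pulled toward the mean of the stimulus distribution, i.e. a contraction bias.

```python
import numpy as np

def simulate_contraction_bias(n_trials=10_000, lam=0.2, noise=0.05, seed=0):
    """Return per-trial durations and estimation biases under a simple mixture model."""
    rng = np.random.default_rng(seed)
    durations = rng.uniform(0.2, 1.0, size=n_trials)        # first interval of each trial
    remembered = durations.copy()
    # Memory of the current interval is contaminated by the previous trial's stimulus,
    # plus encoding noise.
    remembered[1:] = (1 - lam) * durations[1:] + lam * durations[:-1]
    remembered += rng.normal(0.0, noise, size=n_trials)
    return durations, remembered - durations                 # bias per trial

durations, bias = simulate_contraction_bias()
# Short intervals are over-estimated, long intervals under-estimated, on average.
print(bias[durations < 0.4].mean(), bias[durations > 0.8].mean())
```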
How communication networks promote cross-cultural similarities: The case of category formation
Individuals vary widely in how they categorize novel phenomena. This individual variation has led canonical theories in cognitive and social science to suggest that communication in large social networks leads populations to construct divergent category systems. Yet, anthropological data indicates that large, independent societies consistently arrive at similar categories across a range of topics. How is it possible for diverse populations, consisting of individuals with significant variation in how they view the world, to independently construct similar categories? Through a series of online experiments, I show how large communication networks within cultures can promote the formation of similar categories across cultures. For this investigation, I designed an online “Grouping Game” to observe how people construct categories in both small and large populations when tasked with grouping together the same novel and ambiguous images. I replicated this design for English-speaking subjects in the U.S. and Mandarin-speaking subjects in China. In both cultures, solitary individuals and small social groups produced highly divergent category systems. Yet, large social groups separately and consistently arrived at highly similar categories both within and across cultures. These findings are accurately predicted by a simple mathematical model of critical mass dynamics. Altogether, I show how large communication networks can filter lexical diversity among individuals to produce replicable society-level patterns, yielding unexpected implications for cultural evolution. In particular, I discuss how participants in both cultures readily harnessed analogies when categorizing novel stimuli, and I examine the role of communication networks in promoting cross-cultural similarities in analogy-making as the key engine of category formation.
Controversial stimuli: Optimizing experiments to adjudicate among computational hypotheses
Malignant synaptic plasticity in pediatric high-grade gliomas
Pediatric high-grade gliomas (pHGG) are a devastating group of diseases that urgently require novel therapeutic options. We have previously demonstrated that pHGGs form direct synapses with neurons and that the subsequent tumor cell depolarization, mediated by calcium-permeable AMPA channels, promotes their proliferation. The regulatory mechanisms governing these postsynaptic connections are unknown. Here, we investigated the role of BDNF-TrkB signaling in modulating the plasticity of the malignant synapse. BDNF activation of its canonical receptor, TrkB (encoded by the gene NTRK2), has been shown to be an important modulator of synaptic regulation in the normal setting. Electrophysiological recordings of glioma cell membrane properties in response to acute neurotransmitter stimulation demonstrate an inward current resembling AMPA receptor (AMPAR)-mediated excitatory neurotransmission. Extracellular BDNF increases the amplitude of this glutamate-induced tumor cell depolarization, and this effect is abrogated in NTRK2 knockout glioma cells. Upon examining tumor cell excitability using in situ calcium imaging, we found that BDNF increases the intensity of glutamate-evoked calcium transients in GCaMP6s-expressing glioma cells. Western blot analysis indicates that the tumor’s AMPAR properties are altered downstream of BDNF-induced TrkB activation in glioma. Cell-membrane protein capture (via biotinylation) and live imaging of pH-sensitive GFP-tagged AMPAR subunits demonstrate an increase of calcium-permeable channels at the tumor’s postsynaptic membrane in response to BDNF. We find that BDNF-TrkB signaling promotes neuron-to-glioma synaptogenesis, as measured by high-resolution confocal and electron microscopy in culture and in tumor xenografts. Our analysis of published pHGG transcriptomic datasets, together with brain-slice conditioned-medium experiments in culture, identifies the tumor microenvironment as the chief source of BDNF ligand. Disruption of the BDNF-TrkB pathway in patient-derived orthotopic glioma xenograft models, both genetically and pharmacologically, results in increased overall survival and a reduced tumor proliferation rate. These findings suggest that gliomas leverage normal mechanisms of plasticity to modulate the excitatory channels involved in synaptic neurotransmission, and they reveal the potential to target the regulatory components of glioma circuit dynamics as a therapeutic strategy for these lethal cancers.
A draft connectome for ganglion cell types of the mouse retina
The visual system of the brain is highly parallel in its architecture. This is clearly evident in the outputs of the retina, which arise from neurons called ganglion cells. Work in our lab has shown that mammalian retinas contain more than a dozen distinct types of ganglion cells. Each type appears to filter the retinal image in a unique way and to relay this processed signal to a specific set of targets in the brain. My students and I are working to understand the meaning of this parallel organization through electrophysiological and anatomical studies. We record from light-responsive ganglion cells in vitro using the whole-cell patch method. This allows us to correlate directly the visual response properties, intrinsic electrical behavior, synaptic pharmacology, dendritic morphology and axonal projections of single neurons. Other methods used in the lab include neuroanatomical tracing techniques, single-unit recording and immunohistochemistry. We seek to specify the total number of ganglion cell types, the distinguishing characteristics of each type, and the intraretinal mechanisms (structural, electrical, and synaptic) that shape their stimulus selectivities. Recent work in the lab has identified a bizarre new ganglion cell type that is also a photoreceptor, capable of responding to light even when it is synaptically uncoupled from conventional (rod and cone) photoreceptors. These ganglion cells appear to play a key role in resetting the biological clock. It is just this sort of link, between a specific cell type and a well-defined behavioral or perceptual function, that we seek to establish for the full range of ganglion cell types.
My research concerns the structural and functional organization of retinal ganglion cells, the output cells of the retina whose axons make up the optic nerve. Ganglion cells exhibit great diversity both in their morphology and in their responses to light stimuli. On this basis, they are divisible into a large number of types (>15). Each ganglion-cell type appears to send its outputs to a specific set of central visual nuclei. This suggests that ganglion cell heterogeneity has evolved to provide each visual center in the brain with pre-processed representations of the visual scene tailored to its specific functional requirements. Though the outline of this story has been appreciated for some time, it has received little systematic exploration. My laboratory is addressing in parallel three sets of related questions: 1) How many types of ganglion cells are there in a typical mammalian retina and what are their structural and functional characteristics? 2) What combination of synaptic networks and intrinsic membrane properties are responsible for the characteristic light responses of individual types? 3) What do the functional specializations of individual classes contribute to perceptual function or to visually mediated behavior? To pursue these questions, we label retinal ganglion cells by retrograde transport from the brain; analyze in vitro their light responses, intrinsic membrane properties and synaptic pharmacology using the whole-cell patch clamp method; and reveal their morphology with intracellular dyes. Recently, we have discovered a novel ganglion cell in rat retina that is intrinsically photosensitive. These ganglion cells exhibit robust light responses even when all influences from classical photoreceptors (rods and cones) are blocked, either by applying pharmacological agents or by dissociating the ganglion cell from the retina.
These photosensitive ganglion cells seem likely to serve as photoreceptors for the photic synchronization of circadian rhythms, the mechanism that allows us to overcome jet lag. They project to the circadian pacemaker of the brain, the suprachiasmatic nucleus of the hypothalamus. Their temporal kinetics, threshold, dynamic range, and spectral tuning all match known properties of the synchronization or "entrainment" mechanism. These photosensitive ganglion cells innervate various other brain targets, such as the midbrain pupillary control center, and apparently contribute to a host of behavioral responses to ambient lighting conditions. These findings help to explain why circadian and pupillary light responses persist in mammals, including humans, with profound disruption of rod and cone function. Ongoing experiments are designed to elucidate the phototransduction mechanism, including the identity of the photopigment and the nature of downstream signaling pathways. In other studies, we seek to provide a more detailed characterization of the photic responsiveness and both morphological and functional evidence concerning possible interactions with conventional rod- and cone-driven retinal circuits. These studies are of potential value in understanding and designing appropriate therapies for jet lag, the negative consequences of shift work, and seasonal affective disorder.
Time as a continuous dimension in natural and artificial networks
Neural representations of time are central to our understanding of the world around us. I review cognitive, neurophysiological and theoretical work that converges on three simple ideas. First, the time of past events is remembered via populations of neurons with a continuum of functional time constants. Second, these time constants evenly tile the log time axis, resulting in a neural Weber-Fechner scale for time that can support behavioral Weber-Fechner laws and characteristic behavioral effects in memory experiments. Third, these populations appear as dual pairs: one type of population contains cells whose firing rates change monotonically over time, and a second type contains cells with circumscribed temporal receptive fields. These ideas can be used to build artificial neural networks with novel properties. Of particular interest, a convolutional neural network built using these principles can generalize to arbitrary rescaling of its inputs. That is, after learning to perform a classification task on a time series presented at one speed, it successfully classifies stimuli presented slowed down or sped up. This result illustrates the point that this confluence of ideas, originating in cognitive psychology and measured in the mammalian brain, could have wide-reaching impacts on AI research.
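A small numerical sketch of the two dual codes, under simple assumptions (exponentially decaying units rather than any particular published architecture; the range and number of time constants are placeholders): traces with log-spaced time constants decay monotonically, and differencing adjacent traces yields circumscribed temporal receptive fields whose peak times grow roughly geometrically, i.e. a Weber-Fechner scale for elapsed time.

```python
import numpy as np

def temporal_basis(t, n_units=32, tau_min=0.1, tau_max=100.0):
    """Monotonic traces with log-spaced time constants, plus their adjacent
    differences, which form unimodal temporal receptive fields."""
    taus = np.geomspace(tau_min, tau_max, n_units)      # evenly tile the log time axis
    traces = np.exp(-t[:, None] / taus[None, :])         # monotonically decaying cells
    fields = traces[:, 1:] - traces[:, :-1]              # circumscribed "time cell" tuning
    return taus, traces, fields

t = np.linspace(0.0, 50.0, 500)
taus, traces, fields = temporal_basis(t)
# Peak times of the receptive fields scale with their time constants.
print(t[fields.argmax(axis=0)][:5])
```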
Extrinsic control and autonomous computation in the hippocampal CA1 circuit
In understanding circuit operations, a key issue is the extent to which neuronal spiking reflects local computation or responses to upstream inputs. Because pyramidal cells in CA1 do not have local recurrent projections, it is currently assumed that firing in CA1 is inherited from its inputs: entorhinal inputs provide communication with the rest of the neocortex and the outside world, whereas CA3 inputs provide internal and past memory representations. Several studies have attempted to test this hypothesis by lesioning or silencing either area CA3 or the entorhinal cortex and examining the effect on the firing of CA1 pyramidal cells. Despite the intense and careful work in this research area, the magnitudes and types of the reported physiological impairments vary widely across experiments. At least part of this variability and conflict is due to the different behavioral paradigms, designs and evaluation methods used by different investigators. Simultaneous manipulations in the same animal, or even separate manipulations of the different inputs to the hippocampal circuit in the same experiment, are rare. To address these issues, I used optogenetic silencing of the unilateral and bilateral medial entorhinal cortex (mEC) and of the local CA1 region, and performed bilateral pharmacogenetic silencing of the entire CA3 region. I combined this with high-spatial-resolution recording of local field potentials (LFP) along the CA1-dentate axis and simultaneously collected firing patterns from thousands of single neurons. Each experimental animal had up to two of these manipulations performed simultaneously. Silencing the mEC largely abolished extracellular theta and gamma currents in CA1 without affecting firing rates. In contrast, CA3 and local CA1 silencing strongly decreased the firing of CA1 neurons without affecting theta currents. Each perturbation reconfigured the CA1 spatial map. Yet the ability of the CA1 circuit to support place field activity persisted, maintaining the same fraction of spatially tuned place fields and reliable assembly expression as in the intact mouse. Thus, the CA1 network can maintain autonomous computation to support coordinated place cell assemblies without reliance on its inputs, yet these inputs can effectively reconfigure the CA1 map and assist in maintaining its stability.
Retinal responses to natural inputs
The research in my lab focuses on sensory signal processing, particularly in cases where sensory systems perform at or near the limits imposed by physics. Photon counting in the visual system is a beautiful example. At its peak sensitivity, the performance of the visual system is limited largely by the division of light into discrete photons. This observation has several implications for phototransduction and signal processing in the retina: rod photoreceptors must transduce single-photon absorptions with high fidelity; single-photon signals in photoreceptors, which are only 0.03–0.1 mV, must be reliably transmitted to second-order cells in the retina; and absorption of a single photon by a single rod must produce a noticeable change in the pattern of action potentials sent from the eye to the brain. My approach is to combine quantitative physiological experiments and theory to understand photon counting in terms of basic biophysical mechanisms. Fortunately, there is more to visual perception than counting photons. The visual system is very adept at operating over a wide range of light intensities (about 12 orders of magnitude). Over most of this range, vision is mediated by cone photoreceptors; thus adaptation is paramount to cone vision. Again, one would like to understand quantitatively how the biophysical mechanisms involved in phototransduction, synaptic transmission, and neural coding contribute to adaptation.
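A toy numerical illustration of what "limited by the division of light into discrete photons" means (arbitrary example numbers, not data from the lab): an ideal observer reports a flash when the absorbed photon count reaches a criterion, and its performance is constrained only by Poisson fluctuations in that count.

```python
from scipy.stats import poisson

def p_report_flash(mean_photons, criterion=6):
    """Probability that the absorbed photon count reaches the detection criterion."""
    return 1.0 - poisson.cdf(criterion - 1, mean_photons)

print(p_report_flash(mean_photons=8))    # dim flash: detected on most trials
print(p_report_flash(mean_photons=0.1))  # darkness: false positives are rare
```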
A new experimental paradigm to study analogy transfer
Analogical reasoning is one of the most complex human cognitive functions, enabling abstract thinking, high-level reasoning, and learning. Through analogical reasoning, one can extract an abstract and general concept (i.e., an analogy schema) from a familiar situation and apply it to a new context or domain (i.e., analogy transfer). These processes allow us to solve problems we have never encountered before and to generate new ideas. However, the place of analogy transfer within problem-solving mechanisms is unclear. This presentation will describe several experiments with three main findings. First, we show how analogy transfer facilitates problem solving, replicating existing empirical data, largely based on the radiation/fortress problems, with four new riddles. Second, we propose a new experimental task that allows us to quantify analogy transfer. Finally, using science network methodology, we show how restructuring the mental representation of a problem can predict successful solving of an analogous problem. These results shed new light on the cognitive mechanisms underlying solution transfer by analogy and provide a new tool to quantify individual abilities.
Orbitofrontal cortex and the integrative approach to functional neuroanatomy
The project of functional neuroanatomy typically treats single brain areas as the core functional units of the brain. Functional neuroanatomists typically use specialized tasks designed to isolate hypothesized functions from other cognitive processes. Our lab takes a broader view: we consider brain regions as parts of larger circuits, and we treat cognitive processes as parts of more complex behavioral repertoires. In my talk, I will discuss the ramifications of this perspective for thinking about the role of the orbitofrontal cortex (OFC). I will present results of recent experiments from my lab that tackle the question of OFC function within the context of larger brain networks and in freely moving foraging tasks. I will argue that this perspective challenges conventional accounts of the role of OFC and invites new ones. I will conclude by speculating on implications for the practice of functional neuroanatomy.
Online neural modeling and Bayesian optimization for closed-loop adaptive experiments
COSYNE 2022
Sparse autoencoders for mechanistic insights on neural computation in naturalistic experiments
COSYNE 2025
Analysis of gaze control neuronal circuits combining behavioural experiments with a novel virtual reality platform
FENS Forum 2024
Computation with neuronal cultures: Effects of connectivity modularity on response separation and generalisation in simulations and experiments
FENS Forum 2024
Conducting virtual neuronal experiments with Blue Brain Cellular Laboratory (BlueCelluLab)
FENS Forum 2024
A novel MRI-compatible restraint setup for awake rat multimodal experiments
FENS Forum 2024
Simultaneous calcium imaging and extracellular electrophysiology using CMOS-based imaging devices with an integrated carbon electrode for freely moving mice experiments
FENS Forum 2024
Wireless headstage controlled via Bluetooth for closed-loop optogenetics experiments in rodents
FENS Forum 2024