distance
Brain circuits for spatial navigation
In this webinar on spatial navigation circuits, three researchers—Ann Hermundstad, Ila Fiete, and Barbara Webb—discussed how diverse species solve navigation problems using specialized yet evolutionarily conserved brain structures. Hermundstad examined the fruit fly’s central complex, focusing on how hardwired circuit motifs (e.g., sinusoidal steering curves) enable rapid, flexible learning of goal-directed navigation. This framework combines internal heading representations with modifiable goal signals, leveraging activity-dependent plasticity to adapt to new environments. Fiete explored the mammalian head-direction system, demonstrating how population recordings reveal a one-dimensional ring attractor underlying continuous integration of angular velocity. She showed that key theoretical predictions—low-dimensional manifold structure, isometry, uniform stability—are experimentally validated, underscoring parallels to insect circuits. Finally, Webb described honeybee navigation, featuring path integration, vector memories, route optimization, and the famous waggle dance. She proposed that allocentric velocity signals and vector manipulation within the central complex can encode and transmit distances and directions, enabling both sophisticated foraging and inter-bee communication via dance-based cues.
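To make the ring-attractor mechanism discussed by Fiete concrete, here is a minimal sketch, assuming a toy discrete-time rate model: a cosine recurrent kernel sustains an activity bump, and a velocity-scaled asymmetric (sine) component rotates it, integrating angular velocity into a heading estimate. All parameters and the population-vector decoder are illustrative assumptions, not the speaker's model.

```python
import numpy as np

# Toy ring attractor: a cosine kernel sustains an activity bump, and a
# velocity-scaled sine (asymmetric) component rotates it, integrating
# angular velocity into heading. Parameters are illustrative assumptions.
N = 128
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
d = theta[:, None] - theta[None, :]
W_cos, W_sin = np.cos(d), np.sin(d)

def decode(r):
    """Population-vector readout of the bump position."""
    return np.angle(np.sum(r * np.exp(1j * theta)))

def integrate(ang_vel, dt=1e-3):
    r = np.maximum(np.cos(theta - np.pi), 0)      # initial bump at pi
    path = []
    for v in ang_vel:
        # cos(x) + (dt*v)*sin(x) is the cosine kernel rotated by ~dt*v,
        # so each step shifts the bump by the heading change accrued in dt
        r = np.maximum((W_cos + dt * v * W_sin) @ r, 0)
        r /= r.sum()                              # keeps the bump stable
        path.append(decode(r))
    return np.unwrap(np.array(path))

path = integrate(np.full(3000, 2.0))              # 2 rad/s for 3 s
print(f"integrated heading change: {path[-1] - path[0]:.2f} rad (expect ~6)")
```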
Prefrontal mechanisms involved in learning distractor-resistant working memory in a dual task
Working memory (WM) is a cognitive function that allows the short-term maintenance and manipulation of information that is no longer accessible to the senses. It relies on temporarily storing stimulus features in the activity of neuronal populations. To protect these dynamics from distraction, it has been proposed that pre- and post-distraction population activity decomposes into orthogonal subspaces. If orthogonalization is necessary to avoid WM distraction, it should emerge as performance in the task improves. We sought evidence of WM orthogonalization learning and its underlying mechanisms by analyzing calcium imaging data from the prelimbic (PrL) and anterior cingulate (ACC) cortices of mice as they learned to perform an olfactory dual task. The dual task combines an outer Delayed Paired-Association (DPA) task with an inner Go-NoGo task. We examined how neuronal activity reflected the process of protecting the DPA sample information against Go/NoGo distractors. As mice learned the task, we measured the overlap of neural activity with the low-dimensional subspaces that encode sample or distractor odors. Early in training, pre-distraction activity overlapped with both sample and distractor subspaces. Later in training, pre-distraction activity was strictly confined to the sample subspace, resulting in a more robust sample code. To gain mechanistic insight into how these low-dimensional WM representations evolve with learning, we built a recurrent spiking network model of excitatory and inhibitory neurons with low-rank connections. The model links learning to (1) the orthogonalization of sample and distractor WM subspaces and (2) the orthogonalization of each subspace with irrelevant inputs. We validated (1) by measuring the angular distance between the sample and distractor subspaces through learning in the data. Prediction (2) was validated in PrL through photoinhibition of ACC-to-PrL inputs, which induced early-training neural dynamics in well-trained animals. In the model, learning drives the network from a double-well attractor toward a more continuous ring-attractor regime. We tested signatures of this dynamical evolution in the experimental data by estimating the energy landscape of the dynamics on a one-dimensional ring. In sum, our study defines the network dynamics underlying the process of learning to shield WM representations from distracting tasks.
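As a concrete illustration of the two geometric quantities used above (the overlap of population activity with a coding subspace, and the angular distance between the sample and distractor subspaces), here is a hedged sketch on synthetic stand-in data. The PCA-based subspace definition, the dimensions, and the variable names are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.linalg import subspace_angles

# Stand-in population data; in the real analysis these would be trials of
# delay-period activity sorted by sample or distractor odor identity.
rng = np.random.default_rng(0)
n_neurons, k = 100, 3

def coding_subspace(trials):
    """Orthonormal basis spanning the top-k PCs of condition activity."""
    _, _, Vt = np.linalg.svd(trials - trials.mean(0), full_matrices=False)
    return Vt[:k].T                                      # n_neurons x k

S = coding_subspace(rng.standard_normal((80, n_neurons)))   # sample subspace
D = coding_subspace(rng.standard_normal((80, n_neurons)))   # distractor subspace
pre = rng.standard_normal((40, n_neurons))                  # pre-distraction activity

def overlap(X, B):
    """Fraction of activity variance captured by subspace basis B."""
    return np.linalg.norm(X @ B) ** 2 / np.linalg.norm(X) ** 2

print(f"overlap with sample subspace:     {overlap(pre, S):.3f}")
print(f"overlap with distractor subspace: {overlap(pre, D):.3f}")
print("principal angles (deg):", np.degrees(subspace_angles(S, D)).round(1))
```

On random stand-in data the principal angles come out near 90 degrees by chance; in the recordings, the angle between the sample and distractor subspaces is the quantity that grows with learning.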
Vocal emotion perception at millisecond speed
The human voice is possibly the most important sound category in the social landscape. Compared to other non-verbal emotion signals, the voice is particularly effective in communicating emotions: it can carry information over large distances and independently of sight. However, the study of vocal emotion expression and perception remains surprisingly underdeveloped compared with the study of emotion in faces, and its neural and functional correlates remain elusive. As the voice is a dynamically changing auditory stimulus, temporally sensitive techniques such as EEG are particularly informative. In this talk, the dynamic neurocognitive operations that take place when we listen to vocal emotions will be specified, with a focus on the effects of stimulus type, task demands, and speaker and listener characteristics (e.g., age). These studies suggest that emotional voice perception is a matter not only of how one speaks but also of who speaks and who listens. Implications of these findings for the understanding of psychiatric disorders such as schizophrenia will be discussed.
Why spikes?
On a fast timescale, neurons mostly interact by short, stereotypical electrical impulses or spikes. Why? A common answer is that spikes are useful for long-distance communication, to avoid alterations while traveling along axons. But as it turns out, spikes are seen in many places outside neurons: in the heart, in muscles, in plants and even in protists. From these examples, it appears that action potentials mediate some form of coordinated action, a timed event. From this perspective, spikes should not be seen simply as noisy implementations of underlying continuous signals (a sort of analog-to-digital conversion), but rather as events or actions. I will give a number of examples of functional spike-based interactions in living systems.
Behavioural Basis of Subjective Time Distortions
Precisely estimating event timing is essential for survival, yet temporal distortions are ubiquitous in our daily sensory experience. Here, we tested whether the relative position, duration, and distance in time of two sequentially organized events (a standard S with constant duration, and a comparison C whose duration varies trial-by-trial) are causal factors in generating temporal distortions. We found that temporal distortions emerge when the first event is shorter than the second. Importantly, a significant interaction suggests that a longer inter-stimulus interval (ISI) helps counteract this serial distortion effect only when the constant S is in the first position, but not when the unpredictable C is. These results imply a perceptual bias in perceiving ordered event durations that mechanistically contributes to distortions in time perception. Our results clarify the mechanisms generating time distortions by identifying a hitherto unknown duration-dependent encoding inefficiency in human serial temporal perception, akin to a strong prior that can be overridden for highly predictable sensory events but persists for unpredictable ones.
Are place cells just memory cells? Probably yes
Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual "place cells" fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding in which place cells arise as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level trading off against each other to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and two-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation. These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories of the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, and Attila Losonczy.
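The premise that compression reshapes representational similarity in a distance-dependent way can be illustrated with a toy computation, assuming spatially smooth sensory features and a truncated-PCA compressor; this is not the authors' network model, and all sizes and the Gaussian correlation structure are assumptions.

```python
import numpy as np

# Sensory features vary smoothly over a 1D track (Gaussian correlation,
# length scale ell); compressing them to k principal components reshapes
# the similarity between locations in a distance-dependent way.
rng = np.random.default_rng(1)
n_loc, n_feat, ell, k = 200, 300, 10.0, 5
pos = np.arange(n_loc)
K = np.exp(-0.5 * ((pos[:, None] - pos[None, :]) / ell) ** 2)
X = np.linalg.cholesky(K + 1e-6 * np.eye(n_loc)) @ rng.standard_normal((n_loc, n_feat))

def sim_vs_distance(X, dists=(1, 20)):
    """Mean cosine similarity between locations separated by each distance."""
    Xc = X - X.mean(1, keepdims=True)
    Xc /= np.linalg.norm(Xc, axis=1, keepdims=True)
    C = Xc @ Xc.T
    return [np.mean(np.diagonal(C, offset=d)) for d in dists]

U, s, _ = np.linalg.svd(X, full_matrices=False)
X_comp = U[:, :k] * s[:k]                   # k-dimensional compressed code

for name, data in (("raw", X), ("compressed", X_comp)):
    near, far = sim_vs_distance(data)
    print(f"{name:>10}: similarity at distance 1 = {near:.2f}, at 20 = {far:.2f}")
```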
Central place foraging: how insects anchor spatial information
Many insect species maintain a nest around which their foraging behaviour is centered, and can use path integration to maintain an accurate estimate of their distance and direction (a vector) to their nest. Some species, such as bees and ants, can also store the vector information for multiple salient locations in the world, such as food sources, in a common coordinate system. They can also use remembered views of the terrain around salient locations or along travelled routes to guide return. Recent modelling of these abilities shows convergence on a small set of algorithms and assumptions that appear sufficient to account for a wide range of behavioural data, and which can be mapped to specific insect brain circuits. Notably, this does not include any significant topological knowledge: the insect does not need to recover the information (implicit in their vector memory) about the relationships between salient places; nor to maintain any connectedness or ordering information between view memories; nor to form any associations between views and vectors. However, there remains some experimental evidence not fully explained by these algorithms that may point towards the existence of a more complex or integrated mental map in insects.
Orientation selectivity in rodent V1: theory vs experiments
Neurons in the primary visual cortex (V1) of rodents are selective to the orientation of a stimulus, as in other mammals such as cats and monkeys. In contrast with those species, however, rodent V1 displays a very different type of spatial organization. Instead of orientation maps, neurons are organized in a "salt and pepper" pattern, in which adjacent neurons can have completely different preferred orientations. This structure has motivated both experimental and theoretical research aimed at determining which aspects of the connectivity patterns and intrinsic neuronal responses can explain the observed behavior. These analyses must also take into account that the thalamic neurons that project to the cortex have more complex responses in rodents than in higher mammals, displaying, for instance, a significant degree of orientation selectivity. In this talk we present work showing that a random feed-forward connectivity pattern, in which the probability of a connection between a cortical neuron and a thalamic neuron depends only on the distance between them, is enough to explain several aspects of the complex phenomenology found in these systems. Moreover, this approach allows us to evaluate analytically the statistical structure of the thalamic input to the cortex. We find that V1 neurons are orientation selective but that the preferred orientation depends on the spatial frequency of the stimulus. We disentangle the effect of the non-circular thalamic receptive fields, finding that they control the selectivity of the time-averaged thalamic input, but not the selectivity of the time-locked component. We also compare with experiments that use reverse-correlation techniques, showing that the ON and OFF components of the aggregate thalamic input are spatially segregated in the cortex.
Internally Organized Abstract Task Maps in the Mouse Medial Frontal Cortex
New tasks are often similar in structure to old ones. Animals that take advantage of such conserved or “abstract” task structures can master new tasks with minimal training. To understand the neural basis of this abstraction, we developed a novel behavioural paradigm for mice, the “ABCD” task, and recorded from their medial frontal neurons as they learned. Animals learned multiple tasks in which they had to visit four rewarded locations on a spatial maze in sequence, defining a sequence of four “task states” (ABCD). Tasks shared the same circular transition structure (…ABCDABCD…) but differed in the spatial arrangement of rewards. As well as improving across tasks, mice inferred that A followed D (i.e., completed the loop) on the very first trial of a new task. This “zero-shot inference” is only possible if animals had learned the abstract structure of the task. Across tasks, individual medial frontal cortex (mFC) neurons maintained their tuning to the phase of an animal’s trajectory between rewards but not their tuning to task states, even in the absence of spatial tuning. Intriguingly, groups of mFC neurons formed modules of coherently remapping neurons that maintained their tuning relationships across tasks. These tuning relationships were expressed as replay/preplay during sleep, consistent with an internal organisation of activity into multiple, task-matched ring attractors. Remarkably, these modules were anchored to spatial locations: neurons were tuned to specific task-space “distances” from a particular spatial location. These newly discovered “Spatially Anchored Task clocks” (SATs) suggest a novel algorithm for solving abstraction tasks. Using computational modelling, we show that SATs can perform zero-shot inference on new tasks in the absence of plasticity and guide optimal policy in the absence of continual planning. These findings provide novel insights into the frontal mechanisms mediating abstraction and flexible behaviour.
Semantic Distance and Beyond: Interacting Predictors of Verbal Analogy Performance
Prior studies of A:B::C:D verbal analogies have identified several factors that affect performance, including the semantic similarity between source and target domains (semantic distance), the semantic association between the C-term and incorrect answers (distracter salience), and the type of relations between word pairs (e.g., categorical, compositional, and causal). However, it is unclear how these stimulus properties affect performance when utilized together. Moreover, how do these item factors interact with individual differences such as crystallized intelligence and creative thinking? Several studies reveal interactions among these item and individual difference factors impacting verbal analogy performance. For example, a three-way interaction demonstrated that the effects of semantic distance and distracter salience had a greater impact on performance for compositional and causal relations than for categorical ones (Jones, Kmiecik, Irwin, & Morrison, 2022). Implications for analogy theories and future directions are discussed.
Computation in the neuronal systems close to the critical point
It has long been hypothesized that natural systems might take advantage of the extended temporal and spatial correlations close to the critical point to improve their computational capabilities. On the other hand, recordings of nervous systems have yielded widely varying estimates of their distance to criticality. In my talk, I discuss how including additional constraints on processing time can shift the optimal operating point of recurrent networks. Moreover, data from the visual cortex of monkeys performing an attentional task indicate that they flexibly change how close local activity is to the critical point. Overall, this suggests that, as common sense would lead us to expect, the optimal state depends on the task at hand, and the brain adapts to it in a local and fast manner.
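A standard way to quantify distance to criticality from activity recordings is the branching ratio m, which equals 1 exactly at the critical point. The sketch below, a generic estimator rather than the speaker's analysis, simulates a driven branching process and recovers m by lag-1 regression; note that subsampled recordings require a bias-corrected (multistep) estimator.

```python
import numpy as np

# Driven branching process A[t+1] ~ Poisson(m*A[t] + h); the branching
# ratio m measures distance to criticality (critical at m = 1). The lag-1
# regression below assumes fully observed activity.
rng = np.random.default_rng(0)

def simulate_branching(m, h=2.0, T=10_000):
    A = np.empty(T)
    A[0] = h / (1.0 - m)                  # start near the stationary mean
    for t in range(T - 1):
        A[t + 1] = rng.poisson(m * A[t] + h)
    return A

def estimate_m(A):
    """Slope of the lag-1 regression of A[t+1] on A[t]."""
    x, y = A[:-1], A[1:]
    return np.cov(x, y, ddof=0)[0, 1] / np.var(x)

for m_true in (0.9, 0.98, 0.999):
    m_hat = estimate_m(simulate_branching(m_true))
    print(f"true m = {m_true:<6} estimated m = {m_hat:.3f} "
          f"distance to criticality = {1 - m_hat:.3f}")
```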
The effect of gravity on the perception of distance and self-motion: a multisensory perspective
Gravity is a constant in our lives. It provides an internalized reference to which all other perceptions are related. We can experimentally manipulate the relationship between physical gravity and other cues to the direction of “up” using virtual reality (either HMDs or specially built tilting environments) to explore how gravity contributes to perceptual judgements. The effect of gravity can also be cancelled by running experiments on the International Space Station in low Earth orbit. Changing orientation relative to gravity (or even just perceived orientation) affects your perception of how far away things are: they appear closer when supine or prone. Cancelling gravity altogether has a similar effect. Changing orientation also affects how much visual motion is needed to perceive a particular travel distance (you need less when supine or prone), while adapting to zero gravity has the opposite effect (you need more). These results will be discussed in terms of their practical consequences and the multisensory processes involved, in particular the response to visual-vestibular conflict.
Neural Codes for Natural Behaviors in Flying Bats
This talk will focus on the importance of using natural behaviors in neuroscience research, the “Natural Neuroscience” approach. I will illustrate this point by describing studies of neural codes for spatial and social behaviors in flying bats, using wireless neurophysiology methods that we developed, and will highlight new neuronal representations that we discovered in animals navigating through 3D spaces, in very large-scale environments, or engaged in social interactions. In particular, I will discuss: (1) A multi-scale neural code for very large environments, which we discovered in bats flying in a 200-meter-long tunnel. This new type of neural code is fundamentally different from spatial codes reported in small environments, and we show theoretically that it is superior for representing very large spaces. (2) Rapid modulation of position × distance coding in the hippocampus during collision-avoidance behavior between two flying bats. This result provides a dramatic illustration of the extreme dynamism of the neural code. (3) Local-but-not-global order in 3D grid cells, a surprising experimental finding that can be explained by a simple physics-inspired model that successfully describes both 3D and 2D grids. These results strongly argue against many of the classical, geometrically based models of grid cells. (4) I will also briefly describe new results on the social representation of other individuals in the hippocampus, in a highly social multi-animal setting. The lecture will propose that neuroscience experiments, whether in bats, rodents, monkeys, or humans, should be conducted under ever more naturalistic conditions.
Distance-tuned neurons drive specialized path integration calculations in medial entorhinal cortex
During navigation, animals estimate their position using path integration and landmarks, engaging many brain areas. Whether these areas follow specialized or universal cue integration principles remains incompletely understood. We combine electrophysiology with virtual reality to quantify cue integration across thousands of neurons in three navigation-relevant areas: primary visual cortex (V1), retrosplenial cortex (RSC), and medial entorhinal cortex (MEC). Compared with V1 and RSC, path integration influences position estimates more in MEC, and conflicts between path integration and landmarks trigger remapping more readily. Whereas MEC codes position prospectively, V1 codes position retrospectively, and RSC is intermediate between the two. Lowered visual contrast increases the influence of path integration on position estimates only in MEC. These properties are most pronounced in a population of MEC neurons, overlapping with grid cells, tuned to distance run in darkness. These results demonstrate the specialized role that path integration plays in MEC compared with other navigation-relevant cortical areas.
Deforming the metric of cognitive maps distorts memory
Environmental boundaries anchor the cognitive maps that support memory. However, trapezoidal boundary geometry distorts the regular firing patterns of entorhinal grid cells, which have been proposed to provide a metric for cognitive maps. Here, we test the impact of trapezoidal boundary geometry on human spatial memory using immersive virtual reality. Consistent with the reduced regularity of grid patterns in rodents and with a grid-cell model based on the eigenvectors of the successor representation, human positional memory was degraded in a trapezoidal compared to a square environment, an effect particularly pronounced in the trapezoid’s narrow part. Congruent with the spatial-frequency changes of the eigenvector grid patterns, distance estimates between remembered positions were persistently biased, revealing distorted memory maps that explained behavior better than the objective maps. Our findings demonstrate that environmental geometry affects human spatial memory similarly to rodent grid-cell activity, strengthening the putative link between grid cells and behavior and supporting their cognitive functions beyond navigation.
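A minimal sketch of the successor-representation (SR) grid model referenced above, under assumed parameters: for a random walk with transition matrix T on a discretized room, the SR is M = (I - γT)^(-1), and the low-frequency eigenvectors of M form periodic spatial maps; running the same construction on a trapezoidal room yields the spatially distorted patterns the abstract invokes.

```python
import numpy as np

# SR of a random walk on an n x n square room; eigenvectors of the
# (symmetrized) SR give periodic, grid-like spatial maps. Room size,
# gamma, and the chosen eigenvector are illustrative assumptions.
n, gamma = 20, 0.995
idx = lambda i, j: i * n + j
T = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < n and 0 <= j + dj < n]
        for a, b in nbrs:
            T[idx(i, j), idx(a, b)] = 1.0 / len(nbrs)

M = np.linalg.inv(np.eye(n * n) - gamma * T)   # successor representation
w, V = np.linalg.eigh((M + M.T) / 2)           # eigenvalues ascending
grid_map = V[:, -4].reshape(n, n)              # a low-frequency periodic mode

# Crude look at the periodicity: sign pattern on a coarse subgrid
print(np.sign(grid_map[::4, ::4]).astype(int))
```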
Neural signature for accumulated evidence underlying temporal decisions
Cognitive models of timing often include a pacemaker analogue whose ticks are accumulated to form an internal representation of time, and a threshold that determines when a target duration has elapsed. However, clear EEG manifestations of these abstract components have not yet been identified. We measured the EEG of subjects while they performed a temporal bisection task in which they were asked to categorize visual stimuli as short or long in duration. We report an ERP component whose amplitude depends monotonically on stimulus duration. The relation between ERP amplitude and stimulus duration can be captured by a simple model, adapted from a known drift-diffusion model of time perception. It includes a noisy accumulator that starts at stimulus onset, and a threshold. If the threshold is reached during stimulus presentation, the stimulus is categorized as "long"; otherwise it is categorized as "short". At stimulus offset, a response proportional to the distance to the threshold is emitted. This simple model has two parameters that fit both the behavior and the ERP amplitudes recorded in the task. Two subsequent experiments replicate and extend this finding to another modality (touch) and to different time ranges (subsecond and suprasecond), establishing the described ERP component as a useful handle on the cognitive processes involved in temporal decisions.
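The accumulator model described here is simple enough to sketch. Assuming the drift is normalized to 1, so that the two free parameters are the noise level and the threshold, a simulated version looks like this (durations and parameter values are illustrative):

```python
import numpy as np

# Noisy accumulator for temporal bisection: crossing the threshold before
# stimulus offset -> "long"; otherwise "short", with an offset response
# proportional to the remaining distance to threshold. Drift is fixed at
# 1, leaving noise and threshold as the model's two free parameters.
rng = np.random.default_rng(0)

def trial(duration, noise=0.5, threshold=1.2, dt=0.01):
    n = int(duration / dt)
    x = np.cumsum(dt + noise * np.sqrt(dt) * rng.standard_normal(n))
    if x.max() >= threshold:
        return "long", 0.0
    return "short", threshold - x[-1]      # offset response (ERP-like)

for dur in (0.6, 0.9, 1.2, 1.5):
    res = [trial(dur) for _ in range(2000)]
    p_long = np.mean([r == "long" for r, _ in res])
    shorts = [a for r, a in res if r == "short"]
    amp = np.mean(shorts) if shorts else float("nan")
    print(f"{dur:.1f} s: P(long) = {p_long:.2f}, mean offset response = {amp:.2f}")
```

In this toy, P("long") rises with duration while the mean offset response falls, giving the monotonic duration dependence the abstract describes.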
Neural network models of binocular depth perception
Our visual experience of living in a three-dimensional world is created from the information contained in the two-dimensional images projected into our eyes. The overlapping visual fields of the two eyes mean that their images are highly correlated, and the small differences between them provide an important cue to depth. Binocular neurons encode this information in a way that both maximises efficiency and optimises disparity tuning for the depth structures found in our natural environment. Neural network models provide a clear account of how these binocular neurons encode the local binocular disparity in images. These models can be expanded into multi-layer models that are sensitive to salient features of scenes, such as the orientations of, and discontinuities between, surfaces. Such deep neural network models have also shown the importance of binocular disparity for the segmentation of images into separate objects, in addition to the estimation of distance. These results demonstrate the usefulness of machine learning approaches as a tool for understanding biological vision.
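The classic building block behind such models is the binocular energy unit: a quadrature pair of Gabor filters whose right-eye receptive field is offset by the unit's preferred disparity. Here is a toy 1D sketch, a position-shift variant with assumed filter parameters, not the speaker's specific networks:

```python
import numpy as np

# 1D binocular energy-model unit: summed squared responses of a
# quadrature pair, with the right-eye receptive field shifted by the
# preferred disparity d_pref. Parameters are illustrative.
x = np.linspace(-2, 2, 65)

def gabor(x, phase, f=1.5, sigma=0.6):
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f * x + phase)

def energy(img_L, img_R, d_pref):
    resp = 0.0
    for ph in (0.0, np.pi / 2):                 # quadrature pair
        L = img_L @ gabor(x, ph)
        R = img_R @ gabor(x - d_pref, ph)       # shifted right-eye RF
        resp += (L + R) ** 2
    return resp

bar = lambda c: np.exp(-(x - c) ** 2 / (2 * 0.2 ** 2))   # a bright bar
for d in (-0.5, 0.0, 0.5):                      # stimulus disparity
    curve = [energy(bar(0.0), bar(d), dp) for dp in (-0.5, 0.0, 0.5)]
    print(f"disparity {d:+.1f}: responses {np.round(curve, 1)}")
```

The unit whose preferred disparity matches the stimulus disparity responds most, which is the disparity-tuning property the models build on.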
Novel word generalization in comparison designs: How do young children align stimuli when they learn object nouns and relational nouns?
It is well established that the opportunity to compare learning stimuli in a novel word learning/extension task elicits a larger number of conceptually relevant generalizations than standard no-comparison conditions. I will present results suggesting that the effectiveness of comparison depends on factors such as semantic distance, the number of training items, dimension distinctiveness, and interactions with age. I will address these issues in the case of familiar and unfamiliar object nouns and relational nouns. The alignment strategies followed by children during learning and at test (i.e., when learning items are compared and how children reach a solution) will be described with eye-tracking data. We will also assess the extent to which children’s performance in these tasks is associated with executive functions (inhibition and flexibility) and world knowledge. Finally, we will consider these issues in children with cognitive deficits (intellectual disability, DLD).
Transdiagnostic approaches to understanding neurodevelopment
Macroscopic brain organisation emerges early in life, even prenatally, and continues to develop through adolescence and into early adulthood. The emergence and continual refinement of large-scale brain networks, connecting neuronal populations across anatomical distance, allows for increasing functional integration and specialisation. This process is thought to be crucial for the emergence of complex cognitive processes. But how and why is this process so diverse? We used structural neuroimaging collected from a large, diverse cohort to explore how different features of macroscopic brain organisation are associated with diverse cognitive trajectories. We used diffusion-weighted imaging (DWI) to construct whole-brain white-matter connectomes. A simulated attack on each child's connectome revealed that some brain networks were strongly organized around highly connected 'hubs'. The more critically children's brains depended on hubs, the better their cognitive skills; conversely, having poorly integrated hubs was a very strong risk factor for cognitive and learning difficulties across the sample. We subsequently developed a computational framework, using generative network modelling (GNM), to model the emergence of this kind of connectome organisation. Relatively subtle changes in the wiring rules of this framework give rise to different developmental trajectories, because of small biases in the preferential wiring properties of different nodes within the network. Finally, we were able to use this GNM to implicate the molecular and cellular processes that govern these different growth patterns.
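Generative network models of this kind typically add edges one at a time with probability proportional to a distance penalty multiplied by a topological affinity. Below is a compact sketch under assumed parameters, using a simple matching-index affinity and random node coordinates; the exponents eta and gamma play the role of the 'wiring rules' whose subtle variation alters the developmental trajectory.

```python
import numpy as np

# Edges are added one at a time with probability ~ (distance ** eta) *
# (topological affinity ** gamma). Coordinates, exponents, and the
# matching-index variant are illustrative assumptions.
rng = np.random.default_rng(0)
n, n_edges, eta, gamma = 40, 150, -2.0, 0.3
xyz = rng.random((n, 3)) * 100.0                  # node positions
D = np.linalg.norm(xyz[:, None] - xyz[None], axis=-1)
np.fill_diagonal(D, np.inf)                       # no self-connections
A = np.zeros((n, n), dtype=bool)

def matching_index(A):
    """Simple shared-neighbour affinity between every node pair."""
    Af = A.astype(float)
    deg = Af.sum(1)
    return (Af @ Af) / (deg[:, None] + deg[None, :] + 1e-9)

for _ in range(n_edges):
    K = matching_index(A) + 1e-3                  # floor keeps P > 0
    P = (D ** eta) * (K ** gamma)
    P[np.tril_indices(n)] = 0.0                   # count each pair once
    P[A] = 0.0                                    # skip existing edges
    p = P.ravel() / P.sum()
    i, j = np.unravel_index(rng.choice(n * n, p=p), (n, n))
    A[i, j] = A[j, i] = True

deg = A.sum(1)
print(f"edges: {A.sum() // 2}, mean degree: {deg.mean():.1f}, max: {deg.max()}")
```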
Becoming what you smell: adaptive sensing in the olfactory system
I will argue that the circuit architecture of the early olfactory system provides an adaptive, efficient mechanism for compressing the vast space of odor mixtures into the responses of a small number of sensors. In this view, the olfactory sensory repertoire employs a disordered code to compress a high dimensional olfactory space into a low dimensional receptor response space while preserving distance relations between odors. The resulting representation is dynamically adapted to efficiently encode the changing environment of volatile molecules. I will show that this adaptive combinatorial code can be efficiently decoded by systematically eliminating candidate odorants that bind to silent receptors. The resulting algorithm for 'estimation by elimination' can be implemented by a neural network that is remarkably similar to the early olfactory pathway in the brain. Finally, I will discuss how diffuse feedback from the central brain to the bulb, followed by unstructured projections back to the cortex, can produce the convergence and divergence of the cortical representation of odors presented in shared or different contexts. Our theory predicts a relation between the diversity of olfactory receptors and the sparsity of their responses that matches animals from flies to humans. It also predicts specific deficits in olfactory behavior that should result from optogenetic manipulation of the olfactory bulb and cortex, and in some disease states.
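The 'estimation by elimination' decoder is easy to state in code: any candidate odorant that binds to a silent receptor cannot be in the mixture. Here is a sketch with a random sparse sensing matrix; the sizes and binding probability are assumptions. Consistent with the predicted link between receptor diversity and response sparsity, the number of false positives shrinks as the receptor repertoire grows.

```python
import numpy as np

# Estimation by elimination: prune every candidate odorant that binds to
# at least one silent receptor. Random sparse sensing matrix; sizes and
# binding probability are illustrative assumptions.
rng = np.random.default_rng(0)
n_receptors, n_odorants, k_mix = 120, 500, 5
S = rng.random((n_receptors, n_odorants)) < 0.1    # receptor-odorant binding

mixture = np.sort(rng.choice(n_odorants, size=k_mix, replace=False))
active = S[:, mixture].any(axis=1)                 # receptors the mixture drives
silent = ~active

# An odorant survives only if it binds no silent receptor
candidates = [j for j in range(n_odorants) if not S[silent, j].any()]
print("true mixture:", mixture.tolist())
print("decoded set: ", candidates)                 # mixture + rare false positives
```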
Demystifying the richness of visual perception
Human vision is full of puzzles. Observers can grasp the essence of a scene in an instant, yet when probed for details they are at a loss. People have trouble finding their keys, yet the keys may be quite visible once found. How does one explain this combination of marvelous successes and quirky failures? I will describe our attempts to develop a unifying theory that brings a satisfying order to multiple phenomena. One key is to understand peripheral vision. A visual system cannot process everything with full fidelity, and therefore must lose some information. Peripheral vision must condense a mass of information into a succinct representation that nonetheless carries the information needed for vision at a glance. We have proposed that the visual system deals with limited capacity in part by representing its input in terms of a rich set of local image statistics, where the local regions grow, and the representation becomes less precise, with distance from fixation. This scheme gains the computation of sophisticated image features at the expense of spatial localization of those features. What are the implications of such an encoding scheme? Critical to our understanding has been the use of methodologies for visualizing the equivalence classes of the model. These visualizations allow one to see quickly that many of the puzzles of human vision may arise from a single encoding mechanism. They have suggested new experiments and predicted unexpected phenomena. Furthermore, visualization of the equivalence classes has facilitated the generation of testable model predictions, allowing us to study the effects of this relatively low-level encoding on a wide range of higher-level tasks. Peripheral vision helps explain many of the puzzles of vision, but some remain. By examining the phenomena that cannot be explained by peripheral vision, we gain insight into the nature of additional capacity limits in vision. In particular, I will suggest that decision processes face general-purpose limits on the complexity of the tasks they can perform at a given time.
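A toy version of the eccentricity-scaled pooling scheme: summary statistics are computed over windows that grow linearly with distance from fixation, so the representation becomes progressively coarser in the periphery. A 1D 'image' and plain mean/variance stand in for the model's rich set of 2D texture statistics; the linear scaling factor is an assumption.

```python
import numpy as np

# Pooling windows grow linearly with eccentricity, so the representation
# becomes coarser away from fixation. 1D toy with mean/variance standing
# in for the model's rich set of local image statistics.
rng = np.random.default_rng(0)
image = rng.random(512)                    # 1D "image", fixation at index 0

start, scale, pos = 8.0, 0.4, 0.0
while pos + start < image.size:
    width = start + scale * pos            # window size grows with eccentricity
    seg = image[int(pos):int(pos + width)]
    print(f"ecc {int(pos):4d}: window {int(width):3d} px, "
          f"mean {seg.mean():.2f}, var {seg.var():.2f}")
    pos += width
```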
Physical Computation in Insect Swarms
Our world is full of living creatures that must share information to survive and reproduce. As humans, we easily forget how hard it is to communicate within natural environments. So how do organisms solve this challenge, using only natural resources? Ideas from computer science, physics and mathematics, such as energetic cost, compression, and detectability, define universal criteria that almost all communication systems must meet. We use insect swarms as a model system for identifying how organisms harness the dynamics of communication signals, perform spatiotemporal integration of these signals, and propagate those signals to neighboring organisms. In this talk I will focus on two types of communication in insect swarms: visual communication, in which fireflies communicate over long distances using light signals, and chemical communication, in which bees serve as signal amplifiers to propagate pheromone-based information about the queen’s location.
Population dynamics of the thalamic head direction system during drift and reorientation
The head direction (HD) system is classically modeled as a ring attractor network that ensures a stable representation of the animal’s head direction. This unidimensional description popularized the view of the HD system as the brain’s internal compass. However, unlike a globally consistent magnetic compass, the orientation of the HD system is dynamic, depends on local cues, and exhibits remapping across familiar environments. Such a system requires mechanisms to remember and align to familiar landmarks, which may not be well described within the classic one-dimensional framework. To search for these mechanisms, we performed large population recordings of mouse thalamic HD cells using calcium imaging, during controlled manipulations of a visual landmark in a familiar environment. First, we found that realignment of the system was associated with a continuous rotation of the HD network representation. The speed and angular distance of this rotation were predicted by a second dimension of the ring attractor, which we refer to as network gain: the instantaneous population firing rate. Moreover, the 360-degree azimuthal profile of network gain in darkness maintained a ‘memory trace’ of a previously displayed visual landmark. In a second experiment, brief presentations of a rotated landmark revealed an attraction of the network back to its initial orientation, suggesting a time-dependent mechanism underlying the formation of these network-gain memory traces. Finally, in a third experiment, continuous rotation of a visual landmark induced a similar rotation of the HD representation that persisted after removal of the landmark, demonstrating that HD network orientation is subject to experience-dependent recalibration. Together, these results provide new mechanistic insights into how the neural compass flexibly adapts to environmental cues to maintain a reliable representation of head direction.
Conceptual Change Induced by Analogical Reasoning Sparks “Aha!” Moments
Although analogical reasoning has been assumed to involve insight and its associated “aha!” experience, the relationship between these phenomena has never been directly probed empirically. In this study we investigated the relationship between representational change and the “aha!” experience during analogical reasoning. A novel set of verbal analogy stimuli was developed for use as an insight task. Across two experiments, participants reported significantly stronger aha moments and showed greater evidence of representational change on trials with more semantically distant analogies. Further, the strength of reported aha moments was correlated with the degree to which participants’ descriptions of the analogies changed over the course of each trial. Lastly, we probed the individual differences associated with a tendency to report stronger “aha” experiences, particularly mood, curiosity, and reward responsiveness. The findings shed light on the affective components of analogical reasoning and suggest that measuring affective responses during such tasks may yield novel insights into the mechanisms of creative analogical reasoning.
Natural switches in sensory attention rapidly modulate hippocampal spatial codes
During natural behavior, animals dynamically switch between different behaviors, yet little is known about how the brain performs such behavioral switches. Navigation is a complex dynamic behavior that enables testing these kinds of switches: it requires the animal to know its own allocentric (world-centered) location within the environment while also paying attention to sudden incoming events such as obstacles or other conspecifics, so the animal may need to rapidly switch from representing its own allocentric position to egocentrically representing ‘things out there’. Here we used an ethological task in which two bats flew together in a very large environment (130 meters) and had to switch between two behaviors: (i) navigation, and (ii) obstacle avoidance during ‘cross-over’ events with the other bat. Bats increased their echolocation click rate before a cross-over, indicating spatial attention to the other bat. Hippocampal CA1 neurons represented the bat’s own position when flying alone (allocentric place coding); surprisingly, when meeting the other bat, neurons switched very rapidly to jointly representing the inter-bat distance × position (egocentric × allocentric coding). This switch to a neuronal representation of the other bat was correlated on a trial-by-trial basis with the attention signal, as indexed by the bat’s echolocation calls, suggesting that sensory attention controls these major switches in neural coding. Interestingly, we found that in place cells, the different place fields of the same neuron could exhibit very different tuning to inter-bat distance, creating a non-separable coding of allocentric position × egocentric distance. Together, our results suggest that attentional switches during navigation, which in bats can be measured directly from their echolocation signals, elicit rapid dynamics of hippocampal spatial coding. More broadly, this study demonstrates that during natural behavior, when animals often switch between different behaviors, neural circuits can rapidly and flexibly switch their core computations.
The 2021 Annual Bioengineering Lecture + Bioinspired Guidance, Navigation and Control Symposium
Join the Department of Bioengineering on the 26th May at 9:00am for The 2021 Annual Bioengineering Lecture + Bioinspired Guidance, Navigation and Control Symposium. This year’s lecture speaker will be distinguished bioengineer and neuroscientist Professor Mandyam V. Srinivasan AM FRS, from the University of Queensland. Professor Srinivasan studies visual systems, particularly those of bees and birds. His research has revealed how flying insects negotiate narrow gaps, regulate the height and speed of flight, estimate distance flown, and orchestrate smooth landings. Apart from enhancing fundamental knowledge, these findings are leading to novel, biologically inspired approaches to the design of guidance systems for unmanned aerial vehicles with applications in the areas of surveillance, security and planetary exploration. Following Professor Srinivasan’s lecture will be the Bioinspired GNC Mini Symposium with guest speakers from Google DeepMind, Imperial College London, the University of Würzburg and the University of Konstanz giving talks on their research into autonomous robot navigation, neural mechanisms of compass orientation in insects and computational approaches to motor control.
Learning to perceive with new sensory signals
I will begin by describing recent research taking a new, model-based approach to perceptual development. This approach uncovers fundamental changes in information processing underlying the protracted development of perception, action, and decision-making in childhood. For example, integration of multiple sensory estimates via reliability-weighted averaging – widely used by adults to improve perception – is often not seen until surprisingly late into childhood, as assessed by both behaviour and neural representations. This approach forms the basis for a newer question: the scope for the nervous system to deploy useful computations (e.g. reliability-weighted averaging) to optimise perception and action using newly-learned sensory signals provided by technology. Our initial model system is augmenting visual depth perception with devices translating distance into auditory or vibro-tactile signals. This problem has immediate applications to people with partial vision loss, but the broader question concerns our scope to use technology to tune in to any signal not available to our native biological receptors. I will describe initial progress on this problem, and our approach to operationalising what it might mean to adopt a new signal comparably to a native sense. This will include testing for its integration (weighted averaging) alongside the native senses, assessing the level at which this integration happens in the brain, and measuring the degree of ‘automaticity’ with which new signals are used, compared with native perception.
Stereo vision in humans and insects
Stereopsis, deriving information about distance by comparing the views from two eyes, is widespread in vertebrates but so far known in only one class of invertebrates, the praying mantids. Understanding a form of stereopsis that evolved independently in such a different nervous system promises to shed light on the constraints governing any stereo system. Behavioral experiments indicate that insect stereopsis is functionally very different from that studied in vertebrates. Vertebrate stereopsis depends on matching up the pattern of contrast in the two eyes; it works in static scenes, and may have evolved to break camouflage rather than to detect distances. Insect stereopsis matches up regions of the image where the luminance is changing; it is insensitive to the detailed pattern of contrast and operates to detect the distance to a moving target. Work from my lab has revealed a network of neurons within the mantis brain that are tuned to binocular disparity, including some that project to early visual areas. This contrasts with previous theories, which postulated that disparity was computed only at a single, late stage, where visual information is passed down to motor neurons. Thus, despite their very different properties, the neural mechanisms supporting vertebrate and insect stereopsis may be computationally more similar than has been assumed.
Locally-ordered representation of 3D space in the entorhinal cortex
When animals navigate on a two-dimensional (2D) surface, many neurons in the medial entorhinal cortex (MEC) are activated as the animal passes through multiple locations (‘firing fields’) arranged in a hexagonal lattice that tiles the locomotion surface; these neurons are known as grid cells. However, although our world is three-dimensional (3D), the 3D volumetric representation in MEC remains unknown. Here we recorded MEC cells in freely flying bats and found several classes of spatial neurons, including 3D border cells, 3D head-direction cells, and neurons with multiple 3D firing fields. Many of these multifield neurons were 3D grid cells, whose neighboring fields were separated by a characteristic distance, forming a local order, but these cells lacked any global lattice arrangement of their fields. Thus, while 2D grid cells form a global lattice, characterized by both local and global order, 3D grid cells exhibited only local order, creating a locally ordered metric for space. We modeled grid cells as emerging from pairwise interactions between fields, which yielded a hexagonal lattice in 2D and local order in 3D, thus describing both 2D and 3D grid cells with one unifying model. Together, these data and the model illuminate the fundamental differences and similarities between neural codes for 3D and 2D space in the mammalian brain.
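The pairwise-interaction idea can be illustrated with a generic particle toy (not the authors' specific interaction law): fields attract or repel with a preferred spacing and are relaxed by gradient descent inside a box. Plotting the 2D solution reveals hexagonal packing, whereas the 3D solution keeps only the characteristic nearest-neighbour spacing, i.e., local order without a global lattice.

```python
import numpy as np

# Firing fields as particles with a preferred pairwise spacing d0
# (Lennard-Jones-like force), relaxed inside a box. The potential and all
# parameters are illustrative assumptions.
rng = np.random.default_rng(0)

def relax(dim, n=30, d0=1.0, box=6.0, steps=2000, lr=0.005):
    x = rng.random((n, dim)) * box
    for _ in range(steps):
        diff = x[:, None, :] - x[None, :, :]
        r = np.linalg.norm(diff, axis=-1) + np.eye(n)     # avoid /0 on diagonal
        f = 24 * (2 * (d0 / r) ** 13 - (d0 / r) ** 7) / d0
        f = np.clip(f, -50.0, 50.0)                       # numerical safety
        np.fill_diagonal(f, 0.0)
        x = np.clip(x + lr * (f[..., None] * diff / r[..., None]).sum(1), 0, box)
    return x

def nn_spacing(x):
    D = np.linalg.norm(x[:, None] - x[None], axis=-1) + np.eye(len(x)) * 1e9
    return D.min(axis=1)

for dim in (2, 3):
    s = nn_spacing(relax(dim))
    print(f"{dim}D: nearest-field spacing {s.mean():.2f} +/- {s.std():.2f}")
```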
Australian Bogong moths use a true stellar compass for long-distance navigation at night
Each spring, billions of Bogong moths escape hot conditions in different regions of southeast Australia by migrating over 1000 km to a limited number of cool caves in the Australian Alps, historically used for aestivating over the summer. At the beginning of autumn, the same individuals make a return migration to their breeding grounds to reproduce and die. To steer migration, Bogong moths sense the Earth’s magnetic field and correlate its directional information with visual cues. In this presentation, we will show that a critically important visual cue is the distribution of starlight within the austral night sky. By tethering spring and autumn migratory moths in a flight simulator, we found that under natural, dorsally projected night skies, and in a nulled magnetic field (disabling the magnetic sense), moths flew in their seasonally appropriate migratory directions, turning in the opposite direction when the night sky was rotated 180°. Visual interneurons in the moth’s optic lobe and central brain responded vigorously to identical sky rotations. Migrating Bogong moths thus use the starry night sky as a true compass to distinguish geographic cardinal directions, the first invertebrate known to do so. These stellar cues are likely reinforced by the Earth’s magnetic field to create a robust compass mechanism for long-distance nocturnal navigation.
State-dependent egocentric and allocentric heading representation in the monarch butterfly sun compass
For spatial orientation, heading information can be processed in two different frames of reference: a self-centered, egocentric frame or a world-centered, allocentric frame. Using the most efficient frame of reference is particularly important if an animal migrates over large distances, as is the case for the monarch butterfly (Danaus plexippus). These butterflies employ a sun compass to travel more than 4,000 kilometers to their destination in central Mexico. We developed tetrode recordings from the heading-direction network of tethered flying monarch butterflies that were allowed to orient with respect to a sun stimulus. We show that the neurons switch their frame of reference depending on the animal’s locomotion state. In quiescence, the heading-direction cells encode the sun bearing in an egocentric reference frame, while during active flight the heading direction is encoded in an allocentric reference frame. By switching to an allocentric frame of reference during flight, monarch butterflies convert the sun into a global compass cue for long-distance navigation, an ideal strategy for maintaining a migratory heading.
Analogical Reasoning and Executive Functions - A Life Span Approach
From a developmental standpoint, it has been argued that two major complementary factors contribute to the development of analogy comprehension: world knowledge and executive functions. Here I will provide evidence in support of the second factor. Beyond paradigms that manipulate task difficulty (e.g., the number and types of distractors and the semantic distance between domains), we will provide eye-tracking data that describe differences in the way children and adults compare the base and target domains in analogy problems. We will follow the same approach with ageing people. This latter population provides a unique opportunity to disentangle the contributions of knowledge and executive processes to analogy making, since knowledge is (more than) preserved while executive control declines. Using this paradigm, I will show the extent to which world knowledge (assessed through vocabulary) compensates for declining executive control in older populations. Our eye-tracking data suggest that, to a certain extent, differences between younger and older adults are analogous to the differences between younger adults and children in the way they compare the base and target domains in analogy problems.
Short-Distance Connections Enhance Neural Network Dynamics
Bernstein Conference 2024
Comparing noisy neural population dynamics using optimal transport distances
COSYNE 2025
Tracking the distance to criticality across the mouse visual hierarchy
COSYNE 2025
Visual circuitry for distance estimation in Drosophila
COSYNE 2025
Acoustical distance to the average voice modulates neural tuning in the macaque voice patches
FENS Forum 2024
Active tool-use training in near and far distances does not change time perception in peripersonal or far space
FENS Forum 2024
Computational model-based analysis of spatial navigation strategies under stress and uncertainty using place, distance, and border cells
FENS Forum 2024
HOISDF: Estimating hand-object interactions from a single camera via global signed distance fields
FENS Forum 2024