Physics
Eero Simoncelli, Ph.D.
The Center for Neural Science at New York University (NYU), jointly with the Center for Computational Neuroscience (CCN) at the Flatiron Institute of the Simons Foundation, invites applications for an open-rank joint position, with a preference for junior or mid-career candidates. We seek exceptional candidates who use computational frameworks to develop concepts, models, and tools for understanding brain function. Areas of interest include sensory representation and perception, memory, decision-making, adaptation and learning, and motor control. A Ph.D. in a relevant field, such as neuroscience, engineering, physics, or applied mathematics, is required. Review of applications will begin 28 March 2021. Further information:
* Joint position: https://apply.interfolio.com/83845
* NYU Center for Neural Science: https://www.cns.nyu.edu/
* Flatiron Institute Center for Computational Neuroscience: https://www.simonsfoundation.org/flatiron/center-for-computational-neuroscience/
Prof. Dr. Tobias Rose
The selected candidate will investigate the 'Encoding of Landmark Stability and Stability of Landmark Encoding'. You will study visual landmark encoding at the intersection of hippocampal, thalamic, and cortical inputs to retrosplenial cortex. You will use cutting-edge miniature two-photon Ca2+ imaging, enabling you to longitudinally record activity in defined, large neuronal populations and long-range afferents in freely moving animals. You will carry out rigorous neuronal and behavioral analyses within automated closed-loop tasks tailored for visual navigation. This will involve the application of advanced tools for dense behavioral quantification, including multi-angle videography, inertial motion sensing, and egocentric recording with head-mounted cameras for the reconstruction of retinal input. Our aim is a comprehensive understanding of the immediate and sustained multi-area neuronal representation of visual landmarks during unrestricted behavior: how stable visual landmarks are encoded, and how these representations are stabilized to support robust allocentric navigation.
N/A
New York University is seeking exceptional PhD candidates with strong quantitative training (e.g., physics, mathematics, engineering) coupled with a clear interest in the scientific study of the brain. Doctoral programs are flexible, allowing students to pursue research across departmental boundaries. Admissions are handled separately by each department, and students interested in pursuing graduate studies should submit an application to the program that best fits their goals and interests.
Geoffrey J Goodhill
An NIH-funded collaboration between David Prober (Caltech), Thai Truong (USC) and Geoff Goodhill (Washington University in St Louis) aims to gain new insight into the neural circuits underlying sleep through a combination of whole-brain neural recordings in zebrafish and theoretical/computational modeling. A postdoc position is available in the Goodhill lab to contribute to the modeling and computational analysis components. Using novel 2-photon imaging technologies, Prober and Truong are recording from the entire larval zebrafish brain at single-neuron resolution continuously for long periods of time, examining neural circuit activity during normal day-night cycles and in response to genetic and pharmacological perturbations. The Goodhill lab is analyzing the resulting very large datasets using a variety of sophisticated computational approaches, and is using these results to build new theoretical models that reveal how neural circuits interact to govern sleep.
Professor Geoffrey J Goodhill
The Department of Neuroscience at Washington University School of Medicine is currently recruiting investigators with the passion to create knowledge, pursue bold visions, and challenge canonical thinking as we expand into our new 600,000 sq ft purpose-built neurosciences research building. We are now seeking a tenure-track investigator at the level of Assistant Professor to develop an innovative research program in Theoretical/Computational Neuroscience. The successful candidate will join a thriving theoretical/computational neuroscience community at Washington University, including the new Center for Theoretical and Computational Neuroscience. In addition, the Department has world-class research strengths in systems, circuits, and behavioral neuroscience, as well as in cellular and molecular neuroscience, using a variety of animal models including worms, flies, zebrafish, rodents and non-human primates. We are particularly interested in outstanding researchers who are both creative and collaborative.
Jorge Jaramillo
The Grossman Center for Quantitative Biology and Human Behavior at the University of Chicago seeks outstanding applicants for multiple postdoctoral positions in computational and theoretical neuroscience. Appointees will join as Grossman Center Postdoctoral Fellows, with the freedom to work with any of its faculty members. We especially welcome applicants who develop computational models and machine-learning analysis methods to study the brain at the circuit, systems, or cognitive level. The current Grossman Center faculty members are Brent Doiron, Jorge Jaramillo, and Ramon Nogueira. Appointees will have access to state-of-the-art facilities and multiple opportunities for collaboration with exceptional experimental labs within the Department of Neurobiology, as well as with labs in the departments of Physics, Computer Science, and Statistics. The Grossman Center offers competitive postdoctoral salaries in the vibrant and international city of Chicago, and a rich intellectual environment that includes the Argonne National Laboratory and the Data Science Institute. The Grossman Center is currently engaged in a major expansion that includes the addition of several new faculty members over the next few years.
N/A
The postdoctoral researcher will conduct research on modelling and simulation of reward-modulated prosocial behavior and decision-making. The position is part of a larger effort to uncover the computational and mechanistic bases of prosociality and empathy at the behavioral and circuit levels. The role involves working at the interface between experimental data (animal behavior and electrophysiology) and theoretical modelling, with an emphasis on Multi-Agent Reinforcement Learning and neural population dynamics.
Jean-Pascal Pfister
The Theoretical Neuroscience Group of the University of Bern is seeking applications for a PhD position, funded by a Swiss National Science Foundation grant titled “Why Spikes?”. This project aims to answer a nearly century-old question in neuroscience: “What are spikes good for?”. Since the discovery of action potentials by Lord Adrian in 1926, it has remained largely unknown what benefits spiking neurons offer compared to analog neurons. Traditionally, it has been argued that spikes are good for long-distance communication or for temporally precise computation. However, there is no systematic study that quantitatively compares the communication and computational benefits of spiking neurons with respect to analog neurons. The aim of the project is to systematically quantify the benefits of spiking at various levels by developing and analyzing appropriate mathematical models. The PhD student will be supervised by Prof. Jean-Pascal Pfister (Theoretical Neuroscience Group, Department of Physiology, University of Bern). The project will involve close collaborations within a highly motivated team as well as regular exchanges of ideas with the other theory groups at the institute.
Professor Geoffrey J Goodhill
The Department of Neuroscience at Washington University School of Medicine is seeking a tenure-track investigator at the level of Assistant Professor to develop an innovative research program in Theoretical/Computational Neuroscience. The successful candidate will join a thriving theoretical/computational neuroscience community at Washington University, including the new Center for Theoretical and Computational Neuroscience. In addition, the Department has world-class research strengths in systems, circuits, and behavioral neuroscience, as well as in cellular and molecular neuroscience, using a variety of animal models including worms, flies, zebrafish, rodents and non-human primates. The Department’s focus on fundamental neuroscience, outstanding research support facilities, and the depth, breadth and collegiality of our culture provide an exceptional environment in which to launch your independent research program.
“Brain theory, what is it or what should it be?”
In the neurosciences the need for some 'overarching' theory is sometimes expressed, but it is not always obvious what is meant by this. One can perhaps agree that in modern science observation and experimentation are normally complemented by 'theory', i.e. the development of theoretical concepts that help guide and evaluate experiments and measurements. A deeper discussion of 'brain theory' will require the clarification of some further distinctions, in particular: theory vs. model, and brain research (and its theory) vs. neuroscience. Other questions are: Does a theory require mathematics? Or even differential equations? Today it is often taken for granted that the whole universe, including everything in it, for example humans, animals, and plants, can be adequately treated by physics, and that theoretical physics is therefore the overarching theory. Even if this is the case, it has turned out that in some particular parts of physics (the historical example is thermodynamics) it may be useful to simplify the theory by introducing additional theoretical concepts that can in principle be 'reduced' to more complex descriptions on the 'microscopic' level of basic physical particles and forces. In this sense, brain theory may be regarded as part of theoretical neuroscience, which sits inside biophysics and therefore inside physics, or theoretical physics. Still, in neuroscience and brain research, additional concepts are typically used to describe results and help guide experimentation that are 'outside' physics, beginning with neurons and synapses, names of brain parts and areas, up to concepts like 'learning', 'motivation', and 'attention'. Certainly, we do not yet have one theory that includes all these concepts, so 'brain theory' is still in a 'pre-Newtonian' state. However, it may still be useful to understand in general the relations between a larger theory and its 'parts', or between microscopic and macroscopic theories, or between theories at different 'levels' of description. This is what I plan to do.
Where are you Moving? Assessing Precision, Accuracy, and Temporal Dynamics in Multisensory Heading Perception Using Continuous Psychophysics
On finding what you’re (not) looking for: prospects and challenges for AI-driven discovery
Recent high-profile scientific achievements by machine learning (ML) and especially deep learning (DL) systems have reinvigorated interest in ML for automated scientific discovery (e.g., Wang et al. 2023). Much of this work is motivated by the thought that DL methods might facilitate the discovery of phenomena, hypotheses, or even models or theories more efficiently than traditional, theory-driven approaches. This talk considers some of the more specific obstacles to automated, DL-driven discovery in frontier science, focusing on gravitational-wave astrophysics (GWA) as a representative case study. In the first part of the talk, we argue that despite these efforts, prospects for DL-driven discovery in GWA remain uncertain. In the second part, we advocate a shift in focus towards the ways DL can be used to augment or enhance existing discovery methods, and the epistemic virtues and vices associated with these uses. We argue that the primary epistemic virtue of many such uses is to decrease the opportunity costs of investigating puzzling or anomalous signals, and that the right framework for evaluating these uses comes from philosophical work on pursuitworthiness.
Modelling the fruit fly brain and body
Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster, including the connectivity of the whole brain at single-neuron resolution. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method that makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole-body physics simulation of the fruit fly that can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.
Human Echolocation for Localization and Navigation – Behaviour and Brain Mechanisms
Virtual Brain Twins for Brain Medicine and Epilepsy
Over the past decade we have demonstrated that the fusion of subject-specific structural information of the human brain with mathematical dynamic models allows building biologically realistic brain network models, which have predictive value beyond the explanatory power of each approach independently. The network nodes hold neural population models, derived using mean-field techniques from statistical physics, which express ensemble activity via collective variables. Our hybrid approach fuses data-driven with forward-modeling-based techniques and has been successfully applied to explain healthy brain function and in clinical translation, including aging, stroke and epilepsy. Here we illustrate the workflow with the example of epilepsy: we reconstruct personalized connectivity matrices of epileptic patients using diffusion tensor imaging (DTI). Subsets of brain regions generating seizures in patients with refractory partial epilepsy are referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit other, healthy brain regions and propagate activity through large brain networks. The identification of the EZ is crucial for the success of neurosurgery and presents one of the historically difficult questions in clinical neuroscience. The application of the latest techniques in Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allows the estimation of the EZ, including estimates of confidence and diagnostics of the performance of the inference. The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models. The workflow of end-to-end modeling is an integral part of the European neuroinformatics platform EBRAINS and enables neuroscientists worldwide to build and estimate personalized virtual brains.
Visual-vestibular cue comparison for perception of environmental stationarity
How fly neurons compute the direction of visual motion
Detecting the direction of image motion is important for visual navigation, predator avoidance and prey capture, and thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: it rather needs to be computed by subsequent neural circuits, involving a comparison of the signals from neighboring photoreceptors over time. The exact nature of this process represents a classic example of neural computation and has been a longstanding question in the field. Much progress has been made in recent years in the fruit fly Drosophila melanogaster by genetically targeting individual neuron types to block, activate or record from them. Our results obtained this way demonstrate that the local direction of motion is computed in two parallel ON and OFF pathways. Within each pathway, a retinotopic array of four direction-selective T4 (ON) and T5 (OFF) cells represents the four Cartesian components of local motion vectors (leftward, rightward, upward, downward). Since none of the presynaptic neurons is directionally selective, direction selectivity first emerges within T4 and T5 cells. Our present research focuses on the cellular and biophysical mechanisms by which the direction of image motion is computed in these neurons.
The physics of sentience
The balanced brain: two-photon microscopy of inhibitory synapse formation
Coordination between excitatory and inhibitory synapses (providing positive and negative signals, respectively) is required to ensure proper information processing in the brain. Many brain disorders, especially neurodevelopmental disorders, are rooted in a specific disturbance of this coordination. In my research group we use a combination of two-photon microscopy and electrophysiology to examine how inhibitory synapses are formed and how this formation is coordinated with nearby excitatory synapses.
Self-perception: mechanosensation and beyond
Brain-organ communication plays a crucial role in maintaining the body's physiological and psychological homeostasis and is controlled by complex neural and hormonal systems, including the internal mechanosensory organs. However, progress has been slow due to technical hurdles: the sensory neurons are deeply buried inside the body and are not readily accessible for direct observation; the projection patterns from different organs or body parts are complex rather than converging onto dedicated brain regions; and the coding principles cannot be directly adapted from those learned in conventional sensory pathways. Our lab applies the pipeline of "biophysics of receptors - cell biology of neurons - functionality of neural circuits - animal behaviors" to explore the molecular and neural mechanisms of self-perception. We mainly focus on three questions: (1) the molecular and cellular basis of proprioception and interoception; (2) the circuit mechanisms of sensory coding and the integration of internal and external information; (3) the function of interoception in regulating behavioral homeostasis.
Understanding Machine Learning via Exactly Solvable Statistical Physics Models
The affinity between statistical physics and machine learning has a long history. I will describe the main lines of this long-lasting friendship in the context of current theoretical challenges and open questions about deep learning. Theoretical physics often proceeds in terms of solvable synthetic models, I will describe the related line of work on solvable models of simple feed-forward neural networks. I will highlight a path forward to capture the subtle interplay between the structure of the data, the architecture of the network, and the optimization algorithms commonly used for learning.
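As a concrete instance of the solvable-model programme mentioned above (an illustration of the genre, not necessarily the model discussed in the talk), the teacher-student perceptron admits a closed-form learning curve: a teacher with weights w* labels Gaussian inputs, a student w is trained on P = alpha*N examples, and the generalization error depends only on the teacher-student overlap.

```latex
% Teacher-student perceptron: a classic solvable feed-forward model (illustrative).
% Labels produced by the teacher on Gaussian inputs:
y^{\mu} = \operatorname{sign}\!\left(\tfrac{1}{\sqrt{N}}\,\mathbf{w}^{*}\cdot\mathbf{x}^{\mu}\right),
\qquad \mathbf{x}^{\mu}\sim\mathcal{N}(0,\mathbf{I}_{N}),\qquad \mu = 1,\dots,P=\alpha N .
% Generalization error of a student w, as a function of its overlap with the teacher:
\varepsilon_{g} \;=\; \frac{1}{\pi}\,\arccos\!\left(\frac{\mathbf{w}\cdot\mathbf{w}^{*}}{\lVert\mathbf{w}\rVert\,\lVert\mathbf{w}^{*}\rVert}\right).
```

In the high-dimensional limit (N to infinity at fixed alpha) the overlap concentrates and can be computed with replica or related mean-field methods, which is what makes the interplay of data structure, architecture, and learning algorithm analytically tractable in such models.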
Neural networks in the replica-mean field limits
In this talk, we propose to decipher the activity of neural networks via a “multiply and conquer” approach. This approach considers limit networks made of infinitely many replicas with the same basic neural structure. The key point is that these so-called replica-mean-field networks are in fact simplified, tractable versions of neural networks that retain important features of the finite network structure of interest. The finite size of neuronal populations and synaptic interactions is a core determinant of neural dynamics, being responsible for non-zero correlation in the spiking activity and for finite transition rates between metastable neural states. Theoretically, we develop our replica framework by expanding on ideas from the theory of communication networks rather than from statistical physics to establish Poissonian mean-field limits for spiking networks. Computationally, we leverage our original replica approach to characterize the stationary spiking activity of various network models via reduction to tractable functional equations. We conclude by discussing perspectives about how to use our replica framework to probe nontrivial regimes of spiking correlations and transition rates between metastable neural states.
Setting network states via the dynamics of action potential generation
To understand neural computation and the dynamics in the brain, we usually focus on the connectivity among neurons. In contrast, the properties of single neurons are often thought to be negligible, at least as far as the activity of networks is concerned. In this talk, I will contradict this notion and demonstrate how the biophysics of action-potential generation can have a decisive impact on network behaviour. Our recent theoretical work shows that, among regularly firing neurons, the somewhat unattended homoclinic type (characterized by a spike onset via a saddle homoclinic orbit bifurcation) particularly stands out: First, spikes of this type foster specific network states - synchronization in inhibitory and splayed-out/frustrated states in excitatory networks. Second, homoclinic spikes can easily be induced by changes in a variety of physiological parameters (like temperature, extracellular potassium, or dendritic morphology). As a consequence, such parameter changes can even induce switches in network states, solely based on a modification of cellular voltage dynamics. I will provide first experimental evidence and discuss functional consequences of homoclinic spikes for the design of efficient pattern-generating motor circuits in insects as well as for mammalian pathologies like febrile seizures. Our analysis predicts an interesting role for homoclinic action potentials as an integral part of brain dynamics in both health and disease.
Spontaneous Emergence of Computation in Network Cascades
Neuronal network computation and computation by avalanche-supporting networks are of interest to the fields of physics, computer science (computation theory as well as statistical or machine learning) and neuroscience. Here we show that computation of complex Boolean functions arises spontaneously in threshold networks as a function of connectivity and antagonism (inhibition), carried out by logic automata (motifs) in the form of computational cascades; a minimal sketch of such a motif follows below. We explain the emergent inverse relationship between the computational complexity of the motifs and their rank ordering by function probability, and its relationship to symmetry in function space. We also show that the optimal fraction of inhibition observed here supports results in computational neuroscience relating to optimal information processing.
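A minimal sketch of the kind of motif meant here (my illustration, not the authors' code): a two-layer threshold network in which a single inhibitory connection makes a linearly non-separable function, XOR, computable by a cascade.

```python
import itertools
import numpy as np

def threshold_unit(inputs, weights, theta):
    """Binary threshold unit: fires iff the weighted input reaches theta."""
    return int(np.dot(inputs, weights) >= theta)

def cascade_xor(x1, x2):
    # Layer 1: an OR unit and an AND unit, both purely excitatory.
    h_or = threshold_unit([x1, x2], [1, 1], theta=1)
    h_and = threshold_unit([x1, x2], [1, 1], theta=2)
    # Layer 2: OR gated by inhibition (weight -1) from AND yields XOR.
    return threshold_unit([h_or, h_and], [1, -1], theta=1)

# Enumerate the truth table to confirm the cascade computes XOR.
for x1, x2 in itertools.product([0, 1], repeat=2):
    print(f"{x1} XOR {x2} -> {cascade_xor(x1, x2)}")
```

Sweeping the fraction of negative weights in random networks of such units, and tallying which Boolean functions the downstream cascade computes, is then a direct way to pose the rank-ordering question studied in the talk.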
A model of colour appearance based on efficient coding of natural images
An object’s colour, brightness and pattern are all influenced by its surroundings, and a number of visual phenomena and “illusions” have been discovered that highlight these often dramatic effects. Explanations for these phenomena range from low-level neural mechanisms to high-level processes that incorporate contextual information or prior knowledge. Importantly, few of these phenomena can currently be accounted for when measuring an object’s perceived colour. Here we ask to what extent colour appearance is predicted by a model based on the principle of coding efficiency. The model assumes that the image is encoded by noisy spatio-chromatic filters at one-octave separations, which are either circularly symmetrical or oriented. Each spatial band’s lower threshold is set by the contrast sensitivity function, and the dynamic range of the band is a fixed multiple of this threshold, above which the response saturates. Filter outputs are then reweighted to give equal power in each channel for natural images. We demonstrate that the model fits human behavioural performance in psychophysics experiments, as well as primate retinal ganglion cell responses. Next we systematically test the model’s ability to qualitatively predict over 35 brightness and colour phenomena, with almost complete success. This implies that, contrary to high-level processing explanations, much of colour appearance is potentially attributable to simple mechanisms evolved for efficient coding of natural images, and it provides a basis for modelling the vision of humans and other animals.
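A minimal sketch of the response rule described above (the CSF samples and the dynamic-range multiple are placeholders, not the paper's values): each band responds only above its contrast threshold and saturates at a fixed multiple of that threshold, and channels are then reweighted toward equal power under a stand-in 1/f natural-image spectrum.

```python
import numpy as np

def band_response(contrast, csf_sensitivity, dynamic_range=10.0):
    """Response is zero below the band's CSF threshold and saturates at
    dynamic_range times that threshold."""
    threshold = 1.0 / csf_sensitivity        # detection threshold per band
    ceiling = dynamic_range * threshold      # response saturates above this
    c = np.clip(np.abs(contrast), threshold, ceiling)
    return np.sign(contrast) * (c - threshold) / (ceiling - threshold)

# One-octave band centre frequencies (cycles/deg) with illustrative CSF samples.
freqs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
sens = np.array([80.0, 120.0, 100.0, 40.0, 10.0])
contrasts = np.array([0.005, 0.02, 0.05, 0.1, 0.3])
responses = band_response(contrasts, sens)

# Reweight channels toward equal power, assuming a placeholder 1/f amplitude
# spectrum for natural images (weight grows with frequency).
weights = freqs / freqs.mean()
print(weights * responses)
```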
An investigation of perceptual biases in spiking recurrent neural networks trained to discriminate time intervals
Magnitude estimation and stimulus discrimination tasks are affected by perceptual biases that cause the stimulus parameter to be perceived as shifted toward the mean of its distribution. These biases have been extensively studied in psychophysics and, more recently and to a lesser extent, with neural activity recordings. New computational techniques allow us to train spiking recurrent neural networks on the tasks used in the experiments. This provides us with another valuable tool with which to investigate the network mechanisms responsible for the biases and how behavior could be modeled. As an example, in this talk I will consider networks trained to discriminate the durations of temporal intervals. The trained networks exhibited the contraction bias, even though they were trained with a stimulus sequence without temporal correlations. The neural activity during the delay period carried information about the stimuli of the current and previous trials, this being one of the mechanisms giving rise to the contraction bias. The population activity described trajectories in a low-dimensional space, and their relative locations depended on the prior distribution. The results can be modeled with an ideal observer that, during the delay period, sees a combination of the current and previous stimuli. Finally, I will describe how the neural trajectories in state space encode an estimate of the interval duration. The approach could be applied to other cognitive tasks.
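For concreteness, a minimal version of the ideal-observer account sketched above (my illustration; the numbers are placeholders, and the prior mean stands in for the mixture of previous stimuli seen during the delay): combining a noisy measurement with the prior pulls estimates toward the mean of the stimulus distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

prior_mean, prior_sd = 600.0, 150.0   # ms; placeholder stimulus distribution
sensory_sd = 100.0                    # measurement noise

def posterior_mean_estimate(duration):
    m = duration + rng.normal(0.0, sensory_sd)         # noisy measurement
    w = prior_sd**2 / (prior_sd**2 + sensory_sd**2)    # reliability weighting
    return w * m + (1.0 - w) * prior_mean              # shrinks toward prior

for d in [400, 600, 800]:
    estimates = [posterior_mean_estimate(d) for _ in range(2000)]
    print(f"true {d} ms -> mean estimate {np.mean(estimates):.0f} ms")
```

Short intervals are overestimated and long ones underestimated, which is exactly the contraction bias pattern the trained spiking networks reproduce.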
The Standard Model of the Retina
The science of the retina has reached an interesting stage of completion. There exists now a consensus standard model of this neural system - at least in the minds of many researchers - that serves as a baseline against which to evaluate new claims. The standard model links phenomena from molecular biophysics, cell biology, neuroanatomy, synaptic physiology, circuit function, and visual psychophysics. It is further supported by a normative theory explaining what the purpose is of processing visual information this way. Most new reports of retinal phenomena fit squarely within the standard model, and major revisions seem increasingly unlikely. Given that our understanding of other brain circuits with comparable complexity is much more rudimentary, it is worth considering an example of what success looks like. In this talk I will summarize what I think are the ingredients that led to this mature understanding of the retina. Equally important, a number of practices and concepts that are currently en vogue in neuroscience were not needed or indeed counterproductive. I look forward to debating how these lessons might extend to other areas of brain research.
Retinal responses to natural inputs
The research in my lab focuses on sensory signal processing, particularly in cases where sensory systems perform at or near the limits imposed by physics. Photon counting in the visual system is a beautiful example. At its peak sensitivity, the performance of the visual system is limited largely by the division of light into discrete photons. This observation has several implications for phototransduction and signal processing in the retina: rod photoreceptors must transduce single photon absorptions with high fidelity; single-photon signals in photoreceptors, which are only 0.03-0.1 mV, must be reliably transmitted to second-order cells in the retina; and absorption of a single photon by a single rod must produce a noticeable change in the pattern of action potentials sent from the eye to the brain. My approach is to combine quantitative physiological experiments and theory to understand photon counting in terms of basic biophysical mechanisms. Fortunately, there is more to visual perception than counting photons. The visual system is very adept at operating over a wide range of light intensities (about 12 orders of magnitude). Over most of this range, vision is mediated by cone photoreceptors, so adaptation is paramount to cone vision. Again, one would like to understand quantitatively how the biophysical mechanisms involved in phototransduction, synaptic transmission, and neural coding contribute to adaptation.
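The physical limit referred to here has a simple quantitative form (a textbook sketch, not this lab's analysis): if seeing requires at least theta absorbed photons and absorptions are Poisson, the best attainable frequency-of-seeing curve follows directly; theta of about 6 is the classic Hecht-Shlaer-Pirenne estimate.

```python
import math

def prob_seeing(mean_absorbed, theta=6):
    """P(at least theta photon absorptions | Poisson with the given mean)."""
    below = sum(math.exp(-mean_absorbed) * mean_absorbed**k / math.factorial(k)
                for k in range(theta))
    return 1.0 - below

for mean in [1, 3, 6, 10, 20]:
    print(f"mean absorbed photons {mean:>2}: P(see) = {prob_seeing(mean):.3f}")
```

The steepness of this curve as a function of mean photon count is what constrains how many photons the detection criterion can require, independent of any biological detail.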
Homeostatic Plasticity in Health and Disease
Dr. Davis will present a summary regarding the identification and characterization of mechanisms of homeostatic plasticity as they relate to the control of synaptic transmission. He will then provide evidence of translation to the mammalian neuromuscular junction and central synapses, and provide tangible links to the etiology of neurological disease.
Attention to visual motion: shaping sensation into perception
Evolution has endowed primates, including humans, with a powerful visual system, seemingly providing us with a detailed perception of our surroundings. But in reality the underlying process is one of active filtering, enhancement and reshaping. For visual motion perception, the dorsal pathway in primate visual cortex, and in particular area MT/V5, is considered to be of critical importance. Combining physiological and psychophysical approaches, we have used the processing and perception of visual motion and area MT/V5 as a model for the interaction of sensory (bottom-up) signals with cognitive (top-down) modulatory influences that characterizes visual perception. Our findings document how this interaction enables visual cortex to actively generate a neural representation of the environment that combines the high-performance sensory periphery with selective modulatory influences to produce an “integrated saliency map” of the environment.
Neural Codes for Natural Behaviors in Flying Bats
This talk will focus on the importance of using natural behaviors in neuroscience research – the “Natural Neuroscience” approach. I will illustrate this point by describing studies of neural codes for spatial behaviors and social behaviors in flying bats – using wireless neurophysiology methods that we developed – and will highlight new neuronal representations that we discovered in animals navigating through 3D spaces, or in very large-scale environments, or engaged in social interactions. In particular, I will discuss: (1) A multi-scale neural code for very large environments, which we discovered in bats flying in a 200-meter-long tunnel. This new type of neural code is fundamentally different from spatial codes reported in small environments – and we show theoretically that it is superior for representing very large spaces. (2) Rapid modulation of position × distance coding in the hippocampus during collision-avoidance behavior between two flying bats. This result provides a dramatic illustration of the extreme dynamism of the neural code. (3) Local-but-not-global order in 3D grid cells – a surprising experimental finding that can be explained by a simple physics-inspired model, which successfully describes both 3D and 2D grids. These results strongly argue against many of the classical, geometrically based models of grid cells. (4) I will also briefly describe new results on the social representation of other individuals in the hippocampus, in a highly social multi-animal setting. The lecture will propose that neuroscience experiments – in bats, rodents, monkeys or humans – should be conducted under ever more naturalistic conditions.
Individual differences in visual (mis)perception: a multivariate statistical approach
Common factors are omnipresent in everyday life; for example, it is widely held that there is a common factor g for intelligence. In vision, however, there seems to be a multitude of specific factors rather than a strong and unique common factor. In my thesis, I first examined the multidimensionality of the structure underlying visual illusions. To this end, the susceptibility to various visual illusions was measured. In addition, subjects were tested with variants of the same illusion, which differed in spatial features, luminance, orientation, or contextual conditions. Only weak correlations were observed between the susceptibilities to different visual illusions. An individual showing a strong susceptibility to one visual illusion does not necessarily show a strong susceptibility to other visual illusions, suggesting that the structure underlying visual illusions is multifactorial. In contrast, there were strong correlations between the susceptibilities to variants of the same illusion. Hence, factors seem to be illusion-specific but not feature-specific. Second, I investigated whether a strong visual factor emerges in healthy elderly people and patients with schizophrenia, as may be expected from the general decline in perceptual abilities usually reported in these two populations compared to healthy young adults. Similarly, a strong visual factor may emerge in action video gamers, who often show enhanced perceptual performance compared to non-gamers. Hence, healthy elderly people, patients with schizophrenia, and action video gamers were tested with a battery of visual tasks, such as contrast detection and orientation discrimination tasks. As in the control groups, between-task correlations were generally weak, which argues against the emergence of a strong common factor for vision in these populations. While similar tasks are usually assumed to rely on similar neural mechanisms, performance in different visual tasks was only weakly correlated, i.e., performance does not generalize across visual tasks. These results highlight the relevance of an individual-differences approach for unraveling the multidimensionality of the visual structure.
How much depth do you see? It depends…
NMC4 Short Talk: Sensory intermixing of mental imagery and perception
Several lines of research have demonstrated that internally generated sensory experience - such as during memory, dreaming and mental imagery - activates similar neural representations as externally triggered perception. This overlap raises a fundamental challenge: how is the brain able to keep apart signals reflecting imagination and reality? In a series of online psychophysics experiments combined with computational modelling, we investigated to what extent imagination and perception are confused when the same content is simultaneously imagined and perceived. We found that simultaneous congruent mental imagery consistently led to an increase in perceptual presence responses, and that congruent perceptual presence responses were in turn associated with a more vivid imagery experience. Our findings can be best explained by a simple signal detection model in which imagined and perceived signals are added together. Perceptual reality monitoring can then easily be implemented by evaluating whether this intermixed signal is strong or vivid enough to pass a ‘reality threshold’. Our model suggests that, in contrast to self-generated sensory changes during movement, our brain does not discount self-generated sensory signals during mental imagery. This has profound implications for our understanding of reality monitoring and perception in general.
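A minimal simulation of the additive model described above (all parameter values are illustrative): imagined and perceived signals are summed with sensory noise, and "presence" is reported whenever the combined signal exceeds a reality threshold, so congruent imagery inflates presence responses whether or not a stimulus is shown.

```python
import numpy as np

rng = np.random.default_rng(1)

n_trials = 100_000
external = 0.8    # strength of the weak external stimulus, when present
imagery = 0.5     # signal added by congruent mental imagery
threshold = 1.0   # the 'reality threshold'

def presence_rate(stim, imag):
    # Additive intermixing of imagined and perceived signals plus unit noise.
    signal = stim + imag + rng.normal(0.0, 1.0, n_trials)
    return np.mean(signal > threshold)

print("no imagery, stimulus absent :", presence_rate(0.0, 0.0))
print("no imagery, stimulus present:", presence_rate(external, 0.0))
print("imagery,    stimulus absent :", presence_rate(0.0, imagery))
print("imagery,    stimulus present:", presence_rate(external, imagery))
```

The key design choice, following the abstract, is that imagery is not subtracted out before the threshold comparison, unlike the discounting applied to self-generated sensory changes during movement.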
Spatial summation for motion detection
Retinoblastoma: Canadian global leadership
Neural mechanisms of altered states of consciousness under psychedelics
Interest in psychedelic compounds is growing due to their remarkable potential for understanding altered neural states and their breakthrough status for treating various psychiatric disorders. However, there are major knowledge gaps regarding how psychedelics affect the brain. The Computational Neuroscience Laboratory at the Turner Institute for Brain and Mental Health, Monash University, uses multimodal neuroimaging to test hypotheses about the brain’s functional reorganisation under psychedelics, informed by accounts of hierarchical predictive processing, using dynamic causal modelling (DCM). DCM is a generative modelling technique that allows one to infer the directed connectivity among brain regions from functional brain imaging measurements. In this webinar, Associate Professor Adeel Razi and PhD candidate Devon Stoliker will showcase a series of previous and new findings on how changes to synaptic mechanisms, under the control of serotonin receptors, across the brain hierarchy influence sensory and associative brain connectivity. Understanding these neural mechanisms of the subjective and therapeutic effects of psychedelics is critical for the rational development of novel treatments and for the design and success of future clinical trials. Associate Professor Adeel Razi is an NHMRC Investigator Fellow and CIFAR Azrieli Global Scholar at the Turner Institute of Brain and Mental Health, Monash University. He performs cross-disciplinary research combining engineering, physics, and machine learning. Devon Stoliker is a PhD candidate at the Turner Institute for Brain and Mental Health, Monash University. His interest in consciousness and psychiatry has led him to investigate the neural mechanisms of classic psychedelic effects in the brain.
The brain control of appetite: Can an old dog teach us new tricks?
It is clear that obesity results from eating more than you burn; that is physics. What is harder to answer is why some people eat more than others. Differences in our genetic make-up mean some of us are slightly more hungry all the time and so eat more than others. We now know that the genetics of body weight, a spectrum with obesity at one end, is in actuality the genetics of appetite control. In contrast to the prevailing view, body weight is not a choice. People who are obese are not bad or lazy; rather, they are fighting their biology.
Ultrasound imaging in neuroscience
Through the bottleneck: my adventures with the 'Tishby program'
One of Tali's cherished goals was to transform biology into physics. In his view, biologists were far too enamored of the details of the specific models they studied, losing sight of the big principles that may govern the behavior of these models. One such big principle that he suggested was the 'information bottleneck' (IB) principle. The IB principle is an information-theoretic approach for extracting the relevant information that one random variable carries about another. Tali applied the IB principle to numerous problems in biology, gaining important insights in the process. Here I will describe two applications of the IB principle to neurobiological data. The first is a formalization of the notion of surprise that allowed us to rigorously estimate the memory duration and content of neuronal responses in auditory cortex; the second is an application to behavior, allowing us to estimate 'optimal policies under information constraints' that shed interesting light on rat behavior.
Physical Computation in Insect Swarms
Our world is full of living creatures that must share information to survive and reproduce. As humans, we easily forget how hard it is to communicate within natural environments. So how do organisms solve this challenge, using only natural resources? Ideas from computer science, physics and mathematics, such as energetic cost, compression, and detectability, define universal criteria that almost all communication systems must meet. We use insect swarms as a model system for identifying how organisms harness the dynamics of communication signals, perform spatiotemporal integration of these signals, and propagate those signals to neighboring organisms. In this talk I will focus on two types of communication in insect swarms: visual communication, in which fireflies communicate over long distances using light signals, and chemical communication, in which bees serve as signal amplifiers to propagate pheromone-based information about the queen’s location.
Swarms for people
As tiny robots become individually more sophisticated, and larger robots easier to mass produce, a breakdown of conventional disciplinary silos is enabling swarm engineering to be adopted across scales and applications, from nanomedicine to treat cancer, to cm-sized robots for large-scale environmental monitoring or intralogistics. This convergence of capabilities is facilitating the transfer of lessons learned from one scale to the other. Cm-sized robots that work in the 1000s may operate in a way similar to reaction-diffusion systems at the nanoscale, while sophisticated microrobots may have individual capabilities that allow them to achieve swarm behaviour reminiscent of larger robots with memory, computation, and communication. Although the physics of these systems are fundamentally different, much of their emergent swarm behaviours can be abstracted to their ability to move and react to their local environment. This presents an opportunity to build a unified framework for the engineering of swarms across scales that makes use of machine learning to automatically discover suitable agent designs and behaviours, digital twins to seamlessly move between the digital and physical world, and user studies to explore how to make swarms safe and trustworthy. Such a framework would push the envelope of swarm capabilities, towards making swarms for people.
Neural dynamics of probabilistic information processing in humans and recurrent neural networks
In nature, sensory inputs are often highly structured, and statistical regularities of these signals can be extracted to form expectations about future sensorimotor associations, thereby optimizing behavior. One of the fundamental questions in neuroscience concerns the neural computations that underlie this probabilistic sensorimotor processing. Through a recurrent neural network (RNN) model together with human psychophysics and electroencephalography (EEG), the present study investigates circuit mechanisms for processing probabilistic structure in sensory signals to guide behavior. We first constructed and trained a biophysically constrained RNN model to perform a series of probabilistic decision-making tasks similar to paradigms designed for humans. Specifically, the training environment was probabilistic such that one stimulus was more probable than the others. We show that both humans and the RNN model successfully extract information about stimulus probability and integrate this knowledge into their decisions and task strategy in a new environment. Specifically, performance of both humans and the RNN model varied with the degree to which the stimulus probability of the new environment matched the formed expectation. In both cases, this expectation effect was more prominent when the strength of sensory evidence was low, suggesting that, like humans, our RNNs placed more emphasis on prior expectation (top-down signals) when the available sensory information (bottom-up signals) was limited, thereby optimizing task performance. Finally, by dissecting the trained RNN model, we demonstrate how competitive inhibition and recurrent excitation form the basis for neural circuitry optimized to perform probabilistic information processing.
Toward Naturalistic Paradigms of Agency
Voluntary control of behavior requires the ability to dynamically integrate internal states and external evidence to achieve one’s goals. However, neuroscientific studies of intentional action and critical philosophical commentary of that research have taken a rather narrow turn in recent years, focussing on the neural precursors of spontaneous simple actions as potential realizers of intentions. In this session, we show how the debate can benefit from incorporating other types of experimental approaches, focussing on agency in dynamic contexts.
Analyzing Retinal Disease Using Electron Microscopic Connectomics
John E. Dowling received his AB and PhD from Harvard University. He taught in the Biology Department at Harvard from 1961 to 1964, first as an instructor, then as assistant professor. In 1964 he moved to Johns Hopkins University, where he held an appointment as associate professor of Ophthalmology and Biophysics. He returned to Harvard as professor of Biology in 1971, was the Maria Moors Cabot Professor of Natural Sciences from 1971-2001, Harvard College Professor from 1999-2004 and is presently the Gordon and Llura Gund Professor of Neurosciences. Dowling was chairman of the Biology Department at Harvard from 1975 to 1978 and served as associate dean of the Faculty of Arts and Sciences from 1980 to 1984. He was Master of Leverett House at Harvard from 1981-1998 and currently serves as president of the Corporation of The Marine Biological Laboratory in Woods Hole. He is a Fellow of the American Academy of Arts and Sciences, a member of the National Academy of Sciences and a member of the American Philosophical Society. Dowling's awards include the Friedenwald Medal from the Association for Research in Vision and Ophthalmology in 1970, the Annual Award of the New England Ophthalmological Society in 1979, the Retinal Research Foundation Award for Retinal Research in 1981, an Alcon Vision Research Recognition Award in 1986, a National Eye Institute MERIT award in 1987, the Von Sallman Prize in 1992, the Helen Keller Prize for Vision Research in 2000 and the Llura Liggett Gund Award for Lifetime Achievement and Recognition of Contribution to the Foundation Fighting Blindness in 2001. He was granted an honorary MD degree by the University of Lund (Sweden) in 1982 and an honorary Doctor of Laws degree from Dalhousie University (Canada) in 2012. Dowling's research interests have focused on the vertebrate retina as a model piece of the brain. He and his collaborators have long been interested in the functional organization of the retina, studying its synaptic organization, the electrical responses of retinal neurons, and the mechanisms underlying neurotransmission and neuromodulation in the retina. Dowling became interested in zebrafish as a system in which to explore the development and genetics of the vertebrate retina about 20 years ago. Part of his research team has focused on retinal development in zebrafish and the role of retinoic acid in early eye and photoreceptor development. A second group has developed behavioral tests to isolate mutations, both recessive and dominant, specific to the visual system.
Interpreting the Mechanisms and Meaning of Sensorimotor Beta Rhythms with the Human Neocortical Neurosolver (HNN) Neural Modeling Software
Electro- and magneto-encephalography (EEG/MEG) are the leading methods to non-invasively record human neural dynamics with millisecond temporal resolution. However, it can be extremely difficult to infer the underlying cellular- and circuit-level origins of these macro-scale signals without simultaneous invasive recordings. This limits the translation of E/MEG into novel principles of information processing, or into new treatment modalities for neural pathologies. To address this need, we developed the Human Neocortical Neurosolver (HNN: https://hnn.brown.edu ), a new user-friendly neural modeling tool designed to help researchers and clinicians interpret human imaging data. A unique feature of HNN's model is that it accounts for the biophysics generating the primary electric currents underlying such data, so simulation results are directly comparable to source-localized data. HNN is being constructed with workflows to study some of the most commonly measured E/MEG signals, including event-related potentials and low-frequency brain rhythms. In this talk, I will give an overview of this new tool and describe an application to study the origin and meaning of 15-29 Hz beta-frequency oscillations, known to be important for sensory and motor function. Our data showed that in primary somatosensory cortex these oscillations emerge as transient high-power 'events'. Functionally relevant differences in averaged power reflected a difference in the number of high-power beta events per trial ('rate'), as opposed to changes in event amplitude or duration. These findings were consistent across detection and attention tasks in human MEG, and in local field potentials from mice performing a detection task. HNN modeling led to a new theory on the circuit origin of such beta events and suggested beta causally impacts perception through layer-specific recruitment of cortical inhibition, with support from invasive recordings in animal models and high-resolution MEG in humans. In total, HNN provides an unprecedented biophysically principled tool to link mechanism to meaning in human E/MEG signals.
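A rough sketch of the transient-event analysis described above (the threshold, band limits, and synthetic data are my assumptions, not the study's pipeline): band-limited power is computed per trial, and 'beta events' are counted as local maxima exceeding a multiple of the median power, so that event rate, amplitude, and duration can be dissociated.

```python
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(0)
fs = 600.0                                   # Hz, MEG-like sampling rate
t = np.arange(0, 1.0, 1 / fs)

# One synthetic trial: noise plus a single transient ~21 Hz burst.
signal = rng.standard_normal(t.size)
burst = np.sin(2 * np.pi * 21 * t) * np.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2))
signal += 4 * burst

f, tt, Sxx = spectrogram(signal, fs=fs, nperseg=128, noverlap=120)
beta_power = Sxx[(f >= 15) & (f <= 29)].mean(axis=0)   # 15-29 Hz band power

threshold = 6 * np.median(beta_power)        # event criterion: 6x median power
is_peak = ((beta_power[1:-1] > threshold)
           & (beta_power[1:-1] >= beta_power[:-2])
           & (beta_power[1:-1] >= beta_power[2:]))
print("beta events in this trial:", int(is_peak.sum()))
```

Averaging `beta_power` across trials would blur such transients into an apparent sustained rhythm, which is exactly why counting events per trial separates 'rate' from amplitude or duration changes.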
Understanding Perceptual Priors with Massive Online Experiments
One of the most important questions in psychology and neuroscience is understanding how the outside world maps to internal representations. Classical psychophysics approaches to this problem have a number of limitations: they mostly study low-dimensional perceptual spaces, and are constrained in the number and diversity of participants and experiments. As ecologically valid perception is rich, high-dimensional, contextual, and culturally dependent, these impediments severely bias our understanding of perceptual representations. Recent technological advances—the emergence of so-called “Virtual Labs”—can significantly contribute toward overcoming these barriers. Here I present a number of specific strategies that my group has developed in order to probe representations across a number of dimensions. 1) Massive online experiments can significantly increase the number of participants and experiments in a single study, while also significantly diversifying the participant pool. We have developed a platform, PsyNet, that enables “experiments as code,” whereby the orchestration of computer servers, recruiting, compensation of participants, and data management is fully automated and every experiment can be fully replicated with one command line. I will demonstrate how PsyNet allows us to recruit thousands of participants for each study with a large number of control experimental conditions, significantly increasing our understanding of auditory perception. 2) Virtual-lab methods also enable us to run experiments that are nearly impossible in a traditional lab setting. I will demonstrate our development of adaptive sampling, a set of behavioural methods that combine machine-learning sampling techniques (Markov chain Monte Carlo) with human interactions and allow us to create high-dimensional maps of perceptual representations with unprecedented resolution. 3) Finally, I will demonstrate how the aforementioned methods can be applied to the study of perceptual priors in both audition and vision, with a focus on our work in cross-cultural research, which studies how perceptual priors are influenced by experience and culture in diverse samples of participants from around the world.
Novel Object Detection and Multiplexed Motion Representation in Retinal Bipolar Cells
Detection of motion is essential for survival, but how the visual system processes moving stimuli is not fully understood. Here, based on a detailed analysis of glutamate release from bipolar cells, we outline the rules that govern the representation of object motion in the early processing stages. Our main findings are as follows: (1) Motion processing begins already at the first retinal synapse. (2) The shape and the amplitude of motion responses cannot be reliably predicted from bipolar cell responses to stationary objects. (3) Novel objects are represented with enhanced responses, particularly in bipolar cells with transient dynamics. (4) Response amplitude in bipolar cells matches visual salience reported in humans: suddenly appearing objects > novel motion > existing motion. These findings can be explained by antagonistic interactions in the center-surround receptive field and demonstrate that, despite their simple operational concept, classical center-surround receptive fields enable sophisticated visual computations.
From real problems to beast machines: the somatic basis of selfhood
At the foundation of human conscious experience lie basic embodied experiences of selfhood – experiences of simply ‘being alive’. In this talk, I will make the case that this central feature of human existence is underpinned by predictive regulation of the interior of the body, using the framework of predictive processing, or active inference. I start by showing how conscious experiences of the world around us can be understood in terms of perceptual predictions, drawing on examples from psychophysics and virtual reality. Then, turning the lens inwards, we will see how the experience of being an ‘embodied self’ rests on control-oriented predictive (allostatic) regulation of the body’s physiological condition. This approach implies a deep connection between mind and life, and provides a new way to understand the subjective nature of consciousness as emerging from systems that care intrinsically about their own existence. Contrary to the old doctrine of Descartes, we are conscious because we are beast machines.
Can non-random collapses of the wavefunction enable libertarian free will?
Agent-causal libertarian free will asserts that the conscious agent is the ultimate cause of her own voluntary behavior. A major reason to reject libertarian free will is that it seems incompatible with our current knowledge of physics. In this talk I will argue that quantum processes, specifically non-random collapses of the wavefunction in the human cortex, may enable libertarian free will. I will discuss how this account can be empirically tested.
Agency through Physical Lenses
I will offer a broad-brush account of what explains the emergence of agents from a physics perspective, what sorts of conditions have to be in place for them to arise, and the essential features of agents when they are viewed through the lenses of physics. One implication will be a tight link to informational asymmetries associated with the thermodynamic gradient. Another will be a reversal of the direction of explanation from the one that is usually assumed in physical discussions. In an evolved system, while it is true in some sense that the macroscopic behavior is the way it is because of the low-level dynamics, there is another sense in which the low-level dynamics is the way it is because of the high-level behavior it supports. (More precisely, the constraints on the configuration of its components that define the system as the kind of system it is are the way they are to exploit the low-level dynamics to produce the emergent behavior.) A third will be some insight into what might make human agency special.
The quest for the cortical algorithm
The cortical algorithm hypothesis states that there is one common computational framework to solve diverse cognitive problems such as vision, voice recognition and motion control. In my talk, I propose a strategy to guide the search for this algorithm and present a few ideas about what some of its components might look like. I'll explain why a highly interdisciplinary approach, spanning neuroscience, computer science, mathematics and physics, is needed to make further progress on this important question.
Dynamical Neuromorphic Systems
In this talk, I aim to show that the dynamical properties of emerging nanodevices can accelerate the development of smart and environmentally friendly chips that inherently learn through their physics. The goal of neuromorphic computing is to draw inspiration from the architecture of the brain to build low-power circuits for artificial intelligence. I will first give a brief overview of the state of the art of neuromorphic computing, highlighting the opportunities offered by emerging nanodevices in this field and the associated challenges. I will then show that the intrinsic dynamical properties of these nanodevices can be exploited at the device and algorithmic levels to assemble systems that infer and learn through their physics. I will illustrate these possibilities with examples from our work on spintronic neural networks that communicate and compute through their microwave oscillations, and on an algorithm called Equilibrium Propagation that minimizes both the error and the energy of a dynamical system.
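Equilibrium Propagation itself can be stated in a few lines. The sketch below (sizes, rates, and the nudging strength beta are illustrative, and the network is a toy symmetric-weight energy model rather than a spintronic device) runs the free phase, the weakly clamped phase, and the contrastive weight update.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                     # a tiny all-to-all network
W = 0.1 * rng.standard_normal((n, n))
W = (W + W.T) / 2.0                       # symmetric weights, as EqProp requires
np.fill_diagonal(W, 0.0)

rho = lambda u: np.clip(u, 0.0, 1.0)      # hard-sigmoid firing rate
drho = lambda u: ((u > 0.0) & (u < 1.0)).astype(float)

def relax(u, x_idx, x, y_idx, y, beta, steps=500, dt=0.05):
    """Euler descent on F = E + beta*C, with the input units clamped to x."""
    for _ in range(steps):
        # dE/du for the energy E = sum(u^2)/2 - rho(u).W.rho(u)/2
        grad_E = u - drho(u) * (W @ rho(u))
        grad_C = np.zeros(n)
        grad_C[y_idx] = drho(u[y_idx]) * (rho(u[y_idx]) - y)
        u = u - dt * (grad_E + beta * grad_C)
        u[x_idx] = x                      # inputs stay clamped
    return u

x_idx, y_idx = np.arange(3), np.array([n - 1])   # input and output units
x, y = rng.random(3), np.array([1.0])

u_free = relax(rng.random(n), x_idx, x, y_idx, y, beta=0.0)      # free phase
beta = 0.5
u_nudged = relax(u_free.copy(), x_idx, x, y_idx, y, beta=beta)   # nudged phase

# Contrastive update: difference of co-activations between the two phases.
dW = (np.outer(rho(u_nudged), rho(u_nudged))
      - np.outer(rho(u_free), rho(u_free))) / beta
W += 0.1 * (dW + dW.T) / 2.0
np.fill_diagonal(W, 0.0)
```

The appeal for physical hardware is that both phases are simply the system settling under its own dynamics; learning needs only the local contrast of co-activations between the two settled states.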
The pathophysiology of prodromal Parkinson’s disease
Studying the pathophysiology of late-stage Parkinson’s disease (PD) – after patients have experienced severe neuronal loss – has helped develop various symptomatic treatments for PD (e.g., deep brain stimulation). However, it has been of limited use in developing neuroprotective disease-modifying therapies (DMTs), because DMTs require interventions at much earlier stages of PD, when vulnerable neurons are still intact. Because PD patients exhibit various non-motor prodromal symptoms (i.e., symptoms that predate diagnosis), understanding the pathophysiology underlying these symptoms could lead to earlier diagnosis and intervention. In my talk, I will present a recently elucidated example of how PD pathologies alter the channel biophysics of intact vagal motoneurons (known to be selectively vulnerable in PD) to drive dysautonomia reminiscent of prodromal PD. I will discuss how elucidating the pathophysiology of prodromal symptoms can lead to earlier diagnosis through the development of physiological biomarkers for PD.
Photovoltaic Restoration of Sight in Age-related Macular Degeneration
Models of Core Knowledge (Physics, Really)
Even young children seem to have an early understanding of the world around them, and the people in it. Before children can reliably say "ball", "wall", or "Saul", they expect balls not to go through walls, and Saul to go right for a ball (if there's no wall). What is the formal conceptual structure underlying this commonsense reasoning about objects and agents? I will raise several possibilities for models underlying core intuitive physics, as a way of talking about models of core knowledge and intuitive theories more generally. In particular, I will present some recent ML work trying to capture early expectations about object solidity, cohesion, and permanence that relies on a rough de-rendering approach.
Computational psychophysics at the intersection of theory, data and models
Behavioural measurements are often overlooked by computational neuroscientists, who prefer to focus on electrophysiological recordings or neuroimaging data. This attitude is largely due to a perceived lack of depth and richness in behavioural datasets. I will show how contemporary psychophysics can deliver extremely rich and highly constraining datasets that naturally interface with computational modelling. More specifically, I will demonstrate how psychophysics can be used to guide, constrain and refine computational models, and how models can be exploited to design, motivate and interpret psychophysical experiments. Examples will span a wide range of topics (from feature detection to natural scene understanding) and methodologies (from cascade models to deep learning architectures).
Advances and setbacks in prion biology
Transmissible spongiform encephalopathies (TSEs) are neurodegenerative diseases of humans and many animal species caused by prions. The main constituent of prions is PrPSc, an aggregated moiety of the host-derived membrane glycolipoprotein PrPC. Prions were found to encipher many phenotypic, genetically stable TSE variants. This is very surprising, since PrPC is encoded by the host genome and all prion strains share the same amino acid sequence. Here I will review what is known about the infectivity, the neurotoxicity, and the neuroinvasiveness of prions. I will also explain why I regard the prion strain question as a fascinating challenge, with implications that go well beyond prion science. Finally, I will report some recent results obtained in my laboratory, which is attempting to address the strain question and some other basic issues of prion biology with a “systems” approach that utilizes organic chemistry, photophysics, proteomics, and mouse transgenesis.
Dr Lindsay reads from "Models of the Mind : How Physics, Engineering and Mathematics Shaped Our Understanding of the Brain" 📖
Though the term has many definitions, computational neuroscience is mainly about applying mathematics to the study of the brain. The brain—a jumble of all different kinds of neurons interconnected in countless ways that somehow produce consciousness—has been described as “the most complex object in the known universe”. Physicists for centuries have turned to mathematics to properly explain some of the most seemingly simple processes in the universe—how objects fall, how water flows, how the planets move. Equations have proved crucial in these endeavors because they capture relationships and make precise predictions possible. How could we expect to understand the most complex object in the universe without turning to mathematics?

The answer is we can’t, and that is why I wrote this book. While I’ve been studying and working in the field for over a decade, most people I encounter have no idea what “computational neuroscience” is or that it even exists. Yet a desire to understand how the brain works is a common and very human interest. I wrote this book to let people in on the ways in which the brain will ultimately be understood: through mathematical and computational theories.

At the same time, I know that both mathematics and brain science are on their own intimidating topics to the average reader and may seem downright prohibitive when put together. That is why I’ve avoided (many) equations in the book and focused instead on the driving reasons why scientists have turned to mathematical modeling, what these models have taught us about the brain, and how some surprising interactions between biologists, physicists, mathematicians, and engineers over centuries have laid the groundwork for the future of neuroscience.

Each chapter of Models of the Mind covers a separate topic in neuroscience, starting from individual neurons themselves and building up to the different populations of neurons and brain regions that support memory, vision, movement and more. These chapters document the history of how mathematics has woven its way into biology and the exciting advances this collaboration has in store.