Law
Neural mechanisms of optimal performance
When we attend to a demanding task, our performance is poor at low arousal (when drowsy) or at high arousal (when anxious), but optimal at intermediate arousal. This celebrated Yerkes-Dodson inverted-U law relating performance and arousal is colloquially referred to as being "in the zone." In this talk, I will elucidate the behavioral and neural mechanisms linking arousal and performance under the Yerkes-Dodson law in a mouse model. During decision-making tasks, mice express an array of discrete strategies, whereby the optimal strategy occurs at intermediate arousal, measured by pupil size, consistent with the inverted-U law. Population recordings from the auditory cortex (A1) further revealed that sound encoding is optimal at intermediate arousal. To explain the computational principle underlying this inverted-U law, we modeled the A1 circuit as a spiking network with excitatory/inhibitory clusters, based on the observed functional clusters in A1. Arousal induced a transition from a multi-attractor phase (low arousal) to a single-attractor phase (high arousal), and performance was optimized at the transition point. The model also predicts stimulus- and arousal-induced modulations of neural variability, which we confirmed in the data. Our theory suggests that a single unifying dynamical principle, phase transitions in metastable dynamics, underlies both the inverted-U law of optimal performance and state-dependent modulations of neural variability.
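As a loose illustration of the attractor-phase picture described above, here is a minimal sketch (a one-dimensional rate-model caricature with assumed parameters, not the speaker's clustered spiking network) in which increasing a background "arousal" drive collapses two coexisting stable states into one:

```python
# Toy sketch: count stable fixed points of r' = -r + sigmoid(w*r - theta + drive)
# as a background "arousal" drive increases. Parameters are illustrative.
import numpy as np

def stable_states(drive, w=8.0, theta=4.0):
    """Stable fixed points of the 1-D rate equation, found on a fine grid."""
    r = np.linspace(0.0, 1.0, 20001)
    f = -r + 1.0 / (1.0 + np.exp(-(w * r - theta + drive)))
    # a downward zero-crossing of the flow f marks a stable fixed point
    return r[:-1][np.diff(np.sign(f)) < 0]

for drive in (0.0, 1.0, 2.0, 3.0):
    states = stable_states(drive)
    print(f"drive={drive:.1f}: {len(states)} stable state(s) at r ~ {np.round(states, 2)}")
```

In this caricature, the bistable (multi-attractor) regime at low drive gives way to a single attractor at high drive, with the changeover at a critical drive between 1 and 2.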
Towards open meta-research in neuroimaging
When meta-research (research on research) makes an observation or points out a problem (such as a flaw in methodology), the project should be repeated later to determine whether the problem remains. For this we need meta-research that is reproducible and updatable, or living meta-research. In this talk, we introduce the concept of living meta-research, examine prequels to this idea, and point towards standards and technologies that could assist researchers in doing living meta-research. We introduce technologies such as natural language processing that can help automate meta-research, which in turn makes the research easier to reproduce and update. Further, we showcase our open-source litmining ecosystem, which includes pubget (for downloading full-text journal articles), labelbuddy (for manually extracting information), and pubextract (for automatically extracting information). With these tools, you can simplify the tedious data collection and information extraction steps in meta-research, and then focus on analyzing the text. We will then describe some living meta-research projects to illustrate the use of these tools. For example, we’ll show how we used GPT along with our tools to extract information about study participants. Essentially, this talk will introduce you to the concept of meta-research, some tools for doing meta-research, and some examples. In particular, we want you to take away the fact that there are many interesting open questions in meta-research, and that you can easily learn the tools to answer them. Check out our tools at https://litmining.github.io/
Analogy and Law
Abstracts: https://sites.google.com/site/analogylist/analogical-minds-seminar/analogy-and-law-symposium
Use of Artificial Intelligence by Law Enforcement Authorities in the EU
Recently, artificial intelligence (AI) has become a global priority. Rapid and ongoing technological advancements in AI have prompted European legislative initiatives to regulate its use. In April 2021, the European Commission submitted a proposal for a Regulation that would harmonize artificial intelligence rules across the EU, including the law enforcement sector. Consequently, law enforcement officials await the outcome of the ongoing inter-institutional negotiations (trilogue) with great anticipation, as it will define how to capitalize on the opportunities presented by AI and how to prevent criminals from abusing this emergent technology.
Signatures of criticality in efficient coding networks
The critical brain hypothesis states that the brain can benefit from operating close to a second-order phase transition. While it has been shown that several computational aspects of sensory information processing (e.g., sensitivity to input) are optimal in this regime, it is still unclear whether these computational benefits of criticality can be leveraged by neural systems performing behaviorally relevant computations. To address this question, we investigate signatures of criticality in networks optimized to perform efficient encoding. We consider a network of leaky integrate-and-fire neurons with synaptic transmission delays and input noise. Previously, it was shown that the performance of such networks varies non-monotonically with the noise amplitude. Interestingly, we find that in the vicinity of the optimal noise level for efficient coding, the network dynamics exhibits signatures of criticality, namely, the distribution of avalanche sizes follows a power law. When the noise amplitude is too low or too high for efficient coding, the network appears super-critical or sub-critical, respectively. This result suggests that two influential and previously disparate theories of neural processing optimization, efficient coding and criticality, may be intimately related.
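To make the avalanche signature concrete, here is a hedged sketch using a generic branching process rather than the abstract's spiking network; the avalanche size distribution follows the canonical power law P(s) ~ s^(-3/2) only at the critical branching ratio:

```python
# Branching-process sketch of avalanche statistics (illustrative, not the
# LIF efficient-coding network): each active unit spawns Poisson(sigma)
# descendants, and an avalanche's size is its total number of events.
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma, cap=10_000):
    active, size = 1, 1
    while active > 0 and size < cap:
        active = rng.poisson(sigma * active)
        size += active
    return size

for sigma in (0.8, 1.0, 1.2):  # sub-critical, critical, super-critical
    sizes = np.array([avalanche_size(sigma) for _ in range(20_000)])
    bins = np.logspace(0, 3, 16)
    hist, edges = np.histogram(sizes, bins=bins, density=True)
    centers = np.sqrt(edges[1:] * edges[:-1])
    ok = hist > 0
    slope, _ = np.polyfit(np.log(centers[ok]), np.log(hist[ok]), 1)
    print(f"branching ratio {sigma}: log-log slope ~ {slope:.2f}")  # ~ -1.5 at 1.0
```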
Perceiving relational structure
Convex neural codes in recurrent networks and sensory systems
Neural activity in many sensory systems is organized on low-dimensional manifolds by means of convex receptive fields. Neural codes in these areas are constrained by this organization, as not every neural code is compatible with convex receptive fields. The same codes are also constrained by the structure of the underlying neural network. In my talk I will attempt to provide answers to the following natural questions: (i) How do recurrent circuits generate codes that are compatible with the convexity of receptive fields? (ii) How can we utilize the constraints imposed by convex receptive fields to understand the underlying stimulus space? To answer question (i), we describe the combinatorics of the steady states and fixed points of recurrent networks that satisfy Dale’s law. It turns out that the combinatorics of the fixed points are completely determined by two distinct conditions: (a) the connectivity graph of the network and (b) a spectral condition on the synaptic matrix. We give a characterization of exactly which features of connectivity determine the combinatorics of the fixed points. We also find that a generic recurrent network that satisfies Dale's law outputs convex combinatorial codes. To address question (ii), I will describe methods based on ideas from topology and geometry that take advantage of convex receptive field properties to infer the dimension of (non-linear) neural representations. I will illustrate the first method by inferring basic features of the neural representations in the mouse olfactory bulb.
Don't forget the gametes: Neurodevelopmental pathogenesis starts in the sperm and egg
Proper development of the nervous system depends not only on the inherited DNA sequence, but also on proper regulation of gene expression, as controlled in part by epigenetic mechanisms present in the parental gametes. In this presentation, an internationally recognized research advocate explains why researchers concerned about the origins of increasingly prevalent neurodevelopmental disorders such as autism and attention deficit hyperactivity disorder should look beyond genetics in probing the origins of dysregulated transcription of brain-related genes. The culprit for a subset of cases, she contends, may lie in the exposure history of the parents, and thus their germ cells. To illustrate how environmentally informed, nongenetic dysfunction may occur, she focuses on the example of parents' histories of exposure to common agents of modern inhalational anesthesia, a highly toxic exposure that in mammalian models has been seen to induce heritable neurodevelopmental abnormality in offspring born of an exposed germline.
Time as a continuous dimension in natural and artificial networks
Neural representations of time are central to our understanding of the world around us. I review cognitive, neurophysiological and theoretical work that converges on three simple ideas. First, the time of past events is remembered via populations of neurons with a continuum of functional time constants. Second, these time constants evenly tile the log time axis. This results in a neural Weber-Fechner scale for time that can support behavioral Weber-Fechner laws and characteristic behavioral effects in memory experiments. Third, these populations appear as dual pairs: one type of population contains cells that change firing rate monotonically over time, and a second type has circumscribed temporal receptive fields. These ideas can be used to build artificial neural networks that have novel properties. Of particular interest, a convolutional neural network built using these principles can generalize to arbitrary rescaling of its inputs. That is, after learning to perform a classification task on a time series presented at one speed, it successfully classifies stimuli presented slowed down or sped up. This result illustrates the point that this confluence of ideas originating in cognitive psychology and measured in the mammalian brain could have wide-reaching impacts on AI research.
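The log-tiling idea lends itself to a small numerical demonstration. Below is an illustrative sketch (toy parameters, not the speaker's network): with geometrically spaced time constants, slowing an input down by the spacing ratio shifts the population pattern one unit along the filter bank rather than changing its shape, which is the property that supports rescaling-invariant readouts.

```python
# A bank of leaky integrators with geometrically spaced time constants,
# i.e. time constants that evenly tile the log-time axis (toy values).
import numpy as np

taus = 10.0 * 1.5 ** np.arange(12)

def response(duration, dt=1.0):
    """State of each integrator `duration` steps after a unit impulse."""
    x = np.ones_like(taus)          # impulse delivered at t = 0
    for _ in range(duration):
        x += dt * (-x / taus)       # leaky decay with per-unit tau
    return x

fast, slow = response(400), response(600)   # same event, clock 1.5x slower
# slowing by the spacing ratio shifts the pattern by one unit along the bank
print("shifted correlation:", np.corrcoef(fast[:-1], slow[1:])[0, 1])
```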
A precise and adaptive neural mechanism for predictive temporal processing in the frontal cortex
The theory of predictive processing posits that the brain computes expectations to process information predictively. Empirical evidence in support of this theory, however, is scarce and largely limited to sensory areas. Here, we report a precise and adaptive mechanism in the frontal cortex of non-human primates consistent with predictive processing of temporal events. We found that the speed of neural dynamics is precisely adjusted according to the average time of an expected stimulus. This speed adjustment, in turn, enables neurons to encode stimuli in terms of deviations from expectation. This lawful relationship was evident across multiple experiments and held true during learning: when temporal statistics underwent covert changes, neural responses underwent predictable changes that reflected the new mean. Together, these results highlight a precise mathematical relationship between temporal statistics in the environment and neural activity in the frontal cortex that may serve as a mechanism for predictive temporal processing.
NMC4 Short Talk: A mechanism for inter-areal coherence through communication based on connectivity and oscillatory power
Inter-areal coherence between cortical field-potentials is a widespread phenomenon and depends on numerous behavioral and cognitive factors. It has been hypothesized that inter-areal coherence reflects phase-synchronization between local oscillations and flexibly gates communication. We reveal an alternative mechanism in which coherence results from, rather than causes, communication, emerging naturally from the fact that spiking activity in a sending area causes post-synaptic inputs both in the same area and in other areas. Consequently, coherence depends in a lawful manner on oscillatory power and phase-locking in the sending area and on inter-areal connectivity. We show that changes in oscillatory power explain prominent changes in fronto-parietal beta-coherence with movement and memory, and in LGN-V1 gamma-coherence with arousal and visual stimulation. Optogenetic silencing of a receiving area and E/I network simulations demonstrate that afferent synaptic inputs, rather than spiking entrainment, are the main determinant of inter-areal coherence. These findings suggest that the unique spectral profiles of different brain areas automatically give rise to large-scale inter-areal coherence patterns that follow anatomical connectivity and continuously reconfigure as a function of behavior and cognition.
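The core claim, that coherence can be inherited from afferent input rather than produced by phase-synchronized local oscillators, can be illustrated with a deliberately stripped-down simulation (white local noise and a purely feed-forward projection are assumptions of this sketch, not the study's network):

```python
# Sender-receiver coherence from connectivity alone: the receiver is just
# afferent input plus local noise, yet coherence at the sender's 20 Hz
# rhythm grows lawfully with the sender's oscillatory power.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(4)
fs, n = 1000, 60_000
t = np.arange(n) / fs
for amp in (0.5, 2.0):   # weak vs strong sender oscillation
    sender = amp * np.sin(2 * np.pi * 20 * t) + rng.normal(size=n)
    receiver = 0.3 * sender + rng.normal(size=n)   # synaptic input + local noise
    f, coh = coherence(sender, receiver, fs=fs, nperseg=2048)
    print(f"sender amplitude {amp}: 20 Hz coherence = "
          f"{coh[np.argmin(np.abs(f - 20))]:.3f}")
```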
Consistency of Face Identity Processing: Basic & Translational Research
Previous work on individual differences in face identity processing (FIP) has found that the most commonly used lab-based performance assessments are not sufficiently sensitive on their own to measure performance in both the upper and lower tails of the general population simultaneously. More recently, researchers have therefore begun incorporating multiple testing procedures into their assessments. Still, the growing consensus is that, at the individual level, there is considerable variability between test scores, with the consequence that extreme scores will occur simply by chance in large enough samples. To mitigate this issue, our recent work has developed measures of intra-individual FIP consistency to refine selection of those with superior abilities (i.e., from the upper tail). First, we assessed consistency of face matching and recognition in neurotypical controls and compared them to a sample of super-recognizers (SRs). For face matching, we demonstrated psychophysically that SRs show significantly greater consistency than controls in exploiting spatial frequency information. Meanwhile, we showed that SRs’ recognition of faces is highly related to the memorability of identities, yet the two are effectively unrelated among controls. Overall, at the high end of the FIP spectrum, consistency can therefore be a useful tool for revealing both qualitative and quantitative individual differences. Finally, in conjunction with collaborators from the Rheinland-Pfalz Police, we developed a pair of bespoke work samples to obtain bias-free measures of intra-individual consistency in current law enforcement personnel. Officers with higher composite scores on a set of three challenging FIP tests tended to show higher consistency, and vice versa. This suggests that consistency is not only a reasonably good marker of superior FIP abilities but could also present important practical benefits for personnel selection in many other domains of expertise.
Aesthetic preference for art can be predicted from a mixture of low- and high-level visual features
It is an open question whether preferences for visual art can be lawfully predicted from the basic constituent elements of a visual image. Here, we developed and tested a computational framework to investigate how aesthetic values are formed. We show that it is possible to explain human preferences for a visual art piece based on a mixture of low- and high-level features of the image. Subjective value ratings could be predicted not only within but also across individuals, using a regression model with a common set of interpretable features. We also show that the features predicting aesthetic preference can emerge hierarchically within a deep convolutional neural network trained only for object recognition. Our findings suggest that human preferences for art can be explained at least in part as a systematic integration over the underlying visual features of an image.
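As a schematic of the modeling approach (with synthetic data and placeholder feature names; the study's actual features and stimuli are not reproduced here), ratings can be regressed on a shared set of low- and high-level features with a simple linear model:

```python
# Ridge regression of ratings on a mixed feature set (synthetic stand-in).
import numpy as np

rng = np.random.default_rng(1)
n_images = 300
low_level = rng.normal(size=(n_images, 5))    # e.g. brightness, saturation, edges
high_level = rng.normal(size=(n_images, 10))  # e.g. features from a CNN layer
X = np.hstack([low_level, high_level])
ratings = X @ rng.normal(size=15) + 0.5 * rng.normal(size=n_images)

lam = 1.0  # closed-form ridge: w = (X'X + lam*I)^(-1) X'y
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ ratings)
print("R^2:", 1 - np.var(ratings - X @ w) / np.var(ratings))
```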
Synaptic plasticity controls the emergence of population-wide invariant representations in balanced network models
The intensity and features of sensory stimuli are encoded in the activity of neurons in the cortex. In the visual and piriform cortices, the stimulus intensity re-scales the activity of the population without changing its selectivity for the stimulus features. The cortical representation of the stimulus is therefore intensity-invariant. This emergence of network invariant representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though: i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the non-linear network behavior. In this study, we investigate the consistency of invariant representations with a variety of synaptic states in balanced networks. By using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity by inducing changes in the network response to intensity. In particular, we demonstrate how facilitating synaptic states can sharpen the network selectivity while depressing states broaden it. We also show how power-law-type synapses permit the emergence of invariant network selectivity and how this plasticity can be generated by a mix of different plasticity rules. Our results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.
The role of high- and low-level factors in smooth pursuit of predictable and random motions
Smooth pursuit eye movements are among our most intriguing motor behaviors. They keep the line of sight on smoothly moving targets with little or no overt effort or deliberate planning, and they can respond quickly and accurately to changes in the trajectory of target motion. Nevertheless, despite these seemingly automatic characteristics, pursuit is highly sensitive to high-level factors, such as choices about attention or beliefs about the direction of upcoming motion. Investigators have struggled for decades with the problem of incorporating both high- and low-level processes into a single coherent model. This talk will present an overview of the current state of efforts to incorporate high- and low-level influences, as well as new observations that add to our understanding of both types of influences. These observations (in contrast to much of the literature) focus on the directional properties of pursuit. Studies will be presented that show: (1) the direction of smooth pursuit of fields of noisy random dots depends on the relative reliability of the sensory signal and the expected motion direction; (2) smooth pursuit shows predictive responses that depend on the interpretation of cues that signal an impending collision; and (3) smooth pursuit during a change in target direction displays kinematic properties consistent with the well-known two-thirds power law. Implications for incorporating high- and low-level factors into the same framework will be discussed.
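For readers unfamiliar with the two-thirds power law mentioned in point (3), a standard numerical check (not the study's data) shows that for elliptical harmonic motion the tangential speed scales as curvature to the power -1/3, which is the same law expressed in angular form as A = K * C^(2/3):

```python
# Verify the two-thirds power law on an elliptical trajectory.
import numpy as np

t = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
x, y = 2.0 * np.cos(t), 1.0 * np.sin(t)
dx, dy = np.gradient(x, t), np.gradient(y, t)
ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
speed = np.hypot(dx, dy)
curvature = np.abs(dx * ddy - dy * ddx) / speed**3
slope, _ = np.polyfit(np.log(curvature), np.log(speed), 1)
print("log-log slope:", round(slope, 3))  # close to -1/3
```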
Analyzing Retinal Disease Using Electron Microscopic Connectomics
John E. Dowling received his AB and PhD from Harvard University. He taught in the Biology Department at Harvard from 1961 to 1964, first as an instructor, then as assistant professor. In 1964 he moved to Johns Hopkins University, where he held an appointment as associate professor of Ophthalmology and Biophysics. He returned to Harvard as professor of Biology in 1971, was the Maria Moors Cabot Professor of Natural Sciences from 1971 to 2001 and Harvard College Professor from 1999 to 2004, and is presently the Gordon and Llura Gund Professor of Neurosciences. Dowling was chairman of the Biology Department at Harvard from 1975 to 1978 and served as associate dean of the Faculty of Arts and Sciences from 1980 to 1984. He was Master of Leverett House at Harvard from 1981 to 1998 and currently serves as president of the Corporation of the Marine Biological Laboratory in Woods Hole. He is a Fellow of the American Academy of Arts and Sciences, a member of the National Academy of Sciences and a member of the American Philosophical Society.

Dowling's awards include the Friedenwald Medal from the Association for Research in Vision and Ophthalmology in 1970, the Annual Award of the New England Ophthalmological Society in 1979, the Retinal Research Foundation Award for Retinal Research in 1981, an Alcon Vision Research Recognition Award in 1986, a National Eye Institute MERIT award in 1987, the Von Sallman Prize in 1992, the Helen Keller Prize for Vision Research in 2000 and the Llura Ligget Gund Award for Lifetime Achievement and Recognition of Contribution to the Foundation Fighting Blindness in 2001. He was granted an honorary MD degree by the University of Lund (Sweden) in 1982 and an honorary Doctor of Laws degree by Dalhousie University (Canada) in 2012.

Dowling's research interests have focused on the vertebrate retina as a model piece of the brain. He and his collaborators have long been interested in the functional organization of the retina, studying its synaptic organization, the electrical responses of retinal neurons, and the mechanisms underlying neurotransmission and neuromodulation in the retina. About 20 years ago, Dowling became interested in zebrafish as a system in which one could explore the development and genetics of the vertebrate retina. Part of his research team has focused on retinal development in zebrafish and the role of retinoic acid in early eye and photoreceptor development. A second group has developed behavioral tests to isolate mutations, both recessive and dominant, specific to the visual system.
Perception, attention, visual working memory, and decision making: The complete consort dancing together
Our research investigates how processes of attention, visual working memory (VWM), and decision-making combine to translate perception into action. Within this framework, the role of VWM is to form stable representations of transient stimulus events that allow them to be identified by a decision process, which we model as a diffusion process. In psychophysical tasks, we find the capacity of VWM is well described by a sample-size model, which attributes changes in VWM precision with set size to differences in the number of evidence samples recruited to represent stimuli. In the first part of the talk, I review evidence for the sample-size model and highlight the model's strengths: it provides a parameter-free characterization of the set-size effect; it has plausible neural and cognitive interpretations; an attention-weighted version of the model accounts for the power law of VWM; and it accounts for the selective updating of VWM in multiple-look experiments. In the second part of the talk, I provide a characterization of the theoretical relationship between two-choice and continuous-outcome decision tasks using the circular diffusion model, highlighting their common features. I describe recent work characterizing the joint distributions of decision outcomes and response times in continuous-outcome tasks using the circular diffusion model and show that the model can clearly distinguish variable-precision and simple mixture models of the evidence entering the decision process. The ability to distinguish these kinds of processes is central to current VWM studies.
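For concreteness, here is a minimal sketch of the circular diffusion model named above (parameter values are assumptions for illustration): two-dimensional Brownian motion with drift, absorbed on a circle, yields a joint distribution of decision outcomes (hitting angles) and response times (hitting times).

```python
# Simulate the circular diffusion model: drift toward angle 0, absorption
# at radius 1; returns the decision angle and the response time per trial.
import numpy as np

rng = np.random.default_rng(3)

def trial(drift=(0.8, 0.0), radius=1.0, sigma=1.0, dt=0.001):
    x, t = np.zeros(2), 0.0
    while np.hypot(*x) < radius:
        x += np.array(drift) * dt + sigma * np.sqrt(dt) * rng.normal(size=2)
        t += dt
    return np.arctan2(x[1], x[0]), t

angles, rts = zip(*(trial() for _ in range(200)))
print(f"mean |angle error| = {np.mean(np.abs(angles)):.2f} rad, "
      f"mean RT = {np.mean(rts):.2f} s")
```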
Optical and acoustic forces for biomedical applications
Exerting controlled forces in a non-contact way is important in biomedical investigations that require holding, moving, or mechanically probing biomedical samples. Optical and acoustic manipulation of microscopic samples both play a prominent role among suitable technologies. The differences in the physical laws and in the typical length scales governing acoustic and optical forces make them complementary: acoustic forces can levitate large and heavy particles, which optical tweezers could not handle without adverse high-power effects, while optical forces cover subcellular scales. The talk will contrast the two modalities and identify situations where one or the other is favorable, or when a combination of both is the best choice.
Tissue fluidization at the onset of zebrafish gastrulation
Embryo morphogenesis is impacted by dynamic changes in tissue material properties, which have been proposed to occur via processes akin to phase transitions (PTs). Here, we show that rigidity percolation provides a simple and robust theoretical framework to predict material/structural PTs of embryonic tissues from local cell connectivity. By using percolation theory, combined with directly monitoring dynamic changes in tissue rheology and cell contact mechanics, we demonstrate that the zebrafish blastoderm undergoes a genuine rigidity PT, brought about by a small reduction in adhesion-dependent cell connectivity below a critical value. We quantitatively predict and experimentally verify hallmarks of PTs, including power-law exponents and associated discontinuities of macroscopic observables at criticality. Finally, we show that this uniform PT depends on blastoderm cells undergoing meta-synchronous divisions causing random and, consequently, uniform changes in cell connectivity. Collectively, our theoretical and experimental findings reveal the structural basis of material PTs in an organismal context.
Accuracy versus consistency: Investigating face and voice matching abilities
Deciding whether two different face photographs or voice samples are from the same person represents a fundamental challenge in applied settings. To date, most research has focussed on average performance in these tests, failing to consider individual differences and within-person consistency in responses. In the current studies, participants completed the same face or voice matching test on two separate occasions, allowing comparison of overall accuracy across the two timepoints as well as consistency in trial-level responses. In both experiments, participants were highly consistent in their performance. In addition, we demonstrated a large association between consistency and accuracy, with the most accurate participants also tending to be the most consistent. This is an important result for applied settings in which organisational groups of super-matchers are deployed in real-world contexts. Being able to reliably identify these high performers from only a single test has direct implications for recruitment by law enforcement agencies worldwide.
Life of Pain and Pleasure
The ability to experience pain is old in evolutionary terms. It is an experience shared across species. Acute pain is the body’s alarm system, and as such it is a good thing. Pain that persists beyond normal tissue healing time (3-4 months) is defined as chronic – it is the system gone wrong and it is not a good thing. Chronic pain has recently been classified as both a symptom and a disease in its own right. It is one of the largest health problems worldwide, with one in five adults diagnosed with the condition. The brain is key to the experience of pain and pain relief: it is where pain emerges as a perception. So, relating specific brain measures obtained with advanced neuroimaging to the changes patients describe in their pain perception induced by peripheral or central sensitization (i.e. amplification), or by psychological or pharmacological mechanisms, has tremendous value. Identifying where amplification or attenuation processes occur along the journey from injury to the brain (i.e. peripheral nerves, spinal cord, brainstem and brain) for an individual, and relating these neural mechanisms to specific pain experiences, measures of pain relief, persistence of pain states, degree of injury and the subject's underlying genetics, has neuroscientific and potential diagnostic relevance. This is what neuroimaging has afforded – a better understanding and explanation of why someone’s pain is the way it is. We can go ‘behind the scenes’ of the subjective report to find out what key changes and mechanisms make up an individual’s particular pain experience. A key area of development has been pharmacological imaging, where objective evidence of drugs reaching the target and working can be obtained. We even now understand the mechanisms of placebo analgesia – a powerful phenomenon known about for millennia. More recently, researchers have been investigating through brain imaging whether there is a pre-disposing vulnerability in brain networks towards developing chronic pain. So, advanced neuroimaging studies can powerfully aid explanation of a subject’s multidimensional pain experience, pain relief (analgesia) and even what makes them vulnerable to developing chronic pain. The application of this goes beyond the clinic and has relevance in courts of law and other areas of society, such as veterinary care. Relatively less work has been directed at understanding what changes occur in the brain during altered states of consciousness induced either endogenously (e.g. sleep) or exogenously (e.g. anaesthesia). However, that situation is changing rapidly. Our recent multimodal neuroimaging work explores how anaesthetic agents produce altered states of consciousness such that perceptual experiences of pain and awareness are degraded. This is bringing us fascinating insights into the complex phenomena of anaesthesia, consciousness and even the concept of self-hood. These topics will be discussed in my talk alongside my ‘side-story’ of life as a scientist combining academic leadership roles with doing science and raising a family.
Personality Evaluated: What Do Other People Really Think of You?
What do other people really think of you? In this talk, I highlight the unique perspective that other people have on the most consequential aspects of our personalities—how we treat others, our best and worst qualities, and our moral character. First, I compare how people thought they behaved with how they actually behaved in everyday life (based on observer ratings of unobtrusive audio recordings; 217 people, 2,519 observations). I show that when people think they are being kind (vs. rude), others do not necessarily agree. This suggests that people may have blind spots about how well they are treating others in the moment. Next, I compare what 463 people thought their own best and worst traits were with what their friends thought about them. The results reveal that friends are more likely to point out flaws in the prosocial and moral domains (e.g., “inconsiderate”, “selfish”, “manipulative”) than are people themselves. Does this imply that others might want us to be more moral? To find out, I compare what targets (N = 800) want to change about their own personalities with what their close others (N = 958) want to change about them. The results show that people don’t particularly want to be more moral, and their close others don’t want them to be more moral, either. I conclude with future directions on honest feedback as a pathway to self-insight and, ultimately, self-improvement.
Research talk: Drosophila dendrites follow a novel diameter scaling law
Global AND Scale-Free? Spontaneous cortical dynamics between functional networks and cortico-hippocampal communication
Recent advancements in anatomical and functional imaging emphasize the presence of whole-brain networks organized according to functional and connectivity gradients, but how such structure shapes activity propagation and memory processes still lacks a satisfactory model. We analyse the fine-grained spatiotemporal dynamics of spontaneous activity in the entire dorsal cortex through simultaneous recordings of wide-field voltage-sensitive dye (VS) transients, cortical ECoG, and hippocampal LFP in anesthetized mice. Both VS and ECoG show cortical avalanches. When measuring avalanches from the VS signal, we find a major deviation of the size scaling from the power-law distribution predicted by the criticality hypothesis and well approximated by the results from the ECoG. Breaking from scale-invariance, avalanches can thus be grouped into two regimes. Small avalanches consist of a limited number of co-activation modes involving a subset of cortical networks (related to the Default Mode Network), while larger avalanches involve a substantial portion of the cortical surface and can be clustered into two families: one immediately preceded by Retrosplenial Cortex activation and mostly involving medial-posterior networks, the other initiated by Somatosensory Cortex and extending preferentially along the lateral-anterior region. Rather than only differing in size, these two sets of events appear to be associated with markedly different brain-wide dynamical states: they are accompanied by a shift in the hippocampal LFP, from the ripple band (smaller avalanches) to the gamma band (larger avalanches), and correspond to opposite directionality in the cortex-to-hippocampus causal relationship. These results provide a concrete description of global cortical dynamics and show how the cortex in its entirety is involved in bi-directional communication with the hippocampus even in sleep-like states.
Generalization guided exploration
How do people learn in real-world environments where the space of possible actions can be vast or even infinite? The study of human learning has made rapid progress in past decades, from discovering the neural substrate of reward prediction errors, to building AI capable of mastering the game of Go. Yet this line of research has primarily focused on learning through repeated interactions with the same stimuli. How are humans able to rapidly adapt to novel situations and learn from such sparse examples? I propose a theory of how generalization guides human learning, by making predictions about which unobserved options are most promising to explore. Inspired by Roger Shepard’s law of generalization, I show how a Bayesian function learning model provides a mechanism for generalizing limited experiences to a wide set of novel possibilities, based on the simple principle that similar actions produce similar outcomes. This model of generalization generates predictions about the expected reward and underlying uncertainty of unexplored options, where both are vital components in how people actively explore the world. This model allows us to explain developmental differences in the explorative behavior of children, and suggests a general principle of learning across spatial, conceptual, and structured domains.
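The generalization-guided exploration idea maps naturally onto Gaussian-process regression with an upper-confidence-bound choice rule; the sketch below is a generic GP-UCB scheme under assumed kernel and parameter choices, not the speaker's exact model:

```python
# GP posterior over a 1-D space of options; explore where predicted reward
# plus uncertainty (UCB) is highest. "Similar actions produce similar
# outcomes" is encoded by the smooth RBF kernel.
import numpy as np

def rbf(a, b, length=2.0):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * length**2))

options = np.arange(30, dtype=float)
tried_x = np.array([5.0, 12.0])     # actions sampled so far
tried_y = np.array([0.3, 0.9])      # rewards received

K = rbf(tried_x, tried_x) + 1e-4 * np.eye(len(tried_x))
k_star = rbf(options, tried_x)
mean = k_star @ np.linalg.solve(K, tried_y)
var = 1.0 - np.einsum('ij,ij->i', k_star, np.linalg.solve(K, k_star.T).T)
ucb = mean + np.sqrt(np.clip(var, 0.0, None))   # exploration bonus
print("explore next:", options[np.argmax(ucb)])
```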
European University for Brain and Technology Virtual Opening
The European University for Brain and Technology, NeurotechEU, is opening its doors on the 16th of December. From health & healthcare to learning & education, neuroscience has a key role in addressing some of the most pressing challenges that we face in Europe today. Whether the challenge is the translation of fundamental research to advance the state of the art in prevention, diagnosis or treatment of brain disorders, or explaining the complex interactions between the brain, individuals and their environments to design novel practices in cities, schools, hospitals, or companies, brain research is already providing solutions for society at large. There has never been a branch of study that is as inter- and multi-disciplinary as neuroscience. From the humanities, social sciences and law to the natural sciences, engineering and mathematics, all traditional disciplines in modern universities have an interest in brain and behaviour as a subject matter. Neuroscience holds great promise to become an applied science, to provide brain-centred or brain-inspired solutions that could benefit society and kindle a new economy in Europe. The European University of Brain and Technology (NeurotechEU) aims to be the backbone of this new vision by bringing together eight leading universities, 250+ partner research institutions, companies, societal stakeholders, cities, and non-governmental organizations to shape education and training for all segments of society and in all regions of Europe. We will educate students across all levels (bachelor’s, master’s, doctoral as well as life-long learners), train the next generation of multidisciplinary scientists, scholars and graduates, and provide them with direct access to cutting-edge infrastructure for fundamental, translational and applied research to help Europe address this unmet challenge.
The emergence of contrast invariance in cortical circuits
Neurons in the primary visual cortex (V1) encode the orientation and contrast of visual stimuli through changes in firing rate (Hubel and Wiesel, 1962). Their activity typically peaks at a preferred orientation and decays to zero at the orientations that are orthogonal to the preferred. This activity pattern is re-scaled by contrast but its shape is preserved, a phenomenon known as contrast invariance. Contrast-invariant selectivity is also observed at the population level in V1 (Carandini and Sengpiel, 2004). The mechanisms supporting the emergence of contrast invariance at the population level remain unclear. How does the activity of different neurons with diverse orientation selectivity and non-linear contrast sensitivity combine to give rise to contrast-invariant population selectivity? Theoretical studies have shown that in the balanced limit, the properties of single neurons do not determine the population activity (van Vreeswijk and Sompolinsky, 1996). Instead, the synaptic dynamics (Mongillo et al., 2012) as well as the intracortical connectivity (Rosenbaum and Doiron, 2014) shape the population activity in balanced networks. We report that short-term plasticity can change the synaptic strength between neurons as a function of the presynaptic activity, which in turn modifies the population response to a stimulus. Thus, the same circuit can process a stimulus in different ways (linearly, sublinearly, or supralinearly) depending on the properties of the synapses. We found that balanced networks with excitatory-to-excitatory short-term synaptic plasticity cannot be contrast-invariant. Instead, short-term plasticity modifies the network selectivity such that the tuning curves are narrower (broader) for increasing contrast if synapses are facilitating (depressing). Based on these results, we wondered whether balanced networks with plastic synapses (other than short-term) can support the emergence of contrast-invariant selectivity. Mathematically, we found that the only synaptic transformation that supports perfect contrast invariance in balanced networks is a power-law release of neurotransmitter as a function of the presynaptic firing rate (in the excitatory-to-excitatory and in the excitatory-to-inhibitory connections). We validate this finding using spiking network simulations, where we report contrast-invariant tuning curves when synapses release neurotransmitter following a power-law function of the presynaptic firing rate. In summary, we show that synaptic plasticity controls the type of non-linear network response to stimulus contrast and that it can be a potential mechanism mediating the emergence of contrast invariance in balanced networks with orientation-dependent connectivity. Our results therefore connect the physiology of individual synapses to the network level and may help understand the establishment of contrast-invariant selectivity.
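A toy numerical check of the key claim (a static transfer-function stand-in, not the full balanced spiking network) shows why a power-law synaptic transfer is special: it rescales a contrast-scaled tuning curve without changing its shape, whereas a saturating transfer broadens the shape with contrast.

```python
# Compare tuning-curve shape across contrasts for power-law vs saturating
# synaptic transfer (width = fraction of orientations above half maximum).
import numpy as np

theta = np.linspace(-np.pi / 2, np.pi / 2, 181)
tuning = np.exp(-theta**2 / 0.3)       # input selectivity, peak at 0

def shape(r):
    return r / r.max()                 # normalize out overall gain

for c in (0.2, 1.0):                   # two stimulus contrasts
    power = shape((c * tuning) ** 1.5)     # power-law release
    sat = shape(np.tanh(3 * c * tuning))   # saturating release
    print(f"contrast {c}: power-law width {np.mean(power > 0.5):.3f}, "
          f"saturating width {np.mean(sat > 0.5):.3f}")
```

The power-law widths coincide across contrasts (the contrast factor cancels in the normalization), while the saturating widths do not.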
Is there universality in biology?
It is sometimes said that there are two reasons why physics is so successful as a science. One is that it deals with very simple problems. The other is that it attempts to account only for universal aspects of systems at a desired level of description, with lower level phenomena subsumed into a small number of adjustable parameters. It is a widespread belief that this approach seems unlikely to be useful in biology, which is intimidatingly complex, where “everything has an exception”, and where there are a huge number of undetermined parameters. I will try to argue, nonetheless, that there are important, experimentally testable aspects of biology that exhibit universality, and should be amenable to being tackled from a physics perspective. My suggestion is that this can lead to useful new insights into the existence and universal characteristics of living systems. I will try to justify this point of view by contrasting the goals and practices of the field of condensed matter physics with materials science, and then by extension, the goals and practices of the newly emerging field of “Physics of Living Systems” with biology. Specific biological examples that I will discuss include the following: universal patterns of gene expression in cell biology; universal scaling laws in ecosystems, including the species-area law, Kleiber’s law and the Paradox of the Plankton; universality of the genetic code; universality of thermodynamic utilization in microbial communities; and universal scaling laws in the tree of life. The question of what can be learned from studying universal phenomena in biology will also be discussed. Universal phenomena, by their very nature, shed little light on detailed microscopic levels of description. Yet there is no point in seeking idiosyncratic mechanistic explanations for phenomena whose explanation is found in rather general principles, such as the central limit theorem, that every microscopic mechanism is constrained to obey. Thus, physical perspectives may be better suited than traditional biological perspectives to answering certain questions, such as those about universality. Concomitantly, it must be recognized that the identification and understanding of universal phenomena may not be a good answer to questions that have traditionally occupied biological scientists. Lastly, I plan to talk about what is perhaps the central question of universality in biology: why does the phenomenon of life occur at all? Is it an inevitable consequence of the laws of physics or some special geochemical accident? What methodology could even begin to answer this question? I will try to explain why traditional approaches to biology do not aim to answer this question, by comparing with our understanding of superconductivity as a physical phenomenon, and with the theory of universal computation. References: Nigel Goldenfeld, Tommaso Biancalani and Farshid Jafarpour, Universal biology and the statistical mechanics of early life, Phil. Trans. R. Soc. A 375, 20160341 (2017); Nigel Goldenfeld and Carl R. Woese, Life is Physics: evolution as a collective phenomenon far from equilibrium, Annu. Rev. Condens. Matter Phys. 2, 375-399 (2011).
Neural coding in the auditory cortex - Emergent Scientists Seminar Series
Dr Jennifer Lawlor. Title: Tracking changes in complex auditory scenes along the cortical pathway. Complex acoustic environments, such as a busy street, are characterised by their ever-changing dynamics. Despite their complexity, listeners can readily tease apart relevant changes from irrelevant variations. This requires continuously tracking the appropriate sensory evidence while discarding noisy acoustic variations. Despite the apparent simplicity of this perceptual phenomenon, the neural basis of the extraction of relevant information from complex continuous streams for goal-directed behavior is not well understood. As a minimalistic model of change detection in complex auditory environments, we designed broad-range tone clouds whose first-order statistics change at a random time. Subjects (humans or ferrets) were trained to detect these changes. They faced the dual task of estimating the baseline statistics and detecting a potential change in those statistics at any moment. To characterize the extraction and encoding of relevant sensory information along the cortical hierarchy, we first recorded the brain electrical activity of human subjects engaged in this task using electroencephalography. Human performance and reaction times improved with longer pre-change exposure, consistent with improved estimation of baseline statistics. Change-locked and decision-related EEG responses were found at a centro-parietal scalp location, whose slope depended on change size, consistent with sensory evidence accumulation. To further this investigation, we performed a series of electrophysiological recordings in the primary auditory cortex (A1), secondary auditory cortex (PEG) and frontal cortex (FC) of the fully trained, behaving ferret. A1 neurons exhibited strong onset responses and change-related discharges specific to neuronal tuning. The PEG population showed reduced onset-related responses but more categorical change-related modulations. Finally, a subset of FC neurons (dlPFC/premotor) presented a generalized response to all change-related events only during behavior. Using a Generalized Linear Model (GLM), we show that the same subpopulation in FC encodes sensory and decision signals, suggesting that FC neurons could convert sensory evidence into perceptual decisions. Altogether, these area-specific responses suggest a behavior-dependent mechanism of sensory extraction and generalization of task-relevant events.

Aleksandar Ivanov. Title: How does the auditory system adapt to different environments: A song of echoes and adaptation
Bacterial cell size, scaling laws and the physicochemical properties of the cell
High-dimensional geometry of visual cortex
Interpreting high-dimensional datasets requires new computational and analytical methods. We developed such methods to extract and analyze neural activity from 20,000 neurons recorded simultaneously in awake, behaving mice. The neural activity was not low-dimensional as commonly thought, but instead was high-dimensional and obeyed a power-law scaling across its eigenvalues. We developed a theory that proposes that neural responses to external stimuli maximize information capacity while maintaining a smooth neural code. We then observed power-law eigenvalue scaling in many real-world datasets, and therefore developed a nonlinear manifold embedding algorithm called Rastermap that can capture such high-dimensional structure.
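A hedged sketch of the eigenspectrum analysis (synthetic data with a known exponent, not the recordings): sort the eigenvalues of the neural covariance in decreasing order and estimate the power-law exponent from the slope in log-log coordinates.

```python
# Build activity whose covariance spectrum decays as rank^(-1), then
# recover the exponent from the sorted eigenvalues of the sample covariance.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_stim = 1000, 2000
target = np.arange(1, n_neurons + 1) ** -1.0              # planted spectrum
basis, _ = np.linalg.qr(rng.normal(size=(n_neurons, n_neurons)))
activity = (basis * np.sqrt(target)) @ rng.normal(size=(n_neurons, n_stim))

eigvals = np.linalg.eigvalsh(np.cov(activity))[::-1]      # descending order
ranks = np.arange(1, n_neurons + 1)
keep = slice(10, 500)                                     # avoid spectrum edges
slope, _ = np.polyfit(np.log(ranks[keep]), np.log(eigvals[keep]), 1)
print("estimated power-law exponent:", round(-slope, 2))  # ~ 1.0
```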
A generalized Weber’s law reveals behaviorally limiting slow noise in evidence accumulation
COSYNE 2023
Variance-limited scaling laws for plausible learning rules
COSYNE 2023
A universal power law in visual adaptation: balancing representation fidelity and metabolic cost
COSYNE 2025