Statistics
Lancaster University Leipzig
Assistant Professor (Lecturer) in Computer Science, Data Science
Lancaster University Leipzig
Salary: €60,722 to €68,051
Closing Date: Monday 31 July 2023
Interview Date: Wednesday 16 August 2023
Reference: 0721-23
Location: Leipzig, Germany

Lancaster University invites applications for one post of Assistant Professor (Lecturer) in Computer Science to join its exciting new campus in Leipzig, Germany. Located in one of Germany’s most vibrant, livable, and attractive cities, the Leipzig campus offers the same high academic quality and fully rounded student experience as in the UK, with a strong strategic vision of excellence in teaching, research, and engagement.

The position is to support the upcoming MSc programme in Data Science and to complement the department’s current research strengths in Intelligent Systems and Artificial Intelligence. The candidate is expected to have solid research foundations and a strong commitment to teaching Data Science topics such as Data Science Fundamentals, Data Mining, and Intelligent Data Analysis and Visualisation. The ideal candidate should have a completed PhD and demonstrated capabilities in teaching, research, and engagement in the areas of Data Science. Candidates should be able to deliver excellent teaching at graduate and undergraduate levels, pursue their own independent research, and develop publications in high-quality academic journals or conferences. Candidates are expected to have a suitable research track record of targeting high-quality journals or a record of equivalent high-quality research outputs.

Colleagues joining LU Leipzig’s computer science department will benefit from a very active research team in Leipzig focused on Intelligent Systems and Artificial Intelligence in the wider sense, and will also have access to the research environment at the School of Computing and Communications in the UK. We offer a collegial and multidisciplinary environment with enormous potential for collaboration and for work on challenging real-world problems. German language skills are not a prerequisite for the role, though we are seeking applicants with an interest in making a long-term commitment to Lancaster University in Leipzig.

Please note this role is a full-time, indefinite-duration appointment based in Leipzig, Germany. The contract is with Lancaster University Leipzig under German law. Ideally, the appointed candidate would start no later than 1 January 2024. The role is permanent after a six-month probationary period and offers the opportunity of promotion to Associate Professor and Professor in line with performance. We also provide support with administrative procedures related to settling in Germany, assistance in finding accommodation, family integration (school registration for children), German language training (for you and your partner), and relocation expenses.
Navin Pokala
The Department of Biological and Chemical Sciences at New York Institute of Technology seeks outstanding applicants for a tenure-track position at the Assistant Professor level to develop a research program in the broadly defined fields of biostatistics, bioinformatics, or computational biology that complements existing research programs and carries potential to establish external collaborations. The successful candidate will teach introductory and advanced courses in the biological sciences at the undergraduate level, notably Biostatistics. The Department has undergraduate programs in Biology, Chemistry, and Biotechnology at the New York City and Long Island (Old Westbury) campuses. New York Tech emphasizes interdisciplinary scholarship, research, and teaching. Department faculty research interests are diverse, including medicinal and organic chemistry, neuroscience, cell and molecular biology, genetics, biochemistry, microbiology, computational chemistry, and analytical chemistry. Faculty in the Department have ample opportunity to collaborate with faculty at New York Tech’s College of Engineering and Computer Sciences and College of Osteopathic Medicine.
John Pearson
The laboratories of Dr. Richard Mooney (https://www.neuro.duke.edu/mooney-lab) and Dr. John Pearson (http://pearsonlab.github.io) at Duke University are seeking two (2) postdoctoral scholars in conjunction with an NIH BRAIN Initiative-funded project investigating the contributions of basal ganglia to vocal motor exploration and reinforcement learning. Candidates will combine state-of-the-art viral, electrophysiological, imaging, and computational methods and work as part of a multi-institution team that also includes Dr. Carlos Lois, in the Division of Biology and Biological Engineering at CalTech, and Dr. Tim Gardner, in the Phil and Penny Knight Campus for Accelerating Scientific Impact at the University of Oregon. The first postdoc, appointed in the Department of Neurobiology (https://careers.duke.edu/job/Durham-POSTDOCTORAL-ASSOCIATE-NC-27710/681792500/), will use behavioral, optogenetic, electrophysiological, and optical imaging methods to explore how cortico-basal ganglia circuits contribute to vocal exploration and learning. Previous experience with imaging and electrophysiological methods is desirable, and experience using viral gene transfer methods to monitor and manipulate neural activity will be especially helpful. Candidates with strong quantitative skills and an interest in developing or improving computational skills are especially desired. This postdoc will work closely with a related postdoc hire in the Department of Biostatistics & Bioinformatics, as well as team members across all institutions. The second postdoc, appointed in the Department of Biostatistics & Bioinformatics (https://careers.duke.edu/job/Durham-POSTDOCTORAL-ASSOCIATE-NC-27710/681799400/), will perform computational modeling of reinforcement learning in the birdsong system, including development of new statistical machine learning methods for the analysis of song, electrophysiology, and calcium imaging data. The postdoc will work closely with experimentalists to design studies, analyze data, and refine hypotheses. Candidates should hold a PhD in a quantitative discipline such as computational neuroscience, physics, statistics, or computer science. Previous experience in neurobiology is a plus but not required. This postdoc will work closely with a related postdoc hire in the Department of Neurobiology, as well as team members across the other institutions.
Rava Azeredo da Silveira
Several postdoctoral openings in the lab of Rava Azeredo da Silveira (Paris & Basel)

The lab of Rava Azeredo da Silveira invites applications for Postdoctoral Researcher positions at ENS, Paris, and IOB, an associated institute of the University of Basel. Research questions will be chosen from a broad range of topics in theoretical/computational neuroscience and cognitive science (see the description of the lab’s activity below). One of the postdoc positions to be filled in Basel will be part of a collaborative framework with Michael Woodford (Columbia University) and will involve projects relating the study of decision making to models of perception and memory. Candidates with backgrounds in mathematics, statistics, artificial intelligence, physics, computer science, engineering, biology, and psychology are welcome. Experience with data analysis and proficiency with numerical methods, in addition to familiarity with neuroscience topics and mathematical and statistical methods, are desirable. Equally desirable are a spirit of intellectual adventure, eagerness, and drive. The positions come with highly competitive work conditions and salaries.

Application deadline: Applications will be reviewed starting on 1 November 2020.

How to apply: Please send the following information in a single PDF to silveira@iob.ch:
1. a letter of motivation;
2. a statement of research interests, limited to two pages;
3. a curriculum vitæ including a list of publications;
4. any relevant publications that you wish to showcase.
In addition, please arrange for three letters of recommendation to be sent to the same email address. In all email correspondence, please include “APPLICATION-POSTDOC” in the subject header; otherwise the application will not be considered.

* ENS, together with a number of neighboring institutions (College de France, Institut Curie, ESPCI, Sorbonne Université, and Institut Pasteur), offers a rich scientific and intellectual environment, with a strong representation in computational neuroscience and related fields.

* IOB is a research institute combining basic and clinical research. Its mission is to drive innovations in understanding vision and its diseases and to develop new therapies for vision loss. IOB is an equal-opportunity employer with family-friendly work policies.

* The Silveira Lab focuses on a range of topics tied together by a central question: How does the brain represent and manipulate information? Among the more concrete approaches to this question, the lab analyses and models neural activity in circuits that can be identified, recorded from, and perturbed experimentally, such as visual neural circuits in the retina and the cortex. Establishing links between physiological specificity and the structure of neural activity yields an understanding of circuits as building blocks of cerebral information processing. On a more abstract level, the lab investigates the representation of information in populations of neurons, from a statistical and algorithmic—rather than mechanistic—point of view, through theories of coding and data analyses. These studies aim at understanding the statistical nature of high-dimensional neural activity in different conditions, and how it serves to encode and process information from the sensory world. In the context of cognitive studies, the lab investigates mental processes such as inference, learning, and decision-making, through both theoretical developments and behavioral experiments. A particular focus is the study of neural constraints and limitations and, further, their impact on mental processes. Neural limitations impinge on the structure and variability of mental representations, which in turn inform the cognitive algorithms that produce behavior. The lab explores the nature of neural limitations, mental representations, and cognitive algorithms, and their interrelations.
Mitra Baratchi
We are looking for an excellent candidate with an MSc in Artificial Intelligence, Computer Science, Mathematics, Statistics, or a closely related field to join a project focused on developing an advanced transparent machine learning framework with applications to movement behaviour analysis. Smartwatches and other wearable technologies allow us to continuously collect data on our daily movement behaviour patterns. We would like to understand how machine learning techniques can be used to learn causal effects from time-series data to identify and recommend effective changes in daily activities (i.e., possible behavioural interventions) that are expected to result in concrete health improvements (e.g., improving cardiorespiratory fitness). This research, at the intersection of machine learning and causality, aims to develop algorithms for finding causal relations between behavioural indicators learned from the time-series data and associated health outcomes.
Felipe Tobar
The Initiative for Data & Artificial Intelligence at Universidad de Chile is looking for Postdoctoral Researchers to join a collaborative team of PIs working on theoretical and applied aspects of Data Science. The role of the postholder(s) is twofold: first, they will engage and collaborate in current projects at the Initiative related to statistical machine learning, natural language processing, and deep learning, with applications to time series analysis, health informatics, and astroinformatics. Second, they are expected to bring novel research lines aligned with those currently featured at the Initiative, possibly in the form of theoretical work or applications to real-world problems of general interest. These positions are offered on a fixed-term basis for up to one year, with the possibility of a further year's extension.
N/A
IIT welcomes applicants with an outstanding track record in Computational Neuroscience. Appropriate research areas include computational and modelling approaches for understanding the function of the nervous system. Investigators with expertise in mathematics, physics, statistics, and machine learning for neuroscience are also encouraged to apply. The position can be either tenured or tenure-track, depending on seniority and expertise. If tenure-track, the position is for an initial period of 5 years, with renewal depending on evaluation. We provide generous support for salary, start-up budget, and annual running costs.
Jorge Almeida
This Master’s is centered on research and on preparing students for a PhD in Psychology. Most of its core courses focus on hands-on in-lab research, science management and communication, and statistics, while offering the possibility of taking many elective courses in computational biology/neuroscience, neuroimaging, and other areas. The Master’s in Psychological Sciences has an English-only program available for those interested (the official languages are Portuguese and English). A major concentration of this Master’s will be in Cognitive Neuroscience, and will be associated with lab work and mentoring at the Proaction Lab, led by Jorge Almeida, and within CogBooster, a transformative ERA Chair grant from the European Union to FPCE-UC (led by Alfonso Caramazza).
Jorge Jaramillo
The Grossman Center for Quantitative Biology and Human Behavior at the University of Chicago seeks outstanding applicants for multiple postdoctoral positions in computational and theoretical neuroscience. Appointees will join as Grossman Center Postdoctoral Fellows, with the freedom to work with any of its faculty members. We especially welcome applicants who develop computational models and machine learning analysis methods to study the brain at the circuits, systems, or cognitive levels. The current Grossman Center faculty members available to work with are Brent Doiron, Jorge Jaramillo, and Ramon Nogueira. Appointees will have access to state-of-the-art facilities and multiple opportunities for collaboration with exceptional experimental labs within the Department of Neurobiology, as well as other labs from the departments of Physics, Computer Science, and Statistics. The Grossman Center offers competitive postdoctoral salaries in the vibrant and international city of Chicago, and a rich intellectual environment that includes the Argonne National Laboratory and the Data Science Institute. The Grossman Center is currently engaged in a major expansion that includes the incorporation of several new faculty members in the next few years.
Jörn Diedrichsen
We are looking to recruit a new postdoctoral associate for a large collaborative project on the anatomical development of the human cerebellum. The overall goal of the project is to develop a high-resolution normative model of human cerebellar development across the entire life span. The successful candidate will join the Diedrichsen Lab (Western University, Canada) and will work with a team of colleagues at Erasmus Medical Center, the Donders Institute (Netherlands), McGill, Dalhousie, Sick Kids, and UBC (Canada).
Marcos M. Raimundo
We are looking for a motivated postdoc with an interest in LLMs applied to code.
I-Chun Lin
The Gatsby Unit is seeking applications for a postdoctoral training fellowship under Dr Agostina Palmigiano, focused on developing theoretical approaches to investigate the mechanisms underlying sensory, motor, or cognitive computations. Responsibilities include the primary execution of the project, opportunities for co-supervision of students, presentation of results at conferences and seminars, and publication in suitable media. The post is initially funded for 2 years with the possibility of a one-year extension.
Ali Ramezani-Kebrya
13 PhD positions in Machine Learning, Statistics, Logic, Language Technology, and Ethics at Integreat, The Norwegian Centre for Knowledge-driven Machine Learning, University of Oslo & UiT - The Arctic University of Norway, Tromsø. The positions are part of an interdisciplinary PhD program with engaged supervisors and many fellow students. The projects include: Project 9: Embedded Sufficient Statistics (in Oslo), Project 4: Exploration and Control of the Inner Representation in Generative AI Models (in Tromsø), and Project 3: Developing Novel Information Theoretic Discrepancy Measures (in Tromsø).
Marcos M. Raimundo
We are looking for a motivated postdoc with an interest in machine learning applied to fair and transparent credit scoring.
Pranav Nerurkar
Join our comprehensive online internship program focusing on Two Sample Hypothesis Testing, designed for students eager to delve into the world of statistical analysis and coding. This internship offers a unique blend of theoretical learning and practical application, providing participants with a robust understanding of hypothesis testing using real-world data. Key features include Interactive Video Lectures, Hands-On Coding Assignments, Practical Applications, Mentorship and Support, and Certification.
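For a flavor of the core topic, a two-sample comparison takes only a few lines in Python. The sketch below uses synthetic data and scipy; it is illustrative only and not drawn from the internship's own materials.

```python
# Minimal two-sample (Welch's) t-test on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=1.0, size=40)   # e.g., a control group
group_b = rng.normal(loc=5.6, scale=1.2, size=35)   # e.g., a treatment group

# Welch's variant does not assume equal variances across groups.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# Reject H0 (equal means) at the 5% level when p_value < 0.05.
```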
Kenji Doya
Okinawa Institute of Science and Technology Graduate University (OIST) is inviting applications for tenure-track and tenured faculty positions in Mathematical Sciences, including applied mathematics, statistics, and data sciences. OIST provides an innovative, international, and interdisciplinary research environment with strong internal funding for research units.
Tiago de Paula Peixoto
We’re hiring a post-doctoral researcher to join the Inverse Complexity Lab at IT:U, Linz, Austria. We are looking for an early-stage or more advanced postdoctoral scholar who is interested in building on our ongoing projects, or in developing their own research agenda related to inverse problems in network science, complex systems modeling, and/or connections to machine learning. This position is not bound to a particular research project, and the successful applicant will enjoy intellectual independence and freedom to choose research topics. The position is guaranteed for 3 years. The gross salary range is €66,532 to €70,000 (corrected for inflation), depending on previous experience. The employment conditions in Austria include completely free health care (also for family members), social security benefits, 25 days of paid vacation per year, flexible working hours, and the option of working from home. In addition, IT:U will provide a KlimaTicket—a unified transport pass which gives free access to the entire transportation system in Austria, including trains and local public transport.
Pedro Delicado
We are looking for candidates for a pre-doctoral contract associated with the research project “Advanced Statistics and Data Science 2: New data, new models, new challenges” (PID2023-148158OB-I00), funded by MCIU/AEI/10.13039/501100011033/FEDER, UE. The main objective of this project is to address the challenges posed by new data sets (increasingly large and complex) and new ways of analyzing them (more flexible but less transparent than traditional statistical techniques). We propose to pursue five lines of research: (1) New directions in the interpretability and explainability of predictive models. (2) Data from wearable devices: A functional approach to data analysis. (3) EEG data: Contributions from functional data analysis and interpretability. (4) Data from graphs: Bayesian prediction and modelling. (5) Non-linear dimensionality reduction methods for big data.
Pavel Sanda
We offer 2 fully funded Postdoc / Ph.D. positions for the OPJAK Trust project. Our group is part of the Department of Complex Systems at the Institute of Computer Science of the Czech Academy of Sciences. The topic is focused on analysing modern and historical textual records through the lenses of graph-network analysis, statistics, and machine learning. The topics include: Networks in Historical Scholarly/Philosophical Writings and Information Flows in Scholarly & Scientific Networks. Positions are available starting 1 March 2025 or later. Applications will be reviewed on a rolling basis until the positions are filled. Contracts are for 12–36 months, with exceptions negotiable, and extension is possible until the end of the project (31 December 2028), with support for transition to tenure-track positions or external funding applications. This is a fixed-term appointment; part-time contracts are possible. We offer a competitive salary based on qualifications and experience, with bonuses and travel funding for conferences and research stays depending on performance. No teaching duties.
Anna V. Kononova
Utrecht University offers a fully paid 4-year PhD position on causality-aware explanations for probabilistic graphical models. This project is a collaboration between the Intelligent Systems group at Utrecht University and the Leiden Institute of Advanced Computer Science, and will be part of the Hybrid Intelligence Centre. The position falls under the Collective Labour Agreement of Dutch Universities (solid pension scheme, substantial holiday leave, paid sick leave, maternity/paternity leave) with a gross monthly salary between €2,901 and €3,707, 8% holiday pay, and an 8.3% year-end bonus.
Mathematical and computational modelling of ocular hemodynamics: from theory to applications
Changes in ocular hemodynamics may be indicative of pathological conditions in the eye (e.g. glaucoma, age-related macular degeneration), but also elsewhere in the body (e.g. systemic hypertension, diabetes, neurodegenerative disorders). Thanks to its transparent fluids and structures that allow light to pass through, the eye offers a unique window on the circulation from large to small vessels, and from arteries to veins. Deciphering the causes that lead to changes in ocular hemodynamics in a specific individual could help prevent vision loss as well as aid in the diagnosis and management of diseases beyond the eye. In this talk, we will discuss how mathematical and computational modelling can help in this regard. We will focus on two main factors, namely blood pressure (BP), which drives the blood flow through the vessels, and intraocular pressure (IOP), which compresses the vessels and may impede the flow. Mechanism-driven models translate fundamental principles of physics and physiology into computable equations that allow for the identification of cause-to-effect relationships among interplaying factors (e.g. BP, IOP, blood flow). While invaluable for causality, mechanism-driven models are often based on simplifying assumptions to make them tractable for analysis and simulation, which often brings into question their relevance beyond theoretical explorations. Data-driven models offer a natural remedy to address these shortcomings. Data-driven methods may be supervised (based on labelled training data) or unsupervised (clustering and other data analytics), and they include models based on statistics, machine learning, deep learning, and neural networks. Data-driven models naturally thrive on large datasets, making them scalable to a plethora of applications. While invaluable for scalability, data-driven models are often perceived as black boxes, as their outcomes are difficult to explain in terms of fundamental principles of physics and physiology, and this limits the delivery of actionable insights. The combination of mechanism-driven and data-driven models allows us to harness the advantages of both, as mechanism-driven models excel at interpretability but suffer from a lack of scalability, while data-driven models are excellent at scale but suffer in terms of generalizability and insights for hypothesis generation. This combined, integrative approach represents the pillar of the interdisciplinary approach to data science that will be discussed in this talk, with application to ocular hemodynamics and specific examples in glaucoma research.
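As a concrete example of the computable equations such mechanism-driven models rest on, a common textbook-style approximation (not necessarily the specific model discussed in this talk) treats flow through a vascular bed as ohmic, driven by the ocular perfusion pressure:

```latex
% Lumped-parameter sketch: flow Q through a vascular bed of resistance R
% is driven by the ocular perfusion pressure (OPP), which rises with mean
% arterial pressure (MAP) and falls as IOP compresses the vessels.
\[
  Q = \frac{\mathrm{OPP}}{R},
  \qquad
  \mathrm{OPP} \approx \tfrac{2}{3}\,\mathrm{MAP} - \mathrm{IOP}.
\]
```

Even this minimal relation captures the two factors highlighted in the abstract: raising BP increases flow, while raising IOP impedes it.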
A recurrent network model of planning predicts hippocampal replay and human behavior
When interacting with complex environments, humans can rapidly adapt their behavior to changes in task or context. To facilitate this adaptation, we often spend substantial periods of time contemplating possible futures before acting. For such planning to be rational, the benefits of planning to future behavior must at least compensate for the time spent thinking. Here we capture these features of human behavior by developing a neural network model where not only actions, but also planning, are controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences drawn from its own policy, which we refer to as 'rollouts'. Our results demonstrate that this agent learns to plan when planning is beneficial, explaining the empirical variability in human thinking times. Additionally, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded in a spatial navigation task, in terms of both their spatial statistics and their relationship to subsequent behavior. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions, where hippocampal replays are triggered by -- and in turn adaptively affect -- prefrontal dynamics.
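A minimal sketch of the rollout idea follows. It is illustrative only: the model described above couples rollouts to a learned recurrent prefrontal policy, whereas this toy assumes a known transition table and a fixed softmax policy.

```python
# Schematic of policy rollouts: sample imagined action sequences from the
# current policy in an internal model, then bias the next real action
# toward the first action of the best imagined sequence.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, horizon = 10, 4, 5

# Assumed internal model: a known transition table and reward vector.
T = rng.integers(0, n_states, size=(n_states, n_actions))  # s' = T[s, a]
reward = rng.normal(size=n_states)

def policy(s, logits):
    """Sample an action from a softmax policy."""
    p = np.exp(logits[s] - logits[s].max())
    p /= p.sum()
    return rng.choice(n_actions, p=p)

def rollout(s, logits):
    """Imagine one action sequence drawn from the current policy."""
    total, first_action = 0.0, None
    for _ in range(horizon):
        a = policy(s, logits)
        if first_action is None:
            first_action = a
        s = T[s, a]
        total += reward[s]
    return first_action, total

logits = np.zeros((n_states, n_actions))   # uniform policy for illustration
s = 0
# Before acting, sample a few rollouts and favor the best imagined outcome.
candidates = [rollout(s, logits) for _ in range(10)]
best_action, best_return = max(candidates, key=lambda c: c[1])
print(f"chosen action {best_action} with imagined return {best_return:.2f}")
```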
NII Methods (journal club): NeuroQuery, comprehensive meta-analysis of human brain mapping
We will discuss this paper on Neuroquery, a relatively new web-based meta-analysis tool: https://elifesciences.org/articles/53385.pdf. This is different from Neurosynth in that it generates meta-analysis maps using predictive modeling from the string of text provided at the prompt, instead of performing inferential statistics to calculate the overlap of activation from different studies. This allows the user to generate predictive maps for more nuanced cognitive processes - especially for clinical populations which may be underrepresented in the literature compared to controls - and can be useful in generating predictions about where the activity will be for one's own study, and for creating ROIs.
The development of visual experience
Vision and visual cognition are experience-dependent, with likely multiple sensitive periods, but we know very little about the statistics of visual experience at the scale of everyday life and how they might change with development. By traditional assumptions, the world at the massive scale of daily life presents pretty much the same visual statistics to all perceivers. I will present an overview of our work on ego-centric vision showing that this is not the case. The momentary image received at the eye is spatially selective, dependent on the location, posture, and behavior of the perceiver. If a perceiver’s location, possible postures, and/or preferences for looking at some kinds of scenes over others are constrained, then their sampling of images from the world, and thus the visual statistics at the scale of daily life, could be biased. I will present evidence, with respect to both low-level and higher-level visual statistics, about the developmental changes in the visual input over the first 18 months post-birth.
Developmentally structured coactivity in the hippocampal trisynaptic loop
The hippocampus is a key player in learning and memory. Research into this brain structure has long emphasized its plasticity and flexibility, though recent reports have come to appreciate its remarkably stable firing patterns. How novel information incorporates itself into networks that maintain their ongoing dynamics remains an open question, largely due to a lack of experimental access points into network stability. Development may provide one such access point. To explore this hypothesis, we birthdated CA1 pyramidal neurons using in-utero electroporation and examined their functional features in freely moving, adult mice. We show that CA1 pyramidal neurons of the same embryonic birthdate exhibit prominent cofiring across different brain states, including behavior in the form of overlapping place fields. Spatial representations remapped across different environments in a manner that preserves the biased correlation patterns between same-birthdate neurons. These features of CA1 activity could partially be explained by structured connectivity between pyramidal cells and local interneurons. These observations suggest the existence of developmentally installed circuit motifs that impose powerful constraints on the statistics of hippocampal output.
The strongly recurrent regime of cortical networks
Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons. These neurons exhibit highly complex coordination patterns. Where does this complexity stem from? One candidate is the ubiquitous heterogeneity in connectivity of local neural circuits. Studying neural network dynamics in the linearized regime and using tools from statistical field theory of disordered systems, we derive relations between structure and dynamics that are readily applicable to subsampled recordings of neural circuits: Measuring the statistics of pairwise covariances allows us to infer statistical properties of the underlying connectivity. Applying our results to spontaneous activity of macaque motor cortex, we find that the underlying network operates in a strongly recurrent regime. In this regime, network connectivity is highly heterogeneous, as quantified by a large radius of bulk connectivity eigenvalues. Being close to the point of linear instability, this dynamical regime predicts a rich correlation structure, a large dynamical repertoire, long-range interaction patterns, relatively low dimensionality and a sensitive control of neuronal coordination. These predictions are verified in analyses of spontaneous activity of macaque motor cortex and mouse visual cortex. Finally, we show that even microscopic features of connectivity, such as connection motifs, systematically scale up to determine the global organization of activity in neural circuits.
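The structure-dynamics relations mentioned here build on a standard result for linearized rate networks: for dynamics dx/dt = (W - I)x + noise with noise covariance D, the stationary covariance C solves the Lyapunov equation (W - I)C + C(W - I)^T + D = 0. The abstract's inference runs this relation in reverse, from measured covariances to connectivity statistics; the sketch below, assuming Gaussian random connectivity, shows the forward direction.

```python
# Covariances of a linearized rate network (a standard textbook result,
# not the speakers' inference pipeline). Dynamics: dx/dt = (W - I) x + xi,
# with noise covariance D; the stationary covariance C solves
# (W - I) C + C (W - I)^T + D = 0.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
N, g = 200, 0.8                       # network size; g = radius of bulk eigenvalues
W = rng.normal(0, g / np.sqrt(N), size=(N, N))
A = W - np.eye(N)
D = np.eye(N)                         # independent unit-variance noise

C = solve_continuous_lyapunov(A, -D)  # solves A C + C A^T = -D
radius = np.abs(np.linalg.eigvals(W)).max()
print(f"bulk eigenvalue radius ~ {radius:.2f}")
print(f"mean variance {np.diag(C).mean():.3f}, "
      f"std of pairwise covariances {C[np.triu_indices(N, 1)].std():.4f}")
# As g -> 1 (the strongly recurrent regime), the distribution of pairwise
# covariances broadens even though mean coupling statistics are unchanged.
```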
Are place cells just memory cells? Probably yes
Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual "place cells" fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation. These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.
A Better Method to Quantify Perceptual Thresholds: Parameter-free, Model-free, Adaptive procedures
The ‘quantification’ of perception is arguably both one of the most important and most difficult aspects of perception study. This is particularly true in visual perception, in which the evaluation of the perceptual threshold is a pillar of the experimental process. The choice of the correct adaptive psychometric procedure, as well as the selection of the proper parameters, is a difficult but key aspect of the experimental protocol. For instance, Bayesian methods such as QUEST require the a priori choice of a family of functions (e.g. Gaussian), which is rarely known before the experiment, as well as the specification of multiple parameters. Importantly, the choice of an ill-fitted function or parameters will induce costly mistakes and errors in the experimental process. In this talk we discuss the existing methods and introduce a new adaptive procedure to solve this problem, named ZOOM (Zooming Optimistic Optimization of Models), based on recent advances in optimization and statistical learning. Compared to existing approaches, ZOOM is completely parameter-free and model-free, i.e. it can be applied to any arbitrary psychometric problem. Moreover, ZOOM's parameters are self-tuned and thus do not need to be chosen manually using heuristics (e.g., the step size in the staircase method), preventing further errors. Finally, ZOOM is based on state-of-the-art optimization theory, providing strong mathematical guarantees that are missing from many of its alternatives, while being the most accurate and robust in real-life conditions. In our experiments and simulations, ZOOM was found to be significantly better than its alternatives, in particular for difficult psychometric functions or when parameters were not properly chosen. ZOOM is open source, and its implementation is freely available on the web. Given these advantages and its ease of use, we argue that ZOOM can improve the process of many psychophysics experiments.
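For context, the sketch below implements one of the classical baselines the talk argues against: a 1-up/2-down staircase run on a simulated observer. Its hand-picked step size is precisely the kind of parameter ZOOM is designed to eliminate (all values here are illustrative).

```python
# A classical 1-up/2-down staircase on a simulated observer.
import numpy as np

rng = np.random.default_rng(3)

def simulated_observer(intensity, threshold=0.5, slope=10.0):
    """Probability of a correct response rises sigmoidally with intensity."""
    p = 1.0 / (1.0 + np.exp(-slope * (intensity - threshold)))
    return rng.random() < p

level, step = 1.0, 0.05          # the step size must be hand-picked
correct_streak, last_direction = 0, None
reversals = []
while len(reversals) < 12:
    if simulated_observer(level):
        correct_streak += 1
        if correct_streak == 2:              # two correct -> make task harder
            correct_streak = 0
            if last_direction == +1:
                reversals.append(level)      # direction flipped: a reversal
            level -= step
            last_direction = -1
    else:                                    # one error -> make task easier
        correct_streak = 0
        if last_direction == -1:
            reversals.append(level)
        level += step
        last_direction = +1

# 1-up/2-down converges near the 70.7%-correct point of the curve.
print(f"estimated threshold ~ {np.mean(reversals[2:]):.3f}")
```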
Bridging the gap between artificial models and cortical circuits
Artificial neural networks simplify complex biological circuits into tractable models for computational exploration and experimentation. However, the simplification of artificial models also undermines their applicability to real brain dynamics. Typical efforts to address this mismatch add complexity to increasingly unwieldy models. Here, we take a different approach; by reducing the complexity of a biological cortical culture, we aim to distil the essential factors of neuronal dynamics and plasticity. We leverage recent advances in growing neurons from human induced pluripotent stem cells (hiPSCs) to analyse ex vivo cortical cultures with only two distinct excitatory and inhibitory neuron populations. Over 6 weeks of development, we record from thousands of neurons using high-density microelectrode arrays (HD-MEAs) that allow access to individual neurons and the broader population dynamics. We compare these dynamics to two-population artificial networks of single-compartment neurons with random sparse connections and show that they produce similar dynamics. Specifically, our model captures the firing and bursting statistics of the cultures. Moreover, tightly integrating models and cultures allows us to evaluate the impact of changing architectures over weeks of development, with and without external stimuli. Broadly, the use of simplified cortical cultures enables us to apply the repertoire of theoretical neuroscience techniques established over the past decades on artificial network models. Our approach of deriving neural networks from human cells also allows us, for the first time, to directly compare the neural dynamics of disease and control. We found that cultures from epilepsy patients, for example, tended to show increasingly more avalanches of synchronous activity over weeks of development, in contrast to the control cultures. Next, we will test possible interventions, in silico and in vitro, in a drive for personalised approaches to medical care. This work starts to bridge an important theoretical-experimental neuroscience gap, advancing our understanding of mammalian neuron dynamics.
Exploiting sensory statistics in decision making
Learning static and dynamic mappings with local self-supervised plasticity
Animals exhibit remarkable learning capabilities with little direct supervision. Likewise, self-supervised learning is an emergent paradigm in artificial intelligence, closing the performance gap to supervised learning. In the context of biology, self-supervised learning corresponds to a setting where one sense or specific stimulus may serve as a supervisory signal for another. After learning, the latter can be used to predict the former. On the implementation level, it has been demonstrated that such predictive learning can occur at the single-neuron level, in compartmentalized neurons that separate and associate information from different streams. We demonstrate the power of such self-supervised learning over unsupervised (Hebb-like) learning rules, which depend heavily on stimulus statistics, in two examples. First, in the context of animal navigation, predictive learning can associate internal self-motion information, which is always available to the animal, with external visual landmark information, leading to accurate path integration in the dark. We focus on the well-characterized fly head direction system and show that our setting learns a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and where the network remaps to integrate with different gains. Second, we show that incorporating global gating by reward prediction errors allows the same setting to learn conditioning at the neuronal level with mixed selectivity. At its core, conditioning entails associating a neural activity pattern induced by an unconditioned stimulus (US) with the pattern arising in response to a conditioned stimulus (CS). Solving the generic problem of pattern-to-pattern associations naturally leads to emergent cognitive phenomena like blocking, overshadowing, saliency effects, extinction, interstimulus interval effects, etc. Surprisingly, we find that the same network offers a reductionist mechanism for causal inference by resolving the post hoc, ergo propter hoc fallacy.
Probabilistic computation in natural vision
A central goal of vision science is to understand the principles underlying the perception and neural coding of the complex visual environment of our everyday experience. In the visual cortex, foundational work with artificial stimuli, and more recent work combining natural images and deep convolutional neural networks, have revealed much about the tuning of cortical neurons to specific image features. However, a major limitation of this existing work is its focus on single-neuron response strength to isolated images. First, during natural vision, the inputs to cortical neurons are not isolated but rather embedded in a rich spatial and temporal context. Second, the full structure of population activity—including the substantial trial-to-trial variability that is shared among neurons—determines encoded information and, ultimately, perception. In the first part of this talk, I will argue for a normative approach to study encoding of natural images in primary visual cortex (V1), which combines a detailed understanding of the sensory inputs with a theory of how those inputs should be represented. Specifically, we hypothesize that V1 response structure serves to approximate a probabilistic representation optimized to the statistics of natural visual inputs, and that contextual modulation is an integral aspect of achieving this goal. I will present a concrete computational framework that instantiates this hypothesis, and data recorded using multielectrode arrays in macaque V1 to test its predictions. In the second part, I will discuss how we are leveraging this framework to develop deep probabilistic algorithms for natural image and video segmentation.
A Panoramic View on Vision
Statistics of natural scenes are not uniform - their structure varies dramatically from ground to sky. It remains unknown whether these non-uniformities are reflected in the large-scale organization of the early visual system and what benefits such adaptations would confer. By deploying an efficient coding argument, we predict that changes in the structure of receptive fields across visual space increase the efficiency of sensory coding. To test this experimentally, we developed a simple, novel imaging system that is indispensable for studies at this scale. In agreement with our predictions, we could show that receptive fields of retinal ganglion cells change their shape along the dorsoventral axis, with a marked surround asymmetry at the visual horizon. Our work demonstrates that, in accordance with principles of efficient coding, the panoramic structure of natural scenes is exploited by the retina across space and cell types.
Taming chaos in neural circuits
Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing-rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality, computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, thus a high speed by which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size. This shows that uncorrelated inputs facilitate learning in balanced networks. The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
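The largest Lyapunov exponent invoked here can be estimated numerically by tracking the divergence of two nearby trajectories. Below is a sketch for the classic random rate network dx/dt = -x + g J tanh(x), a standard setup rather than the speaker's mean-field calculation.

```python
# Estimate the largest Lyapunov exponent of a random rate network by
# following two nearby trajectories and renormalizing their separation.
import numpy as np

rng = np.random.default_rng(4)
N, g, dt, steps = 200, 1.5, 0.01, 20000   # g > 1: chaotic regime
J = rng.normal(0, 1 / np.sqrt(N), size=(N, N))

def f(x):
    return -x + g * (J @ np.tanh(x))

x = rng.normal(size=N)
y = x + 1e-8 * rng.normal(size=N)          # perturbed twin trajectory
d0 = np.linalg.norm(x - y)
log_growth = 0.0
for _ in range(steps):
    x = x + dt * f(x)                      # Euler integration
    y = y + dt * f(y)
    d = np.linalg.norm(x - y)
    log_growth += np.log(d / d0)
    y = x + (d0 / d) * (y - x)             # renormalize the separation

lyap = log_growth / (steps * dt)
print(f"largest Lyapunov exponent ~ {lyap:.3f} (positive -> chaos)")
```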
From natural scene statistics to multisensory integration: experiments, models and applications
To efficiently process sensory information, the brain relies on statistical regularities in the input. While generally improving the reliability of sensory estimates, this strategy also induces perceptual illusions that help reveal the underlying computational principles. Focusing on auditory and visual perception, in my talk I will describe how the brain exploits statistical regularities within and across the senses for the perception of space and time and for multisensory integration. In particular, I will show how results from a series of psychophysical experiments can be interpreted in the light of Bayesian Decision Theory, and I will demonstrate how such canonical computations can be implemented in simple and biologically plausible neural circuits. Finally, I will show how such principles of sensory information processing can be leveraged in virtual and augmented reality to overcome display limitations and expand human perception.
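The canonical computation at the heart of such Bayesian accounts is reliability-weighted cue combination. For two Gaussian cues, say an auditory and a visual estimate of the same quantity, the textbook result is:

```latex
% Reliability-weighted cue combination for two Gaussian cues,
% e.g. an auditory estimate s_A and a visual estimate s_V.
\[
  \hat{s} = w_A \hat{s}_A + w_V \hat{s}_V,
  \qquad
  w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2},
  \qquad
  w_V = 1 - w_A,
\]
\[
  \sigma_{\mathrm{combined}}^2
  = \frac{\sigma_A^2 \sigma_V^2}{\sigma_A^2 + \sigma_V^2}
  \le \min\!\left(\sigma_A^2, \sigma_V^2\right).
\]
```

Combining cues thus always reduces variance, with the heavier weight going to the more reliable cue; systematic deviations from these weights in psychophysical data are what reveal the underlying computations.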
Network mechanisms underlying representational drift in area CA1 of hippocampus
Recent chronic imaging experiments in mice have revealed that the hippocampal code exhibits non-trivial turnover dynamics over long time scales. Specifically, the subset of cells which are active on any given session in a familiar environment changes over the course of days and weeks. While some cells transition into or out of the code after a few sessions, others are stable over the entire experiment. The mechanisms underlying this turnover are unknown. Here we show that the statistics of turnover are consistent with a model in which non-spatial inputs to CA1 pyramidal cells readily undergo plasticity, while spatially tuned inputs are largely stable over time. The heterogeneity in stability across the cell assembly, as well as the decrease in correlation of the population vector of activity over time, are both quantitatively fit by a simple model with Gaussian input statistics. In fact, such input statistics emerge naturally in a network of spiking neurons operating in the fluctuation-driven regime. This correspondence allows one to map the parameters of a large-scale spiking network model of CA1 onto the simple statistical model, and thereby fit the experimental data quantitatively. Importantly, we show that the observed drift is entirely consistent with random, ongoing synaptic turnover. This synaptic turnover is, in turn, consistent with Hebbian plasticity related to continuous learning in a fast memory system.
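A minimal sketch of a threshold-Gaussian turnover model of the kind described, with illustrative parameters rather than the values fitted in the talk:

```python
# Threshold-Gaussian drift sketch: a cell is active in a session when its
# stable (spatially tuned) input plus a slowly drifting non-spatial input
# exceeds a threshold; the active subset then turns over across sessions.
import numpy as np

rng = np.random.default_rng(5)
n_cells, n_sessions, theta = 5000, 20, 1.0
rho = 0.9                                  # session-to-session drift correlation

stable = rng.normal(size=n_cells)          # spatially tuned input (fixed)
drift = rng.normal(size=n_cells)           # non-spatial input (plastic)
active = []
for _ in range(n_sessions):
    active.append(stable + drift > theta)
    # AR(1) update keeps the drifting input Gaussian with unit variance.
    drift = rho * drift + np.sqrt(1 - rho**2) * rng.normal(size=n_cells)

overlap = [np.mean(active[0] & a) / np.mean(active[0]) for a in active]
print("fraction of session-0 cells still active:", np.round(overlap, 2))
# Cells with large stable input stay in the code across all sessions;
# cells near threshold turn over -- heterogeneous stability from one rule.
```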
A precise and adaptive neural mechanism for predictive temporal processing in the frontal cortex
The theory of predictive processing posits that the brain computes expectations to process information predictively. Empirical evidence in support of this theory, however, is scarce and largely limited to sensory areas. Here, we report a precise and adaptive mechanism in the frontal cortex of non-human primates consistent with predictive processing of temporal events. We found that the speed of neural dynamics is precisely adjusted according to the average time of an expected stimulus. This speed adjustment, in turn, enables neurons to encode stimuli in terms of deviations from expectation. This lawful relationship was evident across multiple experiments and held true during learning: when temporal statistics underwent covert changes, neural responses underwent predictable changes that reflected the new mean. Together, these results highlight a precise mathematical relationship between temporal statistics in the environment and neural activity in the frontal cortex that may serve as a mechanism for predictive temporal processing.
A nonlinear shot noise model for calcium-based synaptic plasticity
Activity-dependent synaptic plasticity is considered to be a primary mechanism underlying learning and memory. Yet it is unclear whether plasticity rules such as STDP measured in vitro apply in vivo. Network models with STDP predict that activity patterns (e.g., place-cell spatial selectivity) should change much faster than observed experimentally. We address this gap by investigating a nonlinear calcium-based plasticity rule fit to experiments done in physiological conditions. In this model, LTP and LTD result from intracellular calcium transients arising almost exclusively from synchronous coactivation of pre- and postsynaptic neurons. We analytically approximate the full distribution of nonlinear calcium transients as a function of pre- and postsynaptic firing rates and temporal correlations. This analysis directly relates activity statistics that can be measured in vivo to the changes in synaptic efficacy they cause. Our results highlight that both high firing rates and temporal correlations can lead to significant changes to synaptic efficacy. Using a mean-field theory, we show that the nonlinear plasticity rule, without any fine-tuning, gives a stable, unimodal synaptic weight distribution characterized by many strong synapses which remain stable over long periods of time, consistent with electrophysiological and behavioral studies. Moreover, our theory explains how memories encoded by strong synapses can be preferentially stabilized by the plasticity rule. We confirmed our analytical results in a spiking recurrent network. Interestingly, although most synapses are weak and undergo rapid turnover, the fraction of strong synapses is sufficient to support realistic spiking dynamics and serves to maintain the network’s cluster structure. Our results provide a mechanistic understanding of how stable memories may emerge on the behavioral level from an STDP rule measured in physiological conditions. Furthermore, the plasticity rule we investigate is mathematically equivalent to other learning rules which rely on the statistics of coincidences, so we expect that our formalism will be useful to study other learning processes beyond the calcium-based plasticity rule.
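A toy simulation of the coincidence-driven ("nonlinear shot noise") ingredient described above: calcium jumps occur mainly when pre- and postsynaptic spikes nearly coincide and then decay exponentially, and the time spent above LTD/LTP thresholds is the statistic that drives plasticity. All parameters are illustrative, not the fitted rule from the talk.

```python
# Coincidence-driven calcium transients from Poisson pre/post spike trains.
import numpy as np

rng = np.random.default_rng(6)
T, dt, tau = 100.0, 1e-3, 0.02             # duration (s), step (s), decay (s)
rate_pre, rate_post = 10.0, 10.0           # firing rates (Hz)
window = 0.01                              # coincidence window (s)
theta_d, theta_p = 0.6, 1.3                # LTD / LTP thresholds

n = int(T / dt)
pre = rng.random(n) < rate_pre * dt        # Poisson spike trains
post = rng.random(n) < rate_post * dt

w = int(window / dt)
recent_pre = np.convolve(pre.astype(float), np.ones(w), mode="same") > 0

ca, above_d, above_p = 0.0, 0, 0
for i in range(n):
    if post[i] and recent_pre[i]:
        ca += 1.0                          # jump only on near-coincidence
    ca *= np.exp(-dt / tau)
    above_d += ca > theta_d
    above_p += ca > theta_p

print(f"fraction of time above LTD threshold: {above_d / n:.4f}")
print(f"fraction of time above LTP threshold: {above_p / n:.4f}")
# A single coincidence clears the LTD threshold; crossing the LTP threshold
# requires coincidences arriving within ~tau of each other, so higher rates
# and pre/post correlations disproportionately favor LTP -- the nonlinearity.
```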
NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics
Recently, the field of computational neuroscience has seen an explosion of the use of trained recurrent network models (RNNs) to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events - spikes - that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain to account for complex temporal dependencies in the biological single neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties - as this was shown to accurately model the response of biological neurons - while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics that are driven by the mean signal received by the neural population, and dynamics driven by the variance of the input across neurons, with respective timescales that are in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo. Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases if used in an RNN to solve specific tasks.
Target detection in the natural world
Animal sensory systems are optimally adapted to those features typically encountered in natural surrounds, thus allowing neurons that have a limited bandwidth to encode almost impossibly large input ranges. Importantly, natural scenes are not random, and peripheral visual systems have therefore evolved to reduce the predictable redundancy. The vertebrate visual cortex is also optimally tuned to the spatial statistics of natural scenes, but much less is known about how the insect brain responds to these. We are redressing this deficiency using several techniques. Olga Dyakova uses exquisite image manipulation to give natural images unnatural image statistics, or vice versa. Marissa Holden then uses these images as stimuli in electrophysiological recordings of neurons in the fly optic lobes, to see how the brain codes for the statistics typically encountered in natural scenes, and Olga Dyakova measures the behavioral optomotor response on our trackball set-up.
Deriving local synaptic learning rules for efficient representations in networks of spiking neurons
How can neural networks learn to efficiently represent complex and high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that input weights are learned via pairwise Hebbian-like plasticity. Here, we show that pairwise Hebbian-like plasticity only works under specific requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. Here, inhibition learns to locally balance excitatory input in individual dendritic compartments, and thereby can modulate excitatory synaptic plasticity to learn efficient representations. We demonstrate in simulations that this learning scheme works robustly even for complex, high-dimensional and correlated inputs. It also works in the presence of inhibitory transmission delays, where Hebbian-like plasticity typically fails. Our results draw a direct connection between dendritic excitatory-inhibitory balance and voltage-dependent synaptic plasticity as observed in vivo, and suggest that both are crucial for representation learning.
Demystifying the richness of visual perception
Human vision is full of puzzles. Observers can grasp the essence of a scene in an instant, yet when probed for details they are at a loss. People have trouble finding their keys, yet they may be quite visible once found. How does one explain this combination of marvelous successes with quirky failures? I will describe our attempts to develop a unifying theory that brings a satisfying order to multiple phenomena. One key is to understand peripheral vision. A visual system cannot process everything with full fidelity, and therefore must lose some information. Peripheral vision must condense a mass of information into a succinct representation that nonetheless carries the information needed for vision at a glance. We have proposed that the visual system deals with limited capacity in part by representing its input in terms of a rich set of local image statistics, where the local regions grow — and the representation becomes less precise — with distance from fixation. This scheme trades off computation of sophisticated image features at the expense of spatial localization of those features. What are the implications of such an encoding scheme? Critical to our understanding has been the use of methodologies for visualizing the equivalence classes of the model. These visualizations allow one to quickly see that many of the puzzles of human vision may arise from a single encoding mechanism. They have suggested new experiments and predicted unexpected phenomena. Furthermore, visualization of the equivalence classes has facilitated the generation of testable model predictions, allowing us to study the effects of this relatively low-level encoding on a wide range of higher-level tasks. Peripheral vision helps explain many of the puzzles of vision, but some remain. By examining the phenomena that cannot be explained by peripheral vision, we gain insight into the nature of additional capacity limits in vision. In particular, I will suggest that decision processes face general-purpose limits on the complexity of the tasks they can perform at a given time.
Encoding and perceiving the texture of sounds: auditory midbrain codes for recognizing and categorizing auditory texture and for listening in noise
Natural soundscapes such as from a forest, a busy restaurant, or a busy intersection are generally composed of a cacophony of sounds that the brain needs to interpret either independently or collectively. In certain instances sounds - such as from moving cars, sirens, and people talking - are perceived in unison and recognized collectively as a single sound (e.g., city noise). In other instances, such as for the cocktail party problem, multiple sounds compete for attention so that the surrounding background noise (e.g., speech babble) interferes with the perception of a single sound source (e.g., a single talker). I will describe results from my lab on the perception and neural representation of auditory textures. Textures, such as from a babbling brook, restaurant noise, or speech babble, are stationary sounds consisting of multiple independent sound sources that can be quantitatively defined by the summary statistics of an auditory model (McDermott & Simoncelli 2011). How and where in the auditory system summary statistics are represented, and which neural codes contribute to their perception, however, remain largely unknown. Using high-density multi-channel recordings from the auditory midbrain of unanesthetized rabbits and complementary perceptual studies on human listeners, I will first describe neural and perceptual strategies for encoding and perceiving auditory textures. I will demonstrate how distinct statistics of sounds, including the sound spectrum and high-order statistics related to the temporal and spectral correlation structure of sounds, contribute to texture perception and are reflected in neural activity. Using decoding methods I will then demonstrate how various low- and high-order neural response statistics can differentially contribute to a variety of auditory tasks including texture recognition, discrimination, and categorization. Finally, I will show examples from our recent studies of how high-order sound statistics and the accompanying neural activity underlie difficulties in recognizing speech in background noise.
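A much-simplified sketch in the spirit of the McDermott & Simoncelli summary statistics: bandpass the sound, extract Hilbert envelopes, and keep per-band envelope means, contrasts, and cross-band correlations. The band edges and filter choices here are assumptions, not the published model:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def texture_stats(x, fs, bands=((100, 300), (300, 900), (900, 2700), (2700, 6000))):
    """Simplified texture summary statistics: per-band envelope mean,
    coefficient of variation (contrast), and cross-band envelope correlations."""
    envs = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        envs.append(np.abs(hilbert(sosfiltfilt(sos, x))))
    E = np.array(envs)
    mean = E.mean(axis=1)
    cv = E.std(axis=1) / mean          # envelope contrast per band
    corr = np.corrcoef(E)              # cross-band envelope correlations
    return mean, cv, corr

fs = 16000
sound = np.random.randn(2 * fs)        # white-noise stand-in for a texture
mu, cv, C = texture_stats(sound, fs)
print(mu.round(3), cv.round(3))
```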
SimBA for Behavioral Neuroscientists
Several excellent computational frameworks enable high-throughput and consistent tracking of freely moving, unmarked animals. SimBA introduces and distributes a plug-and-play pipeline that enables users to combine these pose-estimation approaches with behavioral annotation to generate supervised machine-learning behavioral classifiers. SimBA was developed for the analysis of complex social behaviors, but is flexible enough for users to generate predictive classifiers for other behavioral modalities with minimal effort and no specialized computational background. SimBA also offers a variety of extended functions for large-scale batch video pre-processing, generating descriptive statistics from movement features, and interactive modules for user-defined regions of interest and for visualizing classification probabilities and movement patterns.
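SimBA itself is operated through a graphical interface; the sketch below is not its API, but it shows the underlying supervised-classification idea on hypothetical pose-derived features, using scikit-learn:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical per-frame features derived from pose estimation
# (inter-animal distances, speeds, angles, ...) with human annotations.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 12))        # pose-derived feature matrix (placeholder)
y = rng.integers(0, 2, size=5000)      # 1 = annotated behavior present (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
# clf.predict_proba(X_te) gives per-frame behavior probabilities, the kind of
# output one might visualize alongside the video.
```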
Understanding neural dynamics in high dimensions across multiple timescales: from perception to motor control and learning
Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment by moment collective dynamics of the brain instantiates learning and cognition. However, efficiently extracting such a conceptual understanding from large, high dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high dimensional statistics and deep learning can aid us in this process. In particular we will discuss: (1) how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single trial circuit dynamics change slowly over many trials to mediate learning; (2) how to tradeoff very different experimental resources, like numbers of recorded neurons and trials to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; (3) deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; (4) algorithmic approaches for simplifying deep network models of perception; (5) optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
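A minimal sketch of point (1) using the tensorly library (assumed available); the data here are a random placeholder for a neurons x within-trial time x trials array:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical trial-aligned data: neurons x time x trials
rng = np.random.default_rng(0)
data = rng.normal(size=(80, 120, 200))

# Rank-5 tensor component analysis (CP decomposition): each component has a
# neuron factor, a within-trial time factor, and an across-trial factor, so
# slow drift in the third factor can reflect learning across trials.
weights, (neurons, times, trials) = parafac(tl.tensor(data), rank=5,
                                            normalize_factors=True)
print(neurons.shape, times.shape, trials.shape)   # (80, 5) (120, 5) (200, 5)
```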
Inclusive Data Science
A single person can be the source of billions of data points, whether these are generated from everyday internet use, healthcare records, wearable sensors or participation in experimental research. This vast amount of data can be used to make predictions about people and systems: what is the probability this person will develop diabetes in the next year? Will commit a crime? Will be a good employee? Is of a particular ethnicity? Predictions are simply represented by a number, produced by an algorithm. A single number in itself is not biased. How that number was generated, interpreted and subsequently used are all processes deeply susceptible to human bias and prejudices. This session will explore a philosophical perspective of data ethics and discuss practical steps to reducing statistical bias. There will be opportunity in the last section of the session for attendees to discuss and troubleshoot ethical questions from their own analyses in a ‘Data Clinic’.
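As one concrete, deliberately limited example of a practical bias check, the sketch below computes the gap in positive-prediction rates between groups (demographic parity). As the abstract stresses, such a number is only a starting point for interpretation, not a verdict:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates across groups: one simple
    (and limited) check for statistical bias in a predictive model."""
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    return rates, max(rates.values()) - min(rates.values())

y_pred = np.random.binomial(1, 0.3, 1000)      # placeholder model predictions
group = np.random.choice(["A", "B"], 1000)     # placeholder protected attribute
rates, gap = demographic_parity_gap(y_pred, group)
print(rates, "gap:", round(gap, 3))
```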
Advances in Computational Psychiatry: Understanding (cognitive) control as a network process
The human brain is a complex organ characterized by heterogeneous patterns of interconnections. Non-invasive imaging techniques now allow for these patterns to be carefully and comprehensively mapped in individual humans, paving the way for a better understanding of how wiring supports cognitive processes. While a large body of work now focuses on descriptive statistics to characterize these wiring patterns, a critical open question lies in how the organization of these networks constrains the potential repertoire of brain dynamics. In this talk, I will describe an approach for understanding how perturbations to brain dynamics propagate through complex wiring patterns, driving the brain into new states of activity. Drawing on a range of disciplinary tools – from graph theory to network control theory and optimization – I will identify control points in brain networks and characterize trajectories of brain activity states following perturbation to those points. Finally, I will describe how these computational tools and approaches can be used to better understand the brain's intrinsic control mechanisms and their alterations in psychiatric conditions.
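One common quantity in this literature is a node's average controllability: the trace of the controllability Gramian when input is injected at that node alone. The sketch below follows the standard recipe (discrete-time linear dynamics, a stabilizing normalization of the connectivity matrix); the toy matrix is a placeholder for a measured connectome:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def average_controllability(A):
    """Trace of the controllability Gramian for control injected at each node
    separately, for the discrete-time system x(t+1) = A x(t) + B u(t)."""
    A = A / (1 + np.abs(np.linalg.eigvals(A)).max())   # make A Schur-stable
    n = A.shape[0]
    ac = np.empty(n)
    for i in range(n):
        B = np.zeros((n, 1)); B[i] = 1.0
        W = solve_discrete_lyapunov(A, B @ B.T)        # A W A^T - W + B B^T = 0
        ac[i] = np.trace(W)
    return ac

A = np.random.rand(20, 20); A = (A + A.T) / 2          # toy symmetric "connectome"
print(average_controllability(A).round(3))
```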
A theory for Hebbian learning in recurrent E-I networks
The Stabilized Supralinear Network is a model of recurrently connected excitatory (E) and inhibitory (I) neurons with a supralinear input-output relation. It can explain cortical computations such as response normalization and inhibitory stabilization. However, the network's connectivity is designed by hand, based on experimental measurements. How the recurrent synaptic weights can be learned from the sensory input statistics in a biologically plausible way is unknown. Earlier theoretical work on plasticity focused on single neurons and the balance of excitation and inhibition but did not consider the simultaneous plasticity of recurrent synapses and the formation of receptive fields. Here we present a recurrent E-I network model where all synaptic connections are simultaneously plastic, and E neurons self-stabilize by recruiting co-tuned inhibition. Motivated by experimental results, we employ a local Hebbian plasticity rule with multiplicative normalization for E and I synapses. We develop a theoretical framework that explains how plasticity enables inhibition-balanced excitatory receptive fields that match experimental results. We show analytically that sufficiently strong inhibition allows neurons' receptive fields to decorrelate and distribute themselves across the stimulus space. For strong recurrent excitation, the network becomes stabilized by inhibition, which prevents unconstrained self-excitation. In this regime, external inputs integrate sublinearly. As in the Stabilized Supralinear Network, this results in response normalization and winner-takes-all dynamics: when two competing stimuli are presented, the network response is dominated by the stronger stimulus while the weaker stimulus is suppressed. In summary, we present a biologically plausible theoretical framework to model plasticity in fully plastic recurrent E-I networks. While the connectivity is derived from the sensory input statistics, the circuit performs meaningful computations. Our work provides a mathematical framework for plasticity in recurrent networks, which had previously been studied only numerically, and can serve as the basis for a new generation of brain-inspired unsupervised machine learning algorithms.
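A toy sketch of the two ingredients named in the abstract, a local Hebbian update with multiplicative normalization, but with competition hard-coded by mean subtraction rather than learned through inhibitory plasticity as in the actual model; inputs and rates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_e, eta = 40, 10, 1e-2
W = rng.uniform(0, 1, (n_e, n_in))
W /= W.sum(axis=1, keepdims=True)              # start normalized

for t in range(5000):
    x = rng.poisson(1.0, n_in).astype(float)
    x[rng.integers(0, n_in)] += 5.0            # a random "feature" bump
    drive = W @ x
    y = np.maximum(drive - drive.mean(), 0)    # crude, hard-coded competition
    W += eta * np.outer(y, x)                  # local Hebbian update
    W /= W.sum(axis=1, keepdims=True)          # multiplicative normalization

print("preferred input per neuron:", W.argmax(axis=1))  # neurons specialize
```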
The neuroscience of color and what makes primates special
Among mammals, excellent color vision has evolved only in certain non-human primates. And yet, color is often assumed to be just a low-level stimulus feature with a modest role in encoding and recognizing objects. The rationale for this dogma is compelling: object recognition is excellent in grayscale images (consider black-and-white movies, where faces, places, objects, and story are readily apparent). In my talk I will discuss experiments in which we used color as a tool to uncover an organizational plan in inferior temporal cortex (parallel, multistage processing for places, faces, colors, and objects) and a visual-stimulus functional representation in prefrontal cortex (PFC). The discovery of an extensive network of color-biased domains within IT and PFC, regions implicated in high-level object vision and executive functions, compels a re-evaluation of the role of color in behavior. I will discuss behavioral studies prompted by the neurobiology that uncover a universal principle for color categorization across languages, the first systematic study of the color statistics of objects and a chromatic mechanism by which the brain may compute animacy, and a surprising paradoxical impact of memory on face color. Taken together, my talk will put forward the argument that color is not primarily for object recognition, but rather for the assessment of the likely behavioral relevance, or meaning, of the stuff we see.
Acoustically Levitated Granular Matter
Granular matter can serve as a prototype for exploring the rich physics of many-body systems driven far from equilibrium. This talk will outline a new direction for granular physics with macroscopic particles, where acoustic levitation compensates the forces due to gravity and eliminates frictional interactions with supporting surfaces in order to focus on particle interactions. Levitating small particles by intense ultrasound fields in air makes it possible to manipulate and control their positions and assemble them into larger aggregates. The small viscosity of air implies that the regime of underdamped dynamics can be explored, where inertial effects are important, in contrast to typical colloids in a liquid, where inertia can be neglected. Sound scattered off individual, levitated solid particles gives rise to controllable attractive forces with neighboring particles. I will discuss some of the key concepts underlying acoustic levitation, describe how detuning an acoustic cavity can introduce active fluctuations that control the assembly statistics of clusters of small levitated particles, and give examples of how interactions between neighboring levitated objects can be controlled by their shape.
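For orientation, the single-particle physics can be summarized by the textbook Gor'kov potential of a small sphere in a standing wave; the sketch below evaluates that expression for assumed polystyrene-in-air parameters. It does not capture the cavity detuning or inter-particle scattering discussed in the talk:

```python
import numpy as np

# Gor'kov potential of a small rigid sphere in a 1-D standing wave in air.
rho0, c0 = 1.2, 343.0           # air density (kg/m^3) and sound speed (m/s)
rhop, cp = 1050.0, 2350.0       # assumed particle (polystyrene) properties
R, p0, f = 1e-3, 3e3, 40e3      # radius (m), pressure amplitude (Pa), 40 kHz
k = 2 * np.pi * f / c0

f1 = 1 - rho0 * c0**2 / (rhop * cp**2)       # monopole contrast factor
f2 = 2 * (rhop - rho0) / (2 * rhop + rho0)   # dipole contrast factor

z = np.linspace(0, c0 / f, 500)              # one wavelength
p2 = 0.5 * (p0 * np.cos(k * z))**2           # time-averaged p^2
v2 = 0.5 * (p0 / (rho0 * c0) * np.sin(k * z))**2
U = 2 * np.pi * R**3 * (f1 * p2 / (3 * rho0 * c0**2) - f2 * rho0 * v2 / 2)
print("trap position z =", z[np.argmin(U)])  # dense particles trap at pressure nodes
```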
Glassy phase in dynamically balanced networks
We study the dynamics of (inhibitory) balanced networks while varying (i) the level of symmetry in the synaptic connectivity and (ii) the variance of the synaptic efficacies (synaptic gain). We find three regimes of activity. For suitably low synaptic gain, regardless of the level of symmetry, there exists a unique stable fixed point. Using a cavity-like approach, we develop a quantitative theory that describes the statistics of the activity at this unique fixed point, and the conditions for its stability. As the synaptic gain increases, the unique fixed point destabilizes, and the network exhibits chaotic activity for zero or negative levels of symmetry (i.e., random or antisymmetric connectivity). For positive levels of symmetry, instead, there is multi-stability among a large number of marginally stable fixed points. In this regime, ergodicity is broken and the network exhibits non-exponential relaxational dynamics. We discuss the potential relevance of such a “glassy” phase for explaining some features of cortical activity.
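A sketch of the basic construction: random couplings with a tunable symmetry level eta, so that Corr(J_ij, J_ji) = eta, and gain g, integrated with simple rate dynamics. The sign constraints of a genuinely inhibitory network are ignored in this toy version:

```python
import numpy as np

def coupling_matrix(n, g, eta, rng):
    """Random couplings with Corr(J_ij, J_ji) = eta and std g/sqrt(n)."""
    a = rng.normal(size=(n, n))
    sym = (a + a.T) / np.sqrt(2)               # symmetric part, unit variance
    asym = (a - a.T) / np.sqrt(2)              # antisymmetric part, unit variance
    J = np.sqrt((1 + eta) / 2) * sym + np.sqrt((1 - eta) / 2) * asym
    np.fill_diagonal(J, 0.0)
    return g * J / np.sqrt(n)

rng = np.random.default_rng(0)
n, g, eta, dt = 500, 2.0, 0.8, 0.05
J = coupling_matrix(n, g, eta, rng)
x = rng.normal(size=n)
for _ in range(4000):                          # rate dynamics dx/dt = -x + J phi(x)
    x += dt * (-x + J @ np.tanh(x))
print("activity std at end of run:", np.tanh(x).std().round(3))
```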
Mice alternate between discrete strategies during perceptual decision-making
Classical models of perceptual decision-making assume that animals use a single, consistent strategy to integrate sensory evidence and form decisions during an experiment. In this talk, I aim to convince you that this common view is incorrect. I will show results from applying a latent variable framework, the “GLM-HMM”, to hundreds of thousands of trials of mouse choice data. Our analysis reveals that mice don’t lapse. Instead, mice switch back and forth between engaged and disengaged behavior within a single session, and each mode of behavior lasts tens to hundreds of trials.
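A minimal sketch of the GLM-HMM's state inference with hand-set parameters (the actual model is fit by EM on real choice data). The two states mimic an 'engaged', stimulus-driven decision rule and a 'disengaged', bias-dominated one:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def glm_hmm_state_posteriors(X, y, W, Pi, pi0):
    """Forward-backward state posteriors for a Bernoulli GLM-HMM:
    in state k, P(choice = 1 | x) = sigmoid(x @ W[k])."""
    T, K = len(y), len(pi0)
    p1 = sigmoid(X @ W.T)                          # T x K choice probabilities
    lik = np.where(y[:, None] == 1, p1, 1 - p1)    # per-state likelihoods
    alpha = np.zeros((T, K)); beta = np.ones((T, K))
    alpha[0] = pi0 * lik[0]; alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ Pi) * lik[t]; alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = Pi @ (lik[t + 1] * beta[t + 1]); beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = np.c_[rng.normal(size=500), np.ones(500)]      # [stimulus, bias] per trial
W = np.array([[4.0, 0.0],                          # "engaged": stimulus-driven
              [0.0, 0.5]])                         # "disengaged": bias only
Pi = np.array([[0.98, 0.02], [0.02, 0.98]])        # sticky state transitions
y = rng.binomial(1, sigmoid(X @ W[0]))             # toy choices (engaged-generated)
post = glm_hmm_state_posteriors(X, y, W, Pi, np.array([0.5, 0.5]))
print(post[:5].round(2))                           # should favor the engaged state
```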
A geometric framework to predict structure from function in neural networks
The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function. However, quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of rectified-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. We then use this analytical characterization to rigorously analyze the solution space geometry and derive certainty conditions guaranteeing a non-zero synapse between neurons.
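At the heart of the framework is a linear-algebra observation: when every specified response is positive, the rectification is inactive and the steady-state condition r = Wr + Fx is linear in the weights. A reduced sketch of that step (ignoring sign constraints and the characterization of the full solution space):

```python
import numpy as np

def weights_from_steady_states(R, X):
    """Solve r = W r + F x (steady state of a rectified-linear network in which
    all specified responses are positive) for one exact [W, F] by least squares.
    R: n x C specified responses; X: m x C inputs over the same C conditions."""
    n, C = R.shape
    A = np.vstack([R, X]).T                        # C x (n + m) design matrix
    WF, *_ = np.linalg.lstsq(A, R.T, rcond=None)   # row-wise regression
    WF = WF.T                                      # n x (n + m)
    return WF[:, :n], WF[:, n:]                    # W (recurrent), F (feedforward)

rng = np.random.default_rng(0)
n, m, C = 6, 4, 8                                  # C <= n + m: exact solutions exist
R = rng.uniform(0.1, 1.0, (n, C))                  # positive specified responses
X = rng.normal(size=(m, C))
W, F = weights_from_steady_states(R, X)
print(np.allclose(W @ R + F @ X, R))               # steady-state equations hold
```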
Stochastic control of passive colloidal objects by micro-swimmers
The way single colloidal objects behave in the presence of active forces arising from within the bulk of the system is crucial in many situations, notably biological and ecological ones (e.g. intra-cellular transport, predation), and for potential medical or environmental applications (e.g. targeted delivery of cargoes, depollution of waters and soils). In this talk I will present experimental findings that my collaborators and I have obtained over the past years on the dynamics of single Brownian colloids in suspensions of biological micro-swimmers, especially the green alga Chlamydomonas reinhardtii. Notably, I'll show that spatial heterogeneities and anisotropies in the statistics of the active particles can control the preferential localisation of their passive counterparts. The results will be rationalized using theoretical approaches from hydrodynamics and stochastic processes.
The Gist of False Memory
It has long been known that when viewing a set of images, we misjudge individual elements as being closer to the mean than they are (Hollingworth, 1910) and recall seeing the (absent) set mean (Deese, 1959; Roediger & McDermott, 1995). Recent studies found that viewing sets of images, simultaneously or sequentially, leads to perception of set statistics (mean, range) with poor memory for individual elements. Ensemble perception was found for sets of simple images (e.g. circles varying in size or brightness; lines of varying orientation), complex objects (e.g. faces of varying emotion), as well as for objects belonging to the same category. Even when the viewed set does not include its mean or prototype, observers report and act as if they have seen this central image or object – a form of false memory. Physiologically, detailed sensory information at cortical input levels is processed hierarchically to form an integrated scene gist at higher levels. However, we are aware of the gist before the details. We propose that images and objects belonging to a set or category are represented as their gist, mean or prototype, plus individual differences from that gist. Under constrained viewing conditions, only the gist is perceived and remembered. This theory also provides a basis for compressed neural representation. Extending this theory to scenes and episodes supplies a generalized basis for false memories: they seem right and match generalized expectations, so they are believed without challenging examination. This theory could be tested by analyzing the typicality of false memories, compared to rejected alternatives.
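A toy numerical illustration, not the authors' model, of why a gist-plus-deviations code yields false recognition of an unseen prototype:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 8
prototype = rng.normal(size=d)                      # true central object (never shown)
items = prototype + 0.5 * rng.normal(size=(n, d))   # studied set around the prototype

# Compressed memory: store the gist plus coarsely quantized deviations
gist = items.mean(axis=0)
residuals = np.sign(items - gist)                   # 1 bit per dimension (stored,
                                                    # unused by the signal below)
def familiarity(probe):
    return -np.linalg.norm(probe - gist)            # gist-based recognition signal

print("unseen prototype:", round(familiarity(prototype), 2))  # highest familiarity
print("studied item:   ", round(familiarity(items[0]), 2))
print("novel item:     ", round(familiarity(rng.normal(size=d)), 2))
```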
Time perception: how is our judgment of time influenced by regularity and change in stimulus distributions?
To organize various experiences in a coherent mental representation, we need to properly estimate the duration and temporal order of different events. Yet, our perception of time is noisy and vulnerable to various illusions. Studying these illusions can elucidate the mechanisms by which the brain perceives time. In this talk, I will review a few studies on how the brain perceives the duration of events and the temporal order between self-generated motion and sensory feedback. Combined with computational models at different levels, these experiments illustrate that the brain incorporates prior knowledge of the statistical distribution of stimulus durations, together with the decay of memory, when estimating the duration of an individual event, and adjusts its perception of temporal order to changes in the statistics of the environment.
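The duration findings are commonly captured by a Bayesian observer that weighs a noisy measurement against a prior learned from the stimulus distribution. The sketch below reproduces the resulting central-tendency bias; the memory-decay component mentioned in the abstract is omitted:

```python
import numpy as np

# Bayesian least-squares duration estimation with a Gaussian prior over
# durations: noisy measurements are pulled toward the prior mean.
mu_p, sig_p = 0.8, 0.2          # assumed prior over durations (s)
sig_m = 0.15                    # assumed measurement noise (s)

def estimate(t_true, rng):
    m = t_true + sig_m * rng.normal()            # noisy internal measurement
    w = sig_p**2 / (sig_p**2 + sig_m**2)         # reliability weighting
    return w * m + (1 - w) * mu_p                # posterior mean estimate

rng = np.random.default_rng(0)
for t in (0.5, 0.8, 1.1):
    est = np.mean([estimate(t, rng) for _ in range(2000)])
    print(f"true {t:.1f}s -> mean estimate {est:.2f}s")   # regression to the mean
```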
The geometry of abstraction in hippocampus and pre-frontal cortex
The curse of dimensionality plagues models of reinforcement learning and decision-making. The process of abstraction solves this by constructing abstract variables describing features shared by different specific instances, reducing dimensionality and enabling generalization in novel situations. Here we characterized neural representations in monkeys performing a task where a hidden variable described the temporal statistics of stimulus-response-outcome mappings. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training. This type of generalization requires a particular geometric format of neural representations. Neural ensembles in dorsolateral pre-frontal cortex, anterior cingulate cortex and hippocampus, and in simulated neural networks, simultaneously represented multiple hidden and explicit variables in a format reflecting abstraction. Task events engaging cognitive operations modulated this format. These findings elucidate how the brain and artificial systems represent abstract variables, variables critical for generalization that in turn confers cognitive flexibility.
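The operational definition used here, cross-condition generalization performance (CCGP), can be made concrete in a few lines: train a decoder on some task conditions and test it on conditions never used for training. A toy sketch with synthetic data standing in for the neural recordings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Cross-condition generalization: decode variable A after training only on
# trials where the other variable B takes one value, testing on the other.
# High held-out accuracy implies an "abstract" geometric format for A.
rng = np.random.default_rng(0)
n_trials, n_units = 200, 50
A = rng.integers(0, 2, n_trials)          # variable to decode (e.g., context)
B = rng.integers(0, 2, n_trials)          # other task variable
coding = rng.normal(size=(2, n_units))    # toy linear mixed selectivity
X = (A[:, None] * coding[0] + B[:, None] * coding[1]
     + 0.5 * rng.normal(size=(n_trials, n_units)))

train, test = (B == 0), (B == 1)          # split by the *other* variable
clf = LogisticRegression(max_iter=1000).fit(X[train], A[train])
print("CCGP accuracy:", round(clf.score(X[test], A[test]), 3))
```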
Theoretical and computational approaches to neuroscience with complex models in high dimensions across multiple timescales: from perception to motor control and learning
Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment by moment collective dynamics of the brain instantiates learning and cognition. However, efficiently extracting such a conceptual understanding from large, high dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high dimensional statistics and deep learning can aid us in this process. In particular we will discuss: how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single trial circuit dynamics change slowly over many trials to mediate learning; how to tradeoff very different experimental resources, like numbers of recorded neurons and trials to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; algorithmic approaches for simplifying deep network models of perception; optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
Natural visual stimuli for mice
During the course of evolution, a species’ environment shapes its sensory abilities, as individuals with better-optimized sensory abilities are more likely to survive and procreate. Adaptations to the statistics of the natural environment can be observed along the early visual pathway and across species. Therefore, characterising the properties of natural environments and studying the representation of natural scenes along the visual pathway is crucial for advancing our understanding of the structure and function of the visual system. In the past 20 years, mice have become an important model in vision research, but the fact that they live in a different environment than primates and have different visual needs is rarely considered. One particular challenge in characterising the mouse’s visual environment is that mice are dichromats with photoreceptors that detect UV light, which the typical camera does not record. This also has consequences for experimental visual stimulation, as the blue channel of computer screens fails to excite mouse UV cone photoreceptors. In my talk, I will describe our approach to recording “colour” footage of the habitat of mice – from the mouse’s perspective – and to studying retinal circuits in the ex vivo retina with natural movies.
Neural coding in the auditory cortex - Emergent Scientists Seminar Series
Dr Jennifer Lawlor - Tracking changes in complex auditory scenes along the cortical pathway
Complex acoustic environments, such as a busy street, are characterised by their ever-changing dynamics. Despite this complexity, listeners can readily tease apart relevant changes from irrelevant variations. This requires continuously tracking the appropriate sensory evidence while discarding noisy acoustic variations. Despite the apparent simplicity of this perceptual phenomenon, the neural basis of the extraction of relevant information from complex continuous streams for goal-directed behavior is currently not well understood. As a minimalistic model for change detection in complex auditory environments, we designed broad-range tone clouds whose first-order statistics change at a random time. Subjects (humans or ferrets) were trained to detect these changes (a toy sketch of this detection problem appears after this entry). They faced the dual task of estimating the baseline statistics and detecting a potential change in those statistics at any moment. To characterize the extraction and encoding of relevant sensory information along the cortical hierarchy, we first recorded the brain electrical activity of human subjects engaged in this task using electroencephalography. Human performance and reaction times improved with longer pre-change exposure, consistent with improved estimation of baseline statistics. Change-locked and decision-related EEG responses were found at a centro-parietal scalp location, with a slope that depended on change size, consistent with sensory-evidence accumulation. To further this investigation, we performed a series of electrophysiological recordings in the primary auditory cortex (A1), secondary auditory cortex (PEG) and frontal cortex (FC) of fully trained, behaving ferrets. A1 neurons exhibited strong onset responses and change-related discharges specific to neuronal tuning. The PEG population showed reduced onset-related responses but more categorical change-related modulations. Finally, a subset of FC neurons (dlPFC/premotor) presented a generalized response to all change-related events only during behavior. Using a Generalized Linear Model (GLM), we show that the same subpopulation in FC encodes sensory and decision signals, suggesting that FC neurons could convert sensory evidence into perceptual decisions. Altogether, these area-specific responses suggest a behavior-dependent mechanism of sensory extraction and generalization of task-relevant events.
Aleksandar Ivanov - How does the auditory system adapt to different environments: A song of echoes and adaptation
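A toy sketch of the detection logic in Dr Lawlor's task: a CUSUM detector that accumulates the log-likelihood ratio of "changed" versus "baseline" statistics. The Gaussian mean-shift stimulus and the threshold value are stand-ins for the actual tone-cloud statistics:

```python
import numpy as np

# CUSUM-style detection of a change in the first-order statistics of a
# stimulus stream, modeled here as a mean shift in Gaussian samples.
rng = np.random.default_rng(0)
pre, post, sigma, shift = 150, 150, 1.0, 0.8
x = np.r_[rng.normal(0, sigma, pre), rng.normal(shift, sigma, post)]

llr = (shift * x - shift**2 / 2) / sigma**2   # log LR: post-change vs. baseline
g, detected = 0.0, None
for t, l in enumerate(llr):
    g = max(0.0, g + l)                       # CUSUM recursion
    if g > 10.0 and detected is None:         # threshold trades speed vs. false alarms
        detected = t
print("true change at sample", pre, "- detected at sample", detected)
```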
Learning from the infant’s point of view
Learning depends on both the learning mechanism and the regularities in the training material, yet most research on human and machine learning focuses on discovering the mechanisms that underlie powerful learning. I will present evidence from our research focusing on the statistical structure of infant visual learning environments. The findings suggest that the statistical structure of those learning environments is not like that used in laboratory experiments on visual learning, in machine learning, or in our adult assumptions about how to teach visual categories. The data derive from our use of head cameras and head-mounted eye trackers capturing field-of-view experiences in the home as well as in simulated home environments in the laboratory. The participants range from 1 month to 24 months of age. The observed statistical structure offers new insights into the developmental foundations of visual object recognition and suggests a computational rethinking of the problem of visual category formation. The observed environmental statistics also have direct implications for understanding the development of cortical visual systems.
What the eye tells the brain: Visual feature extraction in the mouse retina
Visual processing begins in the retina: within only two synaptic layers, multiple parallel feature channels emerge, which relay highly processed visual information to different parts of the brain. To functionally characterize these feature channels we perform calcium and glutamate population activity recordings at different levels of the mouse retina. This allows following the complete visual signal across consecutive processing stages in a systematic way. In my talk, I will summarize our recent findings on the functional diversity of retinal output channels and how they arise within the retinal network. Specifically, I will talk about the role of inhibition and cell-type specific dendritic processing in generating diverse visual channels. Then, I will focus on how color – a single visual feature – emerges across all retinal processing layers and link our results to behavioral output and the statistics of mouse natural scenes. With our approach, we hope to identify general computational principles of retinal signaling, thereby increasing our understanding of what the eye tells the brain.
The geometry of abstraction in artificial and biological neural networks
The curse of dimensionality plagues models of reinforcement learning and decision-making. The process of abstraction solves this by constructing abstract variables describing features shared by different specific instances, reducing dimensionality and enabling generalization in novel situations. We characterized neural representations in monkeys performing a task where a hidden variable described the temporal statistics of stimulus-response-outcome mappings. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training. This type of generalization requires a particular geometric format of neural representations. Neural ensembles in dorsolateral pre-frontal cortex, anterior cingulate cortex and hippocampus, and in simulated neural networks, simultaneously represented multiple hidden and explicit variables in a format reflecting abstraction. Task events engaging cognitive operations modulated this format. These findings elucidate how the brain and artificial systems represent abstract variables, variables critical for generalization that in turn confers cognitive flexibility.
Inferring Brain Rhythm Circuitry and Burstiness
Bursts in gamma and other frequency ranges are thought to contribute to the efficiency of working memory or communication tasks. Abnormalities in bursts have also been associated with motor and psychiatric disorders. The determinants of burst generation are not known; in particular, it is unclear how single-cell and connectivity parameters influence burst statistics and the corresponding brain states. We first present a generic mathematical model for burst generation in an excitatory-inhibitory (EI) network with self-couplings. The resulting equations for the stochastic phase and envelope of the rhythm’s fluctuations are shown to depend on only two meta-parameters that combine all the network parameters. They allow us to identify different regimes of amplitude excursions, and to highlight the supportive role that network finite-size effects and noisy inputs to the EI network can have. We discuss how burst attributes, such as their durations and peak frequency content, depend on the network parameters. In practice, the problem above presupposes the challenge of fitting such E-I spiking networks to single-neuron or population data. The second part of the talk will therefore discuss a novel method to fit mesoscale dynamics using single-neuron data along with a low-dimensional, and hence statistically tractable, single-neuron model. The mesoscopic representation is obtained by approximating a population of neurons as multiple homogeneous ‘pools’ of neurons, and modelling the dynamics of the aggregate population activity within each pool. We derive the likelihood of both single-neuron and connectivity parameters given this activity, which can then be used either to optimize parameters by gradient ascent on the log-likelihood, or to perform Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. We illustrate this approach using an E-I network of generalized integrate-and-fire neurons for which mesoscopic dynamics have been previously derived. We show that both single-neuron and connectivity parameters can be adequately recovered from simulated data.
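As a practical companion to the burst-statistics theme, the sketch below shows one common operational definition of bursts, thresholding a band-limited Hilbert envelope, applied to a placeholder signal; it is not the model-based analysis described in the talk:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def burst_stats(signal, fs, band=(30, 80), thresh_sd=1.5):
    """Detect gamma-band bursts from a signal's band-limited Hilbert envelope
    and return their durations (one common operational definition)."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, signal)))
    above = env > env.mean() + thresh_sd * env.std()
    edges = np.flatnonzero(np.diff(above.astype(int)))
    if above[0]:
        edges = np.r_[0, edges]               # burst already in progress at start
    if above[-1]:
        edges = np.r_[edges, len(above) - 1]  # burst still in progress at end
    starts, stops = edges[::2], edges[1::2]
    return (stops - starts) / fs              # burst durations in seconds

fs = 1000
lfp = np.random.randn(20 * fs)                # noise stand-in for recorded activity
print(np.round(burst_stats(lfp, fs), 3)[:10])
```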
Enhancing the power of higher order statistics by temporal stripe preselection
Bernstein Conference 2024
A homeostatic mechanism or statistics can maintain input-output relations of multilayer drifting assemblies
Bernstein Conference 2024
Environmental Statistics of Temporally Ordered Stimuli Modify Activity in the Primary Visual Cortex
COSYNE 2022
Statistics of sub-threshold voltage dynamics in cortical networks
COSYNE 2022
Task-dependent contribution of higher-order statistics to natural texture processing
COSYNE 2022
Control of locomotor statistics by contralateral inhibition in a pre-motor center
COSYNE 2023
Retinal scene statistics for freely moving mice
COSYNE 2023
Studying sensory statistics and priors during sound categorisation in head-fixed mice
COSYNE 2025
Hierarchical organization of multivariate spiking statistics across cortical areas
FENS Forum 2024
Neural signatures of learning and exploiting sensory statistics in a sound categorisation task in rats
FENS Forum 2024
The role of subcortical-cortical interactions in learning sound statistics
FENS Forum 2024
skiftiTools: An R package for visualizing and manipulating skeletonized brain diffusion tensor imaging data for versatile statistics of choice
FENS Forum 2024
Statistics versus animal welfare: Validation of the experimental unit in the focus of 3R
FENS Forum 2024