Hypotheses
Contentopic mapping and object dimensionality - a novel understanding of the organization of object knowledge
Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to make sense of a complex and recursive environment with ease and proficiency. This challenging feat depends on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, proposing that this organization follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will also put forth that this knowledge is topographically laid out on the cortical surface according to these object-related dimensions, which code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data to support these hypotheses and present a principled way to explore the multidimensionality of object processing.
On finding what you’re (not) looking for: prospects and challenges for AI-driven discovery
Recent high-profile scientific achievements by machine learning (ML) and especially deep learning (DL) systems have reinvigorated interest in ML for automated scientific discovery (e.g., Wang et al. 2023). Much of this work is motivated by the thought that DL methods might facilitate the discovery of phenomena, hypotheses, or even models or theories more efficiently than traditional, theory-driven approaches to discovery. This talk considers some of the more specific obstacles to automated, DL-driven discovery in frontier science, focusing on gravitational-wave astrophysics (GWA) as a representative case study. In the first part of the talk, we argue that, despite recent efforts, prospects for DL-driven discovery in GWA remain uncertain. In the second part, we advocate a shift in focus towards the ways DL can be used to augment or enhance existing discovery methods, and the epistemic virtues and vices associated with these uses. We argue that the primary epistemic virtue of many such uses is to decrease the opportunity costs associated with investigating puzzling or anomalous signals, and that the right framework for evaluating these uses comes from philosophical work on pursuitworthiness.
Brain-heart interactions at the edges of consciousness
Various clinical cases have provided evidence linking cardiovascular, neurological, and psychiatric disorders to changes in brain-heart interactions. Our recent experimental evidence from patients with disorders of consciousness revealed that observing brain-heart interactions helps to detect residual consciousness, even in patients showing no behavioral signs of consciousness. These findings support hypotheses suggesting that visceral activity is involved in the neurobiology of consciousness, and they add to existing evidence from healthy participants, in whom neural responses to heartbeats reveal perceptual and self-consciousness. Furthermore, the presence of non-linear, complex, and bidirectional communication between brain and heartbeat dynamics can provide further insights into the physiological state of the patient following severe brain injury. These methodological developments for analyzing brain-heart interactions open new avenues for understanding neural functioning at a large-scale level, uncovering how peripheral bodily activity can influence brain homeostatic processes, cognition, and behavior.
Conversations with Caves? Understanding the role of visual psychological phenomena in Upper Palaeolithic cave art making
How central were psychological features deriving from our visual systems to the early evolution of human visual culture? Art making emerged deep in our evolutionary history, with the earliest art appearing over 100,000 years ago as geometric patterns etched on fragments of ochre and shell, and figurative representations of prey animals flourishing in the Upper Palaeolithic (c. 40,000 – 15,000 years ago). The latter reflects a complex visual process: the ability to represent something that exists in the real world as a flat, two-dimensional image. In this presentation, I argue that pareidolia – the psychological phenomenon of seeing meaningful forms in random patterns, such as perceiving faces in clouds – was a fundamental process that facilitated the emergence of figurative representation. The influence of pareidolia has often been anecdotally observed in Upper Palaeolithic art, particularly cave art, where the topographic features of the cave wall were incorporated into animal depictions. Using novel virtual reality (VR) light simulations, I tested three hypotheses relating to pareidolia in the Upper Palaeolithic cave art of Las Monedas and La Pasiega (Cantabria, Spain). To evaluate this further, I also developed an interdisciplinary VR eye-tracking experiment in which participants were immersed in virtual caves based on the cave of El Castillo (Cantabria, Spain). Together, these case studies suggest that pareidolia was an intrinsic part of artist-cave interactions (‘conversations’) that influenced the form and placement of figurative depictions in the cave. This has broader implications for conceiving of the role of visual psychological phenomena in the emergence and development of figurative art in the Palaeolithic.
Identifying mechanisms of cognitive computations from spikes
Higher cortical areas carry a wide range of sensory, cognitive, and motor signals supporting complex goal-directed behavior. These signals mix in heterogeneous responses of single neurons, making it difficult to untangle underlying mechanisms. I will present two approaches for revealing interpretable circuit mechanisms from heterogeneous neural responses during cognitive tasks. First, I will show a flexible nonparametric framework for simultaneously inferring population dynamics on single trials and tuning functions of individual neurons to the latent population state. When applied to recordings from the premotor cortex during decision-making, our approach revealed that populations of neurons encoded the same dynamic variable predicting choices, and heterogeneous firing rates resulted from the diverse tuning of single neurons to this decision variable. The inferred dynamics indicated an attractor mechanism for decision computation. Second, I will show an approach for inferring an interpretable network model of a cognitive task—the latent circuit—from neural response data. We developed a theory to causally validate latent circuit mechanisms via patterned perturbations of activity and connectivity in the high-dimensional network. This work opens new possibilities for deriving testable mechanistic hypotheses from complex neural response data.
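To make the logic of the first approach concrete, here is a minimal, purely illustrative sketch (not the talk's nonparametric framework): a single latent decision variable drives heterogeneous firing rates through diverse linear tunings, and both the latent and the tunings are recovered from the simulated population activity. All sizes and the linear-Gaussian assumptions are invented for illustration.

```python
# Minimal sketch: heterogeneous responses from one shared latent variable.
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 40                                   # time bins, neurons
latent = np.cumsum(rng.normal(0, 0.1, T))        # drifting decision variable
tuning = rng.normal(0, 1, N)                     # each neuron's loading
rates = np.outer(latent, tuning) + rng.normal(0, 0.5, (T, N))

# Recover the shared latent as the leading principal component
X = rates - rates.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
latent_hat = X @ Vt[0]

# Per-neuron tuning = regression of each neuron's rate onto the latent
tuning_hat = X.T @ latent_hat / (latent_hat @ latent_hat)

# Up to sign and scale, estimates correlate strongly with ground truth
print(abs(np.corrcoef(latent, latent_hat)[0, 1]))
print(abs(np.corrcoef(tuning, tuning_hat)[0, 1]))
```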
From spikes to factors: understanding large-scale neural computations
It is widely accepted that human cognition is the product of spiking neurons. Yet even for basic cognitive functions, such as the ability to make decisions or prepare and execute a voluntary movement, the gap between spikes and computation is vast. Only for very simple circuits and reflexes can one explain computations neuron-by-neuron and spike-by-spike. This approach becomes infeasible when neurons are numerous and the flow of information is recurrent. To understand computation, one thus requires appropriate abstractions. An increasingly common abstraction is the neural ‘factor’. Factors are central to many explanations in systems neuroscience: they provide a framework for describing computational mechanism and offer a bridge between data and concrete models. Yet there remains some discomfort with this abstraction, and with any attempt to provide mechanistic explanations above the level of spikes, neurons, cell types, and other comfortingly concrete entities. I will explain why, for many networks of spiking neurons, factors are not only a well-defined abstraction but are critical to understanding computation mechanistically. Indeed, factors are as real as other abstractions we now accept: pressure, temperature, conductance, and even the action potential itself. I will use recent empirical results to illustrate how factor-based descriptions have become essential to the forming and testing of scientific hypotheses. I will also show how embracing factor-level descriptions affords remarkable power when decoding neural activity for neural engineering purposes.
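As a toy demonstration of why factors are a well-defined abstraction, the following sketch (assumptions: Poisson spiking, linear loadings, sizes chosen arbitrarily) simulates 200 neurons driven by three shared latent signals and shows that a handful of principal components captures most of the population variance.

```python
# 'Factors' from spikes: three shared latents drive Poisson spiking in 200
# neurons; PCA on the counts recovers a ~3-dimensional description.
import numpy as np

rng = np.random.default_rng(1)
T, N, K = 2000, 200, 3                                    # bins, neurons, factors
factors = np.cumsum(rng.normal(0, 0.05, (T, K)), axis=0)  # slow latent signals
loadings = rng.normal(0, 1, (N, K))
log_rates = factors @ loadings.T - 1.0
spikes = rng.poisson(np.exp(np.clip(log_rates, -5, 3)))   # spike counts

X = spikes - spikes.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
var_explained = (s**2) / (s**2).sum()
print("top-3 PCs explain:", var_explained[:3].sum())      # most of the variance
```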
Cognitive supports for analogical reasoning in rational number understanding
In cognitive development, learning more than the input provides is a central challenge. This challenge is especially evident in learning the meaning of numbers. Integers – and the quantities they denote – are potentially infinite, as are the fractional values between every pair of integers. Yet children’s experiences of numbers are necessarily finite. Analogy is a powerful learning mechanism that allows children to learn novel, abstract concepts from only limited input. However, retrieving the proper analogy requires cognitive supports. In this talk, I propose and examine number lines as a mathematical schema of the number system that can facilitate both the development of rational number understanding and analogical reasoning. To examine these hypotheses, I will present a series of educational intervention studies with third-to-fifth graders. Results showed that a short, unsupervised intervention involving the spatial alignment of integers and fractions on number lines produced broad and durable gains in understanding fractional magnitudes. Additionally, training on conceptual knowledge of fractions – that fractions denote magnitude and can be placed on number lines – facilitated explicit analogical reasoning. Together, these studies indicate that analogies can play an important role in rational number learning with the help of number lines as schemas. These studies shed light on helpful practices for STEM education curricula and instruction.
Predictive modeling, cortical hierarchy, and their computational implications
Predictive modeling and dimensionality reduction of functional neuroimaging data have provided rich information about the representations and functional architectures of the human brain. While these approaches have been effective in many cases, we will discuss how neglecting the internal dynamics of the brain (e.g., spontaneous activity, global dynamics, effective connectivity) and its underlying computational principles may hinder our progress in understanding and modeling brain functions. By reexamining evidence from our previous and ongoing work, we will propose new hypotheses and directions for research that consider both internal dynamics and the computational principles that may govern brain processes.
Controversial stimuli: Optimizing experiments to adjudicate among computational hypotheses
Children’s inference of verb meanings: Inductive, analogical and abductive inference
Children need inference in order to learn the meanings of words. They must infer the referent from the situation in which a target word is said. Furthermore, to be able to use the word in other situations, they also need to infer what other referents the word can be generalized to. As verbs refer to relations between arguments, verb learning requires relational analogical inference, which is challenging for young children. To overcome this difficulty, young children recruit a diverse range of cues in their inference of verb meanings, including, but not limited to, syntactic cues, social and pragmatic cues, and statistical cues. They also utilize perceptual similarity (object similarity) in progressive alignment to extract relational verb meanings and to gain insights into them. However, just having a list of these cues is not enough: the cues must be selected, combined, and coordinated to produce the optimal interpretation in a particular context. This process involves abductive reasoning, similar to what scientists do when forming hypotheses from a range of facts or evidence. In this talk, I discuss how children use a chain of inferences to learn the meanings of verbs. I consider not only the process of analogical mapping and progressive alignment, but also how children use abductive inference to find the source of analogy and gain insights into the general principles underlying verb learning. I also present recent findings from my laboratory showing that prelinguistic human infants use a rudimentary form of abductive reasoning, which enables the first step of word learning.
Emotions and Partner Phubbing: The Role of Understanding and Validation in Predicting Anger and Loneliness
Interactions between romantic partners may be disturbed by problematic mobile phone use, i.e., phubbing. Research shows that phubbing reduces the ability to be responsive, but emotional aspects of phubbing, such as experiences of anger and loneliness, have not been explored. Anger has been linked to partner blame in negative social interactions, whereas loneliness has been associated with low social acceptance. Moreover, two aspects of partner responsiveness, understanding and validation, refer to the ability to recognize a partner’s perspective and to convey acceptance of their point of view, respectively. High understanding and validation by a partner have been found to protect against negative affect during social interaction. The impact of understanding and validation on emotions has not been investigated in the context of phubbing; we therefore posit the following exploratory hypotheses: (1) participants will report higher levels of anger and loneliness on days with phubbing by their partner, compared to days without; (2) understanding and validation will moderate the relationship between phubbing intensity and levels of anger and loneliness. We conducted a daily diary study over seven days. Based on a sample of 133 participants in intimate relationships and living with their partners, we analyzed the nested within- and between-person data using multilevel models. Participants reported higher levels of anger and loneliness on days they experienced phubbing. Both understanding and validation buffered the relationship between phubbing intensity and negative experiences, and the interaction effects indicate certain nuances between the two constructs. Our research provides unique insight into how specific mechanisms related to couple interactions may explain experiences of anger and loneliness.
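For readers unfamiliar with this kind of analysis, here is a hedged sketch of the type of multilevel model described, written with statsmodels on simulated diary data; the variable names (anger, phubbing, validation, pid) and effect sizes are illustrative assumptions, not the study's data or exact specification.

```python
# Daily (within-person) observations nested in participants (between-person),
# with a random intercept per participant and a moderation (interaction) term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_people, n_days = 133, 7
pid = np.repeat(np.arange(n_people), n_days)
phubbing = rng.random(n_people * n_days)       # daily phubbing intensity
validation = rng.random(n_people * n_days)     # perceived validation
person_intercept = rng.normal(0, 0.5, n_people)[pid]
# Simulate the hypothesized buffering: validation weakens the phubbing effect
anger = (1 + 0.8 * phubbing - 0.5 * phubbing * validation
         + person_intercept + rng.normal(0, 0.3, n_people * n_days))

df = pd.DataFrame(dict(pid=pid, phubbing=phubbing,
                       validation=validation, anger=anger))
model = smf.mixedlm("anger ~ phubbing * validation", df, groups=df["pid"])
print(model.fit().summary())
```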
GeNN
Large-scale numerical simulations of brain circuit models are important for identifying hypotheses about brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks, addressing the challenge of efficient simulation. GeNN is an open-source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from its users. GeNN was originally developed as a pure C++ and CUDA library, but we have since added a Python interface and an OpenCL backend. We will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it interacts with other open-source frameworks such as Brian2GeNN and PyNN.
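A minimal usage sketch in the spirit of the PyGeNN tutorials follows; note that attribute and argument names (e.g., dt vs. dT, passing a population object vs. its name) differ between PyGeNN versions, so treat this as an assumption-laden outline rather than canonical usage.

```python
# One population of built-in Izhikevich neurons driven by a DC current source.
from pygenn import GeNNModel

model = GeNNModel("float", "izhikevich_demo")
model.dt = 0.1                       # timestep in ms (named dT in PyGeNN 4.x)

izk_params = {"a": 0.02, "b": 0.2, "c": -65.0, "d": 8.0}
izk_init = {"V": -65.0, "U": -13.0}
pop = model.add_neuron_population("pop", 100, "Izhikevich",
                                  izk_params, izk_init)
model.add_current_source("stim", "DC", pop, {"amp": 10.0}, {})

model.build()                        # generate and compile GPU/CPU code
model.load()
while model.t < 200.0:               # simulate 200 ms
    model.step_time()
```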
Do Capuchin Monkeys, Chimpanzees and Children form Overhypotheses from Minimal Input? A Hierarchical Bayesian Modelling Approach
Abstract concepts are a powerful tool for storing information efficiently and for making wide-ranging predictions in new situations based on sparse data. Whereas looking-time studies point towards an early emergence of this ability in human infancy, other paradigms, like the relational match-to-sample task, often show a failure to detect abstract concepts like same and different until the late preschool years. Similarly, non-human animals have difficulties solving these tasks and often succeed only after long training regimes. Given the large influence of small task modifications, there is an ongoing debate about the conclusiveness of these findings for the development and phylogenetic distribution of abstract reasoning abilities. Here, we applied the concept of “overhypotheses”, well known in the infant and cognitive modeling literature, to study the capabilities of 3- to 5-year-old children, chimpanzees, and capuchin monkeys in a unified and more ecologically valid task design. In a series of studies, participants themselves sampled reward items from multiple containers or witnessed the sampling process. Only when they detected the abstract pattern governing the reward distributions within and across containers could they optimally guide their behavior and maximize the reward outcome in a novel test situation. We compared each species’ performance to the predictions of a probabilistic hierarchical Bayesian model capable of forming overhypotheses at a first and second level of abstraction and adapted to each species’ reward preferences.
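The following toy sketch, in the spirit of Kemp-style overhypothesis models rather than the authors' exact implementation, shows how hyperparameter inference in a hierarchical beta-binomial model lets a learner generalize from a single draw out of a new container; all counts, grids, and plug-in values are invented.

```python
# Each container i has reward probability theta_i ~ Beta(mu*k, (1-mu)*k).
# The overhypothesis lives in the hyperparameters: small k means containers
# are internally 'pure' even if they differ from one another.
import numpy as np
from scipy.special import betaln, comb

# Observed (reward, no-reward) counts in four sampled containers
counts = np.array([[10, 0], [0, 10], [10, 0], [0, 10]])

mu_grid = np.linspace(0.05, 0.95, 19)     # hyper-mean of container probabilities
k_grid = np.array([0.5, 2.0, 8.0, 32.0])  # hyper-concentration

def log_marglik(mu, k):
    a, b = mu * k, (1 - mu) * k
    x, y = counts[:, 0], counts[:, 1]
    n = x + y
    return np.sum(np.log(comb(n, x)) + betaln(x + a, y + b) - betaln(a, b))

logp = np.array([[log_marglik(mu, k) for mu in mu_grid] for k in k_grid])
post_k = np.exp(logp - logp.max()).sum(axis=1)
post_k /= post_k.sum()
print(dict(zip(k_grid, post_k.round(3))))  # mass piles on small k ('pure')

# Generalization from ONE reward drawn from a new container:
k_hat, mu_hat = 0.5, 0.5                   # plug-in values for illustration
print((1 + mu_hat * k_hat) / (1 + k_hat))  # near 1 under the 'pure' overhypothesis
```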
NMC4 Short Talk: Different hypotheses on the role of the PFC in solving simple cognitive tasks
Low-dimensional population dynamics can be observed in neural activity recorded from the prefrontal cortex (PFC) of subjects performing simple cognitive tasks. Many studies have shown that recurrent neural networks (RNNs) trained on the same tasks can qualitatively reproduce these state-space trajectories, and have used them as models of how neuronal dynamics implement task computations. The PFC is also viewed as a conductor that organizes communication between cortical areas and provides contextual information. It is therefore unclear what its role is in solving simple cognitive tasks. Do the low-dimensional trajectories observed in the PFC really correspond to the computations it performs? Or do they indirectly reflect computations occurring within the cortical areas projecting to the PFC? To address these questions, we modelled cortical areas with a modular RNN and equipped it with a PFC-like cognitive system. When trained on cognitive tasks, this multi-system brain model reproduces the low-dimensional population responses observed in neuronal activity as well as classical RNNs do. Qualitatively different mechanisms can emerge from the training process when varying details of the architecture, such as the time constants. In particular, there is one class of models in which the dynamics of the cognitive system implement the task computations, and another in which the cognitive system is only necessary to provide contextual information about the task rule: task performance is not impaired when the system is prevented from accessing the task inputs. These constitute two different hypotheses about the causal role of the PFC in solving simple cognitive tasks, which could motivate further experiments on the brain.
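A schematic of this kind of two-module architecture, with invented sizes and wiring and written in PyTorch, might look as follows; the lesion_ctx flag loosely mimics the manipulation of cutting the cognitive system off from its inputs. This is purely illustrative, not the authors' model.

```python
# A 'sensory' recurrent module receives stimuli, a 'PFC-like' module receives
# context, and the two exchange activity at every timestep.
import torch
import torch.nn as nn

class ModularRNN(nn.Module):
    def __init__(self, n_in=4, n_ctx=2, n_sens=64, n_pfc=32, n_out=2):
        super().__init__()
        self.sens = nn.RNNCell(n_in + n_pfc, n_sens)   # stimuli + PFC feedback
        self.pfc = nn.RNNCell(n_ctx + n_sens, n_pfc)   # context + cortical input
        self.readout = nn.Linear(n_sens, n_out)

    def forward(self, stim, ctx, lesion_ctx=False):
        B, T, _ = stim.shape
        h_s = stim.new_zeros(B, self.sens.hidden_size)
        h_p = stim.new_zeros(B, self.pfc.hidden_size)
        if lesion_ctx:                                 # cut context off from PFC
            ctx = torch.zeros_like(ctx)
        for t in range(T):
            h_s = self.sens(torch.cat([stim[:, t], h_p], -1), h_s)
            h_p = self.pfc(torch.cat([ctx, h_s], -1), h_p)
        return self.readout(h_s)

out = ModularRNN()(torch.randn(8, 50, 4), torch.randn(8, 2))
print(out.shape)  # torch.Size([8, 2])
```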
NMC4 Short Talk: Maggot brain, mirror image? A statistical analysis of bilateral symmetry in an insect brain connectome
Neuroscientists have many questions about connectomes that revolve around the ability to compare networks. For example, comparing connectomes could help explain how neural wiring is related to individual differences, genetics, disease, development, or learning. One such question is that of bilateral symmetry: are the left and right sides of a connectome the same? Here, we investigate the bilateral symmetry of a recently presented connectome of an insect brain, the Drosophila larva. We approach this question from the perspective of two-sample testing for networks. First, we show how this question of “sameness” can be framed as a variety of different statistical hypotheses, each with different assumptions. Then, we describe test procedures for each of these hypotheses. We show how these different test procedures perform on both the observed connectome and a suite of synthetic perturbations to the connectome. We also point out that these tests require careful attention to parameter alignment and differences in network density in order to provide biologically meaningful results. Taken together, these results provide the first statistical characterization of bilateral symmetry for an entire brain at the single-neuron level, while also giving practical recommendations for future comparisons of connectome networks.
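As an illustration of the simplest hypothesis in such a hierarchy, the sketch below tests whether two (simulated) hemispheres have the same edge density under an Erdos-Renyi model, using Fisher's exact test on edge counts; the real analysis involves a sequence of progressively richer network models and the alignment issues noted above.

```python
# Density test: do two adjacency matrices share one edge probability?
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(3)
n = 300
A_left = rng.random((n, n)) < 0.051   # stand-ins for the two hemisphere
A_right = rng.random((n, n)) < 0.049  # adjacency matrices (directed, no loops)
np.fill_diagonal(A_left, False)
np.fill_diagonal(A_right, False)

possible = n * (n - 1)
edges = np.array([A_left.sum(), A_right.sum()])
table = [[edges[0], possible - edges[0]],
         [edges[1], possible - edges[1]]]
stat, p = fisher_exact(table)
print(f"densities: {edges / possible}, p = {p:.3f}")
```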
NMC4 Short Talk: Hypothesis-neutral response-optimized models of higher-order visual cortex reveal strong semantic selectivity
Modeling neural responses to naturalistic stimuli has been instrumental in advancing our understanding of the visual system. Dominant computational modeling efforts in this direction have been deeply rooted in preconceived hypotheses. In contrast, hypothesis-neutral computational methodologies with minimal a priori commitments, which bring neuroscience data directly to bear on the model development process, are likely to be much more flexible and effective in modeling and understanding tuning properties throughout the visual system. In this study, we develop a hypothesis-neutral approach and characterize response selectivity in the human visual cortex exhaustively and systematically via response-optimized deep neural network models. First, we leverage the unprecedented scale and quality of the recently released Natural Scenes Dataset to constrain parametrized neural models of higher-order visual systems and achieve novel predictive precision, in some cases significantly outperforming the predictive success of state-of-the-art task-optimized models. Next, we ask what kinds of functional properties emerge spontaneously in these response-optimized models. We examine trained networks through structural (feature visualization) as well as functional (feature verbalization) analyses by running `virtual' fMRI experiments on large-scale probe datasets. Strikingly, despite receiving no category-level supervision (the models are optimized from scratch solely to predict brain responses), the units in the optimized networks act as detectors for semantic concepts like `faces' or `words', thereby providing some of the strongest evidence for categorical selectivity in these visual areas. The observed selectivity in model neurons raises another question: are the category-selective units simply functioning as detectors for their preferred category, or are they a by-product of a non-category-specific visual processing mechanism? To investigate this, we create selective deprivations in the visual diet of these response-optimized networks and study semantic selectivity in the resulting `deprived' networks, thereby also shedding light on the role of specific visual experiences in shaping neuronal tuning. Together with this new class of data-driven models and novel model interpretability techniques, our study illustrates that DNN models of visual cortex need not be conceived of as obscure models with limited explanatory power, but rather as powerful, unifying tools for probing the nature of representations and computations in the brain.
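A stripped-down sketch of the response-optimization idea follows: a small convolutional network trained from scratch so that its outputs match (here, simulated) voxel responses to images. The architecture, sizes, and data are placeholders rather than those used in the study.

```python
# Response-optimized encoding model: optimize solely for response prediction.
import torch
import torch.nn as nn

n_voxels = 100
net = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(32 * 16, n_voxels),          # per-voxel readout
)

images = torch.randn(64, 3, 64, 64)        # stand-in for NSD stimuli
voxels = torch.randn(64, n_voxels)         # stand-in for fMRI responses
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(images), voxels)
    loss.backward()
    opt.step()
print(loss.item())
```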
Neural mechanisms of altered states of consciousness under psychedelics
Interest in psychedelic compounds is growing due to their remarkable potential for understanding altered neural states and their breakthrough status for treating various psychiatric disorders. However, there are major knowledge gaps regarding how psychedelics affect the brain. The Computational Neuroscience Laboratory at the Turner Institute for Brain and Mental Health, Monash University, uses multimodal neuroimaging to test hypotheses of the brain’s functional reorganisation under psychedelics, informed by accounts of hierarchical predictive processing, using dynamic causal modelling (DCM). DCM is a generative modelling technique that allows one to infer the directed connectivity among brain regions from functional brain imaging measurements. In this webinar, Associate Professor Adeel Razi and PhD candidate Devon Stoliker will showcase a series of previous and new findings on how changes to synaptic mechanisms, under the control of serotonin receptors, across the brain hierarchy influence sensory and associative brain connectivity. Understanding these neural mechanisms of the subjective and therapeutic effects of psychedelics is critical for the rational development of novel treatments and for the design and success of future clinical trials. Associate Professor Adeel Razi is an NHMRC Investigator Fellow and CIFAR Azrieli Global Scholar at the Turner Institute for Brain and Mental Health, Monash University. He performs cross-disciplinary research combining engineering, physics, and machine learning. Devon Stoliker is a PhD candidate at the Turner Institute for Brain and Mental Health, Monash University. His interest in consciousness and psychiatry has led him to investigate the neural mechanisms of classic psychedelic effects in the brain.
Learning from unexpected events in the neocortical microcircuit
Predictive learning hypotheses posit that the neocortex learns a hierarchical model of the structure of features in the environment. Under these hypotheses, expected or predictable features are differentiated from unexpected ones by comparing bottom-up and top-down streams of data, with unexpected features then driving changes in the representation of incoming stimuli. This is supported by numerous studies in early sensory cortices showing that pyramidal neurons respond particularly strongly to unexpected stimulus events. However, it remains unknown how their responses govern subsequent changes in stimulus representations and, thus, learning. Here, I present results from our study of layer 2/3 and layer 5 pyramidal neurons imaged in the primary visual cortex of awake, behaving mice using two-photon calcium imaging at both the somatic and distal apical planes. Our data reveal that individual neurons and distal apical dendrites show distinct but predictable changes in unexpected-event responses when tracked over several days. Considering existing evidence that bottom-up information primarily targets somata, with distal apical dendrites receiving the bulk of top-down inputs, our findings corroborate hypothesized complementary roles for these two neuronal compartments in hierarchical computing. Altogether, our work provides novel evidence that the neocortex indeed instantiates a predictive hierarchical model in which unexpected events drive learning.
Storythinking: Why Your Brain is Creative in Ways that Computer AI Can't Ever Be
Computer AI thinks differently from us, which is why it's such a useful tool. Thanks to the ingenuity of human programmers, AI's different method of thinking has made humans redundant at certain human tasks, such as chess. Yet there are mechanical limits to how far AI can replicate the products of human thinking. In this talk, we'll trace one such limit by exploring how AI and humans create differently. Humans create by reverse-engineering tools or behaviors to accomplish new actions. AI creates by mix-and-matching pieces of preexisting structures and labeling which combos are associated with positive and negative results. This different procedure is why AI cannot (and will never) learn to innovate technology or tactics and why it also cannot (and will never) learn to generate narratives (including novels, business plans, and scientific hypotheses). It also serves as a case study in why there's no reason to believe in "general intelligence" and why computer AI would have to partner with other mechanical forms of AI (run on non-computer hardware that, as of yet, does not exist, and would require humans to invent) for AI to take over the globe.
Categories, language, and visual working memory: how verbal labels change capacity limitations
The limited capacity of visual working memory constrains the quantity and quality of the information we can hold in mind for ongoing processing. Research from our lab has demonstrated that verbal labeling/categorization of visual inputs increases their retention and fidelity in visual working memory. In this talk, I will outline the hypotheses that explain the interaction between visual and verbal inputs in working memory, leading to the boosts we observed. I will further show how manipulations of the categorical distinctiveness of the labels, the timing of their occurrence, which items labels are applied to, as well as their validity, modulate the benefits one can draw from combining visual and verbal inputs to alleviate capacity limitations. Finally, I will discuss the implications of these results for our understanding of working memory and its interaction with prior knowledge.
Achieving Abstraction: Early Competence & the Role of the Learning Context
Children's emerging ability to acquire and apply relational same-different concepts is often cited as a defining feature of human cognition, providing the foundation for abstract thought. Yet, young learners often struggle to ignore irrelevant surface features to attend to structural similarity instead. I will argue that young children have – and retain – genuine relational concepts from a young age, but tend to neglect abstract similarity due to a learned bias to attend to objects and their properties. Critically, this account predicts that differences in the structure of children's environmental input should lead to differences in the type of hypotheses they privilege and apply. I will review empirical support for this proposal that has (1) evaluated the robustness of early competence in relational reasoning, (2) identified cross-cultural differences in relational and object bias, and (3) provided evidence that contextual factors play a causal role in relational reasoning. Together, these studies suggest that the development of abstract thought may be more malleable and context-sensitive than initially believed.
Integration and unification in the science of consciousness
Despite undeniable progress in the science of consciousness, there is no consensus on even fundamental theoretical and empirical questions, such as whether ‘phenomenal consciousness’ is a scientifically respectable concept, whether phenomenal consciousness overflows access consciousness, or whether the neural correlates of perceptual consciousness are in the front or in the back of the cerebral cortex. Notably, disagreement also concerns proposed theories of consciousness. However, since not all theories are mutually incompatible, there have been attempts to make theoretical progress by integrating or unifying them. I shall argue that this is preferable to proposing yet another theory, but that one should not expect it to yield a complete theory of consciousness. Rather, theoretical work in consciousness research should focus on the core hypotheses about consciousness that different theories have in common. Such a ‘minimal unifying model’ of consciousness can then be used as a basis for formulating more specific hypotheses about consciousness.
Context and Comparison During Open-Ended Induction
A key component of humans’ striking creativity in solving problems is our ability to construct novel descriptions to help us characterize novel categories. Bongard problems, which challenge the problem solver to come up with a rule for distinguishing visual scenes that fall into two categories, provide an elegant test of this ability. Bongard problems are challenging for both human and machine category learners because only a handful of example scenes are presented for each category, and they often require the open-ended creation of new descriptions. A new sub-type of Bongard problem called Physical Bongard Problems (PBPs) is introduced, which requires solvers to perceive and predict the physical spatial dynamics implicit in the depicted scenes. The PATHS (Perceiving And Testing Hypotheses on Structures) computational model, which can solve many PBPs, is presented and compared to human performance on the same problems. PATHS and humans are similarly affected by the ordering of scenes within a PBP, with spatially and temporally juxtaposed scenes promoting category learning when they are similar and belong to different categories, or dissimilar and belong to the same category. The core theoretical commitments of PATHS, which we believe also exemplify human open-ended category learning, are: (a) the continual perception of new scene descriptions over the course of category learning; (b) the context-dependent nature of that perceptual process, in which the scenes establish the context for one another; (c) hypothesis construction by combining descriptions into logical expressions; and (d) bi-directional interactions between perceiving new aspects of scenes and constructing hypotheses for the rule that distinguishes the categories.
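A toy rendering of commitment (c), hypothesis construction by combining descriptions into logical expressions, is sketched below; the predicate vocabulary and scenes are invented stand-ins rather than PATHS's actual representations.

```python
# Search ever-larger conjunctions of perceived descriptions until one
# separates the two categories of scenes.
from itertools import combinations

# Each scene is a set of description tokens the learner has 'perceived'
left = [{"small", "moves-left"}, {"small", "rolls"}]      # category A
right = [{"large", "moves-left"}, {"large", "stable"}]    # category B
predicates = set().union(*left, *right)

def separates(expr):
    """expr is a frozenset read as a conjunction of predicates."""
    return (all(expr <= s for s in left) and
            not any(expr <= s for s in right))

for size in (1, 2):                       # hypothesis construction by search
    for expr in map(frozenset, combinations(predicates, size)):
        if separates(expr):
            print("rule found:", " AND ".join(sorted(expr)))
            break
    else:
        continue
    break
```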
A journey through connectomics: from manual tracing to the first fully automated basal ganglia connectomes
The "mind of the worm", the first electron microscopy-based connectome of C. elegans, was an early sign of where connectomics is headed, followed by a long time of little progress in a field held back by the immense manual effort required for data acquisition and analysis. This changed over the last few years with several technological breakthroughs, which allowed increases in data set sizes by several orders of magnitude. Brain tissue can now be imaged in 3D up to a millimeter in size at nanometer resolution, revealing tissue features from synapses to the mitochondria of all contained cells. These breakthroughs in acquisition technology were paralleled by a revolution in deep-learning segmentation techniques, that equally reduced manual analysis times by several orders of magnitude, to the point where fully automated reconstructions are becoming useful. Taken together, this gives neuroscientists now access to the first wiring diagrams of thousands of automatically reconstructed neurons connected by millions of synapses, just one line of program code away. In this talk, I will cover these developments by describing the past few years' technological breakthroughs and discuss remaining challenges. Finally, I will show the potential of automated connectomics for neuroscience by demonstrating how hypotheses in reinforcement learning can now be tackled through virtual experiments in synaptic wiring diagrams of the songbird basal ganglia.
Finding Needles in Genomic Haystacks
The ability to read the DNA sequences of different organisms has transformed biology in much the same way that the telescope transformed astronomy. And yet, much of the sequence found in these genomes is as enigmatic as the Rosetta Stone was to early Egyptologists. With the aim of taking steps toward cracking the genomic Rosetta Stone, I will describe unexpected ways of using the physics of information transfer, first developed at Bell Labs for thinking about telephone communications, to try to decipher the meaning of the regulatory features of genomes. Specifically, I will show how we have been able to explore genes for which we know nothing about how they are regulated by using a combination of mutagenesis, deep sequencing, and the physics of information, with the result that we now have falsifiable hypotheses about how those genes work. With those results in hand, I will show how simple tools from statistical physics can be used to predict the level of expression of different genes, followed by a description of precision measurements used to test those predictions. Bringing the two threads of the talk together, I will consider next steps in reading and writing genomes at will.
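To illustrate the flavor of such analyses, the sketch below computes an 'information footprint': the mutual information between the base identity at each position of a simulated mutant library and a binary expression readout. The library, the planted binding site, and all parameters are invented for illustration.

```python
# Mutual information between base identity and expression, per position.
import numpy as np

rng = np.random.default_rng(5)
n_seqs, L = 20000, 30
seqs = rng.integers(0, 4, (n_seqs, L))      # mutant library (A,C,G,T = 0..3)
# Pretend position 12 is a binding site: base 'G' (2) raises expression
expression = (rng.random(n_seqs) <
              np.where(seqs[:, 12] == 2, 0.8, 0.2)).astype(int)

def mutual_info(base_col, expr):
    mi = 0.0
    for b in range(4):
        for e in (0, 1):
            p_be = np.mean((base_col == b) & (expr == e))
            p_b, p_e = np.mean(base_col == b), np.mean(expr == e)
            if p_be > 0:
                mi += p_be * np.log2(p_be / (p_b * p_e))
    return mi

footprint = np.array([mutual_info(seqs[:, i], expression) for i in range(L)])
print(footprint.argmax(), footprint.max())  # position 12 carries information
```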
How sleep remodels the brain
Fifty years ago, it was found that sleep somehow made memories better and more permanent, but neither sleep nor memory researchers knew enough about sleep and memory to devise robust, effective tests. Today the fields of sleep and memory have grown, and what is now understood is astounding. Still, great mysteries remain. What is the functional difference between the subtly different slow oscillation and slow wave of sleep, and do they really have opposite memory consolidation effects? How do short spindles (e.g., <0.5 s, as in schizophrenia) differ in function from longer ones, and are longer spindles key to integrating new memories with old? Is the nesting of slow oscillations together with sleep spindles and hippocampal ripples necessary? What happens if all else is fine but the neurochemical environment is altered? Does sleep become maladaptive and “cement” memories into the hippocampal warehouse where they are assembled, together with all of their emotional baggage? Does maladaptive sleep underlie post-traumatic stress disorder and other stress-related disorders? How do we optimize sleep characteristics for top emotional and cognitive function? State-of-the-art findings and current hypotheses will be presented.
Information and Decision-Making
In recent years it has become increasingly clear that (Shannon) information is a central resource for organisms, akin in importance to energy. Any decision that an organism or a subsystem of an organism takes involves the acquisition, selection, and processing of information, and ultimately its concentration and enaction. It is the consequences of this balance that will occupy us in this talk. This perception-action loop picture of an agent's life cycle is well established and expounded especially in the context of Fuster's sensorimotor hierarchies. Nevertheless, the information-theoretic perspective drastically expands the potential and predictive power of the perception-action loop perspective. On the one hand, information can be treated - to a significant extent - as a resource that is sought and utilized by an organism. On the other hand, unlike energy, information is not additive. The intrinsic structure and dynamics of information can be exceedingly complex and subtle; over the last two decades it has been discovered that Shannon information possesses a rich and nontrivial intrinsic structure that must be taken into account when informational contributions, information flow, or causal interactions of processes are investigated, whether in the brain or in other complex processes. In addition, strong parallels between information and control theory have emerged. This parallelism between the theories allows one to obtain unexpected insights into the nature and properties of the perception-action loop. Through the lens of information theory, one can not only come up with novel hypotheses about necessary conditions for the organization of information processing in a brain, but also with constructive conjectures and predictions about which behaviours, brain structures, dynamics, and even evolutionary pressures one can expect to operate on biological organisms, induced purely by informational considerations.
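The claim that information is not additive can be made concrete with a classic worked example: for Y = X1 XOR X2 with independent fair bits, each input alone carries zero information about Y, yet jointly the two inputs determine it completely (one bit).

```python
# Non-additivity (synergy) of Shannon information in the XOR channel.
import numpy as np

rng = np.random.default_rng(11)
x1 = rng.integers(0, 2, 100000)
x2 = rng.integers(0, 2, 100000)
y = x1 ^ x2

def mi(a, b):
    m = 0.0
    for va in np.unique(a):
        for vb in np.unique(b):
            p_ab = np.mean((a == va) & (b == vb))
            if p_ab > 0:
                m += p_ab * np.log2(p_ab / (np.mean(a == va) * np.mean(b == vb)))
    return m

print(mi(x1, y), mi(x2, y))   # each ~ 0 bits
print(mi(x1 * 2 + x2, y))     # joint variable: ~ 1 bit
```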
Deep reinforcement learning and its neuroscientific implications
The last few years have seen some dramatic developments in artificial intelligence research. What implications might these have for neuroscience? Investigations of this question have, to date, focused largely on deep neural networks trained using supervised learning in tasks such as image classification. However, there is another area of recent AI work which has so far received less attention from neuroscientists, but which may have more profound neuroscientific implications: deep reinforcement learning. Deep RL offers a rich framework for studying the interplay among learning, representation, and decision-making, offering the brain sciences a new set of research tools and a wide range of novel hypotheses. I’ll provide a high-level introduction to deep RL, discuss some recent neuroscience-oriented investigations from my group at DeepMind, and survey some wider implications for research on brain and behavior.
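For readers new to the area, the tabular sketch below shows the RL core that deep RL scales up with neural networks: Q-learning on a toy five-state chain. In deep RL, the table Q[s, a] is replaced by a trained network; everything here is didactic rather than drawn from the talk.

```python
# Tabular Q-learning on a 5-state chain: move right to reach a reward.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2              # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.3       # learning rate, discount, exploration

for episode in range(500):
    s = 0
    for step in range(100):             # cap episode length
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        done = s_next == n_states - 1
        target = r if done else r + gamma * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next
        if done:
            break

print(Q.argmax(axis=1))  # policy prefers 'right' in every non-terminal state
```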
Spanning the arc between optimality theories and data
Ideas about optimization are at the core of how we approach biological complexity. Quantitative predictions about biological systems have been successfully derived from first principles in the context of efficient coding, metabolic and transport networks, evolution, reinforcement learning, and decision making, by postulating that a system has evolved to optimize some utility function under biophysical constraints. Yet as normative theories become increasingly high-dimensional and optimal solutions stop being unique, it gets progressively harder to judge whether theoretical predictions are consistent with, or "close to", data. I will illustrate these issues using efficient coding applied to simple neuronal models as well as to a complex and realistic biochemical reaction network. As a solution, we developed a statistical framework that smoothly interpolates between ab initio optimality predictions and Bayesian parameter inference from data, while also permitting statistically rigorous tests of optimality hypotheses.
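A one-dimensional cartoon of such an interpolation is sketched below (the utility, likelihood, and parameter grid are invented stand-ins, not the authors' framework): a grid posterior combines the data likelihood with an optimality prior exp(beta * U), so beta = 0 recovers pure Bayesian inference and large beta recovers the ab initio optimality prediction.

```python
# Interpolating between data-driven inference and optimality predictions.
import numpy as np

theta = np.linspace(0, 1, 501)                  # e.g. a tuning-curve parameter
utility = -(theta - 0.8)**2                     # optimality theory favors 0.8
data_loglik = -0.5 * ((theta - 0.55) / 0.1)**2  # data favor 0.55

for beta in (0.0, 10.0, 200.0):
    logp = data_loglik + beta * utility
    p = np.exp(logp - logp.max())
    p /= p.sum()
    print(f"beta={beta:6.1f}  posterior mean = {np.sum(theta * p):.3f}")
```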
Sleep, semantic memory, and creative problem solving
Creative thought relies on the reorganisation of existing knowledge. Sleep is known to be important for creative thinking, but there is a debate about which sleep stage is most relevant, and why. I will address this issue by proposing that Rapid Eye Movement (REM) sleep and non-REM sleep facilitate creativity in different ways. Memory replay mechanisms in non-REM sleep can abstract rules from corpora of learned information, while replay in REM may promote novel associations. I propose that the iterative interleaving of REM and non-REM sleep across a night boosts the formation of complex knowledge frameworks and allows these frameworks to be restructured, thus facilitating creative thought. My talk will discuss experiments exploring these hypotheses and the mechanisms underlying these processes.