Topic spotlight
Topic · World Wide

MENA

Discover seminars, jobs, and research tagged with MENA across World Wide.
61 curated items · 60 Seminars · 1 ePoster
Updated 9 months ago
61 results
Seminar · Neuroscience

What it’s like is all there is: The value of Consciousness

Axel Cleeremans
Université Libre de Bruxelles
Mar 6, 2025

Over the past thirty years or so, cognitive neuroscience has made spectacular progress understanding the biological mechanisms of consciousness. Consciousness science, as this field is now sometimes called, was not only nonexistent thirty years ago, but its very name seemed like an oxymoron: how can there be a science of consciousness? And yet, despite this scepticism, we are now equipped with a rich set of sophisticated behavioural paradigms, with an impressive array of techniques making it possible to see the brain in action, and with an ever-growing collection of theories and speculations about the putative biological mechanisms through which information processing becomes conscious. This is all well and good, even promising, but we also seem to have thrown the baby out with the bathwater, or at least to have forgotten it in the crib: consciousness is not just mechanisms, it’s what it feels like. In other words, while we have thousands of informative studies of access-consciousness, we have little in the way of work on phenomenal consciousness. But that — what it feels like — is truly what “consciousness” is about. Understanding why it feels like something to be me and nothing (panpsychists notwithstanding) for a stone to be a stone is what the field has always been after. However, while it is relatively easy to study access-consciousness through the contrastive approach applied to reports, it is much less clear how to study phenomenology, its structure and its function. Here, I first overview work on what consciousness does (the “how”). Next, I ask what difference feeling things makes and what function phenomenology might play. I argue that subjective experience has intrinsic value and plays a functional role in everything that we do.

Seminar · Neuroscience

Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors

Prof. Dr. Ziad M. Hafed
Werner Reichardt Center for Integrative Neuroscience and Hertie Institute for Clinical Brain Research, University of Tübingen
Feb 11, 2025

The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.

Seminar · Neuroscience

Analyzing Network-Level Brain Processing and Plasticity Using Molecular Neuroimaging

Alan Jasanoff
Massachusetts Institute of Technology
Jan 27, 2025

Behavior and cognition depend on the integrated action of neural structures and populations distributed throughout the brain. We recently developed a set of molecular imaging tools that enable multiregional processing and plasticity in neural networks to be studied at a brain-wide scale in rodents and nonhuman primates. Here we will describe how a novel genetically encoded activity reporter enables information flow in virally labeled neural circuitry to be monitored by fMRI. Using the reporter to perform functional imaging of synaptically defined neural populations in the rat somatosensory system, we show how activity is transformed within brain regions to yield characteristics specific to distinct output projections. We also show how this approach enables regional activity to be modeled in terms of inputs, in a paradigm that we are extending to address circuit-level origins of functional specialization in marmoset brains. In the second part of the talk, we will discuss how another genetic tool for MRI enables systematic studies of the relationship between anatomical and functional connectivity in the mouse brain. We show that variations in physical and functional connectivity can be dissociated both across individual subjects and over experience. We also use the tool to examine brain-wide relationships between plasticity and activity during an opioid treatment. This work demonstrates the possibility of studying diverse brain-wide processing phenomena using molecular neuroimaging.

Seminar · Neuroscience · Recording

Rethinking Attention: Dynamic Prioritization

Sarah Shomstein
George Washington University
Jan 6, 2025

Decades of research on understanding the mechanisms of attentional selection have focused on identifying the units (representations) on which attention operates in order to guide prioritized sensory processing. These attentional units fit neatly with our understanding of how attention is allocated in a top-down, bottom-up, or historical fashion. In this talk, I will focus on attentional phenomena that are not easily accommodated within current theories of attentional selection – the “attentional platypuses,” a term alluding to the observation that within biological taxonomies the platypus does not fit into either the mammal or the bird category. Similarly, attentional phenomena that do not fit neatly within current attentional models suggest that current models need to be revised. I list a few instances of the “attentional platypuses” and then offer a new approach, the Dynamically Weighted Prioritization, stipulating that multiple factors impinge onto the attentional priority map, each with a corresponding weight. The interaction between factors and their corresponding weights determines the current state of the priority map, which subsequently constrains/guides attention allocation. I propose that this new approach should be considered as a supplement to existing models of attention, especially those that emphasize categorical organizations.
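
For concreteness, here is a minimal sketch of the kind of weighted combination a priority-map account like this implies. The factor names, the weights, and the winner-take-all readout are illustrative assumptions, not details from the talk.

```python
import numpy as np

# Illustrative sketch of a dynamically weighted priority map: several factor
# maps (names and weights are hypothetical) are combined with weights that
# can change from moment to moment; the peak guides attention allocation.
def priority_map(factors: dict, weights: dict) -> np.ndarray:
    """Weighted sum of factor maps."""
    return sum(weights[name] * fmap for name, fmap in factors.items())

rng = np.random.default_rng(0)
shape = (32, 32)
factors = {
    "bottom_up_salience": rng.random(shape),
    "top_down_goal":      rng.random(shape),
    "selection_history":  rng.random(shape),
}
weights = {"bottom_up_salience": 0.5, "top_down_goal": 0.3, "selection_history": 0.2}

pmap = priority_map(factors, weights)
attended_location = np.unravel_index(np.argmax(pmap), shape)
print(attended_location)
```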

Seminar · Neuroscience · Recording

On finding what you’re (not) looking for: prospects and challenges for AI-driven discovery

André Curtis Trudel
University of Cincinnati
Oct 9, 2024

Recent high-profile scientific achievements by machine learning (ML) and especially deep learning (DL) systems have reinvigorated interest in ML for automated scientific discovery (e.g., Wang et al. 2023). Much of this work is motivated by the thought that DL methods might facilitate the discovery of phenomena, hypotheses, or even models or theories more efficiently than traditional, theory-driven approaches to discovery. This talk considers some of the more specific obstacles to automated, DL-driven discovery in frontier science, focusing on gravitational-wave astrophysics (GWA) as a representative case study. In the first part of the talk, we argue that, despite these efforts, prospects for DL-driven discovery in GWA remain uncertain. In the second part, we advocate a shift in focus towards the ways DL can be used to augment or enhance existing discovery methods, and the epistemic virtues and vices associated with these uses. We argue that the primary epistemic virtue of many such uses is to decrease opportunity costs associated with investigating puzzling or anomalous signals, and that the right framework for evaluating these uses comes from philosophical work on pursuitworthiness.

Seminar · Neuroscience

Maintaining Plasticity in Neural Networks

Clare Lyle
DeepMind
Mar 12, 2024

Nonstationarity presents a variety of challenges for machine learning systems. One surprising pathology which can arise in nonstationary learning problems is plasticity loss, whereby making progress on new learning objectives becomes more difficult as training progresses. Networks which are unable to adapt in response to changes in their environment experience plateaus or even declines in performance in highly non-stationary domains such as reinforcement learning, where the learner must quickly adapt to new information even after hundreds of millions of optimization steps. The loss of plasticity manifests in a cluster of related empirical phenomena which have been identified by a number of recent works, including the primacy bias, implicit under-parameterization, rank collapse, and capacity loss. While this phenomenon is widely observed, it is still not fully understood. This talk will present exciting recent results which shed light on the mechanisms driving the loss of plasticity in a variety of learning problems and survey methods to maintain network plasticity in non-stationary tasks, with a particular focus on deep reinforcement learning.
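
As a concrete handle on the rank-collapse and capacity-loss symptoms mentioned above, here is a hedged sketch of one proxy often used in this literature, the effective rank of a feature matrix; the toy data and the specific definition used here are illustrative assumptions, not the speaker's code.

```python
import numpy as np

def effective_rank(features: np.ndarray, eps: float = 1e-12) -> float:
    """Effective (Shannon) rank of a [samples x units] feature matrix.

    A drop in this quantity over training is one proxy for rank collapse /
    capacity loss; generic illustration only.
    """
    s = np.linalg.svd(features, compute_uv=False)
    p = s / (s.sum() + eps)
    entropy = -(p * np.log(p + eps)).sum()
    return float(np.exp(entropy))

# Toy comparison: features spread over many directions vs. nearly collapsed.
rng = np.random.default_rng(1)
healthy = rng.normal(size=(256, 64))
collapsed = rng.normal(size=(256, 2)) @ rng.normal(size=(2, 64))
print(effective_rank(healthy), effective_rank(collapsed))
```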

Seminar · Psychology

Conversations with Caves? Understanding the role of visual psychological phenomena in Upper Palaeolithic cave art making

Izzy Wisher
Aarhus University
Feb 25, 2024

How central were psychological features deriving from our visual systems to the early evolution of human visual culture? Art making emerged deep in our evolutionary history, with the earliest art appearing over 100,000 years ago as geometric patterns etched on fragments of ochre and shell, and figurative representations of prey animals flourishing in the Upper Palaeolithic (c. 40,000 – 15,000 years ago). The latter reflects a complex visual process: the ability to represent something that exists in the real world as a flat, two-dimensional image. In this presentation, I argue that pareidolia – the psychological phenomenon of seeing meaningful forms in random patterns, such as perceiving faces in clouds – was a fundamental process that facilitated the emergence of figurative representation. The influence of pareidolia has often been anecdotally observed in Upper Palaeolithic art, particularly cave art, where the topographic features of the cave wall were incorporated into animal depictions. Using novel virtual reality (VR) light simulations, I tested three hypotheses relating to pareidolia in Upper Palaeolithic cave art in the caves of Las Monedas and La Pasiega (Cantabria, Spain). To evaluate this further, I also developed an interdisciplinary VR eye-tracking experiment, where participants were immersed in virtual caves based on the cave of El Castillo (Cantabria, Spain). Together, these case studies suggest that pareidolia was an intrinsic part of artist-cave interactions (‘conversations’) that influenced the form and placement of figurative depictions in the cave. This has broader implications for conceiving of the role of visual psychological phenomena in the emergence and development of figurative art in the Palaeolithic.

Seminar · Neuroscience

Doubting the neurofeedback double-blind: do participants have residual awareness of experimental purposes in neurofeedback studies?

Timo Kvamme
Aarhus University
Aug 7, 2023

Neurofeedback provides a feedback display that is linked to ongoing brain activity and thus allows self-regulation of neural activity in specific brain regions associated with certain cognitive functions; it is considered a promising tool for clinical interventions. Recent reviews of neurofeedback have stressed the importance of applying the “double-blind” experimental design, in which, critically, the patient is unaware of the neurofeedback treatment condition. An important question then becomes: is a double-blind even possible? Or are subjects aware of the purposes of the neurofeedback experiment? This question is related to the issue of how we assess awareness, or the absence of awareness, of certain information in human subjects. Fortunately, methods have been developed which employ neurofeedback implicitly, where the subject is claimed to have no awareness of experimental purposes when performing the neurofeedback. Implicit neurofeedback is intriguing and controversial because it runs counter to the first neurofeedback study, which showed a link between awareness of being in a certain brain state and control of the neurofeedback-derived brain activity. Claiming that humans are unaware of a specific type of mental content is a notoriously difficult endeavor. For instance, claims of wholly unconscious phenomena, such as dreams or subliminal perception, have been overturned by more sensitive measures showing that degrees of awareness can be detected. In this talk, I will critically examine the claim that we can know for certain that a neurofeedback experiment was performed in an unconscious manner. I will present evidence that in certain neurofeedback experiments, such as manipulations of attention, participants display residual degrees of awareness of the experimental contingencies intended to alter their cognition.

Seminar · Neuroscience

Quasicriticality and the quest for a framework of neuronal dynamics

Leandro Jonathan Fosque
Beggs lab, IU Bloomington
May 2, 2023

Critical phenomena abound in nature, from forest fires and earthquakes to avalanches in sand and neuronal activity. Since the 2003 publication by Beggs & Plenz on neuronal avalanches, a growing body of work suggests that the brain homeostatically regulates itself to operate near a critical point where information processing is optimal. At this critical point, incoming activity is neither amplified (supercritical) nor damped (subcritical), but approximately preserved as it passes through neural networks. Departures from the critical point have been associated with conditions of poor neurological health like epilepsy, Alzheimer's disease, and depression. One complication that arises from this picture is that the critical point assumes no external input. But biological neural networks are constantly bombarded by external input. How, then, is the brain able to homeostatically adapt near the critical point? We’ll see that the theory of quasicriticality, an organizing principle for brain dynamics, can account for this paradoxical situation. As external stimuli drive the cortex, quasicriticality predicts a departure from criticality while maintaining optimal properties for information transmission. We’ll see that simulations and experimental data confirm these predictions, and I will describe new ones that could be tested soon. More importantly, we will see how this organizing principle could help in the search for biomarkers that could soon be tested in clinical studies.
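
To make the idea of activity being "neither amplified nor damped" concrete, here is a hedged sketch of a naive branching-ratio estimate from binned population activity. The toy process and the estimator's simplifications (no subsampling correction) are illustrative assumptions, not the analysis used in the work described.

```python
import numpy as np

def branching_ratio(activity: np.ndarray) -> float:
    """Naive branching-ratio estimate from binned activity.

    sigma ~ 1 is the critical regime (activity approximately preserved);
    sigma > 1 supercritical, sigma < 1 subcritical. This simple
    ancestor/descendant ratio ignores subsampling corrections.
    """
    a = np.asarray(activity, dtype=float)
    prev, nxt = a[:-1], a[1:]
    mask = prev > 0                       # bins with at least one "ancestor"
    return float(np.mean(nxt[mask] / prev[mask]))

# Toy example: activity whose expected offspring per event is ~1.
rng = np.random.default_rng(2)
x = [10.0]
for _ in range(999):
    x.append(max(rng.poisson(1.0 * x[-1]), 1))
print(branching_ratio(np.array(x)))       # should be close to 1
```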

Seminar · Artificial Intelligence · Recording

Computational models and experimental methods for the human cornea

Anna Pandolfi
Politecnico di Milano
May 1, 2023

The eye is a multi-component biological system, where mechanics, optics, transport phenomena and chemical reactions are strictly interlaced, characterized by the typical bio-variability in sizes and material properties. The eye’s response to external action is patient-specific and can be predicted only by a customized approach that accounts for the multiple physics and for the intrinsic microstructure of the tissues, developed with the aid of state-of-the-art computational biomechanics. Our activity in recent years has been devoted to the development of a comprehensive model of the cornea that aims at being entirely patient-specific. While the geometrical aspects are fully under control, given the sophisticated diagnostic machinery able to provide fully three-dimensional images of the eye, the major difficulties are related to the characterization of the tissues, which requires the setup of in-vivo tests to complement the well-documented results of in-vitro tests. The interpretation of in-vivo tests is very complex, since the entire structure of the eye is involved and the characterization of the single tissue is not trivial. The availability of micromechanical models constructed from detailed images of the eye represents an important support for the characterization of the corneal tissues, especially in the case of pathologic conditions. In this presentation I will provide an overview of the computational models and experimental approaches developed in our group for the human cornea.

Seminar · Cognition

Beyond Volition

Patrick Haggard
University College London
Apr 26, 2023

Voluntary actions are actions that agents choose to make. Volition is the set of cognitive processes that implement such choice and initiation. These processes are often held essential to modern societies, because they form the cognitive underpinning for concepts of individual autonomy and individual responsibility. Nevertheless, psychology and neuroscience have struggled to define volition, and have also struggled to study it scientifically. Laboratory experiments on volition, such as those of Libet, have been criticised, often rather naively, as focussing exclusively on meaningless actions, and ignoring the factors that make voluntary action important in the wider world. In this talk, I will first review these criticisms, and then look at extending scientific approaches to volition in three directions that may enrich scientific understanding of volition. First, volition becomes particularly important when the range of possible actions is large and unconstrained - yet most experimental paradigms involve minimal response spaces. We have developed a novel paradigm for eliciting de novo actions through verbal fluency, and used this to estimate the elusive conscious experience of generativity. Second, volition can be viewed as a mechanism for flexibility, by promoting adaptation of behavioural biases. This view departs from the tradition of defining volition by contrasting internally-generated actions with externally-triggered actions, and instead links volition to model-based reinforcement learning. By using the context of competitive games to re-operationalise the classic Libet experiment, we identified a form of adaptive autonomy that allows agents to reduce biases in their action choices. Interestingly, this mechanism seems not to require explicit understanding and strategic use of action selection rules, in contrast to classical ideas about the relation between volition and conscious, rational thought. Third, I will consider volition teleologically, as a mechanism for achieving counterfactual goals through complex problem-solving. This perspective gives volition a key role in mediating between understanding and planning on the one hand, and instrumental action on the other. Taken together, these three cognitive phenomena of generativity, flexibility, and teleology may partly explain why volition is such an important cognitive function for organisation of human behaviour and human flourishing. I will end by discussing how this enriched view of volition can relate to individual autonomy and responsibility.

Seminar · Neuroscience · Recording

Autopoiesis and Enaction in the Game of Life

Randall Beer
Indiana University
Mar 16, 2023

Enaction plays a central role in the broader fabric of so-called 4E (embodied, embedded, extended, enactive) cognition. Although the origin of the enactive approach is widely dated to the 1991 publication of the book "The Embodied Mind" by Varela, Thompson and Rosch, many of the central ideas trace to much earlier work. Over 40 years ago, the Chilean biologists Humberto Maturana and Francisco Varela put forward the notion of autopoiesis as a way to understand living systems and the phenomena that they generate, including cognition. Varela and others subsequently extended this framework to an enactive approach that places biological autonomy at the foundation of situated and embodied behavior and cognition. I will describe an attempt to place Maturana and Varela's original ideas on a firmer foundation by studying them within the context of a toy model universe, John Conway's Game of Life (GoL) cellular automata. This work has both pedagogical and theoretical goals. Simple concrete models provide an excellent vehicle for introducing some of the core concepts of autopoiesis and enaction and explaining how these concepts fit together into a broader whole. In addition, a careful analysis of such toy models can hone our intuitions about these concepts, probe their strengths and weaknesses, and move the entire enterprise in the direction of a more mathematically rigorous theory. In particular, I will identify the primitive processes that can occur in GoL, show how these can be linked together into mutually-supporting networks that underlie persistent bounded entities, map the responses of such entities to environmental perturbations, and investigate the paths of mutual perturbation that these entities and their environments can undergo.
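
Since the talk grounds autopoiesis and enaction in Conway's Game of Life, a minimal implementation of the standard update rule may help readers explore the substrate themselves; the glider below is simply the most familiar of the persistent bounded entities such an analysis would track.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One update of Conway's Game of Life on a toroidal grid.

    A dead cell with exactly 3 live neighbours becomes alive; a live cell
    survives with 2 or 3 live neighbours; every other cell is dead next step.
    """
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    birth = (grid == 0) & (neighbours == 3)
    survive = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
    return (birth | survive).astype(int)

# A glider: a persistent, bounded entity that reappears shifted after 4 steps.
grid = np.zeros((16, 16), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
print(grid.sum())   # still 5 live cells
```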

Seminar · Neuroscience

Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong

Tim Gentner
University of California, San Diego, USA
Nov 8, 2022

Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space.  Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and are in a form, spike time coincidences, amenable to biophysical mechanisms of neural information propagation.

Seminar · Neuroscience · Recording

No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit

Rylan Schaeffer
Fiete lab, MIT
Nov 1, 2022

Research in Neuroscience, as in many scientific disciplines, is undergoing a renaissance based on deep learning. Unique to Neuroscience, deep learning models can be used not only as a tool but interpreted as models of the brain. The central claims of recent deep learning-based models of brain circuits are that they shed light on fundamental functions being optimized or make novel predictions about neural phenomena. We show, through the case-study of grid cells in the entorhinal-hippocampal circuit, that one may get neither. We rigorously examine the claims of deep learning models of grid cells using large-scale hyperparameter sweeps and theory-driven experimentation, and demonstrate that the results of such models are more strongly driven by particular, non-fundamental, and post-hoc implementation choices than fundamental truths about neural circuits or the loss function(s) they might optimize. We discuss why these models cannot be expected to produce accurate models of the brain without the addition of substantial amounts of inductive bias, an informal No Free Lunch result for Neuroscience.

Seminar · Neuroscience · Recording

A multi-level account of hippocampal function in concept learning: from behavior to neurons

Rob Mok
University of Cambridge
Nov 1, 2022

A complete neuroscience requires multi-level theories that address phenomena ranging from higher-level cognitive behaviors to activities within a cell. Unfortunately, we don't have cognitive models of behavior whose components can be decomposed into the neural dynamics that give rise to behavior, leaving an explanatory gap. Here, we decompose SUSTAIN, a clustering model of concept learning, into neuron-like units (SUSTAIN-d; decomposed). Instead of abstract constructs (clusters), SUSTAIN-d has a pool of neuron-like units. With millions of units, a key challenge is how to bridge from abstract constructs such as clusters to neurons, whilst retaining high-level behavior. How does the brain coordinate neural activity during learning? Inspired by algorithms that capture flocking behavior in birds, we introduce a neural flocking learning rule to coordinate units that collectively form higher-level mental constructs ("virtual clusters"), neural representations (concept, place and grid cell-like assemblies), and parallels recurrent hippocampal activity. The decomposed model shows how brain-scale neural populations coordinate to form assemblies encoding concept and spatial representations, and why many neurons are required for robust performance. Our account provides a multi-level explanation for how cognition and symbol-like representations are supported by coordinated neural assemblies formed through learning.

Seminar · Neuroscience · Recording

Associative memory of structured knowledge

Julia Steinberg
Princeton University
Oct 25, 2022

A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable for computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in a semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture (VSA) scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows for efficient storage of these knowledge structures, such that the memorized structures as well as their individual building blocks (e.g., events and attributes) can be subsequently retrieved from partial retrieval cues. We show that long-term memory of structured knowledge relies on a new principle of computation beyond the memory basins. Finally, we show that our model can be extended to store sequences of memories as single attractors.
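
As a toy illustration of the two ingredients named above, binding relations into a distributed pattern with a VSA scheme and storing the binarized pattern as a fixed point with an associative (Hopfield-style) rule, here is a hedged sketch; the Hadamard binding, the outer-product learning rule, and the example attributes and events are assumptions for illustration and may differ from the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2000                                   # vector dimension / neurons

def rand_vec():
    return rng.choice([-1, 1], size=N)     # random bipolar codevector

# Encode a toy structure as a sum of bound (attribute, event) pairs.
# Hadamard binding is one common VSA choice; the paper's may differ.
attributes = {k: rand_vec() for k in ["first", "second", "location"]}
events     = {k: rand_vec() for k in ["breakfast", "meeting", "kitchen"]}
structure = np.sign(attributes["first"] * events["breakfast"]
                    + attributes["second"] * events["meeting"]
                    + attributes["location"] * events["kitchen"])

# Store the binarized pattern as a fixed point with a Hopfield-style rule.
W = np.outer(structure, structure) / N
np.fill_diagonal(W, 0.0)

# Retrieve from a partial cue (half of the units zeroed out).
cue = structure.copy()
cue[: N // 2] = 0
recalled = np.sign(W @ cue)
print(np.mean(recalled == structure))      # overlap with the stored pattern

# Unbind to recover an individual building block from the recalled structure.
decoded = recalled * attributes["first"]   # noisy version of events["breakfast"]
best = max(events, key=lambda k: decoded @ events[k])
print(best)
```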

Seminar · Neuroscience · Recording

Learning predictive maps in the brain for spatial navigation

William de Cothi
Barry lab, UCL
Oct 11, 2022

The predictive map hypothesis provides a promising framework to model representations in the hippocampal formation. I will introduce a tractable implementation of a predictive map called the successor representation (SR), before presenting data showing that rats and humans display SR-like navigational choices on a novel open-field maze. Next, I will show how such a predictive map could be implemented using spatial representations found in the hippocampal formation, before finally presenting how such learning might be well approximated by phenomena that exist in the spatial memory system - namely spike-timing dependent plasticity and theta phase precession.
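
For readers unfamiliar with the successor representation (SR) mentioned here, a minimal sketch may help: the SR matrix gives the expected discounted future occupancy of each state, it has a closed form for a fixed policy, and it can also be learned incrementally from observed transitions with a temporal-difference update. The small track environment and parameter values below are illustrative only.

```python
import numpy as np

def sr_from_transitions(P: np.ndarray, gamma: float = 0.95) -> np.ndarray:
    """Closed-form SR for a fixed policy: M = (I - gamma * P)^-1,
    where M[s, s'] is the expected discounted future occupancy of s' from s."""
    n = P.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * P)

def sr_td_update(M, s, s_next, gamma=0.95, lr=0.1):
    """One temporal-difference update of the SR from an observed transition."""
    onehot = np.zeros(M.shape[0]); onehot[s] = 1.0
    td_error = onehot + gamma * M[s_next] - M[s]
    M[s] += lr * td_error
    return M

# Toy 1-D track with random left/right moves.
n = 5
P = np.zeros((n, n))
for s in range(n):
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, n - 1)] += 0.5
M = sr_from_transitions(P)
print(np.round(M[0], 2))   # predicted occupancy profile from state 0
```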

Seminar · Neuroscience · Recording

A parsimonious description of global functional brain organization in three spatiotemporal patterns

Taylor Bolt
Emory University
Sep 21, 2022

Resting-state functional magnetic resonance imaging (fMRI) has yielded seemingly disparate insights into large-scale organization of the human brain. The brain’s large-scale organization can be divided into two broad categories: zero-lag representations of functional connectivity structure and time-lag representations of traveling wave or propagation structure. In this study, we sought to unify observed phenomena across these two categories in the form of three low-frequency spatiotemporal patterns composed of a mixture of standing and traveling wave dynamics. We showed that a range of empirical phenomena, including functional connectivity gradients, the task-positive/task-negative anti-correlation pattern, the global signal, time-lag propagation patterns, the quasiperiodic pattern and the functional connectome network structure, are manifestations of these three spatiotemporal patterns. These patterns account for much of the global spatial structure that underlies functional connectivity analyses and unify phenomena in resting-state fMRI previously thought distinct.

Seminar · Neuroscience · Recording

Learning static and dynamic mappings with local self-supervised plasticity

Pantelis Vafeidis
California Institute of Technology
Sep 6, 2022

Animals exhibit remarkable learning capabilities with little direct supervision. Likewise, self-supervised learning is an emergent paradigm in artificial intelligence, closing the performance gap to supervised learning. In the context of biology, self-supervised learning corresponds to a setting where one sense or specific stimulus may serve as a supervisory signal for another. After learning, the latter can be used to predict the former. On the implementation level, it has been demonstrated that such predictive learning can occur at the single neuron level, in compartmentalized neurons that separate and associate information from different streams. We demonstrate the power of such self-supervised learning over unsupervised (Hebb-like) learning rules, which depend heavily on stimulus statistics, in two examples. First, in the context of animal navigation, predictive learning can associate internal self-motion information, always available to the animal, with external visual landmark information, leading to accurate path integration in the dark. We focus on the well-characterized fly head direction system and show that our setting learns a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and where the network remaps to integrate with different gains. Second, we show that incorporating global gating by reward prediction errors allows the same setting to learn conditioning at the neuronal level with mixed selectivity. At its core, conditioning entails associating a neural activity pattern induced by an unconditioned stimulus (US) with the pattern arising in response to a conditioned stimulus (CS). Solving the generic problem of pattern-to-pattern associations naturally leads to emergent cognitive phenomena like blocking, overshadowing, saliency effects, extinction, interstimulus interval effects, etc. Surprisingly, we find that the same network offers a reductionist mechanism for causal inference by resolving the post hoc, ergo propter hoc fallacy.

Seminar · Physics of Life · Recording

Odd dynamics of living chiral crystals

Alexander Mietke
MIT
Aug 14, 2022

The emergent dynamics exhibited by collections of living organisms often shows signatures of symmetries that are broken at the single-organism level. At the same time, organism development itself encompasses a well-coordinated sequence of symmetry breaking events that successively transform a single, nearly isotropic cell into an animal with well-defined body axis and various anatomical asymmetries. Combining these key aspects of collective phenomena and embryonic development, we describe here the spontaneous formation of hydrodynamically stabilized active crystals made of hundreds of starfish embryos that gather during early development near fluid surfaces. We describe a minimal hydrodynamic theory that is fully parameterized by experimental measurements of microscopic interactions among embryos. Using this theory, we can quantitatively describe the stability, formation and rotation of crystals and rationalize the emergence of mechanical properties that carry signatures of an odd elastic material. Our work thereby quantitatively connects developmental symmetry breaking events on the single-embryo level with remarkable macroscopic material properties of a novel living chiral crystal system.

Seminar · Neuroscience · Recording

A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens

Lenore and Manuel Blum
Carnegie Mellon University
Aug 4, 2022

We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119

Seminar · Neuroscience · Recording

A model of colour appearance based on efficient coding of natural images

Jolyon Troscianko
University of Exeter
Jul 17, 2022

An object’s colour, brightness and pattern are all influenced by its surroundings, and a number of visual phenomena and “illusions” have been discovered that highlight these often dramatic effects. Explanations for these phenomena range from low-level neural mechanisms to high-level processes that incorporate contextual information or prior knowledge. Importantly, few of these phenomena can currently be accounted for when measuring an object’s perceived colour. Here we ask to what extent colour appearance is predicted by a model based on the principle of coding efficiency. The model assumes that the image is encoded by noisy spatio-chromatic filters at one octave separations, which are either circularly symmetrical or oriented. Each spatial band’s lower threshold is set by the contrast sensitivity function, and the dynamic range of the band is a fixed multiple of this threshold, above which the response saturates. Filter outputs are then reweighted to give equal power in each channel for natural images. We demonstrate that the model fits human behavioural performance in psychophysics experiments, and also primate retinal ganglion responses. Next we systematically test the model’s ability to qualitatively predict over 35 brightness and colour phenomena, with almost complete success. This implies that contrary to high-level processing explanations, much of colour appearance is potentially attributable to simple mechanisms evolved for efficient coding of natural images, and is a basis for modelling the vision of humans and other animals.
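
To make the model components listed here tangible, below is a heavily simplified, one-dimensional sketch of an octave-spaced band-pass front end with a per-band threshold and a saturating dynamic range; the placeholder thresholds, the per-input (rather than natural-image) reweighting, and all parameter values are assumptions for illustration, not the published model.

```python
import numpy as np

def octave_bandpass_responses(signal, n_bands=5, csf_thresholds=None,
                              dynamic_range=10.0):
    """Toy 1-D stand-in for an efficient-coding front end: octave-spaced
    band-pass channels, a per-band threshold (placeholder for the contrast
    sensitivity function), and a saturating response a fixed multiple above
    that threshold. Illustrative only."""
    if csf_thresholds is None:
        csf_thresholds = np.full(n_bands, 0.01)   # hypothetical sensitivities
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size)          # cycles per sample, 0..0.5
    responses = []
    for b in range(n_bands):
        hi = 0.5 / (2.0 ** b)                     # one-octave band edges
        lo = hi / 2.0
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spectrum * mask, n=signal.size)
        r = band / csf_thresholds[b]              # scale by the band threshold
        r = np.clip(r, -dynamic_range, dynamic_range)   # response saturates
        responses.append(r)
    # Reweight channels to equal power (the model does this for natural
    # images as a class; here it is done per input for simplicity).
    powers = np.array([np.mean(r ** 2) + 1e-12 for r in responses])
    return [r / np.sqrt(p) for r, p in zip(responses, powers)]

rng = np.random.default_rng(7)
responses = octave_bandpass_responses(rng.standard_normal(512))
print([round(float(np.mean(r ** 2)), 3) for r in responses])
```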

Seminar · Neuroscience · Recording

How communication networks promote cross-cultural similarities: The case of category formation

Douglas Guilbeault
University of California, Berkeley
Jun 1, 2022

Individuals vary widely in how they categorize novel phenomena. This individual variation has led canonical theories in cognitive and social science to suggest that communication in large social networks leads populations to construct divergent category systems. Yet, anthropological data indicates that large, independent societies consistently arrive at similar categories across a range of topics. How is it possible for diverse populations, consisting of individuals with significant variation in how they view the world, to independently construct similar categories? Through a series of online experiments, I show how large communication networks within cultures can promote the formation of similar categories across cultures. For this investigation, I designed an online “Grouping Game” to observe how people construct categories in both small and large populations when tasked with grouping together the same novel and ambiguous images. I replicated this design for English-speaking subjects in the U.S. and Mandarin-speaking subjects in China. In both cultures, solitary individuals and small social groups produced highly divergent category systems. Yet, large social groups separately and consistently arrived at highly similar categories both within and across cultures. These findings are accurately predicted by a simple mathematical model of critical mass dynamics. Altogether, I show how large communication networks can filter lexical diversity among individuals to produce replicable society-level patterns, yielding unexpected implications for cultural evolution. In particular, I discuss how participants in both cultures readily harnessed analogies when categorizing novel stimuli, and I examine the role of communication networks in promoting cross-cultural similarities in analogy-making as the key engine of category formation.

Seminar · Neuroscience · Recording

The Standard Model of the Retina

Markus Meister
Caltech
May 24, 2022

The science of the retina has reached an interesting stage of completion. There exists now a consensus standard model of this neural system - at least in the minds of many researchers - that serves as a baseline against which to evaluate new claims. The standard model links phenomena from molecular biophysics, cell biology, neuroanatomy, synaptic physiology, circuit function, and visual psychophysics. It is further supported by a normative theory explaining the purpose of processing visual information in this way. Most new reports of retinal phenomena fit squarely within the standard model, and major revisions seem increasingly unlikely. Given that our understanding of other brain circuits with comparable complexity is much more rudimentary, it is worth considering an example of what success looks like. In this talk I will summarize what I think are the ingredients that led to this mature understanding of the retina. Equally important, a number of practices and concepts that are currently en vogue in neuroscience were not needed or were indeed counterproductive. I look forward to debating how these lessons might extend to other areas of brain research.

Seminar · Neuroscience · Recording

Spatial uncertainty provides a unifying account of navigation behavior and grid field deformations

Yul Kang
Lengyel lab, Cambridge University
Apr 5, 2022

To localize ourselves in an environment for spatial navigation, we rely on vision and self-motion inputs, which only provide noisy and partial information. It is unknown how the resulting uncertainty affects navigation behavior and neural representations. Here we show that spatial uncertainty underlies key effects of environmental geometry on navigation behavior and grid field deformations. We develop an ideal observer model, which continually updates probabilistic beliefs about its allocentric location by optimally combining noisy egocentric visual and self-motion inputs via Bayesian filtering. This model directly yields predictions for navigation behavior and also predicts neural responses under population coding of location uncertainty. We simulate this model numerically under manipulations of a major source of uncertainty, environmental geometry, and support our simulations by analytic derivations for its most salient qualitative features. We show that our model correctly predicts a wide range of experimentally observed effects of the environmental geometry and its change on homing response distribution and grid field deformation. Thus, our model provides a unifying, normative account for the dependence of homing behavior and grid fields on environmental geometry, and identifies the unavoidable uncertainty in navigation as a key factor underlying these diverse phenomena.
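
The Bayesian filtering step described here can be illustrated with a one-dimensional Kalman filter that fuses a noisy self-motion estimate with a noisy visual observation of position at each step; the actual model is richer (allocentric 2-D location, population coding of uncertainty), so this is only a toy sketch with made-up noise levels.

```python
import numpy as np

def kalman_localize(motions, observations, sigma_motion=0.2, sigma_vision=0.5):
    """1-D toy of the Bayesian filtering step: predict the new location from a
    noisy self-motion input, then correct it with a noisy visual observation,
    tracking the posterior mean and uncertainty."""
    mu, var = 0.0, 1.0                          # initial belief about position
    trace = []
    for u, z in zip(motions, observations):
        # Predict: move by self-motion input u; uncertainty grows.
        mu, var = mu + u, var + sigma_motion ** 2
        # Update: weigh the visual observation z by its relative reliability.
        k = var / (var + sigma_vision ** 2)     # Kalman gain
        mu, var = mu + k * (z - mu), (1 - k) * var
        trace.append((mu, var))
    return trace

rng = np.random.default_rng(4)
true_pos, motions, observations = 0.0, [], []
for _ in range(20):
    true_pos += 1.0
    motions.append(1.0 + rng.normal(0, 0.2))            # noisy self-motion
    observations.append(true_pos + rng.normal(0, 0.5))  # noisy vision
print(kalman_localize(motions, observations)[-1])        # final (mean, variance)
```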

Seminar · Physics of Life · Recording

Intrinsic Rhythms in a Giant Single-Celled Organism and the Interplay with Time-Dependent Drive, Explored via Self-Organized Macroscopic Waves

Eldad Afik
California Institute of Technology
Mar 27, 2022

Living Systems often seem to follow, in addition to external constraints and interactions, an intrinsic predictive model of the world — a defining trait of Anticipatory Systems. Here we study rhythmic behaviour in Caulerpa, a marine green alga, which appears to predict the day/night light cycle. Caulerpa consists of differentiated organs resembling leaves, stems and roots. While an individual can exceed a meter in size, it is a single multinucleated giant cell. Active transport has been hypothesized to play a key role in organismal development. It has been an open question in the literature whether rhythmic transport phenomena in this organism are of autonomous circadian nature. Using Raspberry-Pi cameras, we track the morphogenesis of tens of samples concurrently over weeks, while tracing, at a resolution of tens of seconds, the variation in green coverage. The latter reveals waves propagating over centimeters within a few hours, and is attributed to chloroplast redistribution at the whole-organism scale. Our observations of algal segments regenerating under 12-hour light/dark cycles indicate that the initiation of the waves precedes the external light change. Using time-frequency analysis, we find that the temporal spectrum of these green pulses contains a circadian period. The latter persists over days even under constant illumination, indicative of its autonomous nature. We further explore the system under non-circadian periods, to reveal how the spectral content changes in response. Time-keeping and synchronization are recurring themes in biological research at various levels of description — from subcellular components to ecological systems. We present a seemingly primitive living system that exhibits apparent anticipatory behaviour. This research offers quantitative constraints for theoretical frameworks of such systems.

Seminar · Physics of Life

Retinal neurogenesis and lamination: What to become, where to become it and how to move from there!

Caren Norden
Instituto Gulbenkian de Ciência
Mar 24, 2022

The vertebrate retina is an important outpost of the central nervous system, responsible for the perception and transmission of visual information. It consists of five different types of neurons that reproducibly laminate into three layers, a process of crucial importance for the organ’s function. Unsurprisingly, impaired fate decisions as well as impaired neuronal migrations and lamination lead to impaired retinal function. However, how these processes are coordinated at the cellular and tissue level, and how variable or robust retinal formation is, remains underexplored. In my lab, we aim to shed light on these questions from different angles, studying, on the one hand, differentiation phenomena and their variability and, on the other hand, the downstream migration and lamination phenomena. We use zebrafish as our main model system due to its excellent possibilities for live imaging and quantitative developmental biology. More recently we also started to use human retinal organoids as a comparative system. We further employ cross-disciplinary approaches to address these issues, combining cell and developmental biology, biomechanics, theory, and computer science. Together, this allows us to integrate cell-level with tissue-wide phenomena and generate an appreciation of the reproducibility and variability of events.

Seminar · Neuroscience · Recording

Interpersonal synchrony of body/brain, Solo & Team Flow

Shinsuke Shimojo
California Institute of Technology
Jan 27, 2022

Flow is defined as an altered state of consciousness with excessive attention and an enormous sense of pleasure when engaged in a challenging task, first postulated by the late psychologist M. Csikszentmihalyi. The main focus of this talk will be “Team Flow,” but two earlier lines of study in our laboratory form its background. The first is inter-body and inter-brain coordination/synchrony between individuals. Considering various rhythmic echoing/synchronization phenomena in animal behavior, it could be regarded as the biological, sub-symbolic and implicit origin of social interactions. The second line of precursor research is on the state of Solo Flow in game playing. We employed attenuation of the AEP (Auditory Evoked Potential) to task-irrelevant sound probes as an objective neural indicator of such a Flow state, and found that: 1) a mutual link between the ACC and the TP is critical, and 2) overall, top-down influence is enhanced while bottom-up causality is attenuated. Having these as the background, I will present our latest study of Team Flow in game playing. We found that: 3) the neural correlates of Team Flow are distinctly different from those of Solo Flow and of a non-flow social state, 4) the left medial temporal cortex seems to form an integrative node for Team Flow, receiving input related to the Solo Flow state from the right PFC and input related to the social state from the right IFC, and 5) intra-brain (dis)similarity of brain activity predicts well the (dis)similarity of skills/cognition as well as the affinity for inter-brain coherence.

Seminar · Neuroscience

Nonlinear spatial integration in retinal bipolar cells shapes the encoding of artificial and natural stimuli

Helene Schreyer
Gollisch lab, University Medical Center Göttingen, Germany
Dec 8, 2021

Vision begins in the eye, and what the “retina tells the brain” is a major interest in visual neuroscience. To deduce what the retina encodes (“tells”), computational models are essential. The most important models in the retina currently aim to understand the responses of the retinal output neurons – the ganglion cells. Typically, these models make simplifying assumptions about the neurons in the retinal network upstream of ganglion cells. One important assumption is linear spatial integration. In this talk, I first define what it means for a neuron to be spatially linear or nonlinear and how we can experimentally measure these phenomena. Next, I introduce the neurons upstream to retinal ganglion cells, with focus on bipolar cells, which are the connecting elements between the photoreceptors (input to the retinal network) and the ganglion cells (output). This pivotal position makes bipolar cells an interesting target to study the assumption of linear spatial integration, yet due to their location buried in the middle of the retina it is challenging to measure their neural activity. Here, I present bipolar cell data where I ask whether the spatial linearity holds under artificial and natural visual stimuli. Through diverse analyses and computational models, I show that bipolar cells are more complex than previously thought and that they can already act as nonlinear processing elements at the level of their somatic membrane potential. Furthermore, through pharmacology and current measurements, I illustrate that the observed spatial nonlinearity arises at the excitatory inputs to bipolar cells. In the final part of my talk, I address the functional relevance of the nonlinearities in bipolar cells through combined recordings of bipolar and ganglion cells and I show that the nonlinearities in bipolar cells provide high spatial sensitivity to downstream ganglion cells. Overall, I demonstrate that simple linear assumptions do not always apply and more complex models are needed to describe what the retina “tells” the brain.

Seminar · Neuroscience · Recording

Challenges and opportunities for neuroscientists in the MENA region

ALBA Network
Dec 2, 2021

As part of its webinar series on region-specific diversity issues, the ALBA Network is organizing a panel discussion to explore the challenges and biases faced by neuroscientists while establishing their research groups and careers in the MENA region, from an academic and cultural perspective. This will be followed by highlights of success stories, unique region-specific opportunities for research collaborations, and recommendations to improve the representation of MENA neuroscientists on the global stage.

Seminar · Neuroscience · Recording

Being awake while sleeping, being asleep while awake: consequences on cognition and consciousness

Thomas Andrillon
Paris Brain Institute
Nov 18, 2021

Sleep is classically presented as an all-or-nothing phenomenon. Yet, there is increasing evidence showing that sleep and wakefulness can actually intermingle and that wake-like and sleep-like activity can be observed concomitantly in different brain regions. I will here explore the implications of this conception of sleep as a local phenomenon for cognition and consciousness. In the first part of my presentation, I will show how local modulations of sleep depth during sleep could support the processing of sensory information by sleepers. I will also show how, under certain circumstances, sleepers can learn while sleeping, but also how they can forget. In the second part, I will show how the reverse phenomenon, sleep intrusions during waking, can explain modulations of attention. I will focus in particular on modulations of subjective experience and how the local sleep framework can inform our understanding of everyday phenomena such as mind wandering and mind blanking. Through this presentation and the exploration of both sleep and wakefulness, I will seek to connect changes in neurophysiology with changes in behaviour and subjective experience.

Seminar · Neuroscience · Recording

Noise-induced properties of active dendrites

Farzada Farkhooi
Humboldt University Berlin
Nov 16, 2021

Neuronal dendritic trees display a wide range of nonlinear input integrations due to their voltage-dependent active calcium channels. We reveal that in vivo-like fluctuating input enhances nonlinearity substantially in a single dendritic compartment and shifts the input-output relation toward nonmonotonic or bistable dynamics. In particular, with the slow activation of calcium dynamics, we analyze noise-induced bistability and its timescales. We show that bistability induces long-timescale fluctuations that can account for dendritic plateau potentials observed under in vivo conditions. In a multicompartmental model neuron with realistic synaptic input, we show that noise-induced bistability persists in a wide range of parameters. Using Fredholm's theory to calculate the spiking rate of multivariable neurons, we discuss how dendritic bistability shifts the spiking dynamics of single neurons and its implications for network phenomena in the processing of in vivo–like fluctuating input.
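
As a generic picture of how noise can create switching on timescales far slower than the underlying dynamics, here is a hedged sketch simulating an overdamped particle in a double-well potential with the Euler-Maruyama method; the potential, noise level, and time step are illustrative assumptions, not the dendritic calcium model discussed in the talk.

```python
import numpy as np

def simulate_bistable(T=200_000, dt=1e-3, noise=0.5, seed=5):
    """Euler-Maruyama simulation of dx = -(dU/dx) dt + noise dW with the
    double-well potential U(x) = x^4/4 - x^2/2. Noise drives switches between
    the two wells on timescales much longer than dt, a generic illustration
    of noise-induced bistability (not the dendritic model)."""
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    x[0] = -1.0
    for t in range(1, T):
        drift = -(x[t - 1] ** 3 - x[t - 1])        # -dU/dx
        x[t] = x[t - 1] + drift * dt + noise * np.sqrt(dt) * rng.normal()
    return x

x = simulate_bistable()
n_switches = np.count_nonzero(np.diff(np.sign(x)))
print(n_switches)   # few switches over many steps: long-timescale fluctuations
```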

Seminar · Neuroscience · Recording

Of Grids and Maps

Matteo Grasso
University of Wisconsin-Madison, USA
Nov 14, 2021

Neuroscientific methods successfully account for a system’s functional properties, but leave out the subjective properties of the accompanying experience. According to integrated information theory (IIT), phenomenology can be studied scientifically by unfolding the cause-effect structure specified by a system. To illustrate how, in this talk I compare two systems (a grid and a map) to show that they can be functionally equivalent in performing fixation, but only one can specify a cause-effect structure that accounts for the extendedness of phenomenal space.

Seminar · Physics of Life

Nonequilibrium self-assembly and time-irreversibility in living systems

Gili Bisker
Tel Aviv University
Nov 4, 2021

Far-from-equilibrium processes constantly dissipate energy while converting a free-energy source to another form of energy. Living systems, for example, rely on an orchestra of molecular motors that consume chemical fuel to produce mechanical work. In this talk, I will describe two features of life: time irreversibility and nonequilibrium self-assembly. Time irreversibility is the hallmark of nonequilibrium dissipative processes. Detecting dissipation is essential for our basic understanding of the underlying physical mechanism; however, it remains a challenge in the absence of observable directed motion, flows, or fluxes. Additional difficulty arises in complex systems where many internal degrees of freedom are inaccessible to an external observer. I will introduce a novel approach to detect time irreversibility and estimate the entropy production from time-series measurements, even in the absence of observable currents. This method can be implemented in scenarios where only partial information is available and thus provides a new tool for studying nonequilibrium phenomena. Further, I will explore the added benefits achieved by nonequilibrium driving for self-assembly, identify distinctive collective phenomena that emerge in a nonequilibrium self-assembly setting, and demonstrate the interplay between the assembly speed, kinetic stability, and relative population of dynamical attractors.
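
To illustrate what estimating irreversibility from a time series alone can look like, here is a hedged sketch of a plug-in estimate that compares the statistics of short trajectory "words" with those of their time-reversed counterparts; the word length, the skipping of unobserved reverses, and the driven three-state toy process are illustrative assumptions, not the estimator developed in this work.

```python
import numpy as np
from collections import Counter

def irreversibility_rate(symbols, word_len=3):
    """Plug-in irreversibility estimate for a discrete time series: a KL-type
    comparison between the empirical distribution of words of length word_len
    and that of their time-reversed counterparts, roughly normalized per step.
    Near zero for a statistically reversible series, positive when the arrow
    of time is detectable. Words whose reverse is never observed are skipped,
    so this is a rough illustration, not a rigorous estimator."""
    words = [tuple(symbols[i:i + word_len])
             for i in range(len(symbols) - word_len + 1)]
    counts = Counter(words)
    total = sum(counts.values())
    kl = 0.0
    for w, c in counts.items():
        c_rev = counts.get(w[::-1], 0)
        if c_rev > 0:
            kl += (c / total) * np.log(c / c_rev)
    return kl / word_len

# Toy example: a three-state cycle stepped forward more often than backward is
# irreversible; shuffling the series destroys the signature.
rng = np.random.default_rng(6)
s, traj = 0, []
for _ in range(20000):
    s = (s + 1) % 3 if rng.random() < 0.7 else (s - 1) % 3
    traj.append(s)
shuffled = list(rng.permutation(traj))
print(irreversibility_rate(traj), irreversibility_rate(shuffled))
```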

Seminar · Neuroscience

Demystifying the richness of visual perception

Ruth Rosenholtz
MIT
Oct 19, 2021

Human vision is full of puzzles. Observers can grasp the essence of a scene in an instant, yet when probed for details they are at a loss. People have trouble finding their keys, yet they may be quite visible once found. How does one explain this combination of marvelous successes with quirky failures? I will describe our attempts to develop a unifying theory that brings a satisfying order to multiple phenomena. One key is to understand peripheral vision. A visual system cannot process everything with full fidelity, and therefore must lose some information. Peripheral vision must condense a mass of information into a succinct representation that nonetheless carries the information needed for vision at a glance. We have proposed that the visual system deals with limited capacity in part by representing its input in terms of a rich set of local image statistics, where the local regions grow — and the representation becomes less precise — with distance from fixation. This scheme trades off computation of sophisticated image features at the expense of spatial localization of those features. What are the implications of such an encoding scheme? Critical to our understanding has been the use of methodologies for visualizing the equivalence classes of the model. These visualizations allow one to quickly see that many of the puzzles of human vision may arise from a single encoding mechanism. They have suggested new experiments and predicted unexpected phenomena. Furthermore, visualization of the equivalence classes has facilitated the generation of testable model predictions, allowing us to study the effects of this relatively low-level encoding on a wide range of higher-level tasks. Peripheral vision helps explain many of the puzzles of vision, but some remain. By examining the phenomena that cannot be explained by peripheral vision, we gain insight into the nature of additional capacity limits in vision. In particular, I will suggest that decision processes face general-purpose limits on the complexity of the tasks they can perform at a given time.

SeminarPsychology

Psychological essentialism in working memory research

Satoru Saito
Kyoto University
Oct 4, 2021

Psychological essentialism is ubiquitous. It is one of the primary bases of thoughts and behaviours throughout our lifetime. This human tendency to posit an unseen, hidden entity behind observable phenomena or exemplars, however, can lead to biased thinking and reasoning even in the realm of science, including psychology. For example, a latent variable extracted from various measurements is merely a statistical property calculated in structural equation modeling and therefore need not correspond to a fundamental reality. Yet we occasionally feel, a priori, that such a psychological construct has an essential nature. This talk will demonstrate examples of psychological essentialism in psychology and examine its influence on working-memory-related issues, e.g., working memory training. These demonstrations, examinations, and the subsequent discussion will provide an opportunity to reconsider the concept of working memory.
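
To make the latent-variable point concrete, here is a small numerical illustration of my own (not material from the talk): data generated purely by mutual influences among observed variables, with no common cause anywhere in the generative process, still yield a dominant "factor", so extracting one does not by itself establish an underlying essence.

```python
import numpy as np

# Six observed variables influence one another directly; there is no latent
# common cause in the data-generating process.
rng = np.random.default_rng(0)
n, k = 5000, 6
x = rng.normal(size=(n, k))
coupling = (np.ones((k, k)) - np.eye(k)) / (k - 1)   # each variable tracks the mean of the others
for _ in range(20):
    x = 0.75 * x + 0.25 * x @ coupling + 0.05 * rng.normal(size=(n, k))

# A factor analysis or SEM of these data would happily return one strong
# latent variable, even though nothing "essential" generated them.
corr = np.corrcoef(x, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]
print(eigvals[0] / eigvals.sum())   # a single component explains most of the shared variance
```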

SeminarPhysics of LifeRecording

Growing in flows: from evolutionary dynamics to microbial jets

Severine Atis
University of Chicago
Sep 26, 2021

Biological systems can self-organize in complex structures, able to evolve and adapt to widely varying environmental conditions. Despite the importance of fluid flow for transporting and organizing populations, few laboratory systems exist to systematically investigate the impact of advection on their spatial evolutionary dynamics. In this talk, I will discuss how we can address this problem by studying the morphology and genetic spatial structure of microbial colonies growing on the surface of a viscous substrate. When grown on a liquid, I will show that S. cerevisiae (baker’s yeast) can behave like “active matter” and collectively generate a fluid flow many times larger than the unperturbed colony expansion speed, which in turn produces mechanical stresses and fragmentation of the initial colony. Combining laboratory experiments with numerical modeling, I will demonstrate that the coupling between metabolic activity and hydrodynamic flows can produce positive feedbacks and drive preferential growth phenomena leading to the formation of microbial jets. Our work provides rich opportunities to explore the interplay between hydrodynamics, growth and competition within a versatile system.

SeminarPhysics of LifeRecording

Theory of activity-powered interface

Zhihong You
University of California, Santa Barbara
Aug 29, 2021

Interfaces and membranes are ubiquitous in cellular systems across various scales. From lipid membranes to the interfaces of biomolecular condensates inside the cell, these borders not only protect and segregate the inner components from the outside world, but also actively participate in the mechanical regulation and biochemical reactions of the cell. Being part of a living system, these interfaces (membranes) are usually active and away from equilibrium. Yet it is still not clear how activity modifies their equilibrium dynamics. Here, I will introduce a model system to tackle this problem. We put together a passive fluid and an active nematic, and study the behavior of this liquid-liquid interface. Whereas thermal fluctuations of such an interface are too weak to be observed, active stress can easily force the interface to fluctuate, overhang, and even break up. In the presence of a wall, the active phase exhibits superfluid-like behavior: it can climb up walls -- a phenomenon we call activity-induced wetting. I will show how to formulate theories to capture these phenomena, highlighting the nontrivial effects of active stress. Our work not only demonstrates that activity can introduce interesting features to an interface, but also sheds light on controlling interfacial properties using activity.

SeminarNeuroscienceRecording

Conceptual Change Induced by Analogical Reasoning Sparks “Aha!” Moments

Christine Chesebrough
Drexel University
Jul 21, 2021

Although analogical reasoning has been assumed to involve insight and its associated “aha!” experience, the relationship between these phenomena has never been directly probed empirically. In this study we investigated the relationship between representational change and the “aha!” experience during analogical reasoning. A novel set of verbal analogy stimuli were developed for use as an insight task. Across two experiments, participants reported significantly stronger aha moments and showed greater evidence of representational change on trials with more semantically distant analogies. Further, the strength of reported aha moments was correlated with the degree to which participants’ descriptions of the analogies changed over the course of each trial. Lastly, we probed the individual differences associated with a tendency to report stronger "aha" experiences, particularly related to mood, curiosity, and reward responsiveness. The findings shed light on the affective components of analogical reasoning and suggest that measuring affective responses during such tasks may elucidate novel insights into the mechanisms of creative analogical reasoning.

SeminarNeuroscienceRecording

Probabilistic Analogical Mapping with Semantic Relation Networks

Hongjing Lu
UCLA
Jun 30, 2021

Hongjing Lu will present a new computational model of Probabilistic Analogical Mapping (PAM, in collaboration with Nick Ichien and Keith Holyoak) that finds systematic correspondences between inputs generated by machine learning. The model adopts a Bayesian framework for probabilistic graph matching, operating on semantic relation networks constructed from distributed representations of individual concepts (word embeddings created by Word2vec) and of relations between concepts (created by our BART model). We have used PAM to simulate a broad range of phenomena involving analogical mapping by both adults and children. Our approach demonstrates that human-like analogical mapping can emerge from comparison mechanisms applied to rich semantic representations of individual concepts and relations. More details can be found at https://arxiv.org/ftp/arxiv/papers/2103/2103.16704.pdf
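
PAM's specifics (BART relation vectors, Bayesian graph matching) are in the linked paper. Purely as an illustrative stand-in for the "soft correspondence" idea, and not the model itself, the sketch below turns embedding similarities into a doubly stochastic mapping via Sinkhorn normalization; the structural, relational constraints that are central to PAM are omitted here, and all names are my own.

```python
import numpy as np

def soft_mapping(source_vecs, target_vecs, n_iters=50, temperature=0.1):
    """Toy analogical mapping as soft assignment between two concept sets.

    Rows of source_vecs and target_vecs are concept embeddings (e.g. word
    vectors). Cosine similarities are exponentiated and made approximately
    doubly stochastic by alternating row/column normalization (Sinkhorn);
    the argmax of each row is read out as the mapping.
    """
    s = source_vecs / np.linalg.norm(source_vecs, axis=1, keepdims=True)
    t = target_vecs / np.linalg.norm(target_vecs, axis=1, keepdims=True)
    m = np.exp(s @ t.T / temperature)
    for _ in range(n_iters):
        m /= m.sum(axis=1, keepdims=True)   # normalize rows
        m /= m.sum(axis=0, keepdims=True)   # normalize columns
    return m.argmax(axis=1), m

rng = np.random.default_rng(1)
src, tgt = rng.normal(size=(4, 50)), rng.normal(size=(4, 50))
mapping, correspondence = soft_mapping(src, tgt)
print(mapping)
```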

SeminarNeuroscience

Integrated Information Theory and Its Implications for Free Will

Giulio Tononi
University of Wisconsin-Madison
Jun 24, 2021

Integrated information theory (IIT) takes as its starting point phenomenology, rather than behavioral, functional, or neural correlates of consciousness. The theory characterizes the essential properties of phenomenal existence—which is immediate and indubitable. These are translated into physical properties, expressed operationally as cause-effect power, which must be satisfied by the neural substrate of consciousness. On this basis, the theory can account for clinical and experimental data about the presence and absence of consciousness. Current work aims at accounting for specific qualities of different experiences, such as spatial extendedness and the flow of time. Several implications of IIT have ethical relevance. One is that functional equivalence does not imply phenomenal equivalence—computers may one day be able to do everything we do, but they will not experience anything. Another is that we do have free will in the fundamental, metaphysical sense—we have true alternatives and we, not our neurons, are the true cause of our willed actions.

SeminarPsychology

Visual working memory representations are distorted by their use in perceptual comparisons

Keisuke Fukuda
University of Toronto Mississauga, University of Toronto
Jun 21, 2021

Visual working memory (VWM) allows us to maintain a small amount of task-relevant information in mind so that we can use it to guide our behavior. Although past studies have successfully characterized its capacity limit and representational quality during maintenance, the consequences of its use for task-relevant behaviors have remained largely unknown. In this talk, I will demonstrate that VWM representations get distorted when they are used for perceptual comparisons with new visual inputs, especially when the inputs are subjectively similar to the VWM representations. Furthermore, I will show that this similarity-induced memory bias (SIMB) occurs for both simple (e.g., color, shape) and complex stimuli (e.g., real-world objects, faces) that are perceptually encoded and retrieved from long-term memory. Given the observed versatility of the SIMB, its implications for other memory distortion phenomena (e.g., distractor-induced distortion, misinformation effect) will be discussed.

SeminarNeuroscience

Sleepless in Vienna - how to rescue folding-deficient dopamine transporters by pharmacochaperoning

Michael Freissmuth
Medical University of Vienna
Jun 17, 2021

Diseases that arise from misfolding of an individual protein are rare. However, collectively, these folding diseases represent a large proportion of hereditary and acquired disorders. In fact, the term "Molecular Medicine" was coined by Linus Pauling in conjunction with the study of a folding disease, i.e. sickle cell anemia. In the past decade, we have witnessed an exponential growth in the number of mutations identified in genes encoding solute carriers (SLC). A sizable fraction - presumably the majority - of these mutations result in misfolding of the encoded protein. While studying the export of the GABA transporter (SLC6A1) and of the serotonin transporter (SLC6A4) from the endoplasmic reticulum (ER), we discovered by serendipity that some ligands can correct the folding defect imparted by point mutations. These ligands bind to the inward-facing state. The most effective compound is noribogaine, the metabolite of ibogaine (an alkaloid first isolated from the shrub Tabernanthe iboga). There are 13 mutations in the human dopamine transporter (DAT, SLC6A3) that give rise to a syndrome of infantile Parkinsonism and dystonia. We capitalized on our insights to explore whether the disease-relevant mutant proteins were amenable to pharmacological correction. Drosophila melanogaster mutants that lack the dopamine transporter are hyperactive and sleepless (fumin in Japanese). Thus, mutated human DAT variants can be introduced into fumin flies. This allows for examining the effect of pharmacochaperones on delivery of DAT to the axonal territory and on restoring sleep. We explored the chemical space populated by variations of the ibogaine structure to identify an analogue (referred to as compound 9b), which was highly effective: compound 9b also restored folding in DAT variants that were not amenable to rescue by noribogaine. Deficiencies in the human creatine transporter-1 (CrT1, SLC6A8) give rise to a syndrome of intellectual disability and seizures and account for 5% of genetically based intellectual disabilities in boys. Point mutations occur, in part, at positions that are homologous to those of folding-deficient DAT variants. CrT1 lacks the rich pharmacology of monoamine transporters. Nevertheless, our insights are also applicable to rescuing some disease-related variants of CrT1. Finally, the question arises of how one can address the folding problem. We propose a two-pronged approach: (i) analyzing the effect of mutations on the transport cycle by electrophysiological recordings; this allows for extracting information on the rates of conformational transitions. The underlying assumption posits that - even when remedied by pharmacochaperoning - folding-deficient mutants must differ in the conformational transitions associated with the transport cycle. (ii) analyzing the effect of mutations on the two components of protein stability, i.e. thermodynamic and kinetic stability. This is expected to provide a glimpse of the energy landscape that governs the folding trajectory.

SeminarNeuroscience

Learning under uncertainty in autism and anxiety

Timothy Sandhu
University of Cambridge, MRC CBU
Jun 15, 2021

Optimally interacting with a changeable and uncertain world requires estimating and representing uncertainty. Psychiatric and neurodevelopmental conditions such as anxiety and autism are characterized by an altered response to uncertainty. I will review the evidence for these phenomena from computational modelling, and outline the planned experiments from our lab to add further weight to these ideas. If time allows, I will present results from a control sample in a novel task interrogating a particular type of uncertainty and their associated transdiagnostic psychiatric traits.

SeminarNeuroscience

Inclusive Basic Research

Dr Simone Badal and Dr Natasha Karp
University of the West Indies, AstraZeneca
Jun 8, 2021

Research into the basic phenomena of life can be conducted in vitro or in vivo, under tightly controlled experimental conditions designed to limit variability. However stringent the protocol, these experiments do not occur in a cultural vacuum, and they are often subject to the same societal biases as other research disciplines. Many researchers uphold the status quo of biased basic research by not questioning the characteristics of their experimental animals, or the people from whom their tissue samples were collected. This means that our fundamental understanding of life has been built on biased models. This session will explore the ways in which basic life sciences research can be biased and the implications of this. We will discuss practical ways to assess your research design and how to make sure it is representative.

SeminarNeuroscience

Choosing, fast and slow: Implications of prioritized-sampling models for understanding automaticity and control

Cendri Hutcherson
University of Toronto
Apr 14, 2021

The idea that behavior results from a dynamic interplay between automatic and controlled processing underlies much of decision science, but has also generated considerable controversy. In this talk, I will highlight behavioral and neural data showing how recently-developed computational models of decision making can be used to shed new light on whether, when, and how decisions result from distinct processes operating at different timescales. Across diverse domains ranging from altruism to risky choice biases and self-regulation, our work suggests that a model of prioritized attentional sampling and evidence accumulation may provide an alternative explanation for many phenomena previously interpreted as supporting dual process models of choice. However, I also show how some features of the model might be taken as support for specific aspects of dual-process models, providing a way to reconcile conflicting accounts and generating new predictions and insights along the way.
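
As a hedged illustration of the attentional-sampling idea (a generic toy in the spirit of attention-weighted accumulation models, not the specific model presented in the talk), consider a binary choice in which the momentarily unattended option is down-weighted; even with equal values, biased attention biases the choice:

```python
import numpy as np

def attention_weighted_race(v_a, v_b, p_attend_a=0.6, theta=0.3,
                            drift_scale=0.02, noise=0.1, threshold=1.0,
                            max_steps=10_000, rng=None):
    """Toy attention-weighted evidence accumulation for a binary choice.

    At each step one option is attended (option A with probability
    p_attend_a); the unattended option's value is discounted by theta.
    Signed evidence (A minus B) accumulates with Gaussian noise until it
    crosses +/- threshold. Returns the choice and the number of steps,
    a stand-in for reaction time.
    """
    rng = rng or np.random.default_rng()
    x = 0.0
    for t in range(1, max_steps + 1):
        if rng.random() < p_attend_a:
            drift = v_a - theta * v_b
        else:
            drift = theta * v_a - v_b
        x += drift_scale * drift + noise * rng.normal()
        if x >= threshold:
            return "A", t
        if x <= -threshold:
            return "B", t
    return ("A" if x > 0 else "B"), max_steps

choices = [attention_weighted_race(1.0, 1.0, p_attend_a=0.7)[0] for _ in range(500)]
print(sum(c == "A" for c in choices) / 500)   # attention bias favors A despite equal values
```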

SeminarNeuroscienceRecording

A discussion on the necessity for Open Source Hardware in neuroscience research

Andre Maia Chagas
University of Sussex
Mar 28, 2021

Research tools are paramount for scientific development, they enable researchers to observe and manipulate natural phenomena, learn their principles, make predictions and develop new technologies, treatments and improve living standards. Due to their costs and the geographical distribution of manufacturing companies access to them is not widely available, hindering the pace of research, the ability of many communities to contribute to science and education and reap its benefits. One possible solution for this issue is to create research tools under the open source ethos, where all documentation about them (including their designs, building and operating instructions) are made freely available. Dubbed Open Science Hardware (OSH), this production method follows the established and successful principles of open source software and brings many advantages over traditional creation methods such as: economic savings (see Pearce 2020 for potential economic savings in developing open source research tools), distributed manufacturing, repairability, and higher customizability. This development method has been greatly facilitated by recent technological developments in fast prototyping tools, Internet infrastructure, documentation platforms and lower costs of electronic off-the-shelf components. Taken together these benefits have the potential to make research more inclusive, equitable, distributed and most importantly, more reliable and reproducible, as - 1) researchers can know their tools inner workings in minute detail - 2) they can calibrate their tools before every experiment and having them running in optimal condition everytime - 3) given their lower price point, a)students can be trained/taught with hands on classes, b) several copies of the same instrument can be built leading to a parallelization of data collection and the creation of more robust datasets. - 4) Labs across the world can share the exact same type of instruments and create collaborative projects with standardized data collection and sharing.

SeminarNeuroscienceRecording

Thinking the Right Thoughts

Nathaniel Daw
Princeton University
Mar 3, 2021

In many learning and decision scenarios, especially sequential settings like mazes or games, it is easy to state an objective function but difficult to compute it, for instance because this can require enumerating many possible future trajectories. This, in turn, motivates a variety of more tractable approximations which then raise resource-rationality questions about whether and when an efficient agent should invest time or resources in computing decision variables more accurately. Previous work has used a simple all-or-nothing version of this reasoning as a framework to explain many phenomena of automaticity, habits, and compulsion in humans and animals. Here, I present a more fine-grained theoretical analysis of deliberation, which attempts to address not just whether to deliberate vs. act, but which of many possible actions and trajectories to consider. Empirically, I first motivate and compare this account to nonlocal representations of spatial trajectories in the rodent place cell system, which are thought to be involved in planning. I also consider its implications, in humans, for variation over time and situations in subjective feelings of mental effort, boredom, and cognitive fatigue. Finally, I present results from a new study using magnetoencephalography in humans to measure subjective consideration of possible trajectories during a sequential learning task, and study its relationship to rational prioritization and to choice behavior.
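
To give a concrete flavor of allocating deliberation selectively, here is a minimal sketch of a generic planning heuristic (prioritized sweeping) that backs up states in order of how much their value estimates would change; it is offered only as an illustration under my own assumptions, not as the model presented in the talk.

```python
import heapq
import numpy as np

def prioritized_sweeping(P, R, gamma=0.95, theta=1e-4, max_backups=10_000):
    """Back up states in order of the magnitude of their Bellman error.

    P has shape (S, A, S) (transition probabilities), R has shape (S, A).
    Computation is spent where it changes value estimates most, rather
    than sweeping all states uniformly.
    """
    n_states, n_actions = R.shape
    V = np.zeros(n_states)

    def backup_value(s):
        return max(R[s, a] + gamma * P[s, a] @ V for a in range(n_actions))

    heap = [(-abs(backup_value(s) - V[s]), s) for s in range(n_states)]
    heapq.heapify(heap)
    for _ in range(max_backups):
        if not heap:
            break
        neg_priority, s = heapq.heappop(heap)
        if -neg_priority < theta:
            break                      # nothing left is worth thinking about
        V[s] = backup_value(s)
        # Re-prioritize states whose estimates are now stale (a full scan here;
        # practical implementations track predecessors instead).
        for sp in range(n_states):
            err = abs(backup_value(sp) - V[sp])
            if err > theta:
                heapq.heappush(heap, (-err, sp))
    return V

rng = np.random.default_rng(0)
n_s, n_a = 20, 2
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))   # random transition kernel
R = rng.normal(size=(n_s, n_a))                    # random rewards
print(prioritized_sweeping(P, R)[:5])
```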

SeminarNeuroscienceRecording

Interacting synapses stabilise both learning and neuronal dynamics in biological networks

Tim Vogels
IST Austria
Mar 2, 2021

Distinct synapses influence one another when they undergo changes, with unclear consequences for neuronal dynamics and function. Here we show that synapses can interact such that excitatory currents are naturally normalised and balanced by inhibitory inputs. This happens when classical spike-timing dependent synaptic plasticity rules are extended by additional mechanisms that incorporate the influence of neighbouring synaptic currents and regulate the amplitude of efficacy changes accordingly. The resulting control of excitatory plasticity by inhibitory activation, and vice versa, gives rise to quick and long-lasting memories as seen experimentally in receptive field plasticity paradigms. In models with additional dendritic structure, we observe experimentally reported clustering of co-active synapses that depends on initial connectivity and morphology. Finally, in recurrent neural networks, rich and stable dynamics with high input sensitivity emerge, providing transient activity that resembles recordings from the motor cortex. Our model provides a general framework for codependent plasticity that frames individual synaptic modifications in the context of population-wide changes, allowing us to connect micro-level physiology with behavioural phenomena.
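
A minimal rate-based caricature of the qualitative logic described here (excitatory plasticity gated by inhibitory currents and vice versa, so that excitation ends up normalised and balanced) might look like the sketch below; this is my own simplification, not the published spike-timing-dependent rule.

```python
import numpy as np

def codependent_step(w_e, w_i, pre_e, pre_i, post, eta_e=5e-4, eta_i=5e-4):
    """One toy update in which E and I plasticity regulate each other.

    The excitatory Hebbian term is scaled by the inhibition-minus-excitation
    imbalance, so excitatory weights grow only while the neuron is
    under-inhibited and are pushed back once excitation overshoots;
    inhibitory weights chase the excitatory current.
    """
    e_cur = float(w_e @ pre_e)          # total excitatory current
    i_cur = float(w_i @ pre_i)          # total inhibitory current
    w_e = np.clip(w_e + eta_e * post * pre_e * (i_cur - e_cur), 0.0, None)
    w_i = np.clip(w_i + eta_i * post * pre_i * (e_cur - i_cur), 0.0, None)
    return w_e, w_i

rng = np.random.default_rng(1)
w_e, w_i = rng.random(20) * 0.05, rng.random(5) * 0.5
for _ in range(3000):
    pre_e = rng.poisson(2.0, 20).astype(float)
    pre_i = rng.poisson(2.0, 5).astype(float)
    w_e, w_i = codependent_step(w_e, w_i, pre_e, pre_i, post=1.0)
print(w_e.sum() * 2.0, w_i.sum() * 2.0)   # mean E and I currents end up approximately matched
```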

SeminarPhysics of LifeRecording

Mixed active-passive suspensions: from particle entrainment to spontaneous demixing

Marco Polin
University of Warwick
Feb 16, 2021

Understanding the properties of active matter is a challenge which is currently driving a rapid growth in soft- and bio-physics. Some of the most important examples of active matter are at the microscale, and include active colloids and suspensions of microorganisms, both as a simple active fluid (single species) and as mixed suspensions of active and passive elements. In this last class of systems, recent experimental and theoretical work has started to provide a window into new phenomena including activity-induced depletion interactions, phase separation, and the possibility to extract net work from active suspensions. Here I will present our work on a paradigmatic example of mixed active-passive system, where the activity is provided by swimming microalgae. Macro- and micro-scopic experiments reveal that microorganism-colloid interactions are dominated by rare close encounters leading to large displacements through direct entrainment. Simulations and theoretical modelling show that the ensuing particle dynamics can be understood in terms of a simple jump-diffusion process, combining standard diffusion with Poisson-distributed jumps. Entrainment length can be understood within the framework of Taylor dispersion as a competition between advection by the no-slip surface of the cell body and microparticle diffusion. Building on these results, we then ask how external control of the dynamics of the active component (e.g. induced microswimmer anisotropy/inhomogeneity) can be used to alter the transport of passive cargo. As a first step in this direction, we study the behaviour of mixed active-passive systems in confinement. The resulting spatial inhomogeneity in swimmers’ distribution and orientation has a dramatic effect on the spatial distribution of passive particles, with the colloids accumulating either towards the boundaries or towards the bulk of the sample depending on the size of the container. We show that this can be used to induce the system to de-mix spontaneously.
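
The jump-diffusion description mentioned here lends itself to a compact simulation. The sketch below uses illustrative parameters, not fitted values from the study: ordinary Brownian motion plus Poisson-distributed entrainment jumps, which enhance the effective diffusivity.

```python
import numpy as np

def jump_diffusion_msd(n_particles=2000, n_steps=1000, dt=0.1,
                       D=0.05, jump_rate=0.02, jump_length=5.0, rng=None):
    """Simulate 1-D jump-diffusion: Brownian motion plus rare entrainment jumps.

    At every step each particle diffuses with coefficient D and, with
    probability jump_rate*dt, is entrained over a displacement drawn from an
    exponential distribution of mean jump_length (random sign). The long-time
    mean-squared displacement grows as (2*D + jump_rate*<jump^2>) * t.
    """
    rng = rng or np.random.default_rng(0)
    x = np.zeros(n_particles)
    msd = np.empty(n_steps)
    for step in range(n_steps):
        x += np.sqrt(2 * D * dt) * rng.normal(size=n_particles)
        hit = rng.random(n_particles) < jump_rate * dt
        n_hit = int(hit.sum())
        x[hit] += rng.choice([-1.0, 1.0], n_hit) * rng.exponential(jump_length, n_hit)
        msd[step] = np.mean(x ** 2)
    return dt * np.arange(1, n_steps + 1), msd

t, msd = jump_diffusion_msd()
print(msd[-1] / (2 * t[-1]))   # effective diffusivity, well above the bare D = 0.05
```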

SeminarNeuroscienceRecording

Integration and unification in the science of consciousness

Wanja Wiese
Johannes Gutenberg University of Mainz
Jan 28, 2021

Despite undeniable progress in the science of consciousness, there is no consensus on even fundamental theoretical and empirical questions, such as whether ‘phenomenal consciousness’ is a scientifically respectable concept, whether phenomenal consciousness overflows access consciousness, or whether the neural correlates of perceptual consciousness are in the front or in the back of the cerebral cortex. Notably, disagreement also concerns proposed theories of consciousness. However, since not all theories are mutually incompatible, there have been attempts to make theoretical progress by integrating or unifying them. I shall argue that this is preferable over proposing yet another theory, but that one should not expect it to yield a complete theory of consciousness. Rather, theoretical work in consciousness research should focus on core hypotheses about consciousness that different theories of consciousness have in common. Such a ‘minimal unifying model’ of consciousness can then be used as a basis for formulating more specific hypotheses about consciousness.

SeminarNeuroscienceRecording

More than mere association: Are some figure-ground organisation processes mediated by perceptual grouping mechanisms?

Joseph Brooks
Keele University
Dec 7, 2020

Figure-ground organisation and perceptual grouping are classic topics in Gestalt and perceptual psychology. They often appear alongside one another in introductory textbook chapters on perception and have a long history of investigation. However, they are typically discussed as separate processes of perceptual organisation with their own distinct phenomena and mechanisms. Here, I will propose that perceptual grouping and figure-ground organisation are strongly linked. In particular, perceptual grouping can provide a basis for, and may share mechanisms with, a wide range of figure-ground principles. To support this claim, I will describe a new class of figure-ground principles based on perceptual grouping between edges and demonstrate that this inter-edge grouping (IEG) is a powerful influence on figure-ground organisation. I will also draw support from our other results showing that grouping between edges and regions (i.e., edge-region grouping) can affect figure-ground organisation (Palmer & Brooks, 2008) and that contextual influences in figure-ground organisation can be gated by perceptual grouping between edges (Brooks & Driver, 2010). In addition to these modern observations, I will also argue that we can describe some classic figure-ground principles (e.g., symmetry, convexity, etc.) using perceptual grouping mechanisms. These results suggest that figure-ground organisation and perceptual grouping have more than a mere association under the umbrella topics of Gestalt psychology and perceptual organisation. Instead, perceptual grouping may provide a mechanism underlying a broad class of new and extant figure-ground principles.

SeminarNeuroscienceRecording

Can subjective experience be quantified? Critically examining computational cognitive neuroscience approaches

Megan Peters
UC Irvine
Nov 5, 2020

Computational and cognitive neuroscience techniques have made great strides towards describing the neural computations underlying perceptual inference and decision-making under uncertainty. These tools tell us how and why perceptual illusions occur, which brain areas may represent noisy information in a probabilistic manner, and so on. However, an understanding of the subjective, qualitative aspects of perception remains elusive: qualia, or the personal, intrinsic properties of phenomenal awareness, have remained out of reach of these computational analytic insights. Here, I propose that metacognitive computations, and the subjective feelings that go along with them, give us a solid starting point for understanding subjective experience in general. Specifically, perceptual metacognition possesses ontological and practical properties that provide a powerful and unique opportunity for studying the neural and computational correlates of subjective experience using established tools of computational and cognitive neuroscience. By capitalizing on decades of developments in formal computational model comparisons as applied to the specific properties of perceptual metacognition, we are now in a privileged position to reveal new and exciting insights about how the brain constructs our subjective conscious experiences.

SeminarNeuroscienceRecording

A Connectionist Account of Analogy-Making

Ivan Vankov
Bulgarian Academy of Sciences
Nov 4, 2020

Analogy-making is considered to be one of the cognitive processes that are hard to account for in connectionist terms. A number of models have been proposed, but they are either tailored to specific analogical tasks or require complicated mechanisms that don’t fit into the mainstream connectionist modelling paradigm. In this talk I will present a new connectionist account of analogy-making based on the vector approach to representing symbols (VARS). This approach allows representing relational structures of varying complexity by numeric vectors with fixed dimensionality. I will also present a simple and computationally efficient mechanism of aligning VARS representations, which integrates both semantic similarity and structural constraints. The results of a series of simulations will demonstrate that VARS can account for basic analogical phenomena.
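
The abstract does not spell out the VARS mechanism, so the sketch below illustrates only the general idea of fixed-dimensional relational representations using a different, well-known scheme (holographic reduced representations with circular-convolution binding). It is an assumption-laden stand-in, not VARS itself.

```python
import numpy as np

def bind(role, filler):
    """Circular convolution binds a role vector to a filler vector."""
    return np.real(np.fft.ifft(np.fft.fft(role) * np.fft.fft(filler)))

def encode(bindings):
    """Superimpose role-filler bindings into one fixed-dimensional vector."""
    return np.sum([bind(role, filler) for role, filler in bindings], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

d = 512
rng = np.random.default_rng(0)
vec = lambda: rng.normal(0, 1 / np.sqrt(d), d)
AGENT, PATIENT = vec(), vec()               # role vectors
john, mary, dog, cat = vec(), vec(), vec(), vec()   # filler (concept) vectors

chase_1 = encode([(AGENT, dog), (PATIENT, cat)])    # "dog chases cat"
chase_2 = encode([(AGENT, john), (PATIENT, mary)])  # "John chases Mary"
chase_3 = encode([(AGENT, mary), (PATIENT, john)])  # roles swapped

print(cosine(chase_2, chase_3))   # near zero: the vector encodes structure, not just which concepts occur
print(cosine(chase_1, chase_2))   # also near zero with random fillers; analogy additionally needs relation semantics
```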

SeminarNeuroscience

How embodiment can solve the problem of phenomenal consciousness

Kevin O'Regan
University of Paris, France
Nov 4, 2020

SeminarPhysics of LifeRecording

Is there universality in biology?

Nigel Goldenfeld
University of Illinois at Urbana-Champaign
Oct 29, 2020

It is sometimes said that there are two reasons why physics is so successful as a science. One is that it deals with very simple problems. The other is that it attempts to account only for universal aspects of systems at a desired level of description, with lower level phenomena subsumed into a small number of adjustable parameters. It is a widespread belief that this approach seems unlikely to be useful in biology, which is intimidatingly complex, where “everything has an exception”, and where there are a huge number of undetermined parameters. I will try to argue, nonetheless, that there are important, experimentally-testable aspects of biology that exhibit universality, and should be amenable to being tackled from a physics perspective. My suggestion is that this can lead to useful new insights into the existence and universal characteristics of living systems. I will try to justify this point of view by contrasting the goals and practices of the field of condensed matter physics with materials science, and then by extension, the goals and practices of the newly emerging field of “Physics of Living Systems” with biology. Specific biological examples that I will discuss include the following: universal patterns of gene expression in cell biology; universal scaling laws in ecosystems, including the species-area law, Kleiber’s law, and the paradox of the plankton; universality of the genetic code; universality of thermodynamic utilization in microbial communities; and universal scaling laws in the tree of life. The question of what can be learned from studying universal phenomena in biology will also be discussed. Universal phenomena, by their very nature, shed little light on detailed microscopic levels of description. Yet there is no point in seeking idiosyncratic mechanistic explanations for phenomena whose explanation is found in rather general principles, such as the central limit theorem, that every microscopic mechanism is constrained to obey. Thus, physical perspectives may be better suited to answering certain questions such as universality than traditional biological perspectives. Concomitantly, it must be recognized that the identification and understanding of universal phenomena may not be a good answer to questions that have traditionally occupied biological scientists. Lastly, I plan to talk about what is perhaps the central question of universality in biology: why does the phenomenon of life occur at all? Is it an inevitable consequence of the laws of physics or some special geochemical accident? What methodology could even begin to answer this question? I will try to explain why traditional approaches to biology do not aim to answer this question, by comparing with our understanding of superconductivity as a physical phenomenon, and with the theory of universal computation.
References: Nigel Goldenfeld, Tommaso Biancalani, and Farshid Jafarpour. Universal biology and the statistical mechanics of early life. Phil. Trans. R. Soc. A 375, 20160341 (2017). Nigel Goldenfeld and Carl R. Woese. Life is Physics: evolution as a collective phenomenon far from equilibrium. Ann. Rev. Cond. Matt. Phys. 2, 375-399 (2011).

SeminarPhysics of Life

“Biophysics of Structural Plasticity in Postsynaptic Spines”

Padmini Rangamani
University of California, San Diego
Oct 26, 2020

The ability of the brain to encode and store information depends on the plastic nature of the individual synapses. The increase and decrease in synaptic strength, mediated through the structural plasticity of the spine, are important for learning, memory, and cognitive function. Dendritic spines are small structures that contain the synapse. They come in a variety of shapes (stubby, thin, or mushroom-shaped) and a wide range of sizes that protrude from the dendrite. These spines are the regions where the postsynaptic biochemical machinery responds to the neurotransmitters. Spines are dynamic structures, changing in size, shape, and number during development and aging. While spines and synapses have inspired neuromorphic engineering, the biophysical events underlying synaptic and structural plasticity of single spines remain poorly understood. Our current focus is on understanding the biophysical events underlying structural plasticity. I will discuss recent efforts from my group: first, a systems biology approach to construct a mathematical model of biochemical signaling and actin-mediated transient spine expansion in response to calcium influx caused by NMDA receptor activation, and a series of spatial models to study the role of spine geometry and organelle location within the spine for calcium and cyclic AMP signaling. Second, I will discuss how the mechanics of membrane-cytoskeleton interactions can give insight into spine shape. I will then describe new efforts to use reconstructions from electron microscopy to inform computational domains, and I will conclude with how geometry and mechanics play an important role in our understanding of fundamental biological phenomena, together with some general ideas on bio-inspired engineering.

SeminarNeuroscienceRecording

Modulation of C. elegans behavior by gut microbes

Michael O'Donnell
Yale University
Oct 25, 2020

We are interested in understanding how microbes impact the behavior of host animals. Animal nervous systems likely evolved in environments richly surrounded by microbes, yet the impact of bacteria on nervous system function has been relatively under-studied. A challenge has been to identify systems in which both host and microbe are amenable to genetic manipulation, and which enable high-throughput behavioral screening in response to defined and naturalistic conditions. To accomplish these goals, we use an animal host — the roundworm C. elegans, which feeds on bacteria — in combination with its natural gut microbiome to identify inter-organismal signals driving host-microbe interactions and decision-making. C. elegans has some of the most extensive molecular, neurobiological and genetic tools of any multicellular eukaryote, and, coupled with the ease of gnotobiotic culture in these worms, represents a highly attractive system in which to study microbial influence on host behavior. Using this system, we discovered that commensal bacterial metabolites directly modulate nervous system function of their host. Beneficial gut microbes of the genus Providencia produce the neuromodulator tyramine in the C. elegans intestine. Using a combination of behavioral analysis, neurogenetics, metabolomics and bacterial genetics we established that bacterially produced tyramine is converted to octopamine in C. elegans, which acts directly in sensory neurons to reduce odor aversion and increase sensory preference for Providencia. We think that this type of sensory modulation may increase association of C. elegans with these microbes, increasing availability of this nutrient-rich food source for the worm and its progeny, while facilitating dispersal of the bacteria.

SeminarNeuroscience

Contextual inference underlies the learning of sensorimotor repertoires

Daniel Wolpert
Columbia University
Oct 14, 2020

Humans spend a lifetime learning, storing and refining a repertoire of motor memories. However, it is unknown what principle underlies the way our continuous stream of sensorimotor experience is segmented into separate memories and how we adapt and use this growing repertoire. Here we develop a principled theory of motor learning based on the key insight that memory creation, updating, and expression are all controlled by a single computation – contextual inference. Unlike dominant theories of single-context learning, our repertoire-learning model accounts for key features of motor learning that had no unified explanation and predicts novel phenomena, which we confirm experimentally. These results suggest that contextual inference is the key principle underlying how a diverse set of experiences is reflected in motor behavior.
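
As a deliberately minimal sketch of the contextual-inference principle (my own toy under illustrative assumptions, not the full model from the talk), each candidate context gets its own memory; a posterior over contexts, inferred from a noisy cue, controls both which memories are expressed and which are updated.

```python
import numpy as np

class ContextualLearner:
    """Minimal contextual-inference learner for motor adaptation.

    Maintains one adaptation memory per context. On each trial a posterior
    ('responsibility') over contexts is computed from a noisy sensory cue;
    the motor output is the responsibility-weighted sum of memories, and
    each memory is updated by the prediction error scaled by its
    responsibility.
    """

    def __init__(self, n_contexts=2, lr=0.2, cue_noise=0.5):
        self.memories = np.zeros(n_contexts)
        self.cue_means = np.arange(n_contexts, dtype=float)  # assumed cue per context
        self.lr, self.cue_noise = lr, cue_noise

    def trial(self, cue, perturbation):
        loglik = -0.5 * ((cue - self.cue_means) / self.cue_noise) ** 2
        resp = np.exp(loglik - loglik.max())
        resp /= resp.sum()                             # posterior over contexts (equal priors)
        output = float(resp @ self.memories)           # expression: weighted by responsibility
        error = perturbation - output
        self.memories += self.lr * resp * error        # updating: credit assigned by responsibility
        return output

rng = np.random.default_rng(0)
learner = ContextualLearner()
for t in range(400):
    context = (t // 50) % 2                            # contexts alternate in blocks
    cue = context + rng.normal(0, 0.5)
    perturbation = [+1.0, -1.0][context]
    learner.trial(cue, perturbation)
print(learner.memories)   # approaches separate memories near +1 and -1, one per context
```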

SeminarNeuroscience

Carnosine negatively modulates pro-oxidant activities of M1 peripheral macrophages and prevents neuroinflammation induced by amyloid-β in microglial cells

Giuseppe Caruso
Department of Drug Sciences, University of Catania
Sep 30, 2020

Carnosine is a natural dipeptide widely distributed in mammalian tissues and exists at particularly high concentrations in skeletal and cardiac muscles and brain. A growing body of evidence shows that carnosine is involved in many cellular defense mechanisms against oxidative stress, including inhibition of amyloid-β (Aβ) aggregation, modulation of nitric oxide (NO) metabolism, and scavenging both reactive nitrogen and oxygen species. Different types of cells are involved in the innate immune response, with macrophage cells representing those primarily activated, especially in diseases characterized by oxidative stress and systemic inflammation such as depression and cardiovascular disorders. Microglia, the tissue-resident macrophages of the brain, are emerging as a central player in regulating key pathways in central nervous system inflammation; with specific regard to Alzheimer’s disease (AD) these cells exert a dual role: on one hand promoting the clearance of Aβ via phagocytosis, on the other hand increasing neuroinflammation through the secretion of inflammatory mediators and free radicals. The activity of carnosine was tested in an in vitro model of macrophage activation (M1) (RAW 264.7 cells stimulated with LPS + IFN-γ) and in a well-validated model of Aβ-induced neuroinflammation (BV-2 microglia treated with Aβ oligomers). An ample set of techniques and assays, including the MTT assay, trypan blue exclusion test, high-performance liquid chromatography, high-throughput real-time PCR, western blot, atomic force microscopy, microchip electrophoresis coupled to laser-induced fluorescence, and ELISA, was employed to evaluate the antioxidant and anti-inflammatory activities of carnosine. In our experimental model of macrophage activation (M1), therapeutic concentrations of carnosine exerted the following effects: 1) an increased degradation rate of NO into its non-toxic end-products nitrite and nitrate; 2) the amelioration of the macrophage energy state, by restoring nucleoside triphosphates and counterbalancing the changes in ATP/ADP, NAD+/NADH and NADP+/NADPH ratio obtained by LPS + IFN-γ induction; 3) a reduced expression of pro-oxidant enzymes (NADPH oxidase, Cyclooxygenase-2) and of the lipid peroxidation product malondialdehyde; 4) the rescue of antioxidant enzyme expression (Glutathione peroxidase 1, Superoxide dismutase 2, Catalase); 5) an increased synthesis of transforming growth factor-β1 (TGF-β1) combined with the negative modulation of interleukins 1β and 6 (IL-1β and IL-6), and 6) the induction of nuclear factor erythroid-derived 2-like 2 (Nrf2) and heme oxygenase-1 (HO-1). In our experimental model of Aβ-induced neuroinflammation, carnosine: 1) prevented cell death in BV-2 cells challenged with Aβ oligomers; 2) lowered oxidative stress by decreasing the expression of inducible nitric oxide synthase and NADPH oxidase, and the concentrations of nitric oxide and superoxide anion; 3) decreased the secretion of pro-inflammatory cytokines such as IL-1β while simultaneously rescuing IL-10 levels and increasing the expression and the release of TGF-β1; 4) prevented Aβ-induced neurodegeneration in primary mixed neuronal cultures challenged with Aβ oligomers; these neuroprotective effects were completely abolished by SB431542, a selective inhibitor of the type-1 TGF-β receptor.
Overall, our data suggest a novel multimodal mechanism of action of carnosine underlying its protective effects in macrophages and microglia and the therapeutic potential of this dipeptide in counteracting pro-oxidant and pro-inflammatory phenomena observed in different disorders characterized by elevated levels of oxidative stress and inflammation such as depression, cardiovascular disorders, and Alzheimer’s disease.

ePoster

Epiphenomenal representations of abstract rules in a connectionist model of the Delayed Match to Sample task

COSYNE 2022