Mental Models
Epigenetic rewiring in Schinzel-Giedion syndrome
During life, a variety of specialized cells arise to ensure the correct and timely functioning of tissues and organs. Chromatin regulation, by defining specialized genomic regions (e.g. enhancers), plays a key role in developmental transitions from progenitors into cell lineages. These enhancers, properly positioned in 3D nuclear space, ultimately guide transcriptional programs. It is becoming clear that several pathologies converge on differential enhancer usage relative to physiological conditions. However, why some regulatory regions are physiologically preferred while others emerge only under particular conditions, including alternative fate decisions or disease, remains obscure. Schinzel-Giedion syndrome (SGS) is a rare disease whose symptoms include severe developmental delay, congenital malformations, progressive brain atrophy, intractable seizures, and death in infancy. SGS is caused by mutations in the SETBP1 gene that result in accumulation of the SETBP1 protein and, downstream, of SET. The oncoprotein SET has been found as part of the INHAT histone chaperone complex, which blocks the activity of histone acetyltransferases, suggesting that SGS may (i) represent a natural model of alternative chromatin regulation and (ii) offer the chance to study downstream (mal)adaptive mechanisms. I will present our work on the characterization of SGS in appropriate experimental models, including iPSC-derived cultures and mouse models.
Modelling metaphor comprehension as a form of analogizing
What do people do when they comprehend language in discourse? According to many psychologists, they build and maintain cognitive representations of utterances in four complementary mental models for discourse that interact with each other: the surface text, the text base, the situation model, and the context model. When people encounter metaphors in these utterances, they need to incorporate them into each of these mental representations for the discourse. Since influential metaphor theories define metaphor as a form of (figurative) analogy, involving cross-domain mapping to a greater or lesser extent, the general expectation has been that metaphor comprehension is also based on analogizing. This expectation, however, has been only partly borne out by the data. There is no one-to-one relationship between metaphor as (conceptual) structure (analogy) and metaphor as (psychological) process (analogizing). According to Deliberate Metaphor Theory (DMT), only some metaphors are handled by analogy. Instead, most metaphors are presumably handled by lexical disambiguation. This is a hypothesis that brings together most metaphor research in a provocative new way: it means that most metaphors are not processed metaphorically, which produces a paradox of metaphor. In this talk I will sketch out how this paradox arises and how it can be resolved by a new version of DMT, which I have described in my forthcoming book Slowing metaphor down: Updating Deliberate Metaphor Theory (currently under review). In this theory, both the distinction and the relation between analogy in metaphorical structure and analogy in metaphorical process are of central importance.
Two lessons from experimental models of generalized absence epilepsy, myelin plasticity dependent epileptogenesis, and circuits of cognitive comorbidities
Data-driven reduction of dendritic morphologies with preserved dendro-somatic responses
There is little consensus on the level of spatial complexity at which dendrites operate. On the one hand, emerging evidence indicates that synapses cluster at micrometer spatial scales. On the other hand, most modelling and network studies ignore dendrites altogether. This dichotomy raises an urgent question: what is the smallest relevant spatial scale for understanding dendritic computation? We have developed a method to construct compartmental models at any level of spatial complexity. Through carefully chosen parameter fits, solvable in the least-squares sense, we obtain accurate reduced compartmental models. Thus, we are able to systematically construct passive as well as active dendrite models at varying degrees of spatial complexity. We evaluate which elements of the dendritic computational repertoire are captured by these models, and show that many canonical elements can be reproduced with few compartments. For instance, for a model to behave as a two-layer network, it is sufficient to fit a reduced model at the soma and at locations at the dendritic tips. In the basal dendrites of an L2/3 pyramidal model, we reproduce the backpropagation of somatic action potentials (APs) with a single dendritic compartment at the tip. Further, we obtain the well-known Ca-spike coincidence detection mechanism in L5 pyramidal cells with as few as eleven compartments, the requirement being that their spacing along the apical trunk supports AP backpropagation. We also investigate whether afferent spatial connectivity motifs admit simplification by ablating targeted branches and grouping the affected synapses onto the next proximal dendrite. We find that voltage in the remaining branches is reproduced if temporal conductance fluctuations stay below a limit that depends on the average difference in input resistance between the ablated branches and the next proximal dendrite. Consequently, when the average conductance load on distal synapses is constant, the dendritic tree can be simplified while appropriately decreasing synaptic weights. When the conductance level fluctuates strongly, for instance through a priori unpredictable fluctuations in NMDA activation, a constant weight rescale factor cannot be found, and the dendrite cannot be simplified. We have created an open-source Python toolbox (NEAT - https://neatdend.readthedocs.io/en/latest/) that automates the simplification process. A NEST implementation of the reduced models, currently under construction, will enable the simulation of few-compartment models in large-scale networks, thus bridging the gap between cellular- and network-level neuroscience.
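The least-squares fit mentioned above can be illustrated with a minimal, self-contained sketch (this is not the NEAT API; the two-compartment layout, the parameter names, and the numbers in Z_target are hypothetical). A reduced passive model with one somatic and one dendritic-tip compartment is parametrised by two leak conductances and one coupling conductance; requiring that its conductance matrix inverts a target impedance matrix measured on the full morphology is linear in those parameters, so the fit reduces to an ordinary least-squares problem.

# Illustrative sketch, not the NEAT API: fit a two-compartment reduction
# (soma + dendritic tip) so that its steady-state impedance matrix matches
# hypothetical target values from the full morphology.
import numpy as np

# Hypothetical target impedance matrix (MOhm): Z[i, j] is the steady-state
# voltage at location i per unit current injected at location j
# (index 0 = soma, index 1 = dendritic tip).
Z_target = np.array([[120.0,  45.0],
                     [ 45.0, 310.0]])

# Parametrise the reduced conductance matrix G(p) = g_s*E_s + g_d*E_d + g_c*C,
# with leak conductances g_s, g_d and coupling conductance g_c (uS).
E_s = np.array([[1.0, 0.0], [0.0, 0.0]])    # somatic leak
E_d = np.array([[0.0, 0.0], [0.0, 1.0]])    # dendritic leak
C   = np.array([[1.0, -1.0], [-1.0, 1.0]])  # soma-dendrite coupling
basis = [E_s, E_d, C]

# Requiring G(p) @ Z_target = identity is linear in the parameters p,
# so the fit is an ordinary least-squares problem A p = b.
A = np.column_stack([(E @ Z_target).ravel() for E in basis])
b = np.eye(2).ravel()
p, *_ = np.linalg.lstsq(A, b, rcond=None)
g_s, g_d, g_c = p

G_fit = g_s * E_s + g_d * E_d + g_c * C
print("fitted conductances (uS):", p)
print("reproduced impedance matrix (MOhm):")
print(np.linalg.inv(G_fit))

The same linear structure extends to more recording locations, frequency-dependent impedance matrices, and active channels, which is the kind of systematic least-squares fit described above and automated by the NEAT toolbox for arbitrary morphologies.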
Mental Simulation, Imagination, and Model-Based Deep RL
Mental simulation—the capacity to imagine what will or what could be—is a salient feature of human cognition, playing a key role in a wide range of cognitive abilities. In artificial intelligence, the last few years have seen the development of methods which are analogous to mental models and mental simulation. In this talk, I will discuss recent methods in deep learning for constructing such models from data and learning to use them via reinforcement learning, and compare such approaches to human mental simulation. While a number of challenges remain in matching the capacity of human mental simulation, I will highlight some recent progress on developing more compositional and efficient model-based algorithms through the use of graph neural networks and tree search.
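As a heavily simplified illustration of the analogy between mental simulation and model-based reinforcement learning, the sketch below uses a one-step dynamics model to imagine the outcomes of candidate action sequences before committing to an action. Everything in it (the toy point-mass dynamics, the reward, the exhaustive rollout search) is a hypothetical stand-in: in the deep-RL setting discussed in the talk, the model would be learned from data (e.g. with graph neural networks) and the exhaustive enumeration would be replaced by tree search.

# Minimal sketch of "mental simulation" in model-based RL: a one-step dynamics
# model is used to imagine candidate action sequences before acting.
# All components here are hypothetical toy stand-ins.
import itertools
import numpy as np

def dynamics_model(state, action):
    """Imagined next state: a 1-D point mass nudged by the action."""
    position, velocity = state
    velocity = 0.9 * velocity + 0.1 * action
    return np.array([position + velocity, velocity])

def reward_model(state):
    """Imagined reward: stay close to the origin."""
    return -abs(state[0])

def plan_by_simulation(state, actions=(-1.0, 0.0, 1.0), horizon=4):
    """Roll out every action sequence in imagination; return the best first action."""
    best_action, best_return = None, -np.inf
    for seq in itertools.product(actions, repeat=horizon):
        s, total = np.array(state, dtype=float), 0.0
        for a in seq:                       # simulate the sequence with the model
            s = dynamics_model(s, a)
            total += reward_model(s)
        if total > best_return:
            best_return, best_action = total, seq[0]
    return best_action

state = np.array([2.0, 0.0])                # start away from the goal
for t in range(10):                         # act greedily on imagined returns
    a = plan_by_simulation(state)
    state = dynamics_model(state, a)        # here the "real" world equals the model
    print(f"step {t}: action {a:+.0f}, position {state[0]:+.3f}")

The exhaustive enumeration keeps the example short but scales exponentially with the planning horizon, which is one reason learned models are paired with more efficient planners such as tree search.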
What is hippocampal sclerosis? A cell-type specific perspective
Temporal lobe epilepsy is considered a disorder of neuronal microcircuits, yet the underlying mechanisms are poorly understood. Here we will discuss recent data on cell-type-specific alterations of hippocampal microcircuit function in experimental models of temporal lobe epilepsy. We will highlight the importance of leveraging cellular heterogeneity to better understand the complexities accompanying hippocampal sclerosis.
An in silico population of compartmental models of neurons from primary hippocampal cultures
FENS Forum 2024