Harnessing Big Data in Neuroscience: From Mapping Brain Connectivity to Predicting Traumatic Brain Injury
Neuroscience is experiencing unprecedented growth in dataset size, both within individual brains and across populations. Large-scale, multimodal datasets are transforming our understanding of brain structure and function, creating opportunities to address previously unexplored questions. However, managing this increasing data volume requires new approaches to training and technology. Modern data technologies are reshaping neuroscience by enabling researchers to tackle complex questions within a Ph.D. or postdoctoral timeframe. I will discuss cloud-based platforms, such as brainlife.io, that provide scalable, reproducible, and accessible computational infrastructure. Modern data technology can democratize neuroscience, accelerate discovery, and foster scientific transparency and collaboration. Concrete examples will illustrate how these technologies can be applied to mapping brain connectivity, studying human learning and development, and developing predictive models for traumatic brain injury (TBI). By integrating cloud computing and scalable data-sharing frameworks, neuroscience can become more impactful, inclusive, and data-driven.
Event-related frequency adjustment (ERFA): A methodology for investigating neural entrainment
Neural entrainment has become a phenomenon of exceptional interest to neuroscience, given its involvement in rhythm perception, production, and overt synchronized behavior. Yet traditional methods fail to quantify neural entrainment because they are misaligned with its fundamental definition (e.g., see Novembre and Iannetti, 2018; Rajendran and Schnupp, 2019). The definition of entrainment assumes that endogenous oscillatory brain activity undergoes dynamic frequency adjustments to synchronize with environmental rhythms (Lakatos et al., 2019). Following this definition, we recently developed a method sensitive to this process. Our aim was to isolate from the electroencephalographic (EEG) signal an oscillatory component attuned to the frequency of a rhythmic stimulation, hypothesizing that the oscillation would adaptively speed up and slow down to achieve stable synchronization over time. To induce and measure these adaptive changes in a controlled fashion, we developed the event-related frequency adjustment (ERFA) paradigm (Rosso et al., 2023). Twenty healthy participants took part in our study. They were instructed to tap their finger in synchrony with an isochronous auditory metronome, which was unpredictably perturbed by phase shifts and tempo changes in both positive and negative directions across different experimental conditions. EEG was recorded during the task, and ERFA responses were quantified as changes in the instantaneous frequency of the entrained component. Our results indicate that ERFAs track the stimulus dynamics in accordance with the perturbation type and direction, preferentially for a sensorimotor component. The clear and consistent patterns confirm that our method is sensitive to the process of frequency adjustment that defines neural entrainment. In this Virtual Journal Club, the discussion of our findings will be complemented by methodological insights beneficial to researchers in the fields of rhythm perception and production, as well as timing in general. We discuss the dos and don'ts of using instantaneous frequency to quantify oscillatory dynamics, the advantages of a multivariate approach to source separation, and the robustness of the method to the confound of responses evoked by periodic stimulation, and we provide an overview of domains and concrete examples where the methodological framework can be applied.
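As a hedged illustration of the core quantity, the sketch below estimates the instantaneous frequency of a narrow-band oscillatory component via the Hilbert transform on synthetic data. It is a minimal sketch, not the authors' EEG pipeline; the sampling rate, filter band, and tempo change are all illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): estimating the instantaneous
# frequency of a narrow-band oscillatory component with the Hilbert transform.
# All signal parameters here are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                      # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)    # 10 s of data

# Toy "entrained" component: a 2 Hz oscillation that speeds up mid-trial,
# mimicking a frequency adjustment after a tempo change.
freq = np.where(t < 5, 2.0, 2.2)
phase = 2 * np.pi * np.cumsum(freq) / fs
signal = np.sin(phase) + 0.2 * np.random.randn(t.size)

# Band-pass around the stimulation rate, then take the analytic signal.
b, a = butter(2, [1.0, 4.0], btype="band", fs=fs)
narrow = filtfilt(b, a, signal)
analytic = hilbert(narrow)

# Instantaneous frequency = derivative of the unwrapped phase / (2*pi).
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)

print(f"mean f before change: {inst_freq[: int(5 * fs)].mean():.2f} Hz")
print(f"mean f after change:  {inst_freq[int(5 * fs):].mean():.2f} Hz")
```

Tracking how this estimate speeds up or slows down after a perturbation is the kind of frequency-adjustment signature the ERFA paradigm is designed to detect.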
Epilepsy genetics 2023: From research to advanced clinical genetic test interpretation
The presentation will provide an overview of the expanding role of genetic factors in epilepsy. It will delve into the fundamentals of this field and elucidate how digital tools and resources can aid in the re-evaluation of genetic test results. In the initial segment of the presentation, Dr. Lal will examine the advancements made over the past two decades regarding the genetic architecture of various epilepsy types. Additionally, he will present research studies in which he has actively participated, offering concrete examples. Subsequently, during the second part of the talk, Dr. Lal will share the ongoing research projects that focus on epilepsy genetics, bioinformatics, and health record data science.
Walk the talk: concrete actions to promote diversity in neuroscience in Latin America
Building upon the webinar "What are the main barriers to succeed in brain sciences in Latin America?" (February 2021) and the paper "Addressing the opportunity gap in the Latin American neuroscience community" (Silva, A., Iyer, K., Cirulli, F. et al. Nat Neurosci August 2022), this ALBA-IBRO Webinar is the next chapter in our journey towards fostering inclusivity and diversity in neuroscience in Latin America. The webinar is designed to go beyond theoretical discussions and provide tangible solutions. We will showcase 3-4 best practice case studies, shining a spotlight on real-life actions and campaigns implemented at the institutional level, be it within government bodies, universities, or other organisations. Our goal is to empower neuroscientists across Latin America by equipping them with practical knowledge they can apply in their own institutions and countries.
From spikes to factors: understanding large-scale neural computations
It is widely accepted that human cognition is the product of spiking neurons. Yet even for basic cognitive functions, such as the ability to make decisions or to prepare and execute a voluntary movement, the gap between spikes and computation is vast. Only for very simple circuits and reflexes can one explain computations neuron by neuron and spike by spike. This approach becomes infeasible when neurons are numerous and the flow of information is recurrent. To understand computation, one thus requires appropriate abstractions. An increasingly common abstraction is the neural ‘factor’. Factors are central to many explanations in systems neuroscience. Factors provide a framework for describing computational mechanism and offer a bridge between data and concrete models. Yet there remains some discomfort with this abstraction, and with any attempt to provide mechanistic explanations above the level of spikes, neurons, cell types, and other comfortingly concrete entities. I will explain why, for many networks of spiking neurons, factors are not only a well-defined abstraction but are critical to understanding computation mechanistically. Indeed, factors are as real as other abstractions we now accept: pressure, temperature, conductance, and even the action potential itself. I will use recent empirical results to illustrate how factor-based descriptions have become essential to the forming and testing of scientific hypotheses. I will also show how embracing factor-level descriptions affords remarkable power when decoding neural activity for neural engineering purposes.
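To make the factor abstraction concrete, here is a minimal sketch in which spike counts from a simulated population are generated by a few shared latent factors and then recovered with off-the-shelf factor analysis. The dimensions, rates, and bin sizes are invented for illustration; this is not the analysis from the talk.

```python
# Minimal sketch of the 'factor' abstraction: spike counts from many neurons
# are driven by a few shared latent factors, which factor analysis recovers.
# Dimensions and rates are illustrative assumptions, not from the talk.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_neurons, n_timebins, n_factors = 100, 2000, 3

# Latent factors: slow, smooth trajectories (e.g., a preparatory signal).
factors = np.cumsum(rng.standard_normal((n_timebins, n_factors)), axis=0)
factors /= factors.std(axis=0)

# Each neuron's firing rate is a random mixture of the factors.
loading = rng.standard_normal((n_factors, n_neurons)) * 0.5
rates = np.exp(1.0 + factors @ loading)          # positive firing rates
spikes = rng.poisson(rates * 0.05)               # counts in 50 ms bins

# Recover a low-dimensional description from the spike counts alone.
fa = FactorAnalysis(n_components=n_factors)
recovered = fa.fit_transform(np.sqrt(spikes))    # sqrt stabilizes variance

# The recovered factors should correlate with the true ones,
# up to sign and an arbitrary rotation of the latent space.
corr = np.corrcoef(recovered.T, factors.T)[:n_factors, n_factors:]
print(np.round(np.abs(corr).max(axis=1), 2))
```

The point of the exercise is that a 100-neuron description collapses to three trajectories that carry the shared structure, which is what makes factor-level hypotheses testable.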
Autopoiesis and Enaction in the Game of Life
Enaction plays a central role in the broader fabric of so-called 4E (embodied, embedded, extended, enactive) cognition. Although the origin of the enactive approach is widely dated to the 1991 publication of the book "The Embodied Mind" by Varela, Thompson and Rosch, many of the central ideas trace to much earlier work. Over 40 years ago, the Chilean biologists Humberto Maturana and Francisco Varela put forward the notion of autopoiesis as a way to understand living systems and the phenomena that they generate, including cognition. Varela and others subsequently extended this framework to an enactive approach that places biological autonomy at the foundation of situated and embodied behavior and cognition. I will describe an attempt to place Maturana and Varela's original ideas on a firmer foundation by studying them within the context of a toy model universe, John Conway's Game of Life (GoL) cellular automaton. This work has both pedagogical and theoretical goals. Simple concrete models provide an excellent vehicle for introducing some of the core concepts of autopoiesis and enaction and explaining how these concepts fit together into a broader whole. In addition, a careful analysis of such toy models can hone our intuitions about these concepts, probe their strengths and weaknesses, and move the entire enterprise in the direction of a more mathematically rigorous theory. In particular, I will identify the primitive processes that can occur in GoL, show how these can be linked together into mutually-supporting networks that underlie persistent bounded entities, map the responses of such entities to environmental perturbations, and investigate the paths of mutual perturbation that these entities and their environments can undergo.
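For readers unfamiliar with the model universe, the sketch below implements the standard GoL update rule and propagates a glider, a simple example of the kind of persistent bounded entity discussed above. The grid size and initial condition are arbitrary illustrative choices.

```python
# The Game of Life update rule used as the talk's model universe:
# a live cell survives with 2-3 live neighbours; a dead cell is born with 3.
import numpy as np
from scipy.signal import convolve2d

KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def gol_step(grid: np.ndarray) -> np.ndarray:
    """One synchronous update of Conway's Game of Life (toroidal wrap)."""
    neighbours = convolve2d(grid, KERNEL, mode="same", boundary="wrap")
    born = (grid == 0) & (neighbours == 3)
    survive = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
    return (born | survive).astype(int)

# A glider: a persistent bounded pattern that translates across the grid.
grid = np.zeros((16, 16), dtype=int)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r, c] = 1

for _ in range(4):          # after 4 steps a glider recurs, shifted by (1, 1)
    grid = gol_step(grid)
print(np.argwhere(grid))    # same shape, displaced one cell down and right
```

That a recognizable "entity" persists and moves, while every underlying cell simply obeys the same local rule, is exactly the kind of phenomenon the autopoietic analysis seeks to make precise.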
Mechanisms of relational structure mapping across analogy tasks
Following the seminal structure-mapping theory of Dedre Gentner, the process of mapping the corresponding relational structures of two analogs has been understood as a key component of analogy making. In recent years, however, and not without merit, semantic, pragmatic, and perceptual aspects of analogical mapping have attracted the primary attention of analogy researchers. For almost a decade, our team has been re-focusing on relational structure mapping, investigating its potential mechanisms across various analogy tasks, both abstract (semantically lean) and more concrete (semantically rich), using diverse methods (behavioral, correlational, eye-tracking, EEG). I will present an overview of our main findings. They suggest that structure mapping (1) consists of the incremental construction of the ultimate mental representation, (2) strongly depends on working memory resources and reasoning ability, (3) even when as little as a single trivial relation needs to be represented mentally. Effective mapping (4) is related to the slowest brain rhythm, the delta band (around 2-3 Hz), suggesting its highly integrative nature. Finally, we have developed a new task, Graph Mapping, which involves the pure mapping of two explicit relational structures. This task allows for precise investigation and manipulation of the mapping process in experiments, and it is also one of the best proxies of individual differences in reasoning ability. Structure mapping is as crucial to analogy as Gentner advocated, and perhaps it is crucial to cognition in general.
Hebbian Plasticity Supports Predictive Self-Supervised Learning of Disentangled Representations
Discriminating distinct objects and concepts from sensory stimuli is essential for survival. Our brains accomplish this feat by forming meaningful internal representations in deep sensory networks with plastic synaptic connections. Experience-dependent plasticity presumably exploits temporal contingencies between sensory inputs to build these internal representations. However, the precise mechanisms underlying plasticity remain elusive. We derive a local synaptic plasticity model inspired by self-supervised machine learning techniques that shares a deep conceptual connection to Bienenstock-Cooper-Munro (BCM) theory and is consistent with experimentally observed plasticity rules. We show that our plasticity model yields disentangled object representations in deep neural networks without the need for supervision or implausible negative examples. In response to altered visual experience, our model qualitatively captures neuronal selectivity changes observed in vivo in the monkey inferotemporal cortex. Our work suggests a plausible learning rule to drive learning in sensory networks while making concrete, testable predictions.
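As a rough, hedged sketch of the BCM-style plasticity the abstract relates to (not the authors' derived model), the code below implements the classic rate-based BCM rule with a sliding modification threshold. All rates, timescales, and stimuli are invented for illustration.

```python
# Sketch of the classic BCM rule (not the authors' model): Hebbian change is
# gated by a sliding threshold on postsynaptic activity, so strong responses
# potentiate the active synapses and weak responses depress them.
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_steps = 50, 5000
w = rng.uniform(0.0, 0.1, n_inputs)      # synaptic weights
theta = 1.0                               # sliding modification threshold
eta, tau_theta = 1e-3, 200.0              # learning rate, threshold timescale

# Two input "stimuli"; one drives the neuron more strongly than the other.
patterns = rng.uniform(0.0, 1.0, (2, n_inputs))
patterns[0] *= 1.5

for step in range(n_steps):
    x = patterns[step % 2]                # alternate the two stimuli
    y = w @ x                             # linear postsynaptic rate
    # BCM: dw ~ x * y * (y - theta); LTP above threshold, LTD below.
    w += eta * x * y * (y - theta)
    w = np.clip(w, 0.0, None)             # keep weights non-negative
    # The threshold tracks the running average of y^2, stabilizing learning.
    theta += (y**2 - theta) / tau_theta

# The neuron typically develops selectivity, responding to one pattern
# (here the stronger one) much more than to the other.
print([float(np.round(w @ p, 2)) for p in patterns])
```

The sliding threshold is the ingredient that yields selectivity without supervision, which is the conceptual bridge to the self-supervised rule described in the abstract.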
Probabilistic computation in natural vision
A central goal of vision science is to understand the principles underlying the perception and neural coding of the complex visual environment of our everyday experience. In the visual cortex, foundational work with artificial stimuli, and more recent work combining natural images and deep convolutional neural networks, have revealed much about the tuning of cortical neurons to specific image features. However, a major limitation of this existing work is its focus on single-neuron response strength to isolated images. First, during natural vision, the inputs to cortical neurons are not isolated but rather embedded in a rich spatial and temporal context. Second, the full structure of population activity—including the substantial trial-to-trial variability that is shared among neurons—determines encoded information and, ultimately, perception. In the first part of this talk, I will argue for a normative approach to study encoding of natural images in primary visual cortex (V1), which combines a detailed understanding of the sensory inputs with a theory of how those inputs should be represented. Specifically, we hypothesize that V1 response structure serves to approximate a probabilistic representation optimized to the statistics of natural visual inputs, and that contextual modulation is an integral aspect of achieving this goal. I will present a concrete computational framework that instantiates this hypothesis, and data recorded using multielectrode arrays in macaque V1 to test its predictions. In the second part, I will discuss how we are leveraging this framework to develop deep probabilistic algorithms for natural image and video segmentation.
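One common way to make "responses approximate a probabilistic representation" concrete is the sampling interpretation, in which trial-to-trial variability reflects samples from a posterior over latent features. The toy Gaussian example below illustrates that reading; it is a generic textbook sketch, not necessarily the speaker's specific framework.

```python
# Toy illustration of a sampling-style probabilistic code: a latent feature s
# is observed through noise, and "responses" on each trial are samples from
# the Gaussian posterior, so trial-to-trial variability widens when the
# input is less reliable. Purely illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(2)

def posterior(observation, obs_noise, prior_mean=0.0, prior_var=1.0):
    """Gaussian posterior over latent s given one noisy observation."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_noise**2)
    post_mean = post_var * (prior_mean / prior_var + observation / obs_noise**2)
    return post_mean, post_var

s_true = 1.0
for obs_noise in (0.3, 1.5):                 # reliable vs. unreliable input
    obs = s_true + obs_noise * rng.standard_normal()
    mu, var = posterior(obs, obs_noise)
    samples = mu + np.sqrt(var) * rng.standard_normal(1000)  # "trials"
    print(f"noise={obs_noise}: response mean={samples.mean():.2f}, "
          f"trial-to-trial sd={samples.std():.2f}")
```

On this view, shared variability is not a nuisance but part of the code: its structure carries the uncertainty of the representation.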
Abstraction doesn't happen all at once (despite what some models of concept learning suggest)
In the past few years, there has been growing evidence that the basic ability for relational generalization starts in early infancy, with 3-month-olds seeming to learn relational abstractions with little training. Further, work with toddlers seems to suggest that relational generalizations are no more difficult than those based on objects, and that toddlers can readily consider both simultaneously. Likewise, causal learning research with adults suggests that people infer causal relationships at multiple levels of abstraction simultaneously as they learn about novel causal systems. These findings all appear to run counter to theories of concept learning which posit that concepts, when first learned, tend to be concrete (tied to specific contexts and features), and that abstraction proceeds incrementally as learners encounter more examples. The current talk will not question the veracity of any of these findings, but will present several others, from my own and others' research on relational learning, suggesting that when the perceptual or conceptual content becomes more complex, patterns of incremental abstraction re-emerge. Further, the specific contexts and task parameters that support or hinder abstraction reveal the underlying cognitive processes. I will then consider whether models that posit simultaneous, immediate learning at multiple levels of abstraction can accommodate these more complex patterns.
AI UPtake: Panel discussion on collaborative research
Artificial intelligence (AI) and machine learning (ML) can facilitate new paradigms and solutions in almost every research field. Because of its transdisciplinary nature, collaboration is essential to achieving tangible and concrete progress in impactful and meaningful AI and ML research. Come and meet University of Pretoria (UP) academics who are embracing and exploring the opportunities that AI and ML offer to transcend the conventional boundaries of their disciplines. Join the discussion of this new frontier of opportunities and challenges, look beyond the obvious, and discover new directions we can pursue tomorrow, together!
3 Reasons Why You Should Care About Category Theory
Category theory is a branch of mathematics that has been used to organize various regions of mathematics and related sciences from a radical "relation-first" point of view. Why should consciousness researchers care about category theory? There are (at least) three reasons: (1) everything is relational; (2) everything is relation; (3) relation is everything. In this talk, we explain these reasons more concretely and introduce ideas for utilizing basic concepts of category theory in consciousness studies.
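As a loose, hedged illustration of the "relation-first" slogan, the toy code below treats morphisms as plain functions and checks the category laws of associativity and identity. It is a pedagogical sketch only, not part of the talk's formal development.

```python
# A tiny "category" whose objects are Python types and whose morphisms are
# functions; composition and identity obey the category laws.
from functools import reduce

def compose(*fs):
    """compose(f, g, h)(x) == f(g(h(x))) -- right-to-left composition."""
    return reduce(lambda f, g: lambda x: f(g(x)), fs)

identity = lambda x: x

f = lambda n: n + 1          # a morphism int -> int
g = lambda n: 2 * n          # another morphism int -> int

# Category laws: composition is associative and identity is neutral.
assert compose(f, compose(g, f))(3) == compose(compose(f, g), f)(3)
assert compose(f, identity)(3) == compose(identity, f)(3) == f(3)

# "Relation-first": the interesting structure lives in the arrows, not in
# the objects themselves.
print(compose(f, g, identity)(3))   # -> 7
```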
One Instructional Sequence Fits All? A Conceptual Analysis of the Applicability of Concreteness Fading
According to the concreteness fading approach, instruction should start with concrete representations and progress stepwise to representations that are more idealized. Various researchers have suggested that concreteness fading is a broadly applicable instructional approach. In this talk, we conceptually analyze examples of concreteness fading in mathematics and various science domains. In this analysis, we draw on theories of analogical and relational reasoning and on the literature about learning with multiple representations. Furthermore, we report on an experimental study in which we employed concreteness fading in advanced physics education. The results of the conceptual analysis and the experimental study indicate that concreteness fading may not be as generalizable as has been suggested. The reasons for this limited generalizability are twofold. First, the types of representations and the relations between them differ across different domains. Second, the instructional goals between domains and the subsequent roles of the representations vary.
The physics of cement cohesion
Cement is the main binding agent in concrete, literally gluing together rocks and sand into the most-used synthetic material on Earth. However, cement production is responsible for significant amounts of man-made greenhouse gases; in fact, if the cement industry were a country, it would be the third-largest emitter in the world. Alternatives to the current, environmentally harmful cement production process are not available, essentially because gaps in fundamental understanding hamper the development of smarter and more sustainable solutions. The ultimate challenge is to link the chemical composition of cement grains to the nanoscale physics of the cohesive forces that emerge when mixing cement with water. Cement's nanoscale cohesion originates from the electrostatics of ions accumulated in a water-based solution between like-charged surfaces, but it is not captured by existing theories because of the nature of the ions involved and the high surface charges. Surprisingly enough, this is also the case for unexplained cohesion in a range of colloidal and biological matter. About one century after the early studies of cement hydration, we have quantitatively solved this notoriously hard problem and discovered how cement cohesion develops during hydration. I will discuss how 3D numerical simulations featuring a simple but molecular description of ions and water, together with an analytical theory that goes beyond the traditional continuum approximations, helped us demonstrate that the optimized interlocking of ion-water structures determines the net cohesive forces and their evolution. These findings open the path to scientifically grounded strategies of material design for cements and have implications for a much wider range of materials and systems where ionic water-based solutions feature both strong Coulombic and confinement effects, ranging from biological membranes to soils. Construction materials are central to our society and to our lives on this planet, but they are usually far removed from fundamental science. We can now start to understand how cement physical chemistry determines performance, durability, and sustainability.
Global AND Scale-Free? Spontaneous cortical dynamics between functional networks and cortico-hippocampal communication
Recent advancements in anatomical and functional imaging emphasize the presence of whole-brain networks organized according to functional and connectivity gradients, but a satisfactory model of how such structure shapes activity propagation and memory processes is still lacking. We analyse the fine-grained spatiotemporal dynamics of spontaneous activity in the entire dorsal cortex through simultaneous recordings of wide-field voltage-sensitive dye transients (VS), cortical ECoG, and hippocampal LFP in anesthetized mice. Both VS and ECoG show cortical avalanches. When measuring avalanches from the VS signal, we find a major deviation of the size scaling from the power-law distribution predicted by the criticality hypothesis and well approximated by the results from the ECoG. Breaking from scale invariance, avalanches can thus be grouped into two regimes. Small avalanches consist of a limited number of co-activation modes involving a subset of cortical networks (related to the Default Mode Network), while larger avalanches involve a substantial portion of the cortical surface and can be clustered into two families: one immediately preceded by Retrosplenial Cortex activation and mostly involving medial-posterior networks, the other initiated by Somatosensory Cortex and extending preferentially along the lateral-anterior region. Rather than differing only in size, these two sets of events appear to be associated with markedly different brain-wide dynamical states: they are accompanied by a shift in the hippocampal LFP from the ripple band (smaller avalanches) to the gamma band (larger avalanches), and they correspond to opposite directionality in the cortex-to-hippocampus causal relationship. These results provide a concrete description of global cortical dynamics and show how the cortex in its entirety is involved in bi-directional communication with the hippocampus even in sleep-like states.
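For context on how avalanches are typically quantified, the sketch below detects avalanches as contiguous supra-threshold runs in a synthetic activity trace and tabulates their sizes. The data, threshold, and binning are illustrative assumptions, not the recordings described above.

```python
# Minimal sketch of a standard avalanche definition: maximal contiguous runs
# of supra-threshold frames, with avalanche size = summed activity in a run.
import numpy as np

rng = np.random.default_rng(3)

# Toy "population activity": mostly quiet with occasional excursions.
activity = rng.exponential(0.5, size=20000)
threshold = np.quantile(activity, 0.75)
active = activity > threshold

# Find the start and end of each contiguous active run.
edges = np.diff(active.astype(int))
starts = np.flatnonzero(edges == 1) + 1
ends = np.flatnonzero(edges == -1) + 1
if active[0]:
    starts = np.r_[0, starts]
if active[-1]:
    ends = np.r_[ends, active.size]

sizes = np.array([activity[s:e].sum() for s, e in zip(starts, ends)])

# Under the criticality hypothesis, sizes follow a power law P(S) ~ S^-1.5;
# here we simply inspect the empirical size distribution in log-spaced bins.
hist, bin_edges = np.histogram(sizes, bins=np.logspace(0, 2, 20))
print(np.column_stack([bin_edges[:-1].round(1), hist]))
```

Whether the resulting size distribution follows a single power law, or instead splits into distinct regimes as reported above, is exactly what such scaling analyses test.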
An overview of open technologies for science and education in Latin America
Open science hardware (OSH) as a concept usually refers to artifacts, but also to a practice, a discipline, and a collective of people pushing for open access to the design of science tools. Since 2016, the Global Open Science Hardware (GOSH) movement has gathered actors from academia, education, the private sector, and civic organisations to advocate for OSH becoming ubiquitous by 2025. In Latin America, GOSH advocates have fundraised and gathered around the development of annual "residencies" for building hardware for science and education. The community is currently defining its regional strategy and identifying other regional actors working on science and technology democratization. In this presentation I will give an overview of the open hardware movement for science, with a focus on the activities and strategy of the Latin American chapter and concrete ways to engage.
Is Rule Learning Like Analogy?
Humans' ability to perceive and abstract relational structure is fundamental to our learning. It allows us to acquire knowledge ranging from linguistic grammar to spatial knowledge to social structures. How does a learner begin to perceive structure in the world? Why do we sometimes fail to see structural commonalities across events? To begin to answer these questions, I attempt to bridge two large, yet somewhat separate research traditions in understanding humans' structural abstraction: rule learning (Marcus et al., 1999) and analogical learning (Gentner, 1989). On the one hand, rule learning research has shown humans' domain-general ability, evident as early as 7 months of age, to abstract structure with ease from limited experience. On the other hand, analogical learning research has shown robust constraints on structural abstraction: young learners prefer object similarity over relational similarity. To understand this seeming paradox between ease and difficulty, we conducted a series of studies using the classic rule learning paradigm (Marcus et al., 1999) but with an analogical (object vs. relation) twist. Adults were presented for two minutes with sentences or events (syllables or shapes) containing a rule. At test, they had to choose between rule abstractions and object matches, that is, the same syllable or shape they saw before. Surprisingly, while in the absence of object matches adults were perfectly capable of abstracting the rule, their ability to do so declined sharply when object matches were present. Our initial results suggest that rule learning may be subject to the usual constraints and signatures of analogical learning: a preference for object similarity can dampen rule generalization. Human abstraction is, at the same time, also concrete.