Infants
From Spiking Predictive Coding to Learning Abstract Object Representation
In the first part of the talk, I will present Predictive Coding Light (PCL), a novel unsupervised learning architecture for spiking neural networks. In contrast to conventional predictive coding approaches, which transmit only prediction errors to higher processing stages, PCL learns inhibitory lateral and top-down connectivity to suppress the most predictable spikes and passes a compressed representation of the input to higher processing stages. We show that PCL reproduces a range of biological findings and exhibits a favorable tradeoff between energy consumption and downstream classification performance on challenging benchmarks. The second part of the talk will feature our lab's efforts to explain how infants and toddlers might learn abstract object representations without supervision. I will present deep learning models that exploit the temporal and multimodal structure of their sensory inputs to learn representations of individual objects, object categories, or abstract super-categories such as "kitchen object" in a fully unsupervised fashion. These models offer a parsimonious account of how abstract semantic knowledge may be rooted in children's embodied first-person experiences.
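The core intuition behind PCL can be illustrated with a toy, non-spiking caricature (the sizes, rates, threshold, and learning rule below are illustrative assumptions, not the actual PCL model): lateral inhibitory weights learn which units' activity is predictable from co-active neighbours, and well-predicted activity is suppressed before being passed upward, so only "surprising" spikes are forwarded.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 20                    # number of units (assumed)
W = np.zeros((n, n))      # learned lateral inhibitory weights
lr = 0.01                 # learning rate (assumed)

def forward(spikes, W, threshold=0.5):
    """Suppress activity that lateral connections predict well."""
    prediction = W @ spikes                     # lateral prediction of each unit
    return spikes * (prediction < threshold)    # only surprising spikes pass

# Redundant input: unit pairs (2i, 2i+1) always fire together, so each
# member of a pair becomes predictable from the other.
for _ in range(2000):
    spikes = np.repeat(rng.random(n // 2) < 0.3, 2).astype(float)
    passed = forward(spikes, W)
    W += lr * np.outer(spikes, spikes)          # Hebbian-style strengthening
    np.fill_diagonal(W, 0.0)                    # no self-prediction
    np.clip(W, 0.0, 1.0, out=W)

spikes = np.repeat(rng.random(n // 2) < 0.3, 2).astype(float)
passed = forward(spikes, W)
print(passed.sum() <= spikes.sum())  # True: a compressed code is forwarded
```

After training, the learned inhibition silences the redundant half of each correlated pair, which is the sense in which a compressed representation reaches the next stage.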
Dyslexia, Rhythm, Language and the Developing Brain
Recent insights from auditory neuroscience provide a new perspective on how the brain encodes speech. Drawing on these insights, I will provide an overview of the key factors underpinning individual differences in children's development of language and phonology, providing a context for exploring atypical reading development (dyslexia). Children with dyslexia are relatively insensitive to acoustic cues related to speech rhythm patterns, and this lack of rhythmic sensitivity is related to atypical neural encoding of rhythm patterns in speech. I will describe our recent data from infants as well as children, demonstrating developmental continuity in the key neural variables.
Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness
Despite her still poor visual acuity and minimal visual experience, a 2- to 3-month-old baby will reliably respond to facial expressions, smiling back at her caretaker or older sibling. But what if that same baby had been deprived of her early visual experience? Would she be able to respond appropriately to seemingly mundane interactions, such as a peer's facial expression, if she began seeing at the age of 10? My work is part of Project Prakash, a dual humanitarian/scientific mission to identify and treat curably blind children in India and then study how their brains learn to make sense of the visual world when their visual journey begins late in life. In my talk, I will give a brief overview of Project Prakash and present findings from one of my primary lines of research: the plasticity of face perception with late sight onset. Specifically, I will discuss a mixed-methods effort to probe and explain the differential windows of plasticity that we find across different aspects of distributed face recognition, from distinguishing a face from a nonface early in the developmental trajectory, to recognizing facial expressions, identifying individuals, and even identifying one's own caretaker. I will draw connections between our empirical findings and our recent theoretical work hypothesizing that children with late sight onset may suffer persistent face identification difficulties because of the unusual acuity progression they experience relative to typically developing infants. Finally, time permitting, I will point to potential implications of our findings for supporting newly sighted children as they transition back into society and school, given that their needs and possibilities change significantly upon the introduction of vision into their lives.
Internal representation of musical rhythm: transformation from sound to periodic beat
When listening to music, humans readily perceive and move along with a periodic beat. Critically, perception of a periodic beat is commonly elicited by rhythmic stimuli with physical features arranged in a way that is not strictly periodic. Hence, beat perception must capitalize on mechanisms that transform stimulus features into a temporally recurrent format with emphasized beat periodicity. Here, I will present a line of work that aims to clarify the nature and neural basis of this transformation. In these studies, electrophysiological activity was recorded as participants listened to rhythms known to induce perception of a consistent beat across healthy Western adults. The results show that the human brain selectively emphasizes beat representation when it is not acoustically prominent in the stimulus, and this transformation (i) can be captured non-invasively using surface EEG in adult participants, (ii) is already in place in 5- to 6-month-old infants, and (iii) cannot be fully explained by subcortical auditory nonlinearities. Moreover, as revealed by human intracerebral recordings, a prominent beat representation emerges already in the primary auditory cortex. Finally, electrophysiological recordings from the auditory cortex of a rhesus monkey show a significant enhancement of beat periodicities in this area, similar to humans. Taken together, these findings indicate an early, general auditory cortical stage of processing by which rhythmic inputs are rendered more temporally recurrent than they are in reality. Already present in non-human primates and human infants, this "periodized" default format could then be shaped by higher-level associative sensory-motor areas and guide movement in individuals with strongly coupled auditory and motor systems. 
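The claim that the brain emphasizes a beat that is "not acoustically prominent in the stimulus" can be made concrete in the frequency domain. The sketch below is a simplified illustration of that logic, not the authors' analysis pipeline: a syncopated event pattern carries no energy at the beat frequency, while a simulated "periodized" response to the same events does, and a simple spectral measure captures the difference.

```python
import numpy as np

fs = 100.0                     # sampling rate in Hz (illustrative)
t = np.arange(0, 9.6, 1 / fs)  # 9.6 s of signal
beat_f = 1.25                  # putative beat frequency in Hz (illustrative)

# Syncopated stimulus envelope: sound events do NOT fall strictly on the
# beat; this particular pattern has zero energy at the beat frequency.
event_times = np.array([0.0, 0.4, 1.2, 1.6, 2.8, 3.2,
                        4.0, 4.4, 5.6, 6.0, 7.2, 8.4])
stim = np.zeros_like(t)
stim[np.round(event_times * fs).astype(int)] = 1.0

# Simulated "periodized" neural response: the same events plus an added
# component at the beat period, standing in for the cortical transformation.
resp = stim + 0.5 * np.cos(2 * np.pi * beat_f * t)

def beat_prominence(signal):
    """Amplitude at the beat frequency relative to the mean spectral amplitude."""
    spec = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spec[np.argmin(np.abs(freqs - beat_f))] / spec[1:].mean()

print(beat_prominence(resp) > beat_prominence(stim))  # True: beat is emphasized
```

Comparing such beat-frequency prominence between stimulus and neural signal is one standard way to quantify how much more "temporally recurrent" the internal representation is than the input.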
Together, this highlights the multiplicity of neural processes supporting coordinated musical behaviors widely observed across human cultures.

The experiments herein include: a motor timing task comparing the effects of movement vs. non-movement with and without feedback (Exp. 1A & 1B), a transcranial magnetic stimulation (TMS) study on the role of the supplementary motor area (SMA) in transforming temporal information (Exp. 2), and a perceptual timing task investigating the effect of noisy movement on time perception in both visual and auditory modalities (Exp. 3A & 3B). Together, the results of these studies support the Bayesian cue combination framework, in that movement improves the precision of time perception not only in perceptual timing tasks but also in motor timing tasks (Exp. 1A & 1B), stimulating the SMA appears to disrupt the transformation of temporal information (Exp. 2), and when movement becomes unreliable or noisy there is no longer an improvement in the precision of time perception (Exp. 3A & 3B). Although there is support for the proposed framework, more studies (e.g., fMRI, TMS, EEG) need to be conducted to better understand where and how this framework may be instantiated in the brain; nevertheless, this work provides a starting point for better understanding the intrinsic connection between time and movement.
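The Bayesian cue combination framework invoked here has a simple quantitative core, sketched below with illustrative numbers of my own: two independent noisy estimates of an interval are fused by weighting each by its reliability (inverse variance), so fusion always improves precision, and a cue that becomes very noisy loses its weight and contributes almost nothing.

```python
def combine(mu1, var1, mu2, var2):
    """Reliability-weighted (inverse-variance) fusion of two independent cues."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)   # weight of cue 1
    mu = w1 * mu1 + (1 - w1) * mu2            # fused estimate
    var = 1 / (1 / var1 + 1 / var2)           # fused variance (never worse)
    return mu, var

# Illustrative: auditory and movement-based estimates of an ~800 ms interval,
# each with 40 ms standard deviation.
mu, var = combine(780.0, 40.0**2, 820.0, 40.0**2)
print(var < 40.0**2)    # True: combining equally reliable cues halves variance

# A very noisy movement cue (400 ms SD) contributes almost nothing,
# mirroring the loss of the movement benefit in Exp. 3A & 3B.
mu_noisy, var_noisy = combine(780.0, 40.0**2, 820.0, 400.0**2)
print(abs(var_noisy - 40.0**2) / 40.0**2 < 0.02)  # True: ~auditory-alone precision
```

On this account, movement helps timing exactly when, and only when, the motor estimate is reliable enough to earn a non-negligible weight.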
Children’s inference of verb meanings: Inductive, analogical and abductive inference
Children need inference in order to learn the meanings of words. They must infer the referent from the situation in which a target word is said. Furthermore, to be able to use the word in other situations, they also need to infer what other referents the word can be generalized to. As verbs refer to relations between arguments, verb learning requires relational analogical inference, which is challenging for young children. To overcome this difficulty, young children recruit a diverse range of cues in their inference of verb meanings, including, but not limited to, syntactic, social-pragmatic, and statistical cues. They also utilize perceptual similarity (object similarity) in progressive alignment to extract relational verb meanings. However, just having a list of these cues is not enough: the cues must be selected, combined, and coordinated to produce the optimal interpretation in a particular context. This process involves abductive reasoning, similar to what scientists do to form hypotheses from a range of facts or evidence. In this talk, I discuss how children use a chain of inferences to learn the meanings of verbs. I consider not only the process of analogical mapping and progressive alignment, but also how children use abductive inference to find the source of analogy and gain insights into the general principles underlying verb learning. I also present recent findings from my laboratory showing that prelinguistic human infants use a rudimentary form of abductive reasoning, which enables the first step of word learning.
Development of multisensory perception and attention and their role in audiovisual speech processing
Representations of abstract relations in infancy
Abstract relations are considered the pinnacle of human cognition, allowing analogical and logical reasoning, and possibly setting humans apart from other animal species. Such relations cannot be represented in a perceptual code but can easily be represented in a propositional language of thought, where relations between objects are represented by abstract discrete symbols. Focusing on the abstract relations same and different, I will show that (1) there is a discontinuity along ontogeny with respect to the representations of abstract relations, but (2) young infants already possess representations of same and different. Finally, (3) I will investigate the format of representation of abstract relations in young infants, arguing that those representations are not discrete, but rather built by juxtaposing abstract representations of entities.
Role of Oxytocin in regulating microglia functions to prevent brain damage of the developing brain
Every year, 30 million infants worldwide are delivered after intra-uterine growth restriction (IUGR) and 15 million are born preterm. These two conditions are the leading causes of ante/perinatal stress and brain injury responsible for neurocognitive and behavioral disorders in more than 9 million children each year. Both prematurity and IUGR are associated with perinatal systemic inflammation, a key factor associated with neuroinflammation and identified as the best predictor of subsequent neurological impairments. Most pharmacological candidates have failed to demonstrate any beneficial effect in preventing perinatal brain damage. In contrast, environmental enrichment based on developmental care, skin-to-skin contact, and vocal/music intervention appears to confer positive effects on brain structure and function. However, the mechanisms underlying these effects remain unknown. There is strong evidence that an adverse environment during pregnancy and the perinatal period can influence hormonal responses of the newborn, with long-lasting neurobehavioral consequences in infancy and adulthood. Excessive cortisol release in response to perinatal stress induces pro-inflammatory and brain-programming effects. These deleterious effects are known to be balanced by Oxytocin (OT), a neuropeptide playing a key role during the perinatal period and parturition, in social behavior, and in regulating the central inflammatory response to injury in the adult brain. Using a rodent model of IUGR associated with perinatal brain damage, we recently reported that Carbetocin, a brain-permeable, long-lasting OT receptor (OTR) agonist, was associated with a significant reduction of activated microglia, the primary immune cells of the brain. Moreover, this reduced microglial reactivity was associated with long-term neuroprotection. These findings make OT a promising candidate for neonatal neuroprotection through the regulation of neuroinflammation.
However, the causal link between endogenous OT and the central inflammatory response to injury has not yet been established and will be studied further by the lab.
Assessing consciousness in human infants
In a few months, human infants develop complex capacities in numerous cognitive domains. They learn their native language, recognize their parents, and refine their numerical capacities and their perception of the world around them. But are they conscious, and how can we study consciousness when no verbal report is possible? One way to approach this question is to rely on the neural responses correlated with conscious perception in adults (i.e., a global increase of activity, notably in frontal regions, with top-down amplification of the sensory levels). We can thus study at what age the developing anatomical architecture might be mature enough to allow this type of response; moreover, we can use experimental paradigms similar to those used in adults, in which we expect to observe a similar pattern of functional responses.
Preschoolers' Comprehension of Functional Metaphors
Previous work suggests that children’s ability to understand metaphors emerges late in development. Researchers argue that children’s initial failure to understand metaphors is due to an inability to reason about shared relational structures between concepts. However, recent work demonstrates that preschoolers, toddlers, and even infants are already capable of relational reasoning. Might preschoolers also be capable of understanding metaphors, given more sensitive experimental paradigms? I explore whether preschoolers (N = 200, ages 4-5) understand functional metaphors, namely metaphors based on functional similarities. In Experiment 1a, preschoolers rated functional metaphors (e.g. “Roofs are hats”; “Clouds are sponges”) as “smarter” than nonsense statements. In Experiment 1b, adults (N = 48) also rated functional metaphors as “smarter” than nonsense statements (e.g. “Dogs are scissors”; “Boats are skirts”). In Experiment 2, preschoolers preferred functional explanations (e.g. “Both hold water”) over perceptual explanations (e.g. “Both are fluffy”) when interpreting a functional metaphor (e.g. “Clouds are sponges”). In Experiment 3, preschoolers preferred functional metaphors over nonsense statements in a dichotomous-choice task. Overall, this work demonstrates preschoolers’ early-emerging ability to understand functional metaphors.
Becoming Human: A Theory of Ontogeny
Humans are biologically adapted for cultural life in ways that other primates are not. Humans have unique motivations and cognitive skills for sharing emotions, experience, and collaborative actions (shared intentionality). These motivations and skills first emerge in human ontogeny at around one year of age, as infants begin to participate with other persons in various kinds of collaborative and joint attentional activities, including linguistic communication. Our nearest primate relatives understand important aspects of intentional action - especially in competitive situations - but they do not seem to have the motivations and cognitive skills necessary to engage in activities involving collaboration, shared intentionality, and, in general, things cultural.
Infant Relational Learning - Interactions with Visual and Linguistic Factors
Humans are incredible learners, a talent supported by our ability to detect and transfer relational similarities between items and events. Spotting these common relations despite perceptual differences is challenging, yet there’s evidence that this ability begins early, with infants as young as 3 months discriminating same and different (Anderson et al., 2018; Ferry et al., 2015). How? To understand the underlying mechanisms, I examine how learning outcomes in the first year correspond with changes in input and in infant age. I discuss the commonalities in this process with that seen in older children and adults, as well as differences due to interactions with other maturing processes like language and visual attention.
The developing visual brain – answers and questions
We will start our talk with a short video of our research, illustrating methods (some old and new) and findings that have provided our current understanding of how visual capabilities develop in infancy and early childhood. However, our research poses some outstanding questions. We will briefly discuss three issues, which are linked by a common focus on the development of visual attentional processing: (1) How do recurrent cortical loops contribute to development? Cortical selectivity (e.g., to orientation, motion, and binocular disparity) develops in the early months of life. However, these systems are not purely feedforward but depend on parallel pathways, with recurrent feedback loops playing a critical role. The development of diverse networks, particularly for motion processing, may explain changes in dynamic responses and reconcile developmental data obtained with different methodologies. One possible role for these loops is in top-down attentional control of visual processing. (2) Why do hyperopic infants become strabismic (cross-eyed)? Binocular interaction is a particularly sensitive area of development. Standard clinical accounts suppose that long-sighted (hyperopic) refractive errors require accommodative effort, putting stress on the accommodation-convergence link that leads to its breakdown and strabismus. Our large-scale population screening studies of 9-month-old infants question this: hyperopic infants are at higher risk of strabismus and impaired vision (amblyopia and impaired attention), but these hyperopic infants often under- rather than over-accommodate. This poor accommodation may reflect poor early attention processing, possibly a 'soft sign' of subtle cerebral dysfunction. (3) What do many neurodevelopmental disorders have in common?
Despite similar cognitive demands, global motion perception is much more impaired than global static form perception across diverse neurodevelopmental disorders, including Down and Williams syndromes, Fragile X, autism, children born preterm, and infants with perinatal brain injury. These deficits in motion processing are associated with deficits in other dorsal-stream functions such as visuo-motor co-ordination and attentional control, a cluster we have called 'dorsal stream vulnerability'. However, our neuroimaging measures related to motion coherence in typically developing children suggest that the critical areas for individual differences in global motion sensitivity are not early motion-processing areas such as V5/MT, but downstream parietal and frontal areas for decision processes on motion signals. Although these brain networks may also underlie attentional and visuo-motor deficits, we still do not know when and how these deficits differ across different disorders and between individual children. Answering these questions provides necessary steps, not only in increasing our scientific understanding of human visual brain development, but also in designing appropriate interventions to help each child achieve their full potential.
Associations between maternal pre-pregnancy BMI and white matter integrity in infants
FENS Forum 2024
The role of the mean diffusivity of the amygdala in the perception of emotional faces in 8-month-old infants
FENS Forum 2024