Neurocomputational Models
Xavier Hinaut
This project explores adapting large language models (LLMs), such as ChatGPT, to study their potential for understanding human language and identifying associated pathologies. Combining advanced neurocomputational models with functional MRI, the work aims to decipher linguistic representations and their individual variations, particularly in pathological contexts such as dyslexia.
The Social Brain: From Models to Mental Health
Given the complex and dynamic nature of our social relationships, the human brain needs to learn quickly and adapt to new social situations. The breakdown of any of the underlying computations could lead to social deficits, as observed in many psychiatric disorders. In this talk, I will present our recent neurocomputational and intracranial work that attempts to model both 1) how humans dynamically adapt their beliefs about other people and 2) how individuals can influence others through model-based forward thinking. Lastly, I will present our findings on how impaired social computations might manifest in different disorders such as addiction, delusion, and autism. Taken together, these findings reveal the dynamic and proactive nature of human interactions as well as the clinical significance of these higher-order social processes.
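The idea of dynamically adapting beliefs about another person can be illustrated with a minimal delta-rule sketch. The learning rate, the coding of outcomes, and the cooperation scenario below are illustrative assumptions, not parameters or tasks from the work described in the abstract.

```python
# A minimal sketch of trial-by-trial belief updating about another person,
# here a belief about their probability of cooperating.
# The learning rate (lr) and outcome sequence are hypothetical.
def update_belief(belief, outcome, lr=0.2):
    """Move the belief toward the observed outcome (1 = cooperated, 0 = defected)."""
    return belief + lr * (outcome - belief)

belief = 0.5  # start maximally uncertain
for outcome in [1, 1, 0, 1, 1]:
    belief = update_belief(belief, outcome)
# After mostly cooperative outcomes, the belief ends above 0.5.
```

Richer models of this kind track uncertainty as well (e.g. Bayesian updating), letting the learner adapt faster when the other person's behavior is volatile.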
The When, Where and What of visual memory formation
The eyes send a continuous stream of information along about two million nerve fibers to the brain, but only a fraction of this information is stored as visual memories. This talk will detail three neurocomputational models that attempt to explain how the visual system makes on-the-fly decisions about how to encode that information. First, the STST family of models (Bowman & Wyble 2007; Wyble, Potter, Bowman & Nieuwenstein 2011) proposes mechanisms for temporal segmentation of continuous input. The conclusion of this work is that the visual system has mechanisms for rapidly creating brief episodes of attention that highlight important moments in time, and for separating each episode from its temporally adjacent neighbors to benefit learning. Next, the RAGNAROC model (Wyble et al. 2019) describes a decision process for determining the spatial focus (or foci) of attention in a spatiotopic field and the neural mechanisms that provide enhancement of targets and suppression of highly distracting information. This work highlights the importance of integrating behavioral and electrophysiological data to provide empirical constraints on a neurally plausible model of spatial attention. The model also shows how a neural circuit can make decisions in a continuous space, rather than among discrete alternatives. Finally, the binding pool (Swan & Wyble 2014; Hedayati, O'Donnell, Wyble in Prep) provides a mechanism for selectively encoding specific attributes (e.g. color, shape, category) of a visual object to be stored in a consolidated memory representation. The binding pool is akin to a holographic memory system that superimposes selected latent representations corresponding to different attributes of a given object. Moreover, it can bind features into distinct objects by linking them to token placeholders.
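The binding-pool idea of superimposing attribute representations linked to token placeholders can be sketched in the style of a vector-symbolic (holographic) memory. The vector names, dimensionality, element-wise binding operation, and similarity threshold below are illustrative assumptions, not details of the published model.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # dimensionality of the shared memory trace (hypothetical)

# Hypothetical fixed random vectors for attributes and token placeholders.
attributes = {name: rng.standard_normal(D) for name in ["color", "shape", "category"]}
tokens = {t: rng.standard_normal(D) for t in ["object_1", "object_2"]}

def bind(a, b):
    """Element-wise (Hadamard) binding, one simple vector-symbolic choice."""
    return a * b

# Selectively encode two attributes of one object into a single
# superimposed trace; the "category" attribute is deliberately left out.
trace = (bind(tokens["object_1"], attributes["color"])
         + bind(tokens["object_1"], attributes["shape"]))

def contains(trace, token, attribute, threshold=0.3):
    """Probe the trace: was this attribute bound to this token?"""
    probe = bind(token, attribute)
    sim = probe @ trace / (np.linalg.norm(probe) * np.linalg.norm(trace))
    return sim > threshold

# The trace answers probes about what was stored and for which token:
# object_1's color and shape match; object_2 (or the unstored
# "category" attribute) does not.
```

Because every binding lands in the same shared trace, storing more attributes or objects degrades each probe gracefully rather than overwriting it, which is one appeal of holographic-style memories for modeling capacity limits.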
Future work looks toward combining these models into a coherent framework for understanding the full range of on-the-fly attentional mechanisms and how they improve learning.